Automated media ecosystems – a brave new world?

Automated media ecosystems – one of the GDI's 20 major shifts – promise to improve the precision, speed and accuracy of information and to ensure more objective reporting. Conversely, they are also making some existing issues worse and giving rise to new risks.
5 March 2026, by Joschka Proksik, GDI Gottlieb Duttweiler Institute

The debate on the Federal Popular Initiative '200 Francs is enough' (SBC Initiative), which aims to halve public funding for the Swiss Broadcasting Corporation, is primarily about costs. However, it touches upon another key issue too: how important are publicly funded media services today in a media landscape which has seen a staggering increase in the amount of content and number of services and channels available and where audience attention is becoming an increasingly scarce resource? Accessing information has never been easier. Artificial intelligence (AI) and growing automation of media ecosystems look set to accelerate this development by further increasing the scale and speed of media production and distribution. Automated systems are also determining to an ever greater extent which content media consumers see and which never reaches them.

Joschka J. Proksik
Senior Researcher and Speaker at GDI
As a political scientist with a PhD, he analyses the interactions between geopolitics, economy, society, environment, and technology.

Widespread use of automation

Automated media systems have been deployed for years – especially for data structuring and for applications where speed is of the essence. Sports, weather and financial news, financial market updates, live tickers and even election reporting have long been automated. While the concept itself is nothing new, the leap in reach, scale and complexity of content enabled by generative AI represents a major breakthrough.

Automation is now being used for increasingly complex tasks – from research and analysis to classification and investigative formats. Large-scale international research projects, on topics such as the Pandora Papers or, more recently, the Epstein Files (1), are using automated text mining processes to analyse vast numbers of documents. Automation is not simply a tool, but absolutely essential to achieve this level of scale in the first place. Journalism is relying on the results of systems whose blind spots or bias can only be verified to a limited extent.
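The text-mining step behind such projects can be sketched, in highly simplified form, as an automated scan for known entities across a document corpus. The entity names and documents below are invented for illustration; real investigative pipelines layer OCR, entity recognition and cross-document linking on top of this basic idea.

```python
import re
from collections import Counter

def mine_documents(documents, entities):
    """Count how often each known entity is mentioned across a corpus.

    A real leak investigation is far more complex; this sketch only
    shows the core idea of automated scanning at scale.
    """
    counts = Counter()
    for doc in documents:
        for entity in entities:
            # Whole-word, case-insensitive match of the entity name.
            hits = re.findall(rf"\b{re.escape(entity)}\b", doc,
                              flags=re.IGNORECASE)
            counts[entity] += len(hits)
    return counts

# Invented mini-corpus for illustration.
corpus = [
    "Acme Holdings transferred funds to Beta Trust in 2014.",
    "Records link Beta Trust to an account held by Acme Holdings.",
    "No mention of the third company here.",
]
mentions = mine_documents(corpus, ["Acme Holdings", "Beta Trust", "Gamma Ltd"])
```

The point of the sketch is the scale argument made above: a human can read three documents, but only an automated scan can apply the same rule uniformly to millions – including whatever blind spots the rule contains.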

In addition to these established types of automation for research purposes, generative AI is now being used to perform a growing share of core journalistic and media tasks too. Even in 2024, a survey conducted in the UK (2) indicated that journalists use AI not just for routine technical tasks, but also to create content, generate ideas and for fact-checking purposes. A 2025 study by the University of Zurich (3) on the use of AI in the Swiss journalism industry also revealed that around 40% of respondents use AI frequently or at least occasionally for these tasks.

Analyses of online content as a whole highlight this trend even more clearly: in 2025, more content was generated by AI than by humans for the first time. Almost fully automated systems are already in use on social media platforms and in advertising.

Hybrid is the new normal

The boundaries between content created by humans and by machine systems are becoming increasingly blurred. If content is initially created by humans and then reworked by AI, or vice versa, it often becomes difficult to establish authorship clearly. In many areas, this distinction can already only be made to a limited extent – for example when content is automatically pre-structured, summarised, linguistically polished or presented in multiple versions.

Hybrid content and systems are now becoming the norm. As AI takes on an increasing share of tasks, humans remain involved but in evolving roles. Rather than acting primarily as content producers, they increasingly serve as initiators, curators and stewards of quality. In turn, this raises new issues over transparency, identification and trust. What does it mean to cite an “original source” when it is unclear whether that source itself may have been generated by AI? And will verification become even more difficult as an increasing number of AI sources rely on other AI sources? In AI research, a phenomenon known as “model collapse” has already been identified. When models are increasingly trained on synthetic data, they can lose variance and accuracy. Applying this notion to public debate raises the question as to whether our knowledge will also become flatter and more cyclical if increasingly based on AI summaries.
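The model-collapse dynamic can be illustrated with a toy simulation: each generation, a model trained on its own output over-represents common patterns and drops rare ones. The tail-truncation rule and the starting distribution below are illustrative assumptions, not taken from the research cited.

```python
def truncate_tail(dist, keep_mass=0.9):
    """One 'generation' of self-training, modelled crudely: keep only
    the most probable tokens up to `keep_mass` cumulative probability,
    then renormalise. `keep_mass` is an illustrative parameter."""
    items = sorted(dist.items(), key=lambda kv: -kv[1])
    kept, total = {}, 0.0
    for tok, p in items:
        kept[tok] = p
        total += p
        if total >= keep_mass:
            break
    norm = sum(kept.values())
    return {tok: p / norm for tok, p in kept.items()}

# A long-tailed token distribution (invented numbers).
dist = {f"t{i}": p for i, p in enumerate(
    [0.30, 0.20, 0.15, 0.10, 0.08, 0.07, 0.05, 0.03, 0.01, 0.01])}
sizes = [len(dist)]
for _ in range(5):                    # five generations of self-training
    dist = truncate_tail(dist)
    sizes.append(len(dist))
# The vocabulary never grows and shrinks over the first generations:
# variance is lost, exactly the flattening the article asks about.
```

Under this crude rule, diversity is lost monotonically: the rare tail is never recovered once dropped, which is the mechanism behind the worry that knowledge built on AI summaries of AI summaries becomes flatter.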

STRATEGIC FORESIGHT

From the GDI Major Shifts Framework, we derived a set of implications and hypotheses for the healthcare sector. This analysis highlights a structural shift from institution-centred systems towards data- and platform-based models of care.

Automated Media Ecosystems are driving a profound, data-driven transformation of healthcare. Health information is delivered continuously, in personalised formats, and increasingly mediated by AI. Patients are becoming digitally informed actors who play a more active role in shaping care pathways, influencing quality, and participating in decision-making.

For the healthcare system, this implies a reconfiguration of care pathways, roles and power dynamics, with greater influence shifting towards platforms, AI-enabled advisory services, and digitally orchestrated patient journeys.


Acceleration without relief?

Increasing levels of automation mean media content is being produced more rapidly and the output of published content is growing. Around 40% of Swiss media professionals surveyed indicated that use of AI increased their output significantly or at least to some extent. About 30% believe they now have more time for research. However, the majority of respondents say AI makes little difference, if any at all. More rapid output and efficiency gains appear real, but by no means apply across the board.

Even if only about a third of media professionals increase their output, the overall volume of published content rises significantly, while their workload is often not being eased. Media content is also becoming much easier to produce. AI-based tools simplify processes that previously required specialist knowledge or resources. This is making it much easier for more people to create and distribute content.

What media professionals experience as fiercer competition for the attention of target groups, consumers experience as a constantly growing (over)supply of content. Discussions about declining powers of concentration and overstimulation show that new media systems are not only providing us with information, but are also having an effect on us. It is worth asking whether this will impair individual and collective orientation and judgement, and to what extent we will retain the ability to recognise and understand long-term developments.

Personalised content – ensuring relevance or creating echo chambers?

Automated media ecosystems allow the scale to be ramped up not just in production and distribution, but to an ever greater extent in terms of variation and customisation too. Algorithms have long since enabled the highly specific customisation of information for users about certain topics, achieving relevance more rapidly. In a crowded information landscape, automated personalisation can make life more convenient for consumers by hiding irrelevant content and making it easier to find relevant information. AI-based information systems can significantly improve this type of selection.  

Yet this prospect contrasts sharply with the dystopian image of the perfect echo chamber and the dangers of increasingly narrow pre-selection. Continually improving the precision of personalisation means people may end up being presented with content and outlooks that completely reflect their existing interests and opinions or what the algorithms deem appropriate. This raises the question of how far automated systems contribute to the further fragmentation of information spaces. Much depends on how individual “fit” is defined and on whether user interests truly take precedence or other logics shape personalised content.
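The trade-off between relevance and narrowing can be made concrete with a toy relevance filter: the stricter the personalisation threshold, the less of the feed survives outside the user's existing profile. The articles, tags and Jaccard-overlap scoring below are invented for illustration and stand in for far more complex real recommender systems.

```python
def personalise(articles, interests, threshold):
    """Rank articles by tag overlap with the user's interest profile
    and keep only those above a relevance threshold."""
    def relevance(tags):
        # Jaccard overlap between article tags and user interests.
        return len(tags & interests) / len(tags | interests)
    ranked = sorted(articles, key=lambda a: relevance(a["tags"]),
                    reverse=True)
    return [a["title"] for a in ranked
            if relevance(a["tags"]) >= threshold]

feed = [
    {"title": "Local election results", "tags": {"politics", "local"}},
    {"title": "Championship final recap", "tags": {"sport"}},
    {"title": "Climate policy debate", "tags": {"politics", "climate"}},
]
user = {"politics", "climate"}
strict = personalise(feed, user, threshold=0.5)  # only the closest match
broad = personalise(feed, user, threshold=0.0)   # whole feed, merely ranked
```

A strict threshold leaves only content that mirrors the user's existing profile; a threshold of zero still ranks by fit but keeps the rest of the feed visible. The echo-chamber question above is, in this toy, simply the question of who sets `threshold` and whose interests define `user`.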

Is AI more objective than humans or simply better at distorting information?

The more AI is used to produce content and automate intellectual tasks, the more questions of quality come to the fore. In journalism in particular, the issue is not just efficiency, but standards, reliability and trust.

AI systems are known to reproduce errors already contained in the data on which they have been trained. Automated media systems risk not only replicating such errors but proliferating them too. At the same time, AI systems consistently guided by journalistic standards may help reduce human shortcomings, such as subjective bias or ideological colouring in evaluation and language. Whether and how this can actually be achieved remains open.

The spread of misinformation and distortion could also be reduced if AI systems rely on robust data. This may sound appealing, especially in an era of polarised information spaces and growing debate about the risks of disinformation. But who controls the algorithmic selection mechanisms, and who ultimately distinguishes right from wrong, and opinion from fact? One does not need to be conspiratorial to recognise that this could open up new and potentially unforeseen risks of manipulation and disinformation. Will the battle for narrative control and the power to interpret the world shift in future to AI systems and those who run them?

Alongside the quality of content, its distribution is also a key factor: who gets to see which content is at least as important as its quality. Fierce debates are already raging over the role of algorithms in shaping the spread and visibility of information, for example on platforms such as Google, X or Facebook. Automated systems can also be used to deliberately amplify emotions or sensationalise content. Clickbait and intentionally polarising content have long been a feature of the media landscape.

It is becoming apparent that automated media systems are very much a double-edged sword. While they can be used to improve quality and relevance, the potential exists for them to be deployed to foster distorted narratives, misinformation and manipulation.  

The search for standards in AI-driven media

In light of growing levels of automation, pressure is building for clear rules on the use of AI to be defined. Editorial teams and media organisations around the world have adopted corresponding guidelines. An analysis of over 50 sets of guidelines (4) from Europe and North America reveals clear convergence: transparency, labelling and human oversight are now considered minimum standards almost everywhere.

Despite this convergence, universal standards have yet to be established. Other key issues also remain unresolved, particularly regarding dependence on major tech companies. There are structural problems too: transparency over how AI systems work is limited, and embedding journalistic values in the technology is a harder task than laying them down in standards.

Guidelines provide a point of reference. Whether journalistic standards can be incorporated into automated systems in a sustainable and reliable way remains to be seen.

Automation is resetting the balance

Automated media systems are not only speeding up production and distribution, they are also creating a structural information overload. Generative AI enables scale and variation to an unprecedented degree, shifting the balance of abundance and scarcity within media ecosystems: content is abundant, while substance, orientation and trust remain scarce. In an increasingly automated media environment, the ability to achieve relevance through selection and focus is of crucial importance. For media professionals and companies, the challenge is to ensure that content can still break through in an algorithm-driven environment. For media users, attention ultimately becomes the scarce resource that must be managed ever more deliberately.

Reclaiming Focus – How to Stand Out in a World of Constant Noise? The upcoming European Trend Day on 25 March 2026 will explore this issue.
