
COMMENT | ANYA SCHIFFRIN & ROBERTA CARLINI | Media outlets worldwide are dying. On top of a persistent lack of funding, they must now contend with AI summaries and chatbots, which are siphoning away audiences. A recent study found that in 2025, online traffic to news sites fell by one-third. This problem should concern everyone, not just journalists and media executives, because democratic societies cannot function without quality information. At a time of polarization, fragmentation, and democratic backsliding, news outlets that provide quality journalism are more necessary than ever.
But to produce it, publishers must be compensated – a perennial problem in the internet age. For years, search engines like Google and social-media platforms like Facebook profited handsomely from news content while paying outlets little or nothing. The same is true of AI firms, which scraped the content they needed to train their large language models without compensating its producers or obtaining their consent. Since then, OpenAI has made deals with a few major publishers (including News Corp, Axel Springer, and Le Monde) to ensure that ChatGPT has access to the latest information. But many others have been left out in the cold.
In a new working paper, we argue that AI firms should automatically pay for the content they use. The current path – allowing essentially unlimited use of news articles without compensation – leads to the demise of original content. But banning generative AI from using all creative output is virtually impossible and would not be in anyone’s interest. The most sustainable policy is to require payment to publishers and creators for use of their output – an approach known as “statutory licensing.”
Governments must step in to help make these deals, because the current process for demanding compensation, in which groups of authors or individual publications have to sue AI companies, is slow, expensive, and unfair. When relatively powerless authors and news publishers face off against powerful tech firms, the playing field is hardly level, and settlements are often small. In September 2025, Anthropic agreed to pay $1.5 billion to settle a class-action copyright lawsuit, or around $3,000 for each book – a shockingly low sum when you consider that authors can spend decades researching and writing one title.
Some countries have recognized the power imbalance between Big Tech and creators, and the negative effect this has on the perceived value of creative output. In 2021, Australia passed the News Media Bargaining Code, which resulted in search and social-media giants paying media outlets for content shared on their platforms. Australia should take a similar approach to AI companies. In the United States, by contrast, copyright law has a broad definition of “fair use” that AI firms have relied on to defend their content scraping.
Europe has indicated a willingness to address the issue, with policymakers discussing whether and how to update the European Union’s copyright directive, which currently includes a “text and data mining” exception. In late January, the European Parliament’s Committee on Legal Affairs adopted draft proposals, on which Parliament is expected to vote in March, to ensure that copyright holders are fairly compensated.
The proposals came from a report, commissioned by the Committee on Legal Affairs and released in June 2025, highlighting the ambiguities and shortcomings of applying the current copyright structure to AI training. The report calls for a new system that imposes a remuneration obligation on providers of general-purpose AI models and creates a licensing market that restores rights-holders’ bargaining power. The first steps could include facilitating collective licensing agreements and enforcing the remuneration obligation even before broader reviews or reforms are made. The report also recommends transparency obligations for AI firms, a special regime for news media, and a central register for organizations that want to opt out of scraping.
Many hope that the report will also set the agenda for the European Commission’s upcoming review of the 2019 copyright directive, which could lead to new binding legislation later this year or in 2027.
Of course, questions remain over whether all rights-holders in Europe would be automatically opted in to a compensation framework and what an opt-out mechanism would look like. We support some mandatory payments for publishers and creators, while recognizing that simplified rules – which do not distinguish according to quality as much as desired – may have to prevail. This could take the form of a fixed, predetermined scale of fees, such as those used in pharmaceutical licensing and music royalties.
What is essential is that laws ensure fair collective bargaining, which must include outlets of all kinds and sizes. The current system, under which only the biggest news organizations can strike deals with AI firms, does not promote an open information ecosystem or media pluralism.
Preserving journalism in the era of AI will most likely require a system that incorporates prior authorization as the default for training uses and a fixed payment scale. But pressure from Big Tech to shoot down any legislation that requires remuneration for training materials is mounting. Europe must act now, before these firms become too powerful to regulate.
******
Natalia Menéndez, Kayleen Williams, and Aum Desai contributed research to this commentary.
Anya Schiffrin is a senior lecturer in practice and Co-Director of the Technology Policy and Innovation Concentration at Columbia University’s School of International and Public Affairs. Roberta Carlini is an assistant professor in the Centre for Media Pluralism and Media Freedom at the European University Institute.
© Project Syndicate 1995–2026
The Independent Uganda: You get the Truth we Pay the Price