More than half of the content in Google’s news recommender is already AI-generated in several countries
The multinational company is transforming its news showcase into a closed circuit that synthesizes media content using artificial intelligence and avoids sending readers to publishers
Artificial intelligence is advancing rapidly at Google. This is evident both in its search engine, which this year underwent one of the biggest updates in its history to integrate the technology, and in Discover, its news algorithm. Although relatively unknown to the general public, Discover has become one of today's main digital newsstands, serving as the primary source of readers for some media outlets. It is the content recommendation system that appears on the home screen of many Android phones or beneath the search bar in the Google app. In some countries, most of this content is already generated by artificial intelligence.
The advance of AI at Google and changes to its news algorithm are squeezing out the media
This has been shown by an analysis published this week by Marfeel, a technology company specializing in digital audience analytics and optimization. According to its data, 51% of Discover entries in the United States, Brazil, and Mexico are already AI-generated summaries of news events, synthesized from media outlets’ content.
If the user clicks on them, the tool plays a video about the same event published on YouTube, a platform owned by Google. This occurs in 77% of cases in the United States, and in 100% of those analyzed by Marfeel in Brazil and Mexico. These are three key markets where Google often tests new features before rolling them out globally. Specialists in other Latin American countries have also issued warnings, although impact data is not yet available for them.
New structure of Discover entries. Marfeel
Until now, the Discover algorithm displayed previews of content published by media outlets and news portals, including the headline and image. Each user saw a different selection based on their interests, inferred from their search history, the websites they regularly visit, the topics they interact with, or their activity on other Google services. Each entry represented a different news story and identified the outlet that had published it. When the user clicked on one, the tool directed them to the outlet’s official website.
Those previews are now being replaced by AI-generated summaries. In these, Google displays the logos of the media outlets whose content its algorithms used to synthesize the information, but the headline and text are artificial. As explained by Agustín Gutiérrez, an Argentine audience analytics specialist, they sometimes even contain odd constructions or grammatical errors.
“The pattern of multiple icons and a single link creates perceived attribution without equivalent traffic,” agrees Xavi Beumala, a Marfeel analyst. This is a change that directly impacts media revenues by reducing the number of readers sent to them by the multinational, despite the fact that it uses their news to generate AI summaries and contextualize current events. “Google Discover is shifting from being a traffic distributor to an attention controller,” adds the specialist.
Google, contacted by elDiario.es, declined to comment on the results of the Marfeel report.
“AI is being tested from the bottom up, with the potential to move upward as Google optimizes click-through rates and satisfaction,” Beumala warns. Currently, most AI-generated Discover summaries appear in the lower results: “In the United States, they account for 21.6% of positions 1 to 5, but 82.7% after position 20. This signals that AI is first used as filler and then tested upward.”
Although Google does not provide data, specialists warn that this situation will spread to other parts of the world. “The United States, Brazil, and Mexico are likely test markets, not exceptions. What works here will expand,” say Marfeel analysts. “Discover is in an active experimentation mode, country by country.”
Google’s move to replace media news with AI-generated summaries clashes with early findings about the reliability of these systems. A large-scale study conducted in 14 countries and 18 languages, coordinated by the European Broadcasting Union (EBU), concluded that major language models contain inaccuracies or misrepresent the news in 45% of cases.
The research shows that the problem is not limited to occasional errors, but rather stems from the models’ difficulty in correctly applying the concepts they process. This leads to the generation of nonexistent quotes or incorrect attribution of responsibility for events, a critical factor when these summaries become the first layer of information users receive.
“When people don’t know what to trust, they end up trusting nothing,” warned Jean Philip De Tender, Deputy Director General of the European Broadcasting Union, during the presentation of the study in October. The study also found that when news consumers detected these AI errors, they blamed both the media outlet and the language model that produced them, “even if the error originates solely from the assistant.”
“In all the countries covered, many of our respondents also say they do not click on the source links when they encounter AI summaries.”
— Reuters Institute, on declining traffic to media outlets
On the other hand, consumption data indicates a gap between technological supply and current user demand. According to the latest report Generative AI and News: What People Think About the Role of Artificial Intelligence in Journalism and Society, published in October by the Reuters Institute at the University of Oxford, only 6% of respondents say they use these tools to consume news or summarize current events.
However, as early as July this year, when the report’s data was collected, 54% of users said they had seen AI-generated responses in their searches during the previous week. “There are already more people encountering them weekly than those actively using the standalone AI tools we asked about,” the authors note.
The study also anticipated the problem of declining readership highlighted by Marfeel and audience specialists. “In all the countries covered, many of our respondents also say they do not click on the source links when they encounter AI summaries,” the Institute reports, estimating that 37% of users “sometimes” visit the media outlets whose content was used to generate the summaries, while 28% do so “rarely or never.”
Researchers also asked users about their impressions of these summaries. “Trust levels are moderate, at around 50% among those who encounter AI responses, and users value their speed and ability to aggregate information,” they note. “Because it’s fast and saves me time, and it’s usually close to what I’m looking for,” one respondent said.
Thus, Google’s former newsstand is being transformed into an algorithm designed to retain user attention within its own ecosystem rather than distribute traffic to media outlets, even as it continues to rely on the journalistic work those same outlets produce.
Source: elDiario.es
Author: Carlos del Castillo
Picture from Freepik