AI and search engines: “We must learn to reevaluate credibility”
1 Dec 2025
Artificial intelligence is changing internet searches. Can we trust its answers? Interview with LMU communication scientist Mario Haim.
Among other things, he is investigating how AI summaries affect the diversity of opinions offered. | © vzign
Artificial intelligence is changing how internet users search the web, what they find there – and what they trust. In our interview, Mario Haim, Professor of Communication Science with a special focus on Computational Communication Research at LMU, talks about the opportunities and risks of the new AI summaries in search engines.
Search engines such as Bing and Google now provide AI-generated answers even before users have clicked on a link. How is this changing internet usage?
Mario Haim: Quite fundamentally. Over the past 20 years, we’ve become accustomed to link lists – and to evaluating content based on its position and certain cues. We do this subconsciously and in a fraction of a second in order to navigate the vast amount of information available online. As a rule, users would focus on the top search results – often stopping at the first click. In addition, certain domains were considered markers of reliability – for many people this would be a Wikipedia link, say, or a reference to a quality news website.
These signals are missing in AI summaries. Instead of clear source references, we get a ready-made answer in human language, sometimes even in a chat interface – and it looks the same every time: the same style, the same font, neutrally formatted, and smoothly worded. This makes interaction with computers seem more human. Language thus becomes the central tool of communication between humans and computers.
As a communication scientist, how do you assess this change?
Still in its infancy, this development presents both opportunities and risks. If the new summaries combine various reliable sources in a balanced way, this could theoretically offer added value in terms of democratic utility. If, as a user, I don’t just click on the first link, but read a short AI summary, I may get a more diverse picture of the subject in this one answer – different opinions, different perspectives, diverse actors.
And what are the risks?
Firstly, there is a risk that people will uncritically accept AI-generated language because it sounds so familiar. And secondly, the answers give the impression of a diversity of views that may not actually exist. Moreover, the sources behind the AI answers are far less transparent and more difficult to verify. In the past, you could immediately see where something came from – the list of links was itself a sign of diversity. Now it’s no longer clear which voices are actually speaking in a response. So, whereas we used to question certain search results and sources more critically, we may now be more easily persuaded by the genial tone of communicative AI – regardless of whether this is justified.
What we need here, then, is more insight into data and mechanisms to understand how the major search engines generate their AI responses.
How would this happen – in terms of the IT?
Google, for example, accesses enormous amounts of data – including from Google News – combines it, and translates it into answers using large language models. However, we don’t know how sources are prioritized and what specific information is included in the AI summaries, because these questions are difficult to answer empirically. In communication science, we’re forever trying to better understand search engines by analyzing tens of thousands of search queries using input-output analyses or data donations from users; yet it remains unclear to a certain extent how strongly things like personalization, chance, and regionalization interact.
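The input-output analyses mentioned above can be illustrated with a minimal sketch. Assuming we have collected the top result domains for the same query from several users (for example via data donations), one simple check is how much the lists overlap: low overlap hints at personalization, chance, or regionalization at work. The domain lists below are hypothetical illustration data, not real measurements.

```python
def jaccard(a, b):
    """Overlap between two result lists (1.0 = identical sets, 0.0 = disjoint)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

# Hypothetical data: top result domains for the same query,
# as donated by three different users.
runs = [
    ["wikipedia.org", "spiegel.de", "zeit.de", "faz.net"],
    ["wikipedia.org", "spiegel.de", "bild.de", "zeit.de"],
    ["wikipedia.org", "faz.net", "welt.de", "zeit.de"],
]

# Pairwise similarity across users; values well below 1.0 suggest
# that different users see noticeably different result sets.
pairs = [(i, j) for i in range(len(runs)) for j in range(i + 1, len(runs))]
scores = [jaccard(runs[i], runs[j]) for i, j in pairs]
print(f"mean pairwise overlap: {sum(scores) / len(scores):.2f}")
```

A real audit would of course need far larger query samples and repeated measurements over time, which is exactly what makes the interaction of personalization, chance, and regionalization hard to disentangle.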
What rights do researchers have in this regard?
The European Union’s Digital Services Act (DSA) now provides a legal basis that should facilitate research into the algorithmic processes behind such AI developments in the future. Among other things, the DSA obliges large platforms to grant scientists access to certain data if there is a specific risk – for example, to public discourse. In this way, the EU is recognizing that research needs more insight in order to understand and evaluate algorithmic processes. This is an instrument that does not exist in any other region of the world – an important lever for research transparency.
In what ways are AI summaries being researched in communication science?
Our team at the Department of Media and Communication at LMU is collaborating with Australian colleagues on a project to investigate how AI summaries affect the diversity of information presented – in other words, whether and how diversity of opinion changes in the presentation.
This is highly relevant because, according to studies, one in four people in Germany regularly use search engines to obtain information; and for one in six people, search engines are their primary source of news. This means that Google in particular not only influences what information they find, but also how opinions are formed overall.
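One way the diversity question above could be operationalized is as a simple sketch: given the sources cited in an AI summary, measure how evenly the answer draws on distinct voices. The normalized Shannon entropy used here is a standard diversity measure, not necessarily the metric the LMU project itself uses, and the citation lists are hypothetical.

```python
import math
from collections import Counter

def source_diversity(citations):
    """Normalized Shannon entropy of cited domains
    (0.0 = a single voice, 1.0 = all voices weighted evenly)."""
    counts = Counter(citations)
    n = sum(counts.values())
    if len(counts) < 2:
        return 0.0
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(len(counts))

# Hypothetical citation lists extracted from two AI summaries.
balanced = ["zeit.de", "faz.net", "taz.de", "welt.de"]
onesided = ["zeit.de", "zeit.de", "zeit.de", "faz.net"]
print(source_diversity(balanced))  # evenly mixed sources
print(source_diversity(onesided))  # dominated by one source
```

Comparing such scores between the classic link list and the AI summary generated for the same query would show whether the summary narrows or broadens the range of voices presented.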
In addition, we must observe how people receive AI responses and what information-processing mechanisms they will use in the future. The attribution of credibility, long considered a key guidepost in the use of search engines, will in any case become significantly more difficult with the new summaries.
Where will we see the economic consequences of these changes?
If people no longer click on the actual search results, providers of journalistic content lose reach – and thus revenue. This can further weaken journalism economically – and with it, diversity of opinion. However, search engines also thrive on journalistic quality. AI can only be as good as the content it draws on. If this quality declines, search engines also lose credibility. Accordingly, it is very much in Google’s interest to be able to draw on reliably verified and balanced information.
Where is this development headed – and what is your advice to users?
AI responses will continue to evolve. But in all likelihood they will remain intermediaries between us and those who vouch for credible information. As online users, we will certainly have to learn to reevaluate credibility online. This may shift attention away from the reliability of the source and toward the reliability of providers such as Google and OpenAI. After all, those who provide answers bear responsibility – and search engines will have to be judged by this standard even more in the future.