
Democracy needs dialogue: A project to tackle the communication crisis and promote mutual understanding

27 Apr 2026

Why political discussions escalate—and how an AI app aims to help people talk constructively again.

If you ask ChatGPT about typical reaction patterns in political discussions, the algorithm offers quite a selection of them—neatly ordered by levels of escalation. They range from dismissive remarks such as “You clearly don’t understand”, to resignation—“This is a waste of time”—and even statements like “I can’t follow this anymore”, signaling a breakdown in communication and often heralding the end of a discussion. At this point, one person withdraws from the conversation, feeling unheard, overwhelmed or even patronized.

Discussions matter to democracy

Yet political dialogue is crucial if social consensus is to be built. “Conversations that feel positive and give people the sense that ‘I am being heard’ can increase their willingness to participate, whether that means getting actively involved or going out to vote,” explains Katharina Hajek, a research associate at LMU’s Institute for Communication Studies and Media Research (IfKW). “Because when we talk to each other, we negotiate what we, as a society, want. That is the core of democracy.”

Hajek is one of the people who co-initiated a research project funded by the Bavarian Research Institute for Digital Transformation (bidt). In this project, experts in communication science, political science and psychology from LMU and the Technical University of Munich (TUM) are developing an AI-supported tool in the form of an app based on large language models (LLMs). The goal is to help people navigate challenging discussions without dominating or excluding others.


To prevent discussions from escalating unnecessarily: An AI-powered app based on large language models (LLMs) is designed to help ensure that conversations remain respectful for all participants.

© IMAGO / Pond5 Images

Using large language models

The research team—led by Professor Carsten Reinemann and Professor Alexander Wuttke from LMU, along with Professor Jürgen Pfeffer from TUM—has very concrete ideas about how LLMs can be used in this context.

“The language models are intended to be used in two ways,” Reinemann explains. “First, they serve as conversation partners to whom users can respond.” Users can choose whether to do this in writing or via audio, with the latter option providing a low-threshold entry point for people who find it hard to express themselves in writing.

“Second, the LLMs will analyze users’ responses and provide immediate feedback,” says the political communication expert—for example, when users themselves “go too far” and fall into conversational patterns that escalate the discussion unnecessarily. At the same time, the app will offer suggestions on how to prevent such situations.
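The project’s actual implementation has not been published. Purely as an illustration of the two-step idea described above, a feedback component might first flag known escalation patterns in an utterance and then hand the result to an LLM prompt for non-judgmental suggestions. Every name, phrase list, and category below is a hypothetical placeholder, not part of DemocraGPT:

```python
# Illustrative sketch only: the marker phrases, categories, and function
# names are hypothetical and do not describe the DemocraGPT implementation.

# Example escalation patterns, echoing those mentioned in the article.
ESCALATION_MARKERS = {
    "dismissal": ["you clearly don't understand", "that's nonsense"],
    "resignation": ["this is a waste of time", "i can't follow this anymore"],
}

def detect_escalation(utterance: str) -> list[str]:
    """Return the escalation categories whose marker phrases appear."""
    text = utterance.lower()
    return [category for category, phrases in ESCALATION_MARKERS.items()
            if any(phrase in text for phrase in phrases)]

def build_feedback_prompt(utterance: str) -> str:
    """Compose an instruction for an LLM to give non-judgmental feedback,
    without steering the speaker's opinion."""
    flags = detect_escalation(utterance)
    hint = f"Possible escalation patterns: {', '.join(flags) or 'none'}."
    return (
        "You are a conversation coach. Do not judge the speaker's opinion. "
        f"{hint} Suggest one concrete way to keep the dialogue open.\n"
        f"Utterance: {utterance!r}"
    )

print(detect_escalation("You clearly don't understand the issue."))
# → ['dismissal']
```

In a real system the prompt built here would be sent to an LLM; the keyword check merely stands in for the model-based pattern analysis the researchers describe.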

Reinemann emphasizes that the goal is not to steer users’ opinions, but rather to improve how conversations are conducted so that they remain constructive for everyone involved. This alone would already be a major step forward. Hajek is at pains to underscore this principle: The app is not meant to “fix” users by telling them “You’re communicating wrongly!” Instead, the intended takeaway for users should be: “I’ve learned something about you, and the door to further conversation remains open.”

Social divides have become more visible

A glance at the data highlights the importance of such a tool: Around 70 percent of Germans believe that the risk of provoking a flare-up when discussing sensitive political topics has increased. Sixty percent fear being socially excluded if they express their opinions. The result can be withdrawal from discussions or avoidance of people with differing views.

According to Reinemann, it is difficult to say definitively whether the climate of political debates has become harsher overall. “What has changed is that people more often feel they can no longer freely express their opinions.” At the same time, he challenges the widespread assumption of homogeneous digital spaces, often referred to as echo chambers. “Online, people encounter radical and extreme viewpoints more frequently than in the past.”

Social fault lines—for example, in debates about migration—have also become more visible through online media, especially when they are tied to exclusion, emotions and questions of identity. “What’s particularly fascinating is not just the digital space, but also one-on-one conversations in everyday life,” Hajek stresses. While conflicts have always occurred in families or between friends, the accumulation of crises—especially during the Covid-19 pandemic—has made tensions more readily apparent even within close relationships.

A low-threshold approach

The question therefore has to be asked: Who is this app intended for? Should it primarily target highly educated users or also those who prefer simpler explanations?

“Openness to other perspectives is not just a matter of education,” Reinemann insists. “Highly educated individuals often hold firmly established views and are therefore not necessarily more open to alternative opinions.” People who are skeptical of democracy, on the other hand, often exhibit a comparatively strong desire to communicate and interact. The researchers therefore hope that such individuals might engage with the app out of curiosity—or even out of skepticism toward academic projects. “If their experience is positive, there’s a good chance they’ll use it again,” Reinemann adds.

Either way, the app is designed to be as accessible as possible, allowing people with different backgrounds to benefit. “AI, in particular, makes it possible to tailor the experience to individual needs. That’s why the audio component is so important.”

Together with co-initiator Dr. Lara Kobilke, Reinemann has already conducted preliminary studies on the use of LLMs in challenging conversations. The results are promising, but the project team still has work to do: While interactions with chatbots reveal patterns similar to those encountered in real-life conversations, they are nevertheless influenced by contextual factors such as trust and perceived authenticity.


The intended takeaway for users should be: “I’ve learned something about you, and the door to further conversation remains open.”

© IMAGO / Pond5 Images

Toward a better culture of communication

The project, called DemocraGPT, is scheduled to run for three years. Initially, the researchers will focus on developing theoretical foundations and models—for example, by examining established communication techniques from fields such as couples therapy and organizational psychology, assessing their impact, and integrating them.

Technical questions will be addressed in the subsequent phase, including how conversational formats can be integrated into digital interaction contexts. Different versions of the app will then be tested experimentally. Later, test users will be accompanied over longer periods to better understand the app’s impact.

Ultimately, the app will be launched alongside public outreach efforts. To this end, the researchers from both universities have partnered with media and educational organizations, including the Bavarian State Agency for Civic Education.

A high-risk project

“This is a high-risk project,” Carsten Reinemann admits. “We can’t be certain that everything will work as we hope it will. But we are confident that, whatever the case, the learning effect for everyone involved will be significant—at least in terms of understanding which communication styles make conversations easier or more difficult.”

Preliminary surveys around DemocraGPT, including within the researchers’ own personal networks, indicate that many people are very open to the idea: “People have seen friendships break down. They have experienced growing alienation and the feeling that political conversations are barely possible anymore,” Reinemann explains. At the same time, there is a degree of skepticism about whether such a tool can truly work.

“That’s exactly what spurs us on to keep going,” Hajek adds. “This topic clearly strikes a nerve. The level of interest is high!”
