“Extreme speech and hate hide in everyday chats”
4 Jun 2025
How does hate find its way into birthday greetings and family chats? Media anthropologist Sahana Udupa explores the dark side of messaging apps.
Why are harmful messages shared by family and friends? Professor Sahana Udupa, a media anthropologist at LMU, investigates a previously overlooked channel of disinformation: encrypted messaging apps. In her project Below the Radar, conducted at the Center for Advanced Studies (CAS), she explores how extreme speech travels through trusted networks. Her results contribute to a policy report and a forthcoming book, “WhatsApp in the World: Disinformation, Encryption and Extreme Speech”.
On June 5, Sahana Udupa will give a lecture on this topic at the United Nations Headquarters in New York.
Hate messages are not always noticed and are often forwarded. | © IMAGO / MiS
How do hateful messages spread in private messaging apps?
Sahana Udupa: Especially on WhatsApp, they often come mixed in with everyday messages – birthday wishes, festival greetings, religious notes, or local updates such as water supply issues. The reason lies in the apps’ structure: messaging apps let users communicate one-on-one or in small, trusted circles, where harmful or misleading content feels like part of the daily digital routine – wrapped in warmth, trust, and familiarity. Being embedded in these exchanges gives extreme messages a sense of normality.
This closed architecture makes them powerful tools for targeted, emotionally charged content that can fly under the radar. Seemingly funny cartoons and memes deride the “enemy” while blending into normal everyday conversations.
What might such messages look like, for example?
We found a cartoon showing a couple on a romantic walk. The boy tells his girlfriend he would do anything for her – hearts blossom around them – and she replies by asking him to vote for the opposition political leader. In the next scene, the boy kicks her off a cliff. The meme frames this as boisterous humour, casting the girl as foolish for supporting the opposition, while embedding the act of violence in visual comedy that draws on implicit tropes of masculine aggression.
But why do people forward such messages?
Often, it's not about agreeing with the content or believing it to be true. Forwarding messages can be more about maintaining social ties than making ideological statements. Within trusted groups – among family, friends, or local communities – people may feel a social or emotional obligation to share what’s circulating. Challenging a message could be seen as impolite or even disloyal.
I call this phenomenon “deep extreme speech”. When extreme messages circulate in close, trust-based networks, they become detached from abstract ideas of truth or moral judgments. In these settings, it’s often less about whether a message is true or hateful – and more about the emotional or social urge to pass it on and go with the flow.
How does the design of messaging apps contribute to the spread of extreme content?
Many core features of messaging apps – from disappearing messages and quick forwards to group chats and calls – make them powerful tools for amplifying disinformation and extreme speech. Recent platform updates, such as broadcast-style channels, blur the line between private and public communication even further.
And although forwarding restrictions on some apps limit how far a message can travel within the platform, content often jumps across ecosystems: A WhatsApp message might reference a Facebook post, link to a YouTube video, or cite a tweet from X. This cross-platform interaction boosts virality despite a platform’s supposedly closed nature.
How do extreme ideological groups take advantage of these platform features?
Political actors and ideological groups use them strategically, often through a two-step approach. First, they post moderate content on public platforms like Instagram or YouTube to attract new supporters. Then, they invite them into private groups, where more radical content is shared. In this way, encrypted messaging apps become “closed-door” hubs for extreme ideologies.
Telegram, in particular, hosts niche communities that have often been banned from mainstream platforms. A striking example is the “Incels” and the broader “Manosphere” – online networks marked by misogyny and toxic masculinity. After being banned from platforms like Facebook and YouTube, many of them regrouped on Telegram, where content moderation is significantly weaker. In these spaces, we have found highly explicit and violent content, including non-consensual pornographic alterations of women’s photos. These combined visual and textual messages form an orchestrated digital culture of hate.
Messenger apps also have many good sides. In some countries, they have become an indispensable infrastructure, says Sahana Udupa. | © IMAGO / Zoonar
Which regions are most affected by these dynamics?
Private messaging apps like WhatsApp are, alongside the United States, particularly influential in the Global South. India and Brazil are among WhatsApp’s biggest user bases. In Brazil, for instance, pro-Bolsonaro narratives were heavily amplified through WhatsApp networks. Such apps also play a central role in everyday communication across Africa and South Asia. In many of these places, WhatsApp is more than a chat app – it’s a platform for news, education, business, even prayer.
But in these regions, it also shapes extreme speech and disinformation ecosystems: from election manipulation in South Africa and Nigeria to WhatsApp use in Brazilian favelas and among nationalists in India. In the U.S. and parts of Europe, Telegram is gaining popularity, especially among groups that seek platforms with looser content regulation. It’s worth noting that, while WhatsApp and Telegram are two of the most important messaging apps with serious political ramifications, there are also smaller platforms, which we are exploring in a new ERC consolidator project.
But how do you even study such hidden dynamics in encrypted spaces?
Researching them comes with major challenges. Just like regulators, researchers have limited access due to end-to-end encryption and the private nature of most chats. For public messaging groups, computer scientists – who have also contributed to our book – have developed privacy-preserving data collection tools and automated data gathering techniques.
For closed chat groups, we use ethnographic methods – building trust with members and observing conversations in their natural context. Ethnography allows us to observe how extreme speech is shared, received, and interpreted in everyday contexts. It’s especially useful because messaging practices are often messy, full of contradictions, and shaped by emotions and social obligations. We’ve found that only by engaging with this complexity can we begin to understand why extreme speech circulates the way it does.
Why have you chosen the term “extreme speech” over the more familiar “hate speech”?
This term is broader and more contextually nuanced. While “hate speech” usually refers to openly hostile or discriminatory expressions targeting specific groups, “extreme speech” includes messages that spread disinformation, stir up emotions, or manipulate facts – often without sounding hateful at first glance. The term covers “derogatory” speech, which targets anyone, including powerful figures such as politicians; “exclusionary” speech, which marginalizes people based on identity; and “dangerous speech” – language that can trigger real-world violence.
And there’s another reason for using the phrase “extreme speech”: In many parts of the world, laws against hate speech are being used to silence political dissent. So, it’s important not to equate uncivil language with harm too quickly. Sometimes, “impolite” speech is a form of protest. What’s dangerous is when speech targets people based on identity or encourages exclusion.
Did extreme speech emerge with social media?
No, extreme speech has much deeper roots. In many social and regional contexts, it builds on long-standing patterns of prejudice and political polarization targeting vulnerable communities – including immigrants, women, ethnic and religious minorities, and advocates of inclusive societies. Our research concept places a strong emphasis on this historical awareness and challenges the idea that it was social media that unleashed a sudden crisis in communication. Digital platforms didn’t create extreme speech – but they are playing a critical role in mediating it.
In your upcoming book you describe messenger apps as part of the “ecosystems of hate and disinformation in the digital age”. Do the harms outweigh the benefits?
While messenger apps do have a dark side, the picture is far from black and white. In several contexts around the world, especially in the Global South, they have become an essential infrastructure and sometimes a lifeline. They are used for everything from political organizing to parent-teacher communication. In regions with authoritarian leaders, messenger apps offer rare channels for dissenters to speak safely.
What can be done to address these challenges?
We shouldn’t rely on simplistic solutions like breaking encryption or building government backdoors. Instead, our research group advocates smarter, multi-layered regulation: greater transparency, metadata-based moderation tools that protect user privacy, and clear policies to counter digital manipulation. All of these are outlined in the policy report, which we recently launched at the CAS symposium attended by UN Peacekeeping officials, fact-checkers, and WhatsApp’s global policy director. To support this, platforms must invest in responsible AI, provide clear reporting tools, and avoid shifting responsibility onto users alone.
What can users do to protect themselves and others?
One important strategy is community-based fact-checking. If users challenge harmful content themselves, it’s especially effective since it comes from within the group. In South Africa and India, fact-checkers have even set up WhatsApp “tip lines” where users can verify suspicious messages. And users need to stay alert: Even a seemingly banal joke can pave the way for more hardline views.
Sahana Udupa is Professor of Media Anthropology at LMU Munich. | © Andreas Focke
Sahana Udupa is a Professor of Media Anthropology at LMU and Fellow at Harvard University. Her research focuses on digital and AI politics, global digital cultures, online extreme speech, urban politics, and media. In her new co-edited book, “WhatsApp in the World: Disinformation, Encryption and Extreme Speech” (forthcoming, 2025, New York University Press, with Herman Wasserman), she analyses messaging services and the interplay between encryption and extreme speech. Her research group has co-authored a policy report with key recommendations.
Sahana Udupa conducts the project Messaging Apps, Encryption and the Enticement of Extreme Speech at the Center for Advanced Studies (CAS).
Prof. Sahana Udupa discusses WhatsApp in the World at the Berkman Klein Center for Internet & Society
6 May 2025