- Artificial intelligence or automated content moderation alone cannot solve the problem of hate speech in online media
- AI systems for content moderation should be developed in dialogue with communities and fact checkers
- Sahana Udupa, Professor of Media Anthropology, and her team at LMU Munich have published the policy brief “Artificial Intelligence, Extreme Speech, and the Challenges of Online Content Moderation”
To solve the problem of hate speech in digital media, AI models need to be people-centric: “Artificial intelligence or automated content moderation is now being touted as a key means to address extreme speech online. But AI systems for extreme speech detection are not globally applicable at the moment. They are not inclusive. There are vast gaps in language capacities,” says Sahana Udupa, Professor of Media Anthropology at LMU Munich.
Recommendations for content moderation
In their policy brief, “Artificial Intelligence, Extreme Speech, and the Challenges of Online Content Moderation,” Sahana Udupa and the project team have developed recommendations for AI systems and content moderation in online media, especially in social media.
Sahana Udupa describes extreme speech as a sociocultural phenomenon. She has developed a collaborative model in which fact checkers, who combine contextual knowledge of online discourses with proximity to professional journalism, enter into a facilitated dialogue with ethnographers and AI developers.
The media anthropologist warns that AI could restrict freedom of opinion: “Misuse of AI systems can threaten the safety of citizens and especially target vulnerable groups. Therefore, we are advocating that these processes should be people-centric and we should channel our energies towards nurturing independent spaces where dialogue can occur beyond government and corporate spaces.”
Prof. Dr. Sahana Udupa
Professor of Media Anthropology