
Moderating hate speech: AI models need to be people-centric

16 Aug 2021

AI-assisted systems are increasingly used to detect and delete online extreme speech. Sahana Udupa suggests a people-centric model to bring accountability and inclusivity into these systems.

Photograph of Sahana Udupa in a computer room.

"We should address it by fathoming its scope more fully and in all possible dimensions," says Sahana Udupa. | © LMU

Sahana Udupa is Professor of Media Anthropology at LMU Munich and leads two ERC-funded projects on digital politics and social media. She has also been selected as a Joan Shorenstein Fellow (Fall 2021) at the Harvard Kennedy School, Harvard University. Together with the team members of AI4Dignity, she has just published the policy brief “Artificial Intelligence, Extreme Speech, and the Challenges of Online Content Moderation”. It is published under the ERC research results and is also listed by the European Disinformation Observatory.

In this interview, she highlights some key points from the policy brief and describes how social media companies and governments should institutionalize people-centric frameworks in AI-assisted content moderation by reaching out to communities.

Is hate speech, or extreme speech as you call it, an individual problem that only concerns the persons who are confronted with it? Or is it a broader problem that concerns society as a whole?

Sahana Udupa: Extreme speech is definitely a societal issue. Different factors shape the impact, circulation and content of such problematic speech forms. We cannot reduce the problem to the individual who is sending it or the individual who is targeted by it. The problem is much larger—it is societal. And therefore, we should address it by fathoming its scope more fully and in all possible dimensions.

Online extreme speech: contextual and dynamic

How can we address its scope? What kind of challenges are there?

Sahana Udupa: Online extreme speech is contextual and dynamic, and it increasingly comes couched in culturally coded language, suggestive text and seemingly funny epithets. It is therefore important to go beyond legal-regulatory approaches and see this as a social-cultural phenomenon.

Once we approach this as a social-cultural problem, a number of questions open up. For instance, what motivates people to engage in this kind of behavior? How can we understand online users as historically situated actors and not just victims or perpetrators? This leads on to questions about social media architectures, and the ways in which the interfaces and incentives built into them can encourage polarizing and dehumanizing language.

There are also vast challenges and gaps in the content moderation practices of social media companies as well as governments. And finally, we need to ask how all of these are shaped by longer historical forces. It is not true that hate speech is a completely new phenomenon. It existed before digital communication.

Collaborative model for AI-assisted content moderation

Artificial intelligence is used to detect extreme speech on the web and remove it. How does this work? Is it successful?

Sahana Udupa: Artificial intelligence or automated content moderation is now being touted as a key means to address extreme speech online. This is largely because of the speed and volume of extreme speech that circulates online. AI systems are expected to bring scalability and reduce costs, and they are also believed to reduce human emotional effort involved in removing objectionable content because identifying such content can be extremely stressful for human moderators. But the assumption that AI systems can be neutral and efficient is highly optimistic and even misleading. What kind of AI systems we need is a big question.


How are you addressing this challenge in your project?

Sahana Udupa: Our project is trying to intervene to make AI-assisted content moderation people-centric in ways that include communities. AI systems for extreme speech detection are not globally applicable at the moment. They are not inclusive. There are vast gaps in language capacities.

We are developing a collaborative model in which factcheckers, who have reasonable contextual knowledge of online discourses as well as proximity to professional journalism, enter into a facilitated dialogue with ethnographers and AI developers. That will lead to what we call a community-based human-machine process model with a curated space of coding.
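To make the idea of a curated coding space a little more concrete, the sketch below shows, in plain Python, one possible shape of such a workflow. It is not the AI4Dignity implementation: the annotator names, label categories and agreement threshold are placeholders chosen for illustration. The point is simply that passages labeled consistently by community fact-checkers can become candidate training data, while contested passages are routed back into the facilitated dialogue between fact-checkers, ethnographers and AI developers.

    from collections import Counter
    from dataclasses import dataclass, field

    @dataclass
    class Post:
        """A social media passage with labels from several community annotators."""
        text: str
        labels: dict = field(default_factory=dict)  # annotator id -> label

    def route(posts, min_agreement=0.75):
        """Split posts into consensus items (candidate training data) and
        contested items (sent to the facilitated dialogue for discussion)."""
        consensus, contested = [], []
        for post in posts:
            counts = Counter(post.labels.values())
            top_label, votes = counts.most_common(1)[0]
            if votes / len(post.labels) >= min_agreement:
                consensus.append((post.text, top_label))
            else:
                contested.append(post)
        return consensus, contested

    # Hypothetical annotations from community fact-checkers (names and label
    # categories are placeholders, not the project's actual coding scheme).
    posts = [
        Post("example passage 1", {"factchecker_a": "derogatory", "factchecker_b": "derogatory"}),
        Post("example passage 2", {"factchecker_a": "neutral", "factchecker_b": "exclusionary"}),
    ]
    training_data, needs_dialogue = route(posts)
    print(len(training_data), "consensus items;", len(needs_dialogue), "items for facilitated dialogue")

In such a setup, the consensus set could be used to train or evaluate a detection model, while the decisions reached in the dialogue would feed back into the coding guidelines over time.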

In our policy brief, we have recommended that social media companies and governments should institutionalize people-centric frameworks by reaching out to communities and incorporating feedback to shape the future development of AI-assisted content moderation. Social media companies cannot think of community engagement as an option or an act of philanthropy, nor can they limit such engagement to critical events like elections. They should incorporate community input as part of the regular job mandate of in-house AI developers and build a transparent system to engage communities and academic researchers continuously.

Is there a danger that AI could restrict freedom of opinion?

Sahana Udupa: Yes, this is a real danger. Governments and corporations are already investing very heavily in AI systems. There is evidence that repressive governments are using AI systems to clamp down on dissent and to restrict the fundamental rights of participation and expression. Misuse of AI systems can threaten the safety of citizens and especially target vulnerable groups. Therefore, we are advocating that these processes should be people-centric, and that we should channel our energies towards nurturing independent spaces where dialogue between AI developers, communities and academic researchers can occur beyond government and corporate spaces.
