
Proof-of-Concept Grant for Sahana Udupa

27 Apr 2020

LMU media anthropologist Sahana Udupa is developing an online tool designed to enable fact-checkers to detect and delete online hate speech more effectively. The project is supported by a grant from the European Research Council (ERC).

Professor Sahana Udupa

In 2016 Sahana Udupa, Professor of Media Anthropology at LMU, received a Starting Grant from the European Research Council for a five-year study of digital media politics. Now the ERC has awarded her a Proof-of-Concept Grant, which will enable her to develop an open-source application, based on the findings of her ongoing research, to tackle online extreme speech. With its Proof-of-Concept Programme, the ERC provides financial support for efforts to translate research findings into real-world applications.

In her ERC-funded project “ONLINERPOL”, Udupa and her team are studying digital media use, specifically in India and among the Indian diaspora in Europe. Two international workshops on “Global Perspectives on Extreme Speech Online”, which she co-organized, focused on the impact of online hate speech and disinformation on democratic societies in different regions of the world. Responding to the spread of online vitriol and disinformation, governments, commercial companies and civil-society groups have launched various countermeasures. Given the growing scale of the problem, they increasingly rely on artificial intelligence (AI) to identify and remove abusive comments and false information at speed.

The studies conducted so far by Sahana Udupa and her research group have shown that there are significant cultural differences in the way extreme speech is composed and shared online. These differences, she points out, underline that AI has to be grounded in a people-centric model: AI systems and their proponents should recognize that meaning and reflection cannot be circumvented for the sake of scalability or efficiency. “AI systems will never replace human moderators in this context, but the two levels of intervention should go hand in hand,” she says. Her new project, named AI4Dignity, is intended to make this possible with the aid of the new ERC grant.

AI4Dignity therefore turns the focus to the fact-checking role of human moderators. Fact-checkers differ from other anti-hate groups in their professional proximity to journalism: confronted with huge quantities of fraudulent information rife with abusive outbursts, they apply journalistic practices of verifying and categorizing content. “Therefore, they represent a significant professional community in the debate,” says Udupa. “These fact-checking groups, who have vast cultural knowledge of hate speech in specific contexts, lack technical tools.” This is where AI4Dignity comes in: Udupa’s team plans to develop an open-source tool that eases the burden on human fact-checkers. “We believe it is crucial to support independent fact-checkers,” says Udupa. “Owing to the high cost of technology and personnel, fact-checking organizations in many countries are increasingly controlled by large media interests or quasi-monopolistic technology companies. Our goal is to change that.”
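
To illustrate the division of labor Udupa describes, here is a minimal sketch in Python of how an automated classifier might defer ambiguous cases to human fact-checkers rather than decide everything itself. The scoring function, thresholds and routing policy are purely illustrative assumptions and do not reflect AI4Dignity’s actual design.

    # A minimal sketch of the human-in-the-loop idea described above.
    # The score function, thresholds, and routing policy are all
    # illustrative assumptions, not AI4Dignity's actual design.

    from dataclasses import dataclass

    @dataclass
    class Comment:
        text: str

    def extreme_speech_score(comment: Comment) -> float:
        """Placeholder for an AI classifier; returns a probability-like score.
        A real system would use a trained model, not a keyword lookup."""
        flagged_terms = {"vermin", "traitor"}  # illustrative only
        words = comment.text.lower().split()
        hits = sum(w in flagged_terms for w in words)
        return min(1.0, hits / max(len(words), 1) * 5)

    def route(comment: Comment, low: float = 0.2, high: float = 0.8) -> str:
        """Confident cases are handled automatically; ambiguous ones go to
        human fact-checkers, whose cultural knowledge the model lacks."""
        score = extreme_speech_score(comment)
        if score >= high:
            return "remove"        # high-confidence extreme speech
        if score <= low:
            return "publish"       # high-confidence benign
        return "human_review"      # the model defers to a fact-checker

    if __name__ == "__main__":
        for text in ["Lovely weather today", "These traitor vermin must go"]:
            print(text, "->", route(Comment(text)))

The point of the middle band between the two thresholds is precisely the one Udupa makes: where meaning is culturally ambiguous, the system hands the decision to a human rather than scaling past it.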
