News

KI Lecture: Data-driven policy decisions

17 Jan 2022

LMU statistician Helmut Küchenhoff will speak in the “KI Lectures” series on 25 January 2022 about how AI can be applied to data analytics.

Artificial intelligence is finding use in ever more branches of civil society. Its potential in political decision-making is also a subject of debate.

Access to data plays an important role in politics, for instance when it comes to climate policy, but also during the coronavirus pandemic. What role is AI already playing in data analytics? What benefits could artificial intelligence hold in this context? Could algorithms be used to make better decisions?

These very topical questions will be addressed in the next edition of LMU’s KI Lectures.

Lecture (in German):

Professor Helmut Küchenhoff: Data-driven policy decisions

Tuesday, 25 January 2022

6:15-7:45 p.m. Register at

Three questions for Professor Helmut Küchenhoff

Prof. Dr. Helmut Küchenhoff

© Jan Greune / LMU

In politics, access to data plays an important role. What benefits could AI hold for political decision-making?

Helmut Küchenhoff: As far as I know, there are no AI systems that could directly be used for important political decisions. But what AI systems can do is take data and use it to compute forecasts or scenarios, for example, which can then significantly aid decision-making. Artificial intelligence systems are superior to humans in that they can compute all possibilities. This is well illustrated by the example of chess computers, whose moves often strike human players as surprising because they would never have thought of them themselves.

What are the requirements that artificial intelligence applications need to meet in order to play a role in political decision-making?

Helmut Küchenhoff: There are four crucial criteria. First, the results that AI systems arrive at must be transparent and reproducible. Second, the data basis on which these computations are made must be reliable. Third, there need to be reasonable optimization criteria, meaning that it must be clear what exactly it is that you want to improve with the use of algorithms, what the desired goal is. And fourth, it must be possible to identify errors. After all, even machines can be wrong in their predictions.

What can be done to ensure transparency in AI-based decision-making processes?

Helmut Küchenhoff: There is a new research direction in machine learning called interpretable machine learning. The research groups led by Gitta Kutyniok and Bernd Bischl at our institute here at LMU are conducting very intensive research in this area. The goal is to open the black box of machine learning, so to speak. We are currently unable to see what actually goes on inside artificial intelligence systems. It is not possible to understand how they arrive at their decisions. Interpretable machine learning will make it possible to see the reasons behind decisions made by artificial intelligence.
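(Editorial note: the sketch below is not from the lecture. It is a minimal illustration of one standard interpretable-ML technique, permutation feature importance, using scikit-learn; the dataset and model are invented for illustration and stand in for any tabular policy-relevant data.)

```python
# Minimal sketch: permutation feature importance, one common way to see
# which inputs a trained model's decisions actually depend on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 5 features, of which only 2 carry real signal.
X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# a large drop means the model's predictions rely on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Methods like this do not fully open the black box, but they give decision-makers a first check on whether a model's predictions rest on the factors they would expect.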

What role does AI now play in data analytics?

Helmut Küchenhoff: Artificial intelligence uses statistical methods. The point at which one ends and the other begins is therefore very fluid, and it is not at all easy to clearly separate what is AI and what is not. AI already plays an important role in data analytics in climate research. And even in a pandemic, where the data situation is very complex because many different types of data are available (on incidence rates, for example, but also on hospital occupancy rates), algorithms can provide important computations.

Professor Helmut Küchenhoff is Professor of Statistics at the Institute of Statistics and Head of the Statistical Consulting Unit (StaBLab) at LMU.

More information on the KI Lectures series