AI Lectures: Insights into artificial intelligence - Understanding and explaining decisions

7 Oct 2021

LMU mathematician Gitta Kutyniok explains how AI works, how it arrives at its decisions, and what makes it so successful.

Artificial intelligence as a self-learning technology is becoming increasingly important in every area of society and all scientific fields. The digital transformation that accompanies it has a major impact on the world we live in and our working environment, now and in the future.

In her lecture as part of the new AI Lectures series at LMU, Professor Gitta Kutyniok, Chair of Mathematical Foundations of Artificial Intelligence, presents an introduction to these new methods. She explains why they are so extremely successful and discusses to what extent it is currently possible to understand how artificial intelligence reaches the decisions it makes. Gitta Kutyniok also suggests ways in which transparency, explainability and security around artificial intelligence can be achieved from a mathematical perspective.


Professor Gitta Kutyniok: „Einblicke in die Künstliche Intelligenz: Entscheidungen verstehen und erklären“ („Insights into Artificial Intelligence: Understanding and Explaining Decisions“; lecture held in German)

Tuesday, 19 October 2021, 6:15 – 7:45 p.m.


More information on the AI Lectures


Three questions for Professor Gitta Kutyniok

Photo: Professor Gitta Kutyniok, Chair of Mathematical Foundations of Artificial Intelligence in the Faculty of Mathematics, Computer Science and Statistics


What is it that has made AI such a success story in recent years?

Gitta Kutyniok: AI is not actually a new invention. Way back in 1943, Warren McCulloch and Walter Pitts set themselves the goal of developing artificial intelligence. In order to find an algorithmic approach to learning, the two scientists came up with the idea of replicating the functionality of the human brain. The result was artificial neural networks. However, these were not particularly successful at the time for two reasons: 1) large amounts of data were not available on which to train them, and 2) there was not enough computing power to work with deep networks—networks consisting of many layers. We are now living in the data era and have high-performance computers at our disposal, and it is precisely those two aspects that are behind the success story of AI in recent years.
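The McCulloch-Pitts model mentioned above can be illustrated in a few lines. The following is a minimal sketch (an illustration of the 1943 idea, not material from the lecture): a single artificial neuron with binary inputs, fixed weights, and a hard threshold, which already suffices to compute simple logical functions.

```python
# Minimal sketch of a McCulloch-Pitts neuron (illustrative, not from the lecture):
# binary inputs, fixed weights, and a hard activation threshold.
def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With weights (1, 1), a threshold of 2 implements logical AND,
# while a threshold of 1 implements logical OR.
AND = lambda a, b: mp_neuron((a, b), (1, 1), threshold=2)
OR = lambda a, b: mp_neuron((a, b), (1, 1), threshold=1)

print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
print([OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 1]
```

A "deep" network in the sense described above simply stacks many layers of such units (with learned rather than fixed weights), which is why training them demands the large datasets and computing power that only became available recently.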

How can we better understand what AI actually does and why it sometimes works so well?

Kutyniok: One successful way of understanding AI decisions is by using so-called explainability methods. These aim to open the black box that AI algorithms still represent today, in the sense that they identify what parts of the input data the AI primarily used to make a given decision. Let me give you an example: When an AI algorithm evaluates an applicant as particularly suitable for a job vacancy, an explainability method can indicate which parts of the person’s prior knowledge primarily led the AI to make that decision. Such methods are an important first step, though many more will be needed, toward recognizing, understanding and preventing incorrect decisions.
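One of the simplest explainability ideas of the kind described here is occlusion-style attribution: remove each input feature in turn and measure how much the model's output changes. The sketch below is a hedged illustration of that general idea; the toy "suitability" model and the feature names are my own assumptions, not anything from the lecture.

```python
# Illustrative occlusion-style attribution (toy model and feature names
# are assumptions for the sketch, not from the lecture).
def score(features):
    # Toy "suitability" model: a weighted sum of skill indicators.
    weights = {"python": 3.0, "statistics": 2.0, "teamwork": 0.5}
    return sum(weights[k] * v for k, v in features.items())

def occlusion_attribution(features):
    """Importance of each feature = score drop when it is zeroed out."""
    base = score(features)
    importance = {}
    for name in features:
        occluded = dict(features, **{name: 0.0})
        importance[name] = base - score(occluded)
    return importance

applicant = {"python": 1.0, "statistics": 1.0, "teamwork": 1.0}
print(occlusion_attribution(applicant))
# -> {'python': 3.0, 'statistics': 2.0, 'teamwork': 0.5}
```

For this linear toy model the attributions simply recover the weights; for a real black-box model, the same perturb-and-compare scheme gives a first indication of which inputs drove a decision, which is exactly the kind of insight explainability methods aim to provide.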

What can we do to understand how algorithms learn?

Kutyniok: The actual learning process is still essentially a mystery. We do know the (algorithmic) rules by which learning takes place. But with the data used for the learning process being so highly complex—it might consist of millions of images, for example—it is not yet possible to predict the success or failure of a learning process, which means that learning is still based primarily on trial and error. To really understand in detail how algorithms learn, it is vital to have fundamental knowledge of the mathematical basis underlying the training process. Indeed, this is an area of highly intensive research activity right now.

Professor Gitta Kutyniok holds the Chair of Mathematical Foundations of Artificial Intelligence in the Faculty of Mathematics, Computer Science and Statistics, which is one of the AI professorships funded under the State of Bavaria’s High-Tech Agenda.
