“KI-Lectures”: The question of responsibility

21 Dec 2021

LMU researchers discuss the ethical challenges of artificial intelligence as part of the “KI-Lectures” series.

  • The debate centers on the question of who is responsible for AI-controlled actions and what makes AI trustworthy.
  • Responsibility cannot be delegated to AI itself because AI is controlled by algorithms and is incapable of making its own decisions. What we should be talking about instead is the diffusion of responsibility and collective responsibility.

In LMU’s “KI-Lectures” series of virtual lectures, LMU researchers discussed ethical aspects of artificial intelligence. The key theme was the question of responsibility. To answer this question, it is helpful to consider the difference between man and machine from a philosophical perspective, as Timo Greger, Scientific Coordinator in the Faculty of Philosophy, Philosophy of Science and Religious Studies, explained. “In summary, we can say that we humans are guided by reasons. Artificial intelligence, on the other hand, is controlled by algorithms,” says Greger. “This categorical difference puts us in a position to pin down who bears responsibility in a given case, say in the case of discrimination or mistakes by artificial intelligence.”

In connection with AI, the question of responsibility comes up primarily in the context of the consequences of its use, particularly when mistakes are made. Award-winning graduate Felicia Kuckertz discusses this topic in the context of military robots and other fully autonomous systems. “In order to bear responsibility, a subject must be capable of acting in the classical sense, i.e. capable of making decisions,” says Kuckertz. In her view, none of this applies to AI, so responsibility for an action cannot be delegated to AI itself. Instead, she sees a collective responsibility shared among different groups of people, such as developers, manufacturers, political and societal actors, and the parties directly involved.

To ensure that as few errors as possible occur when using autonomous vehicles or care robots, for example, AI must be trustworthy. Fiorella Battaglia, Priv.-Doz. (visiting lecturer) at the Chair in Philosophy and Political Theory, explained that this notion is tied to normative expectations: “Trustworthy AI involves three components: It must be lawful, ethical and robust.” In practice, however, tensions can arise, for instance if the system is not transparent or if it leads to discrimination.

The lectures and debate are moderated by Martin Wirsing, Professor of Computer Science at LMU and a distinguished expert in programming and software engineering, and are available now on LMU’s YouTube channel.