News

From care robots to military drones: Ethical challenges posed by artificial intelligence

7 Dec 2021

On 14 December, LMU scholars discuss ethical aspects of AI technology in the KI Lectures series.

Applications based on artificial intelligence (AI) are already in everyday use and are increasingly influencing our actions. The use of the technology raises essential ethical questions, such as: Who is responsible for the actions of autonomous AI systems? How should we view the use of AI-controlled robots in areas like nursing? Is it at all possible to design a “moral AI” and what theoretical approaches are being discussed in the research into this?

In the “KI Lectures” series, Fiorella Battaglia, Timo Greger, and Felicia Kuckertz discuss their insights into, and approaches to, the ethical challenges of artificial intelligence. Moderating the discussion will be Martin Wirsing, Professor of Computer Science at LMU and a sought-after expert in programming, software engineering, and software development.

Discussion

Prof. Dr. Dr. h.c. Martin Wirsing (Moderation), PD Dr. Fiorella Battaglia, Dipl. sc. pol. Univ. Timo Greger (M.A.), Felicia Kuckertz (B.A.): "From care robots to military drones: Ethical challenges posed by artificial intelligence"

Tuesday, 14 December 2021, from 6.15 to 7.45 p.m.

Register here

More information on the “KI Lectures” here
Contact: ringvorlesung-lmu@lmu.de

Three questions for the scientists

Dr. Battaglia, your work concerns algorithmic recommendation systems for the prediction of actions. Are human beings predictable?

Fiorella Battaglia: As human beings, we have an ambivalent attitude towards our own predictability. We seek and deny it simultaneously. This is now all the more true since science and technology can intervene to make predictions. On the one hand, using information we have in order to generate information we do not have is the very objective of scholarship. On the other hand, human free will does not seem to be subject to any rule by which the number and quality of human actions can be determined in advance by computation. There’s a distinction to be made between situations in which anticipatory work is legitimate and can actually help us predict events that we want to avoid, such as diseases, and situations in which human actions cannot be determined by algorithms. Examples of the latter include reflective actions, such as choosing a course of study, choosing a partner, or even choosing a political party.

Mr. Greger, in the recently released feature film “I'm Your Man”, a humanoid robot simulates the perfect partner. What is the key difference between humans and machines?

Timo Greger: The difference between human and machine can be described as “humanity vs. algorithmicity.” This means that we humans have many traits that a machine does not have: We are sentient, have desires and goals, and possess free will. A machine has none of this but merely simulates it.
We should always be cognizant of this difference when deciding which technologies we want to use. For example, so-called “empathic AI” is not itself empathic but merely capable of processing or simulating human expressions of emotion. However, if we increasingly blur this boundary between human and machine, it may eventually have an impact on our everyday interactions: Is it desirable for us to realize our idea of a perfect partner with an AI robot in the future? Is it desirable for us to satisfy our sexual needs with a sex doll advertised as letting you do everything with it that a real partner would refuse?

Ms. Kuckertz, you have been looking into the moral responsibility for AI-powered actions. Can AI itself make moral decisions?

Felicia Kuckertz: The short answer is no, it cannot. In order to understand why, you first need to be aware of the connection between decisions, actions, and responsibility in humans. A human being carries responsibility for actions. An action, as distinguished from mere behavior, is the realization of a decision, which in turn represents the conclusion of a weighing of reasons. Against this backdrop, the answer to whether artificial intelligence itself can make (moral) decisions follows from the difference between (AI-powered) machines and humans: The latter have the ability as well as the freedom to be influenced by reasons, to weigh them against each other, and to make a decision based on them. However, all artificial intelligence, regardless of its level of complexity, is controlled by algorithms, which means that it generates an output from an input according to fixed rules. It is neither free nor able to deliberate and thus cannot make (moral) decisions. The fact that we nevertheless talk about AI making “decisions” results from the lack of an alternative word for it.
