From care robots to military drones: Ethical challenges posed by artificial intelligence
Artificial intelligence (AI) is increasingly influencing our everyday actions – which means it is becoming a key subject of ethical debate. Questions include: Are the decisions generated by AI free of discrimination? Is the use of AI-controlled robots in areas like nursing legitimate, sensible or perhaps even ethically imperative? Who or what takes responsibility when a fully autonomous AI system performs a wrong or even prohibited action? When we say an AI “decides” or “acts”, what does that mean – and is an AI even capable of such things? And finally, is it possible at all to design a “moral AI”, and what theoretical approaches does current research offer?
LMU asked two of its scholars and one graduate to share with the public their insights and their approaches to the ethical challenges of artificial intelligence as part of the AI Lectures.
The topic will be discussed by
- Fiorella Battaglia, visiting lecturer at the Chair in Philosophy and Political Theory,
- Timo Greger, scientific coordinator and joint project leader of “AI and Ethics” in the Faculty of Philosophy, Philosophy of Science and Religious Studies, and
- Felicia Kuckertz, an LMU graduate who was awarded a research prize for her bachelor’s thesis on “AI-powered military robots and moral responsibility”.
Moderating the discussion will be
- Martin Wirsing, Professor of Computer Science at LMU and a sought-after expert in the field of programming, software engineering and development.
The event will take place online via Zoom. Advance registration is requested; the registration link will be published approximately 14 days before the event begins. This and further information on the lecture series is available at lmu.de/ki-lectures.