Artificial Intelligence and responsibility

30 Sept 2021

Philosophy student Felicia Kuckertz has explored the impact of advances in artificial intelligence on the concept of moral responsibility, in a study that received an LMU Student Research Award.

A military robot makes its way through the undergrowth. Its mission is to eliminate the leader of a band of terrorists. But the autonomous machine kills a group of children instead. Who bears moral responsibility for their deaths? The robot, its developers, its controller, those who ordered and planned the mission, the government, the politicians – or none of the above? This is the question that Felicia Kuckertz tackled in her Bachelor's thesis in Philosophy.

Technological innovations, especially in the field of artificial intelligence (AI), often raise challenging questions. One of these relates to the attribution of personal responsibility for the undesirable consequences of their use. The issue has come to public attention primarily in the context of the development of autonomous vehicles. Notably, contributions to the debate surrounding the application of AI have come not only from specialists in computer science and engineering, but also from philosophers.

Where does artificial intelligence meet philosophy?


For Felicia Kuckertz, the intersection between artificial intelligence and her discipline lies in the area of practical philosophy. She therefore chose the field of Computer Ethics for intensive study in her final year. “I wrote various essays and study papers on the subject, and I became more and more interested in the issue of responsibility in relation to new technology.” She had discovered the ideal theme for her Bachelor’s thesis. “It’s fascinating to follow how new technologies can complicate the meanings of traditional concepts – such as moral responsibility – which philosophers had until now regarded as clearly defined,” she says. This consideration ultimately led her to the question she wanted to answer in her thesis: How can one rationally attribute moral responsibility in cases in which robots or other autonomous entities controlled by AI-based software do harm? “In order to sharpen my perception of the issues involved, I quite deliberately decided to focus on the extreme example of a potentially lethal military robot,” she explains.

Who bears the moral responsibility when an autonomous machine causes harm?

In addition to her own perspectives, Felicia Kuckertz draws on existing approaches developed by other philosophers, in particular the one adopted by LMU's Professor Julian Nida-Rümelin (who has since retired). “Against the background of his viewpoint on the concept of responsibility and the premises upon which it is based, I concluded that, in my hypothetical scenario, no moral responsibility can be assigned to the military robot itself.” So where does the responsibility for the fatal outcome then lie? This is by no means an easy question to answer, and the positions taken by different philosophers diverge strongly. “One of the central texts devoted to this problem, which my argument sets out to refute, postulates that, in a scenario involving an autonomously acting military robot, moral responsibility for the outcome cannot be attributed to anyone at all, which effectively leaves one with a ‘responsibility deficit.’”

But does the idea of a responsibility deficit not open a path to wholesale irresponsibility? Kuckertz agrees that it does. “This was actually one of the considerations which prompted me to seek arguments that would refute the notion of a responsibility deficit, which I myself regard as unsatisfactory,” she says.


In the course of her investigation, she developed a number of arguments for the thesis that moral responsibility in her scenario can indeed be attributed to those who participated in the development of the robot, and to those who made it possible for the machine to be deployed in a military operation. Following a discussion with her academic supervisor – a philosopher of politics – she went on to extend the range of her analysis to the political sphere, to governments, and ultimately to society as a whole.

“Finally, I was able to show that in this case moral responsibility for the outcome can be attributed to all of these groups, on a variety of grounds,” Kuckertz explains. “So the implication of my analysis is that the philosophical challenge raised by the use of autonomous military robots does not lie in a putative responsibility deficit. On the contrary, the problem arises from the fact that responsibility can be legitimately assigned to numerous actors – both groups and individuals – and is in effect dispersed and diffused.”
