Highlights of interdisciplinary AI research
11 Oct 2023
Scientists from various disciplines are researching the challenges and opportunities of artificial intelligence.
Yes or no? Some decisions can change lives. So what are the fairness implications when such decisions are automated? Christoph Kern, Junior Professor of Social Data Science and Statistical Learning, studies automated decision-making processes that involve machine learning. Examples from other countries show how problematic such approaches can be. This is all the more concerning when we consider that these methods have even been used in the justice system in the United States. And yet ADM (automated decision-making) is being used in Europe in areas such as the administration of unemployment assistance.
Christoph Kern combines know-how from the worlds of social science and information technology. He takes an empirical approach to the problem.
Future AI-based systems may navigate autonomous vehicles through traffic with no human input. Research has shown that people judge such futuristic AI systems to be just as responsible as humans when they make autonomous traffic decisions. However, real-life AI assistants are far removed from this kind of autonomy. They provide human users with supportive information such as navigation and driving aids. So, who is responsible in these real-life cases when something goes right or wrong? The human user? Or the AI assistant? A team led by Louis Longin from the Chair of Philosophy of Mind has now investigated how people assess responsibility in these cases.
The new Konrad Zuse School of Excellence in Reliable AI (relAI) will work with M.Sc. and Ph.D. students on technical aspects of AI as well as issues related to the importance of reliable AI for society.
Aspects such as security and privacy are key prerequisites for the use of AI. Concerns about reliability are a common reason why AI-based technologies meet with a lack of acceptance in society and industry.
The Zuse School relAI, opened yesterday by Bavarian science minister Markus Blume, will place greater emphasis on this perspective. Along with technical expertise, the school’s educational activities will provide future AI experts with a heightened understanding of the societal importance of reliable AI.
Human augmentation technologies refer to technological aids that enhance human abilities. They include things like exoskeletons, but also augmented reality headsets. A study at the Chair of Human-Centered Ubiquitous Media at LMU has now shown that users have high expectations of the effects of these technologies. As soon as they believe that AI is enhancing their cognitive abilities, they increase their risk-taking. And they do this independently of whether the AI is actually assisting them.
The study points to the possible existence of a placebo effect in technical applications of this nature, akin to the well-established placebo effect for medication. “At a time when people are increasingly interacting with intelligent systems, it’s important to understand a possible placebo effect so that we can build systems that offer genuine support,” says Albrecht Schmidt, Professor of Computer Science at LMU.
Barbara Plank researches natural language processing (NLP) at LMU. She works on language technologies and artificial intelligence with a strong focus on human concerns.
The computer scientist came to LMU last year from the IT University of Copenhagen and researches in the field of natural language processing (NLP) at the Centrum für Informations- und Sprachverarbeitung (Center for Information and Language Processing). She has worked, for example, on improving algorithms for text search in job ads. The goal was to make the algorithms more robust, so that job notices would surface specific job criteria and requirements more quickly and precisely, and jobseekers would receive precisely matched job ads. For this research, she received data from a variety of sources, including the Danish employment agency.
She emphasizes: “There is a vast number of possible applications out there for NLP.” This includes many potential uses in cultural and social contexts, which brings us to another important area of research for Plank: so-called minor languages in modern language technologies.
How artificial intelligence learns the rich variety of human languages: Hinrich Schütze, computational linguist at LMU, researches multilingual software that can handle even small languages.
"In our field, we’re living in fascinating times. Suddenly, intelligences have developed that nobody can really explain," says the holder of the Chair of Computational Linguistics.
One of several forms of leukemia (cancer of the blood), AML is a deadly disease. Five years after the initial diagnosis, only one-third of patients are still alive. Up to 85 percent of patients appear to be cured after intensive chemotherapy. In more than half of such cases, however, the disease returns within one to two years because the chemotherapy has not destroyed all leukemia cells. In the event of a relapse, a stem cell transplant is the only hope for curing the patient. But even then, the long-term probability of survival is less than 20 percent. New treatment options are therefore urgently needed.
Unlike other forms of blood cancer, acute myeloid leukemia (AML) cannot currently be treated with CAR-T cell immunotherapy. This is due to the lack of molecular targets through which certain immune cells could be directed specifically at AML cells. Two research teams led by Professor Sebastian Kobold together with Dr. Adrian Gottschlich from the Division of Clinical Pharmacology at LMU University Hospital and Dr. Carsten Marr together with Moritz Thomas from the Institute of AI for Health at Helmholtz Munich have now succeeded in discovering such targets. Their results have been published in the journal Nature Biotechnology.
Jochen Kuhn, Chair of Physics Education, and his team – including two junior research groups – study new approaches that seek to change physics classes through the use of multimedia learning environments. “Formulas are not the only way to understand physics. It’s often a very abstract subject, so we try to make it more accessible to learners through various kinds of visualizations, for example.”
One goal of Professor Kuhn’s research is to find out how learners successfully learn physics, plan experiments, and solve physics problems, and which visual illustrations can help them do so. It is not only the outcome of learning or problem-solving that counts here, but also the process behind it. Eye-tracking methods can be used, for example, to understand how to optimally support, encourage, and challenge schoolchildren and college students so that they learn successfully.
Professor Kuhn researches digital media that will be relevant in the future, in everyday life and in education. “Given the dynamic nature of digitalization, ChatGPT is not the first development that risks leaving education lagging behind society in some spheres. Education therefore needs to be able to keep step with developments in society,” explains Kuhn.
What can the ChatGPT language model do and what challenges does the technology pose? Scientists from various academic disciplines offer their view:
Enrique Jiménez, Professor of Ancient Near Eastern Literatures at LMU Munich, is employing a digital database and artificial intelligence as tools to make lost texts of ancient world literature readable again. 300,000 lines of text and complete digital editions of important texts of world literature are now set to be published.
It is the largest publication of texts to date in the history of cuneiform studies. In ancient Mesopotamia, people wrote in cuneiform characters on clay tablets, which have survived in the form of countless fragments. Enrique Jiménez has been working with his team in the Electronic Babylonian Literature project to digitize all surviving cuneiform tablets. The team has developed an algorithm to piece together fragments that have yet to be situated in their proper context.
“It’s a tool that didn’t exist before, a huge database of fragments. We believe it can play a vital role in reconstructing Babylonian literature, allowing us to make much faster progress,” says Enrique Jiménez. Already, the algorithm has newly identified hundreds of manuscripts and many textual connections.
Teaching machines to see is one of Professor Björn Ommer's goals. That said, seeing is only one step along the road to a different and greater challenge: autonomous understanding. “I have a keen interest,” the informatics expert says, “in discovering how we as humans make sense of what we see.” He also wants machines to learn the same thing.
Since fall 2021, Ommer has held the newly established Chair of AI for Computer Vision and Digital Humanities/the Arts at LMU. His position is attached to both the Faculty of History and the Arts and the Faculty of Mathematics, Informatics and Statistics. His working group conducts basic research into computer vision and machine learning, focusing in particular on how they can be applied within the digital humanities.
“Deep learning has come on in leaps and bounds in recent years,” Ommer explains. “We suddenly find cars that really do drive autonomously. Artificial intelligence (AI) is assisting with medical diagnostics … Many of the things we have been researching for years are now springing up as prototypes and are there for the public to see.” Yet there are plenty of new questions that keep him busy in his research.
Structures in the fog: Are machine works creative?
Interview: “We are not taking part in the scaling race with our algorithms”
Algorithms and big data are influencing our modern communication – and raising many new questions for communication science: How does constant smartphone use affect human wellbeing? What role do huge platforms play in the world of news? And what are suitable methods for researching such questions? Professor Mario Haim, a new member of staff at LMU, studies such topics – and combines traditional communication science with IT methods.
Since February of this year, Mario Haim has held the newly established Chair of Communication Science with a special focus on Computational Communication Research at LMU. One topic that Haim researches at LMU is how algorithms influence public communication. In his most recent publications, he investigated the “platformization” of news, the variety of Google hits, and questions such as how search engines can help prevent suicides. He has also published papers on stereotypes and sexism in user comments about journalists and people’s susceptibility to fake news in social media depending on their political orientation.
His second major area of study is methodological research. Like the social sciences in general, communication science has to develop and improve its methods. “We need a repertoire of methods with which we can research these new aspects of our subject in the first place.”
Although tumors in the nasal cavity and the paranasal sinus are confined to a small space, they encompass a very broad spectrum with many tumor types. As they often do not exhibit any specific pattern or appearance, they are difficult to diagnose. This applies especially to so-called sinonasal undifferentiated carcinomas (SNUCs).
Now a team led by Dr. Philipp Jurmeister and Professor Frederick Klauschen from the Institute of Pathology at LMU and Professor David Capper from Charité University Hospital as well as the German Cancer Consortium (DKTK), partner sites Munich and Berlin, has achieved a decisive improvement in diagnostics. The team developed an AI tool that reliably distinguishes tumors on the basis of chemical DNA modifications and assigns SNUCs, which previously available methods were unable to differentiate, to four clearly distinct groups. This breakthrough could open up new opportunities for targeted therapies.
Which tasks will algorithms perform in the future? How will this affect the employment market? And which jobs will be left over for humans to do? Economist Ines Helm investigates questions such as these. “The upheaval of the labor market by artificial intelligence (AI) has only just begun, and so labor economics research into its effects is still in its infancy.” Among other things, Helm explores what conclusions can be drawn from earlier upheavals caused by technological change – such as automation, computerization, and robotization – that could be relevant for the AI transformation.
Since October of last year, the expert in labor economics has been Professor of AI in Economics at LMU. In addition to labor economics, her main research interests include regional economics, public finance, and applied methods.
Professor Stefan Feuerriegel has been head of the new Institute of Artificial Intelligence (AI) in Management at LMU Munich School of Management since August 2021. Also affiliated with the Faculty of Mathematics, Informatics and Statistics, he researches the use of AI in business, public organizations, and healthcare.
"Many companies in this country already have some touchpoints with AI and are trying it out on one or two projects. But full implementation is usually lacking, as is a comprehensive AI mindset among the workforce.
Businesses in Germany, and in Europe in general, are lagging behind those in the United States and parts of Asia. This is partly due to the fact that there is no ready-made AI business product, to put it bluntly, that managers can open, like Word or Excel, with a few clicks.
AI applications first have to be tailor-made for companies, which is why graduates with AI expertise are desperately sought after.
Another major obstacle is that many processes in German companies and institutions are still paper based. There just isn’t the digital data with which to use AI."
Although the AI boom seems to be unstoppable, it is currently being slowed down by a lack of emerging talent with suitable top-quality training. This is where the new Konrad Zuse School of Excellence in Reliable AI (relAI) comes in. The school is being set up by the Technical University of Munich (TUM) and LMU in conjunction with more than 20 partners from research and industry. It will build up a network from the worlds of science and enterprise so as to attract excellent emerging talent in the AI field from every corner of the globe.
The project has been awarded funding by the German Academic Exchange Service (DAAD). Stephan Günnemann, Professor of Data Analytics and Machine Learning at TUM, is spokesperson for the school, while Gitta Kutyniok, Professor of Mathematical Foundations of Artificial Intelligence (Bavarian AI Chair) at LMU, is co-spokesperson.
Initially granted temporary funding as a project, the Munich Center for Machine Learning (MCML) has successfully established itself and will receive permanent funding jointly from the German government and the state of Bavaria. As a result, regional research into artificial intelligence (AI) and particularly machine learning will gain considerable traction within the knowledge hub of Munich and beyond.
MCML is a joint undertaking by Ludwig-Maximilians-Universität (LMU) München and the Technical University of Munich (TUM). Its goal is to further advance basic research in the field of artificial intelligence (AI), with a strong focus on practical applications. MCML was founded in 2018 as one of six AI centers of excellence throughout Germany and has been funded since then by the German Federal Ministry of Education and Research (BMBF).
It now consists of more than 50 successfully operating research groups both in basic research and in the domain of application-oriented machine learning. For the centers now definitively established after their successful evaluation, BMBF and the respective state governments will jointly provide up to 100 million euros annually in total. MCML is set to receive 19.6 million euros every year.
How we can learn more from what we know about ourselves: Statistician Frauke Kreuter wants to improve the quality of Big Data and is using artificial intelligence to do so.
Kreuter has no doubt that the enormous amounts of data that have been collected over many years in various places can be used to improve people’s lives. She wants to use the latest artificial intelligence (AI) methods to do this and train the algorithms with the best possible data. At the same time, she is aware of the concerns the field of data science can evoke. When she talks about the smartphone, she calls it the “monitoring device we all carry around with us all the time.” To her it's an excellent example of where the big opportunities—and risks—of data science lie.
Ever-improving telescopes and measurement technology in astronomy could help researchers unlock the secrets of the universe. However, the huge amounts of data bring their own challenges, as Professor Daniel Grün explains.
At 1,838 meters above sea level, LMU's highest site towers above the cloud cover: the Wendelstein Observatory. Now part of the Institute of Astronomy and Astrophysics, the telescope has been helping LMU scientists take a closer look at the universe since 2011. Daniel Grün, holder of the Chair of Astrophysics, Cosmology and Artificial Intelligence at the Faculty of Physics, also relies on data from the Wendelstein to observe the sky.
But one observatory is no longer enough today. Giant telescopes around the world now generate millions upon millions of images to unlock the mysteries of the universe, and researchers are in danger of drowning in the flood of data.
Is AI the solution? In the video, Daniel Grün talks about the challenges, problems and possible solutions of modern astronomy.
Daniel Grün: Algorithms to peek into the universe
Gitta Kutyniok holds the Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at LMU, one of the AI professorships funded under the State of Bavaria’s High-Tech Agenda.
Her research addresses precisely such questions: when systems automatically learn to make decisions based on numerous training examples, when they gain experience, systematize it, and use it to derive rules, how do they arrive at these decisions? Kutyniok wants to find out what the main criteria are in these processes. Conversely, she also aims to identify the most important components that artificial neural networks and the resulting algorithms need in order to reach the “right” decisions, and what these machines need in the way of learning material.
She also aims to unravel what guidelines need to be drafted and what the “ideal setup” for neural networks is, as well as how to ultimately maintain a practical understanding of what self-learning machines do—that is, how to maintain “explainable AI.”
Researching Artificial Intelligence at LMU: LMU Munich is a hotspot for highly innovative research in Europe, with Artificial Intelligence playing a crucial role in many disciplines.