News

“AI has become transformative”

1 Sept 2025

Deciphering evolutionary relationships, helping with surveys and in medicine – how artificial intelligence is changing research.

LMU scientists from various disciplines give insights into how AI is shaping their everyday research:

Natural intelligence remains indispensable

Prof. Dr. Kärin Nickelsen sees quality assurance as a major challenge with the use of AI. | © LMU/Oliver Jung

“Artificial intelligence is neither the Messiah nor the Apocalypse for science. The changes are considerable – in terms of methodology, subject-matter, and social effects – but not without precedent. Think of the impact of book printing, for example, or lens grinding, electricity, personal computers…

The generation of functional texts is already being delegated to AI tools: translations, text correction, routine correspondence. However, for the generation of knowledge, for research design, hermeneutics, writing as thinking, and so forth, natural intelligence remains indispensable, even if the repertoire of methods is expanding – for example, when dealing with large databases (numbers, images, texts, sounds, etc.).

Universities need therefore to impart the requisite knowledge and skills and invest in the development of transparent algorithms in order to become independent of commercial interests. Perhaps the greatest challenge is in quality assurance, especially in subjects with strong pressure to publish. There is an urgent need for action here, in the search for strategies.”

Prof. Dr. Kärin Nickelsen is Chair Professor for the History of Science at LMU.

Reconstructing ancient works with AI

Prof. Dr. Enrique Jiménez is Creator of the Electronic Babylonian Library and employs algorithms to make lost works readable again. | © LMU

“Artificial intelligence is revolutionizing research into ancient written cultures. It is doing so by helping researchers overcome, among other things, two key problems of cuneiform research: the extreme fragmentation of clay tablets and their lack of context. Tens of thousands of cuneiform tablets are being made available on digital platforms like the Electronic Babylonian Library (eBL).

Building on this, AI methods facilitate the automatic comparison of texts. Algorithms for text comparison are able to systematically search thousands of fragments in museums and identify matching pieces in order to reconstruct ancient works.
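The article does not describe the eBL matching algorithms in detail. As a rough illustration of the idea only, here is a minimal sketch of overlap-based matching, in which fragments are compared by shared n-grams of transliterated signs; the fragment IDs and sign sequences below are invented for the example.

```python
from collections import defaultdict

def ngrams(signs, n=3):
    """All contiguous n-sign sequences in a transliterated fragment."""
    return {tuple(signs[i:i + n]) for i in range(len(signs) - n + 1)}

def match_fragments(fragments, n=3, min_shared=2):
    """Index fragments by their sign n-grams and report pairs sharing at
    least `min_shared` n-grams -- candidate joins for a scholar to verify
    against the physical tablets."""
    index = defaultdict(set)
    for name, signs in fragments.items():
        for gram in ngrams(signs, n):
            index[gram].add(name)
    shared = defaultdict(int)
    for names in index.values():
        ordered = sorted(names)
        for i in range(len(ordered)):
            for j in range(i + 1, len(ordered)):
                shared[(ordered[i], ordered[j])] += 1
    return [(pair, count) for pair, count in shared.items() if count >= min_shared]

# Invented fragment IDs and sign sequences, purely for illustration
fragments = {
    "K.2411": ["lu", "ul", "lik", "ma", "a", "na"],
    "BM.35405": ["lik", "ma", "a", "na", "be", "li"],
    "VAT.9955": ["szar", "ru", "tu", "ka"],
}
print(match_fragments(fragments))
```

In this toy run, the first two fragments share two overlapping sign triples and are flagged as a candidate join; the third shares none.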

Future projects are going a step further: Over the next few years, researchers at LMU will be developing an AI tool as part of an ERC project. This tool will be able to chronologically order undated tablets based on handwriting (paleography). AI is thus helping researchers open up a substantial, previously inaccessible part of human history.”

Prof. Dr. Enrique Jiménez is Chair Professor of Ancient Near Eastern Literatures at LMU’s Institute of Assyriology and Hittite Studies.

For more on AI and the reconstruction of ancient works

Hymn to Babylon discovered

Electronic Babylonian Literature: Playing with the source of world literature

AI deciphers principles of life

Prof. Dr. Karl-Peter Hopfner thinks that AI could fundamentally change scientific approaches. | © LMU/Jan Greune

“AI has become transformative in virtually all domains of molecular life sciences and is instrumental in deciphering the molecular principles of life. AI supports the analysis of experimental data, the detection of patterns, and the classification of molecular and cellular complexity across temporal and spatial dimensions.

AlphaFold exemplifies the power of multimodal learning and the fundamental correlation of amino acid sequence and structure by accurately predicting, most of the time, the three-dimensional structures of proteins and macromolecular complexes, illuminating biological functions, genetic variations, and disease mechanisms. Large-scale structure prediction deciphers complex evolutionary relationships, while generative AI methods enable the design of novel proteins and enzymes capable of performing sophisticated reactions, such as plastic recycling or enhanced genetic engineering.

Moreover, AI integrates diverse multi-omics datasets – genomic, proteomic, and metabolomic – revealing comprehensive cellular networks, identifying disease biomarkers, and guiding targeted gene therapies. Looking ahead, AI’s potential to generate novel hypotheses may fundamentally transform scientific approaches, particularly in areas requiring interdisciplinary expertise.”

Prof. Dr. Karl-Peter Hopfner is Chair Professor of Structural Molecular Biology and Director of the Gene Center Munich at LMU.

Which literary questions can AI usefully analyze?

Prof. Dr. Julian Schröter investigates what AI can do. He has discovered, for instance, that large language models are better at recognizing metaphors than at identifying metrical or phonetic qualities. | © LMU / LC Productions

“I’d like to begin my answer with a dichotomy: you can investigate either no literary question at all with AI or pretty much any conceivable question. Which extreme you tend toward depends on what you understand a ‘literary question’ to be. In hermeneutic traditions, literary questions are always questions that humans ask themselves while reading, bound up with an aesthetic experience. On this view, you cannot answer any literary question with AI, because the answer requires a subjective act of reading. What is correct about this view is that the use of AI cannot replace reading in the sense of a reading experience.

If we are talking about research into the qualities contained within texts, however, things look quite different. For a long time, people have been working on the recognition of textual features and structures with the aid of computers. This includes narrative forms (such as perspectivization), reported speech, rhyme, metaphorical use of language – everything, then, that literary studies have arduously put into concepts. Large language models have proven to be highly effective, especially for complex structures. Such approaches are useful, among other things, for studies that cover long periods with a volume of materials that cannot be mastered by individual researchers.
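Rule-based detectors of such textual features long predate LLMs. As a deliberately naive illustration of that older approach (not a tool from the research described here), a toy end-rhyme heuristic might compare line endings by spelling:

```python
import string

def end_rhyme(line_a, line_b, k=3):
    """Crude end-rhyme heuristic: do two verse lines share their final
    k letters, ignoring case and trailing punctuation? Real analyses
    work on phonetics, not spelling -- this is deliberately naive."""
    clean = lambda s: s.strip().rstrip(string.punctuation).lower()
    a, b = clean(line_a), clean(line_b)
    return len(a) >= k and len(b) >= k and a[-k:] == b[-k:]

print(end_rhyme("Tyger Tyger, burning bright,", "In the forests of the night;"))
```

Hand-built heuristics like this miss eye rhymes, half rhymes, and anything phonetic that spelling hides, which is exactly the gap the quote's LLM-based approaches address.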

These capabilities are growing so rapidly that I do not expect there will be a complexity ceiling beyond which LLMs cannot recognize textual characteristics. In a current study, however, we found that LLMs are at present considerably better at analyzing metaphors than at recognizing metrical and phonetic qualities. ‘Thinking’ in semantic analogies still comes a good deal more easily to large language models than recognizing and counting supposedly simple forms. These results are interesting in their own right. But they become even more interesting if we return to the question about reading and consider how such findings relate to our assumptions about the simplicity and complexity of human understanding.”

Prof. Dr. Julian Schröter is Professor of Digital Literary Studies at LMU’s Institute for German Philology.

Drawing conclusions with the help of AI

Prof. Dr. Daniel Gruen uses artificial intelligence to investigate the influence of dark matter and dark energy on the universe. | © LMU

“The data volumes we can collect about the universe with telescopes are growing so rapidly that not even the progress of computer hardware can keep pace. Processors and cameras are based on semiconductor technology, but with telescopes the optics, the mechanics, and the sheer number of instruments are improving as well. Accordingly, we need faster algorithms to cope with the flood of data.

Artificial intelligence solves almost perfectly in seconds what would have taken hours to calculate with previous methods. However, it is not only the quantity, but also the quality of data that is growing. The Euclid space telescope delivers images at higher resolutions than we’ve ever had before. Spectrographs collect detailed ‘fingerprints’ from tens of millions of celestial objects. With this wealth of information, we’ve far exceeded the capabilities of human theorists and modelers. How to draw the decisive conclusions from the observed details is something that AI is now best placed to teach us.”
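One common pattern behind such speed-ups (not specific to Prof. Gruen's pipeline, which the article does not detail) is the surrogate or emulator model: fit a fast approximation to an expensive computation once, then evaluate it cheaply ever after. A minimal sketch with a stand-in "slow" function:

```python
import numpy as np

def slow_model(x):
    """Stand-in for an expensive simulation (imagine hours per run)."""
    return np.sin(x) * np.exp(-0.1 * x)

# Train once on a grid of expensive evaluations...
xs = np.linspace(0.0, 10.0, 200)
ys = slow_model(xs)
surrogate = np.polynomial.Chebyshev.fit(xs, ys, deg=15)

# ...then evaluate the cheap surrogate anywhere in the trained range.
error = abs(float(surrogate(3.7)) - float(slow_model(3.7)))
print(error < 1e-3)
```

Here the surrogate is a simple polynomial fit; in astrophysics the same role is typically played by a neural network trained on simulation outputs.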

Prof. Dr. Daniel Gruen is Professor of Astrophysics, Cosmology, and Artificial Intelligence at LMU’s Faculty of Physics.

For more information about the research of Prof. Gruen:

Algorithms to peek into the universe

AI in medicine

Prof. Dr. Frederick Klauschen researches artificial intelligence in medicine and the integration of histological and proteogenomic imaging techniques for cancer research and diagnostics. | © LMU

“Machine learning methods have been used in medicine for many years for a huge variety of applications, such as the classification of gene expression patterns and image analysis. Newer AI techniques based on deep neural networks can carry out these analyses today with ever higher precision and not only classify data qualitatively, but subject it to comprehensive quantitative evaluations.

Furthermore, so-called explainable AI (xAI) makes it possible to interpret even more complex data. In this way, we can not only predict the course of diseases and responses to medication, but also identify pathological features that are relevant for the prediction.

Whereas experimental model systems were pretty much the only option available for this task before now, xAI methods are able to ‘search’ for clinically relevant cellular or molecular features in large patient cohorts. This allows the generation of hypotheses about causal disease mechanisms, from which we can derive new target structures for drug development.”
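The xAI methods themselves are not specified in the article. In the simplest, linear case, however, a model's prediction decomposes exactly into per-feature relevances (weight times feature value), the base case that attribution methods generalize to deep networks. A toy sketch with hypothetical pathology features and invented coefficients:

```python
import numpy as np

# Toy linear "risk" model with hypothetical pathology features.
# In the linear case, the prediction decomposes exactly into
# per-feature relevances (weight * feature value) -- the base case
# that xAI attribution methods extend to deep networks.
features = ["mitotic_count", "nuclear_atypia", "stroma_fraction"]
weights = np.array([0.8, 0.5, -0.3])   # invented coefficients
x = np.array([2.0, 1.0, 0.5])          # invented patient sample

score = float(weights @ x)
relevance = weights * x                 # contribution of each feature
for name, r in zip(features, relevance):
    print(f"{name}: {r:+.2f}")
print("prediction score:", round(score, 2))
```

Reading off which features carry the most relevance is what allows the search for predictive cellular or molecular features described in the quote.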

Prof. Dr. med. Frederick Klauschen is Director of the Institute of Pathology at LMU.

Generating data with AI

Prof. Dr. Frauke Kreuter is an expert in data quality and sees the benefits of AI. | © Fotostudio klassisch-modern

“Soon, no one will seriously question whether AI should be ‘allowed’ to assist in academic writing or programming. As these capabilities become embedded in everyday tools—much like spellcheckers—using them will feel entirely natural. Why wouldn’t you?

Today’s language models go well beyond writing or coding support; they are fundamentally transforming research in my field. We now use AI to generate data, collect it automatically, and even interact with respondents—broadly defined. Together with colleagues from the LMU Klinikum, we developed Ava, an AI bot designed to support vaccine information campaigns. Of course, this mode of data collection raises new questions about the quality and reliability of the information gathered.

Overall, I see the use of AI as similar to driving a car: I don’t need to build one to drive safely—but I must understand the rules of the road, and more importantly, I must know where I want to go. The same applies here. A solid understanding of methodological foundations remains essential, as does the ability to judge whether results are fit for purpose. In statistics, clearly communicating analytical findings is also critical—and if AI can assist with that, it’s a step forward. But the responsibility for any written output still lies with the author. In teaching, we continue to assess students’ true understanding effectively through oral exams and in-person assessments.”

Prof. Dr. Frauke Kreuter is Chair Professor of Statistics and Data Science in Social Sciences and the Humanities at LMU.

AI efficiently evaluates large amounts of text

Prof. Dr. Alexander Wuttke is happy with the constantly available, reliable assistance provided by AI. | © LC Productions

“Some might think of scientists as spending all day in a solitary search for the big idea that forever solves important questions for humanity. According to this picture, we ruminate in a quiet room in splendid isolation. At some point, a light bulb appears above our head and the world is understood. Sadly, our reality is much more mundane. In actual fact, we spend much of our time on administrative jobs, proofreading, drawing up travel expenses claims, and dealing with press inquiries.

This is where the immediate benefits of AI in science can be found: AI takes over many routine activities and acts less as a superhuman genius than as a reliable assistant that is always on hand.

The same goes for the core area of research, where the added value of AI is manifested not in revolutionary breakthroughs, but in practical support. In political behavior research, for example, we employ AI to efficiently evaluate large amounts of text. Let’s say we want to analyze all speeches in the German parliament for populist linguistic elements – AI accomplishes this task just as reliably as a human assistant, but faster and more cost effectively.
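The actual classification approach is not described in the article. A dictionary-based scorer is the kind of simple baseline that such AI text analysis replaces or refines; a toy sketch with an invented lexicon:

```python
# Hypothetical populism lexicon -- real studies use validated
# dictionaries or fine-tuned language models, not this toy list.
POPULIST_TERMS = {"elite", "establishment", "betray",
                  "ordinary people", "the will of the people"}

def populism_score(speech):
    """Fraction of lexicon terms occurring in the speech (0..1)."""
    text = speech.lower()
    hits = sum(1 for term in POPULIST_TERMS if term in text)
    return hits / len(POPULIST_TERMS)

speeches = [
    "The establishment has betrayed ordinary people for too long.",
    "We propose a technical amendment to the budget procedure.",
]
scores = [populism_score(s) for s in speeches]
print(scores)
```

A keyword list misses irony, context, and paraphrase; an LLM classifier handles those cases, which is why it can stand in for a human coder at scale.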

This gives us researchers valuable time for more challenging, creative tasks. At the moment, AI is not advancing science by dint of superhuman genius, but because it can emulate average human capabilities.”

Prof. Dr. Alexander Wuttke is Professor of Digitalization and Political Behavior at LMU’s Geschwister Scholl Institute.

How AI is transforming weather forecasting

Prof. Dr. Mirjana Sakradzija researches how small-scale weather phenomena affect global processes. | © LMU/LC Productions

“AI models are transforming weather forecasting: they run alongside traditional numerical weather prediction and are significantly faster, more accurate, and more cost-effective.

In climate modelling, the use of AI is also evolving rapidly. A promising approach is, for example, the use of hybrid models, which include the main physical equations of standard climate models while employing faster and more accurate AI methods to represent small-scale processes, such as clouds, precipitation, and turbulence. These are uncertain processes for which AI can reveal new relationships or knowledge that we previously lacked.
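As a structural illustration of the hybrid idea (not any specific climate model), one can write a time step as resolved physics plus a learned subgrid tendency; here the "trained emulator" is faked by a fixed relaxation, and all numbers are invented:

```python
def subgrid_tendency_learned(T):
    """Stand-in for a trained ML emulator of subgrid processes
    (clouds, precipitation, turbulence): here just a fixed relaxation
    toward 288 K, as if fitted offline on historical data."""
    return -0.1 * (T - 288.0)

def step_hybrid(T, forcing, dt=1.0):
    """One explicit time step: resolved physics (the `forcing` term,
    standing in for the model's dynamical equations) plus the learned
    subgrid tendency."""
    return T + dt * (forcing + subgrid_tendency_learned(T))

T = 290.0   # invented initial temperature (K)
for _ in range(50):
    T = step_hybrid(T, forcing=0.5)
print(round(T, 2))   # relaxes toward the 293 K equilibrium
```

The division of labor is the point: the physical equations stay explicit and interpretable, while only the uncertain small-scale term is delegated to the learned component.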

The training of AI climate models involves extensive historical datasets and requires considerable time and resources, but once trained, AI models are more efficient and reliable. Increasing the speed of computations and reducing costs are essential for climate models that are run globally and into the distant future.

To fully realise the benefits of AI applications in climate modelling, further work is needed. Key questions remain, such as how reliable models trained on past climate are for predicting future climate states that have not previously been observed, or how to estimate the uncertainties of AI models, whether they result from imperfect input data or from the chaotic nature of weather and climate.”

Prof. Dr. Mirjana Sakradzija is Professor of Physical Geography and Land-Atmosphere Coupling at the Department of Geography at LMU.
