News

The Interpreter: “Machine learning”

8 Dec 2020

Thomas Seidl on “machine learning”

Server room at LMU | © LMU

Some scientific terms manage to make their way into everyday speech. Here, we ask LMU researchers to tell us what they mean – to define them, and to outline how they became popular.

“In a formal sense, learning can be understood as creating a mathematical function. Situations, observations, questions, tasks and operations drawn from the real world – the input – are mapped onto, or transformed into, the appropriate answers, decisions and actions – the output. How do computers learn to do this? In the early days, particularly in the context of speech recognition, they were manually programmed to follow specific sets of grammatical rules. But natural speech is full of exceptions and shortcuts – and is far too complex to be captured by even highly refined sets of rules. The systems now in use for speech processing, by contrast, automatically learn the correct functions by analyzing carefully selected passages drawn from a wide range of texts, which serve as training examples. These days, this is usually done with deep-learning algorithms based on artificial neural networks.
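The idea of learning a function from input/output examples can be illustrated with a toy sketch – a deliberately simplified illustration of the general principle, not the speech systems described here. The program below is given a few example pairs and adjusts the parameters of a simple linear model by gradient descent until its outputs match the examples.

```python
# Toy illustration: "learning" a mathematical function from examples.
# The true relationship behind the examples is y = 2x + 1; the program
# does not know this and must infer it from the data alone.

examples = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # (input, output) pairs

w, b = 0.0, 0.0        # parameters of the model y = w*x + b, untrained
learning_rate = 0.05

for _ in range(2000):  # repeatedly nudge w and b to shrink the error
    for x, y in examples:
        error = (w * x + b) - y       # feedback: how wrong is the model?
        w -= learning_rate * error * x
        b -= learning_rate * error

# After training, w is close to 2 and b close to 1: the model has
# recovered the underlying function purely from examples.
print(w, b)
```

Real systems use the same principle at vastly larger scale: millions of parameters in a neural network instead of two, and huge text corpora instead of three number pairs.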

The success of this approach led to the rise of machine learning. For example, many tasks in the services sector can now be partially automated with the aid of chatbots. Machine learning means that the computer learns from examples – it gains experience, so to speak – by systematizing information and deriving rules from it. The essential model here is the human brain. The decisive factor, however, is the quality of the data supplied to the machine: What sorts of training data should be fed into the system, and in what order? As a result, the nature of the engineering has changed radically; it focuses not on the processing functions themselves, but on the composition of the input. At the core of machine learning lies the principle of self-reinforcement – learning by trial and error. The reinforcement effect rests on the evaluation of feedback, which signals the success or failure of each processing step and enables the system to adapt accordingly.
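The trial-and-error principle can be sketched in a few lines. This is a hypothetical toy learner, not any system mentioned in the text: the program repeatedly picks one of two actions, receives a success-or-failure signal as feedback, and shifts its estimate of each action toward that signal, so the action that is reinforced more often comes to dominate.

```python
# Toy sketch of learning by trial and error with feedback.
import random

random.seed(0)  # fixed seed so the run is reproducible

success_rate = {"A": 0.2, "B": 0.8}  # hidden from the learner
value = {"A": 0.0, "B": 0.0}         # the learner's estimate per action

for _ in range(500):
    # Mostly exploit the best-known action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(value, key=value.get)

    # Feedback signals success (1.0) or failure (0.0) of this step.
    feedback = 1.0 if random.random() < success_rate[action] else 0.0

    # Reinforcement: adapt the estimate toward the feedback signal.
    value[action] += 0.1 * (feedback - value[action])

print(max(value, key=value.get))  # the action the learner judges best
```

The small exploration rate matters: without it, the learner could lock in on a poor early choice and never discover the better action.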

This raises the question of whether it will always be possible to fully comprehend what autonomous learning systems are actually doing. To ensure that this remains the case, many researchers are now working towards what is referred to as “explainable AI”. After all, this is a very important issue in applications such as the maintenance of machinery, or the choice of a course of therapy in medicine.

Machine learning has a greater capacity to transform all areas of production and services than any previous technology. In all sectors of the economy, efforts are underway to find ways of automating data processing using these strategies. The potential range of applications remains vast.

The advent and application of artificial intelligence have spurred hopes of rapid progress. On this wave of optimism, its technological basis – machine learning – suddenly became more popular than ever before. Discussions on AI have been going on for decades. But periods of spring-like growth have regularly been followed by the onset of winter, as expectations proved too optimistic to be fulfilled in practice. In my judgment, we are now in a phase in which much of the technical side of things works well. Of course, there are many unsolved questions. But the problems no longer concern just the basic functionality of AI. They have more to do with how we can best make use of it in an open democratic society.”

Recorder: math

Prof. Dr. Thomas Seidl holds the Chair of Database Systems and Data Mining at LMU, and is one of the Directors of the Munich Center for Machine Learning (MCML).