“We are not taking part in the scaling race with our algorithms”

13 Nov 2023

Why the development of artificial intelligence is at an inflection point: An interview with AI scientist Björn Ommer.

“AI will soon be popping up in almost all areas relevant to society”, says Prof. Björn Ommer. | © Fabian Helmich / LMU

Large language models (LLMs) are the algorithms at the heart of today’s artificial intelligence (AI). To generate texts, images, and audio files, they require vast quantities of training data. A marketing battle is currently raging in which the size of a model is taken as a measure of its performance: the bigger, the better. What this overlooks, though, is the fact that LLMs require not only a lot of data but also a great deal of computing power and a huge amount of energy to operate the server farms they run on: terabytes, teraflops, and terawatts on a scale that only the digital giants can afford to pay for, or have any wish to. And the computing used for training and applying AI is anything but a green technology.

Professor Björn Ommer, Chair of AI for Computer Vision and Digital Humanities/the Arts at LMU and known for his lean text-to-image generator Stable Diffusion, sees more than just sustainability concerns calling into question the current battle of the LLMs as they each seek to outdo one another.

Professor Ommer, you are a pioneer in the development of today’s AI applications. Your name is associated with Stable Diffusion, the text-to-image model that anyone can use to create images from a text prompt. What is special about your development is that the trained model can run on PCs and even smartphones rather than requiring a supercomputer, as previous approaches did. Compared with all the hype around the large models trying to be the ‘greatest,’ the direction of your research points more towards ‘small.’

Björn Ommer: Apparently, the ‘large’ in large language models has become a measure of quality. People think that artificial intelligence will become more and more powerful simply by getting bigger. But the development of AI is currently at an inflection point: the models are no longer getting more intelligent simply by being scaled up with ever more computing power and ever more training data. This was the thought guiding us as we developed Stable Diffusion, and it means that more scientists can now take the future development of this technology forward with lower hardware and data requirements.

Recently, there seems to have been a rethink of the developments to date. People are asking: do we want to pay for the gigantic amounts of resources we are using here, and who actually runs the server farms? In other words, what does it all cost and who benefits from it?

Artificial intelligence always comes at a price. Training an AI demands a lot of resources at the outset. Once developed, however, AI is also being used to save energy: it is becoming the enabler of new processes, which in turn are more efficient than the old ones. But there is something else: usually buried in the operators’ small print is the fact that any data entered by users will be used for the operators’ own purposes, for example to train the LLMs. Companies with extremely sensitive data, such as in the medical sector, cannot tolerate that. That’s another reason why we didn’t want to take part in the scaling race with our algorithms.

A question of computing power

How do you keep your models smaller?

As I said, the large models were indispensable at first, but they no longer are. For a model to work across the breadth of the visual world, or for a text model to cover the breadth of what can be said in language, it has to have ‘seen’ and ‘read’ a lot. On that same broad basis, however, the models can then be adapted to specific use cases and scaled down accordingly.
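To make this ‘adapt and scale down’ step more concrete, here is an illustrative sketch of parameter-efficient fine-tuning. It uses the Hugging Face transformers and peft libraries with a placeholder checkpoint; neither library is mentioned in the interview, and the sketch stands in for the general approach rather than for any specific system discussed here.

```python
# Illustrative sketch (not from the interview): adapting a broadly pretrained
# model to a specific use case by training only a tiny fraction of its weights.
# Assumes the Hugging Face `transformers` and `peft` packages are installed;
# the checkpoint name and target modules are placeholder choices.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "gpt2"  # placeholder checkpoint; any causal language model works similarly

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA inserts small low-rank adapter matrices next to selected layers and
# freezes everything else, so fine-tuning fits on modest local hardware.
lora_config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor applied to the adapter output
    target_modules=["c_attn"],  # attention projection in GPT-2; model-specific
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# From here, the adapted model would be trained on the user's own domain data
# with a standard training loop, without that data leaving local infrastructure.
```

In this setup the broad pretraining is reused as-is; only the small adapter weights are learned, which is one way a generalist model can be specialized without big-player compute.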

But the big players have already taken on the role of gatekeepers here: they determine who is allowed to access their technology and who is not. They are gatekeepers in other respects too, and that is something we should all be paying much more attention to. That’s because training and operating an AI not only requires vast amounts of data but also demands computing power from dedicated processors (GPUs). While the costs of AI have so far been measured mainly in terms of the energy and water consumed in operating and cooling the servers, and increasingly also in terms of how data-hungry the technology is, the actual computing requirements of the GPUs have not been sufficiently considered. This enormous computing power is a crucial competitive factor for the further evolution, widespread use, and monetization of AI. This became clear when the United States banned exports of the GPU manufacturer Nvidia’s top models to China. So, you can see that computing has increasingly become the determining factor in the AI race.

How to deal with sensitive data

As an AI user, you are thus doubly dependent: on the data that has been entered and on the computing power of the companies that process every AI query.

Exactly. And it’s important to point that out. If you’re using AI to help you in the medical sector, for example, you’re no longer using training data from the internet; you’re using very specific patient data with which you make a diagnosis. But if you can only do that by accessing the computers run by the big players, you are handing over sensitive data. And if AI only works when you have to go through the big players and their computing power, that makes you dependent. That’s why I’m a strong advocate for local companies with lots of proprietary data developing their own smaller, independent, and autonomous solutions to meet their specific needs.

Yet even then, companies will still need gigantic amounts of computing power to make even smaller apps run.

The need is on several levels. The lowest level is a tangible one, and by that I mean the data centers full of GPU power. This leads to the dependencies I already mentioned. That’s why one of our goals in developing Stable Diffusion was to democratize what we call ‘foundation models,’ that is, generalist models trained across the board on text or images — in other words, to make them open source and lean enough to run on ordinary hardware after being trained. The advantage of it being open source is that the complex initial training only has to be carried out once. Subsequent fine-tuning — adaptation to specific use cases — can then be done by users with their own data, without having to share it with companies from overseas.
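As a concrete illustration of what ‘lean enough to run on ordinary hardware’ can mean in practice, the following sketch loads a publicly released Stable Diffusion checkpoint with the Hugging Face diffusers library and generates an image locally. The library and the checkpoint id are assumptions made for this example, not something specified in the interview.

```python
# Illustrative sketch (assumptions: the `diffusers` and `torch` packages are
# installed and a consumer GPU is available; the checkpoint id is one example
# of a publicly released Stable Diffusion model).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # example open checkpoint
    torch_dtype=torch.float16,        # half precision keeps memory usage low
)
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()  # trades a little speed for lower GPU memory use

# The prompt is processed entirely on local hardware; nothing is sent to an
# external provider, which matters for the sensitive-data point made above.
image = pipe("a watercolor sketch of the Munich skyline at dawn").images[0]
image.save("skyline.png")
```

Because the heavy initial training has already been done once and released openly, running or further adapting such a model requires only modest local resources.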

But you are still not completely independent.

And that is the crux of the matter. Germany needs a centralized computing infrastructure so that AI can be tailored to individual needs here. AI will soon be popping up in almost all areas relevant to society, and demand is already growing enormously. As we know, Germany recently experienced what it means to be dependent on a single external supplier in the energy sector. So where will we be if Germany makes its growing computing needs dependent on private-sector companies overseas? A decision needs to be made to upgrade data centers accordingly, and to expand and build them quickly. This problem exists now; it is not a thing of the future. It’s about not missing out on the exponential growth. There’s no stopping the evolution of AI across the whole world now.

For more information, see:

Harvard Business Review: How to Make Generative AI Greener

Forbes Magazine: Bigger Isn't Always Better When It Comes To Generative AI
