Interview: “It’s well worth it for companies to experiment with AI at an early stage”
25 Mar 2026
Economist Florian Englmaier on the use of generative AI in companies
Generative AI, which produces texts, images, or analyses, has become firmly established in the world of work. Florian Englmaier, Professor of Organizational Economics at LMU, researches how companies can introduce this technology in a meaningful way without getting bogged down in its complexity. In this interview, he explains why a structured approach is crucial for companies, what mistakes they should avoid making – and how employees can benefit from working with AI.
Why should companies embrace generative AI – if they’ve managed well without it up until now?
Florian Englmaier: Generative AI is a new general-purpose technology, like electricity once was. When electricity was first discovered, no one could foresee what products would go on to emerge from it. Today, we’re once again faced with a technology whose potential we’re only just beginning to understand. For companies, this means it’s well worth learning how to use it at an early stage and, above all, in a structured way – and clarifying exactly what role AI can play in their own value creation.
How is generative AI already being used today?
Its use is particularly prevalent in software development, where programming tasks can be performed much faster with support from AI. New possibilities are also emerging in production: A large tech company, for example, is integrating AI interfaces into production machines that can be operated using natural language. Employees can now solve problems that would previously have required an expert – this enhances traditional jobs in production and makes collaboration more efficient.
There’s also lots of potential in internal organization. A major manufacturer of branded products compared the performance of traditional brainstorming teams with human/AI pairs, which produced results that were at least as good. Some of the participants actually felt more comfortable working with the AI because they were more likely to have the confidence to ask what they perceived to be simple questions.
What changes do you think this will bring about in work and organizational structures?
AI can perform routine activities, creating space for people to focus on more demanding tasks. At the same time, you can have flatter hierarchies if less experienced employees can use AI to enable them to perform specialized activities. In addition, AI makes knowledge more visible within a company. That’s because in many organizations crucial know-how only exists in the brains of a few individuals. If this knowledge is organized in a structured way and made accessible, this will improve learning, collaboration and the quality of decisions that are made.
Your research is exploring how companies can systematically experiment with generative AI. How can they make sure they’re successful?
The use of generative AI should be understood as an ongoing, structured system of experiments. It’s not just about examining the question of whether AI works, but also how it works, for whom, and under what conditions. This process always starts with a clearly defined problem: What improvements are expected – and how can you tell if they’ve been achieved? In software development, for example, it’s easy to measure any changes in processing times or error rates that occur when developers work with an AI copilot.
This is followed by the design of the methodology. Companies need testable hypotheses, suitable comparison groups and a timescale that also highlights medium-term and longer-term effects. If it’s difficult to implement formal control groups, it’s helpful to adopt staggered rollouts. For example, production lines at a technology company can be equipped with new AI interfaces one after the other – and that will automatically create opportunities to compare the results. This will reveal any changes in faults, downtimes or the number of required interventions from experts.
Lastly, the effects should be recorded systematically – with hard key performance metrics, but also by conducting surveys of employees, asking them how satisfied they are with the new AI tool, for example. This kind of feedback supplements quantitative data and shows the technology’s actual impact in people’s daily working lives.
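The staggered rollout described above can be sketched in a few lines of code. This is a minimal illustration with invented numbers (none of the data comes from the interview): production lines receive the AI interface at different times, so at any given week the not-yet-equipped lines serve as a natural comparison group for the equipped ones.

```python
# Hypothetical weekly fault counts per production line (illustrative only).
# rollout_week: the week in which the AI interface was switched on.
lines = {
    "line_a": {"rollout_week": 3, "faults": [9, 8, 9, 6, 5, 5]},
    "line_b": {"rollout_week": 5, "faults": [8, 9, 8, 9, 8, 6]},
    "line_c": {"rollout_week": 7, "faults": [9, 9, 8, 9, 9, 8]},
}

def mean(xs):
    return sum(xs) / len(xs)

def treated_vs_control(lines, week):
    """Average faults in a given week, split by rollout status.

    Lines equipped by that week count as 'treated'; lines still
    waiting for the rollout form the comparison group.
    """
    treated = [d["faults"][week - 1] for d in lines.values()
               if d["rollout_week"] <= week]
    control = [d["faults"][week - 1] for d in lines.values()
               if d["rollout_week"] > week]
    return mean(treated), mean(control)

# In week 4, only line_a is equipped; lines b and c are not yet.
t, c = treated_vs_control(lines, 4)
print(f"week 4: treated avg {t:.1f} faults, not-yet-treated avg {c:.1f}")
```

In a real evaluation one would of course use more lines, longer time windows, and a proper difference-in-differences design rather than a single-week comparison – but the core idea is exactly this: the staggered schedule itself creates the comparison groups.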
What typical mistakes should companies avoid?
One frequent mistake is underestimating what’s known as the productivity J curve. It describes the phenomenon where productivity initially dips when a new technology is first rolled out before it then starts to rise – because processes get more complex and employees take a little time to get to grips with them. If companies don’t consider this aspect, they may quickly experience disappointment and frustration.
Unless all employees receive systematic training, tensions can also arise – between those who are keen users of AI and those who remain skeptical. And without suitable comparison groups, it’s very difficult to assess whether a supposed success is in fact attributable to the new technology or simply down to early adopters who are more motivated than the average person.
If companies don’t have secure in-house AI systems or copilots, there can also be data protection risks – for example, if employees upload internal documents to AI systems that are open to anyone. And last but not least, there’ll be a growing level of uncertainty if companies fail to take seriously their employees’ concerns that AI will replace them and leave them without a job.
To what extent do you think this concern is justified?
In the short term, I think the risk is frequently overestimated. It’s often only once people experiment with AI for themselves that they realize it’s a tool to support them rather than replace them. In many areas, it primarily reduces the workload, especially where there’s a shortage of workers. AI can perform documentation tasks, make technical information accessible more quickly, or answer routine inquiries – tasks that many people find burdensome and that take up a great deal of time.
But in the long term, there’ll probably be more changes than we expect today. Not in the form of sudden job losses, but because AI will gradually take over certain activities – from routine legal assignments and preparing information to standard administrative processes. At the same time, humans will be tasked with new roles that demand greater judgment, teamwork and technical understanding. This means there’ll be a gradual shift in some aspects of the world of work.
Can companies introduce generative AI on their own or do they need support from experts?
Many companies are certainly capable of taking the first step themselves. Large organizations often already have their own data science departments, even if there’s not always an awareness of how important it is to conduct structured experiments. In small and medium-sized enterprises, a lot depends on the mindset: The crucial factor is not so much having spectacular technology but being willing to systematically clarify which problem needs solving, what success looks like and what data is needed to achieve this.
Nevertheless, collaborations can be useful – for example, with academic partners, to ensure a sound methodology or to clarify any issues surrounding data protection.
At LMU, we’ve set up the Center for Predictive People Analytics, where we support companies in establishing the methodological basis for sustainable transformation. We’re also planning to adopt an even more systematic approach in future to transferring scientific findings from the field of organizational economics. To help us do this, we’re establishing the cross-faculty Center for Organizational Research and Evidence (CORE). By offering workshops and providing close support with solving problems – from conception through to implementation – we want to help organizations make effective, evidence-based decisions that are rooted in science. For 2026, our focus topic is skills-based organizational design, which is closely related to AI adoption and to addressing changing skills requirements.
Which developments will cause companies to change the most in the coming years?
Decisions will be based more heavily on data than on gut feeling – and will need to be judged against the data. At the same time, generative AI will increasingly make its way into the physical world and transform production processes, further blurring the boundaries between industrial enterprises and tech companies. But one key task that will remain is making human knowledge available in a structured form. That’s because companies need to know what they can do today if they want to make sensible plans to ensure they have the skills they’ll need tomorrow.
Florian Englmaier is Professor of Organizational Economics in the Department of Economics at LMU. His research focuses on organizational economics; he studies primarily agency problems within organizations, from both a theoretical and empirical perspective. From 2021 to 2024, he was Chairman of the Committee for Organizational Economics in the German Economic Association, and he serves on the Executive Board of the Society for Institutional and Organizational Economics.
Since fall 2025, Florian Englmaier has been leading the “Future of Bureaucracy” research focus at the Center for Advanced Studies at LMU.