AGI without clear customer benefits and direction
An imaginary conversation between Peter Drucker and Sam Altman, CEO of OpenAI
This is the third part of a series of fictional conversations. In it, Peter Drucker meets well-known figures and people like you and me: people with different roles in organizations who embody technological optimism, pragmatic application, historical caution, and economic reality. Today, he is talking to Sam Altman at OpenAI’s headquarters in San Francisco. The room with floor-to-ceiling windows is flooded with light and offers a beautiful view of the bay. Huge screens on the walls show live dashboards with API requests per second, training metrics, and growth curves.
The fascinating technical ambition
Sam Altman: “Mr. Drucker, welcome to OpenAI. Let me show you what we’re building here. In recent months, we’ve massively increased the performance of our models. We’re on the way to AGI, or Artificial General Intelligence: a system that can take on any intellectual task a human being can perform, and even surpass us at it. We may achieve this before the end of this decade.”
Peter Drucker: “That’s a fascinating technological ambition. I see an enormous amount of activity and computing power here on your screens. May I ask you an important question right at the outset: What exactly are you building this for? What social or economic purpose does this tool serve?”
Sam Altman: “We are building it to give humanity an incredibly powerful tool. Imagine a system that solves complex scientific problems in climate research, diagnoses serious diseases early on, and provides every person on Earth with a completely personalized education. It is a brain for the world that anyone can use.”
Peter Drucker: “A brain for the world sounds like an impressive vision. In management practice, however, we need to ground this vision. Who exactly is your customer? And what does this customer buy from you?”
Sam Altman: “Our customer is all of humanity. We are democratizing access to intelligence. Our free offering has hundreds of millions of users worldwide. We don’t take money from the masses, we don’t show ads, and we don’t pursue hidden monetization strategies. We give people the tool for free, and they figure out for themselves how they want to use it.”
The myth of all of humanity as customers
Peter Drucker: “Let’s look at this from an economic perspective. Users who don’t pay anything are not customers in the traditional sense. A genuine customer relationship is created through a transaction. The customer voluntarily spends money because they believe that the service they are purchasing is worth more than their money.
This willingness to pay alone forces a company to exercise absolute discipline. It forces you to ask the tough question of what is really valuable to this specific person. Without paying customers, you lack the most important feedback system in the free economy.”
Sam Altman: “I have to disagree with you here, Mr. Drucker. That’s a view from the industrial age. We live in a platform economy. The traditional paying customer is just one model among many. Through their constant interaction, our free users provide us with the most valuable data in the world. They train and refine our models with every sentence they enter. In the modern software industry, reach is the hard currency. Whoever controls the basic system and retains the most users defines the entire future market.
I would also like to note that we do have paying customers. Large corporations use our programming interfaces and millions of people pay for premium subscriptions. But the basis must remain a good that is accessible to all.”
Peter Drucker: “You talk about a publicly accessible good. However, a public good is a political category. Such goods are provided by governments and legitimized and controlled through democratic processes. But you run a private company with investors who provide billions and expect huge returns. You can’t be the savior of humanity and a profit-driven company at the same time. Either you are a commercial enterprise, in which case you must focus on concrete customer benefits. Or you are a public infrastructure, in which case you must submit to external democratic control. Mixing these two spheres inevitably leads to massive governance problems.”
The new economy and the question of responsibility
Sam Altman: “We have created a completely new type of company specifically for this problem. We are a company with capped returns. Our investors know that our mission to develop artificial intelligence for the benefit of all always takes precedence over maximum profit. We need to scale quickly, but we do so with a firm ethical compass. If we slow down, we leave the field open to authoritarian regimes. This is about hard geopolitical realities. If American companies hesitate, other nations will dictate the rules of artificial intelligence. The time advantage is absolutely crucial.”
Peter Drucker: “Geopolitics is very often used as an argument to justify risky growth. I don’t dispute international competition. But economic reality shows that blind exponential growth is dangerous. Just because a technological innovation is feasible doesn’t automatically mean it’s useful or safe in its current form. Innovation must not be purely an end in itself for engineers. It only unfolds its value when it purposefully solves real problems.”
Sam Altman: “We solve countless problems. Our models help millions of professionals be significantly more productive. Software developers write error-free code in half the time. Doctors receive valuable second opinions on complex diagnoses. We dramatically increase the intellectual capacity of our users.”
Peter Drucker: “The question is whether these people are really becoming more effective or whether they are just more efficient at producing more information. If a company does not clearly define the specific value it delivers, it will run into difficulties. A doctor doesn’t buy artificial intelligence; he buys a reliable solution to his diagnostic problem. A student doesn’t buy access to machine learning; he buys a deep understanding of complex mathematics. What the company believes it produces in its laboratories is secondary. The only thing that matters is what the customer perceives as value.”
The lessons of history
Sam Altman: “But we are seeing unprecedented demand. Every major company in the world wants to integrate our technology immediately. Our infrastructure is needed everywhere. Think about the electrification of the world. Thomas Edison didn’t know in advance that his power grid would one day power server farms, global communications networks, or ventilators in hospitals. He created a universal infrastructure. That’s exactly what we’re doing. We’re building the intellectual power grid of the twenty-first century.”
Peter Drucker: “Technologists like to cite the example of Thomas Edison. But it’s worth taking a closer look. Edison didn’t start out with the abstract goal of building a universal power grid. He had a razor-sharp customer problem in mind. He wanted to light up private households and enable factories to continue producing safely after sunset. The problem was extremely concrete and the customer benefit was immediately obvious. The solution was universal, but the entry point was absolutely focused.
Let’s look at another historical example. During the dot-com bubble of the late 1990s, telecommunications companies invested hundreds of billions in fiber optic cables. They thought this infrastructure would be a sure-fire success. Demand seemed limitless. Nearly all of these companies collapsed because the mere existence of infrastructure is far from a guarantee of a viable business model. If you just build capacity without being responsible for the specific use case, you’re building a solution in search of a problem.”
Sam Altman: “Our use cases are evolving every day. We’re investing billions in the safety of the models. We’re looking very closely at what outputs our systems are generating. We’re measuring everything we can control.”
Peter Drucker: “But what exactly are you measuring on those screens around us? You’re measuring parameters such as model size, training speed, and server utilization. These are purely input metrics. They merely document that your machines are running. That doesn’t answer the question of social contribution. A healthy company is defined by its contribution to society, not by its mere activity.”
Systematic garbage collection in research
Sam Altman: “Mr. Drucker, you emphasize strategic focus. But how can we know in advance what the right focus is in completely unknown territory? What specifically would you do in my place to make this contribution measurable?”
Peter Drucker: “I would recommend three specific strategic steps:
- Stop treating AGI as an isolated goal, because it is merely a technical specification. You need to define a specific customer base and prove that you can solve a measurable problem there in a sustainable way.
- Apply the principle of systematic garbage collection. Go through the list of your various research projects and ask yourself honestly for each project whether you would start it again with today’s knowledge. If the answer is no, you need to end it immediately and focus your talents on the truly critical tasks.
- Give up the illusion that your technology is completely value-neutral.”
Sam Altman: “Let me address your second point. The principle of systematic garbage collection makes perfect sense in traditional goods production. In basic research, however, it is extremely risky. Innovation cannot be planned on the drawing board. The development of artificial intelligence requires extreme diversity of approaches. A project that looks like a dead end today may deliver the decisive mathematical breakthrough tomorrow. If we cancel projects too early, we may be cutting off the very branch that leads us to true intelligence. We must be able to afford to keep unprofitable or seemingly aimless projects alive for a long time.”
Peter Drucker: “Diversity in research is important, but it must not be an excuse for arbitrariness. When mediocre projects continue out of pure habit or for internal political reasons, they tie up valuable resources. Nothing destroys the motivation of brilliant employees faster than having to watch resources being wasted on unproductive traditions. Strong management is characterized by courageous decisions, including painful breaks with old projects.”
Scaling and the Manhattan Project
Sam Altman: “That’s a legitimate point about resource allocation. But your third piece of advice concerns our neutrality. As a platform, we must remain neutral. We have users all over the world. If we start explicitly encoding our own moral or political values into the models, we will immediately become a pawn in global conflicts. We cannot possibly install an ethics committee with veto power. That would set our development back years and render us incapable of acting in global competition.”
Peter Drucker: “The claim of neutrality is the greatest intellectual fallacy in the technology industry. A machine that processes language, weighs arguments, and sorts knowledge can never be neutral. The very decision to filter out certain harmful content is a profound value judgment. Your models inevitably reflect the worldview of the engineers who program them in California.
That’s not bad per se, but it requires brutal honesty. You need to make these value judgments transparent and open them up to broad social scrutiny. A committee that you select yourself does not offer democratic legitimacy.”
Sam Altman: “True democratic control over our source code would mean de facto relinquishing control of the company. Our investors finance us because they believe in our leadership. If we have to publicly debate every strategic move, we lose our ability to act.”
Peter Drucker: “History is full of brilliant technologists who believed their inventions were too important to be slowed down by external controls. Consider the Manhattan Project. [1] The brightest minds of their time built the atomic bomb under extreme time pressure because they feared the enemy would be faster. They solved a tremendous technical problem. Afterward, many of these scientists spent the rest of their lives regretting the consequences of their creation and warning against it because political governance was completely lacking.
Or consider social media in our recent past. The founders’ promise was to connect the world and democratize information. The result, in many ways, has been enormous social polarization, the erosion of public discourse, and a mental health crisis. All of this happened because the platforms scaled far faster than their founders assumed responsibility for the social consequences. Do you really believe that artificial intelligence, of all things, will be the historical exception to this rule?”
Sam Altman: “We are very aware of these dangers. That’s why we are working intensively to align artificial intelligence with human values. We don’t want to repeat the mistakes of the social media era. But development cannot be stopped. We must lead it in order to make it safe.”
Peter Drucker: “If you want to make it safe, you have to prove that your mission takes precedence over your growth. A company that wants to save the world must be prepared to sacrifice market share if safety requires it. You can’t want to be both the uncompromising savior of humanity and the undisputed most valuable company in the world. One of the two will have to give way in times of crisis. The question is not whether you will win the race. The question is whether you are truly prepared to take on the historic responsibility for what awaits you after the finish line.”
Sam Altman: “You demand radical strategic honesty. You have given me a lot to think about today.”
Peter Drucker: “That is the real job of a consultant. I don’t provide you with ready-made algorithms. I challenge you to ask the right questions. The true test of your leadership will be whether you have the courage to take the answers seriously.”
Notes:
This fictional conversation is based on the theoretical principles described in the first article in this series – Peter Drucker meets AI. Among other things, it deals with the true value of knowledge work.
The imaginary conversation with Sam Altman highlights the dangerous gap between technological omnipotence and a lack of social anchoring. Innovation without clear customer benefits and without an ethical compass threatens to become an end in itself. Those who build tomorrow’s infrastructure must take responsibility for the consequences today and abandon the illusion of neutrality.
But what happens when these gigantic AI infrastructures meet the impatient world of young founders? In the next conversation, Peter Drucker meets Marc, an ambitious startup founder. Between burn rate, scaling pressure, and AI hype, the fundamental question arises: When does technology become a real business model?
Dierk Soellner
Dierk Söllner’s vision is: “Strengthening people and teams – empathically and competently”. As a certified business coach (dvct e.V.), he supports teams as well as specialists and managers with current challenges through professional coaching. Combined with his many years of comprehensive technical expertise in IT methodological frameworks, this makes him a competent and empathetic companion for personnel, team and organisational development. He runs the podcast “Business Akupunktur”, has a teaching assignment on “Modern design options for high-performance IT organisations” at NORDAKADEMIE Hamburg and has published the reference book “IT-Service Management mit FitSM”.
His clients range from DAX corporations to medium-sized companies to smaller IT service providers. He likes to tweet and regularly publishes expert articles in print and online media. Together with other experts, he founded the Value Stream initiative.


