The future of human autonomy

Guest contribution by Dierk Söllner | 20.04.2026

An imaginary conversation between Peter Drucker and historian Yuval Noah Harari about human autonomy and machines that make decisions

This is the fifth installment in a series of fictional conversations. In it, Peter Drucker meets well-known figures and people just like you and me – people with different roles in organizations who embody technological optimism, pragmatic application, historical caution, and economic reality. Today, he speaks with historian Yuval Noah Harari.

At the heart of their in-depth conversation is the future of human autonomy in the age of artificial intelligence. Both thinkers view the world through the lens of broad historical arcs. While Drucker focuses on social institutions and individual responsibility, Harari warns of the unprecedented power of a new intelligence. A profound debate unfolds regarding the tension between the infinite availability of data and the dwindling influence of human wisdom.

The nature of artificial intelligence

Peter Drucker: Welcome to my home, Professor Harari. Please have a seat. We have a very pressing topic before us today. You view human history in epochs and are now warning us about a completely new form of information processing. In your book ‘Nexus’ [1], you write very aptly that artificial intelligence is not merely a tool. Rather, you describe it as an actor in its own right, one that processes information completely independently and makes far-reaching decisions. How do you assess our current situation with regard to human autonomy?

Yuval Noah Harari: Thank you very much for the invitation. Humanity is indeed facing an absolutely unprecedented challenge. Computers with their own goals are drastically altering the fundamental structure of our information network. Until now, all our tools – such as the printing press or the radio – were purely passive. They could never generate their own ideas. Artificial intelligence, on the other hand, is an alien intelligence. When algorithms on social media, for example, are optimized purely for attention, they completely fail to grasp that truth and compassion are essential for human well-being. We are currently witnessing the potential loss of our autonomy, as algorithms create a new social order that often merely panders to our basest instincts.

Peter Drucker: That is a deeply unsettling prospect. It reminds me of historical upheavals in which tools became masters. We tend to view technological progress as a neutral force. Yet every technology inevitably reshapes the structure of society. When we are dealing with an alien intelligence that pursues its own goals, we must seriously ask ourselves whom this intelligence actually serves. A system without a moral compass cannot produce a healthy society. It merely optimizes parameters that have been set for it, often without regard for the long-term human consequences.

The danger of algorithmic totalitarianism

Yuval Noah Harari: This is precisely where the great danger lies. We are creating systems that understand us better than we understand ourselves. Ultimately, organisms are just complex algorithms, and there is no rational reason to assume that organic algorithms can do things that inorganic algorithms will never be able to replicate or surpass. Once the machine has analyzed our biometric data, our preferences, and our fears, it can manipulate us perfectly.

Peter Drucker: As you may know, in my book ‘The End of Economic Man’ [2], I analyzed in detail the rise of totalitarianism in the twentieth century. This total control over humanity is the final stage of tyranny. I wrote back then that totalitarianism succeeded only because it offered the masses an alternative order when traditional economic promises failed. In their desperation, people willingly surrendered their personal responsibility to totalitarian rulers. They were desperately seeking security in a completely bewildering world. Today, you apparently fear a new and much more subtle form of algorithmic totalitarianism.

Yuval Noah Harari: That is exactly the historical parallel I see. The crucial difference, however, is the scalability and the subtlety of the control. A dictator in the twentieth century could not possibly monitor every single citizen around the clock. A modern system with artificial intelligence can certainly do that. Most of the time, it doesn’t even happen under duress. We hand over our data entirely voluntarily because we receive convenience and tailored services in return. If this system of alien intelligence monopolizes and controls our information flows, we lose control over our society. According to this compelling logic, Homo sapiens could become a completely obsolete algorithm.

Peter Drucker: The concentration of power without genuine legitimacy is the core problem of every conceivable tyranny. I have always warned that totalitarianism takes precisely this place as soon as the management of society and the economy fails. Power must necessarily be bound to fundamental civilizational convictions. Otherwise, it loses all legitimacy and inevitably becomes mere violence. We now face the crucial question of how we can maintain this necessary legitimacy in the digital realm.

The future of work and the useless class

Yuval Noah Harari: When we talk about the loss of autonomy, we must turn our attention to the world of work. In ‘Homo Deus’ [3], I issue a stark warning about the emergence of a so-called useless class. When artificial intelligence performs cognitive tasks better, faster, and more cheaply than human experts, many people lose their economic and military significance. Even today, an algorithm can review legal contracts, compose creative texts, or make medical diagnoses. If the system displaces the experts, people lose not only their income but also their social status. The system then simply no longer needs us to function smoothly.

Peter Drucker: The concept of a useless class is absolutely disastrous from a sociological perspective. Work is not merely a means of earning a living. It gives the individual a clear identity, structure, and an indispensable role within the community. Decades ago, I coined the term ‘knowledge worker’ to describe these people. They contribute the most important capital of a modern economy: their specific knowledge and their human judgment. If we strip people of this function, we create the very breeding ground for extremism, despair, and tyranny.

Yuval Noah Harari: Nevertheless, we cannot close our eyes to reality. Artificial intelligence is constantly learning. It is also becoming increasingly competent in areas such as emotional intelligence and creative problem-solving. The persistent illusion that humans possess an unattainable monopoly on empathy or creativity will soon prove dangerous. We cannot rely on the magical creation of ever-new jobs that absolutely require deep human understanding.

Peter Drucker: That is why an organization’s management must never prioritize mere efficiency over human dignity. We must completely redefine tasks for humans. Humans must focus on what machines, despite all their computing power, cannot do. They must ask the right questions, contextualize historical events, and assume genuine moral responsibility. A machine can calculate probabilities precisely and recognize complex patterns. However, it can never assume genuine responsibility for the consequences of a far-reaching decision. Responsibility requires character, integrity, and the courage to face one’s own mistakes. These are not calculable metrics.

Counterforces and the role of civil society

Yuval Noah Harari: What counterforces do you see in a world where machines might decide as early as tomorrow whom we marry or what career we pursue? Many systems today are already completely opaque black boxes, and not even their developers fully understand how the systems arrive at their results. Once these networks take absolute control of our social and economic interactions, human free will becomes meaningless. The algorithms will always argue that their decisions are more statistically sound than our error-prone intuition.

Peter Drucker: As a social ecologist, I look for pragmatic counterforces during such phases of historical upheaval. A functioning society absolutely requires mechanisms of self-correction. These counterforces must come directly from civil society. We cannot possibly leave this enormous responsibility to technology corporations or to the state alone. We need strong and completely independent institutions. Non-profit organizations, churches, universities, and local communities serve as a crucial buffer against the total dominance of the state and the economy. I call this the third sector.

Yuval Noah Harari: That is undoubtedly a classic approach in political theory. However, these institutions must adapt extremely quickly in order to survive at all in the new reality. Artificial intelligence is currently permeating all other information processing networks. If civil society simply ignores or condemns the technological reality, it will become completely irrelevant. We can already clearly see today how algorithms are fragmenting public discourse and weakening established institutions. The danger lies in the fact that algorithms are already undermining the mechanisms of our social self-correction long before civil society can respond appropriately.

Peter Drucker: I completely agree with you. Institutions must not remain stuck in romantic nostalgia. They must remain highly innovative and actively use technology for their own purposes. A functioning society is essential to protect people from manipulation. Management is a liberal art, not a pure science. It is about treating people with dignity, living out values, and deep cultural understanding. If universities and nonprofit organizations learn to use artificial intelligence as a tool to strengthen their mission, they can effectively counter the dominance of purely commercial algorithms. They must define the ethical boundaries and cultivate the human wisdom that is entirely absent from mere data.

Self-management and psychological flexibility

Yuval Noah Harari: We are in a frantic race against time. I warn against the unconditional belief in pure dataism becoming a new religion. Dataism teaches that the universe consists exclusively of data streams and that the value of every phenomenon is determined by its contribution to data processing. If we accept this ideology uncritically, we degrade humans to mere tools of machine data collection. We must actively defend the value of human experience and wisdom. This requires a global educational revolution.

Peter Drucker: Comfort has always been the greatest enemy of freedom. Freedom requires immense effort, constant vigilance, and an absolute willingness to patiently endure conflict. If we shy away from this effort, we will gradually become slaves to our own tools. This is precisely where I see the primary task of modern education. We must not primarily teach people how to program software. We must empower them to think critically, evaluate ethical dilemmas, and resolutely resist manipulative systems.

Yuval Noah Harari: I wholeheartedly agree. Education must place psychological flexibility and emotional balance at the very center. People must learn to completely reinvent themselves time and again, as accelerated technological change will regularly disrupt their life plans. Above all, we must learn to better understand our own brains and weaknesses before algorithms have fully deciphered and taken them over.

Peter Drucker: Self-awareness and consistent self-management are the absolute key competencies of our century. Every knowledge worker must become their own manager. They must know exactly where their true strengths lie and what moral values guide them. Only those who possess firm inner values can withstand the massive external storms. We now face the enormous task of adapting our institutions, our educational systems, and our own behavior to this new reality. Under no circumstances must we fall into passivity. We must actively shape the change in order to permanently preserve our human autonomy.

Yuval Noah Harari: Fortunately, the outcome of this story is still open. We can still prevent algorithmic totalitarianism if we understand the true nature of artificial intelligence and set clear global boundaries for it. However, time is of the essence.

Peter Drucker: Then let’s not waste any more precious time. This discussion must be carried directly from academic circles into boardrooms, parliaments, and living rooms. Thank you for this profound conversation, Professor Harari.


Notes:

Dierk Söllner supports professionals and executives in addressing current challenges through professional coaching and offers useful training on AI.

This fictional conversation builds on the theoretical foundations described in the first article of this series – Peter Drucker meets AI. Among other things, it addresses the true value of knowledge work.

[1] Yuval Noah Harari: Nexus – A Brief History of Information Networks from the Stone Age to Artificial Intelligence
[2] Peter Drucker: The End of Economic Man – The Origins of Totalitarianism
[3] Yuval Noah Harari: Homo Deus – A History of Tomorrow

Both Peter Drucker and Yuval Noah Harari view artificial intelligence as a profound historical turning point. While Harari warns of an alien intelligence that could drive humanity into algorithmic totalitarianism and create a useless class through automation, Drucker relies on pragmatic counterforces. He sees the active strengthening of civil society and profound self-management as the most effective tools for preserving our human autonomy.

Following this philosophical look at our human autonomy, Peter Drucker meets Chief Medical Officer Dr. Rashid in the next post. Hospitals are drowning in bureaucracy. Dr. Rashid hopes AI will handle diagnoses and medical reports. But Drucker warns that medicine is relationship-based work. Does AI efficiency serve only to cut costs, or does it finally give us back time for genuine human care?

Would you like to discuss the future of human autonomy in the AI age as an influencer or thought leader? Then share this post with your network.

Dierk Söllner has published additional posts featuring Peter Drucker on the t2informatik Blog, including:

t2informatik Blog: When AI takes over your craft
t2informatik Blog: AGI without clear customer benefits and compass
t2informatik Blog: Solution seeks problem

Dierk Söllner

Dierk Söllner’s vision is: “Strengthening people and teams – empathically and competently”. As a certified business coach (dvct e.V.), he supports teams as well as specialists and managers with current challenges through professional coaching. Combined with his many years of comprehensive technical expertise in IT methodological frameworks, this makes him a competent and empathetic companion for personnel, team and organisational development. He runs the podcast “Business Akupunktur”, holds a teaching assignment on “Modern design options for high-performance IT organisations” at NORDAKADEMIE Hamburg and has published the reference book “IT-Service Management mit FitSM”.

His clients range from DAX corporations to medium-sized companies to smaller IT service providers. He likes to tweet and regularly publishes expert articles in print and online media. Together with other experts, he founded the Value Stream initiative.

In the t2informatik Blog, we publish articles for people in organisations. For these people, we develop and modernise software. Pragmatic. ✔️ Personal. ✔️ Professional. ✔️