
How to Stay Human in the Age of Algorithms: Dalibor Vavruška on Freedom and Artificial Intelligence

We live in a time when technology shapes our world faster than we can keep up. Artificial intelligence promises comfort, efficiency, and progress—yet it opens questions that touch the very essence of being human. Where is the line between what serves us and what begins to control us?


That’s exactly what I discussed with the next guest of the Talks 21 podcast, Dalibor Vavruška—trained mathematician, MBA holder, and technology analyst who spent more than 25 years in leading global financial institutions such as Citigroup, ING, and Credit Suisse.


In our conversation, we went beyond economics and technology: why society has reached the limits of materialism, what role spirituality plays in the era of algorithms, and what it means to retain “humanity” in a world governed by data. We talked about creeping technocratic totalitarianism, the health of the human brain, and why it’s more important than ever to cultivate freedom, faith, and critical thinking.

 

Key Takeaways from the Conversation



Why We Need a New Enlightenment


Karel:

Dear friends, welcome to another episode of the Talks 21 podcast. Today’s guest is technology analyst Dalibor Vavruška.


Dalibor is a trained mathematician and MBA holder who spent more than 25 years as an investment analyst and strategist at leading global financial institutions—such as Citigroup, ING, and Credit Suisse. He led international teams focused on investment research in telecommunications and digital technologies and received numerous prestigious awards from institutions like Institutional Investor and Thomson Reuters.


As an analyst, he participated in major IPOs of telecommunications companies in Europe, Asia, and Africa. He’s also among the pioneers of the idea of separating telecom infrastructure—first implemented globally by O2 in the Czech Republic—and later adopted into European legislation and industry practice.


In recent years, he has focused on consulting and research in digital strategy, artificial intelligence, and data regulation, collaborating, among others, with Oxford’s Regulatory Policy Institute. He is also the author of Life in the Age of Robots: How to Keep Power over AI and Preserve a World for People.


Dalibor, welcome to the podcast.


Dalibor:

Thank you for the invitation.


Karel:

To start, I’d like to open our discussion with a historical-philosophical view of faith versus materialism—how this relationship evolved over time. I know this topic interests you even outside AI. And perhaps it makes sense, because AI is in some way a product of our evolution. Personally, I see it this way: people have always had faith—deep faith. Over time the Church formed, and as an institution it was eventually abused. People lost freedom, and excesses and injustices followed.


The Enlightenment then came, bringing what we could call a materialist causal philosophy—a philosophy of connectedness based on cause and effect, on logic and rationality. This shift brought enormous progress: science arose, the ability to think rationally developed, to judge arguments and knowledge on the basis of logical proof.


A huge step forward. But like any system, it too was eventually abused and degenerated. Science became another power institution—similar to the Church before. We saw this during COVID, when science was used for manipulation and political goals.

Today the world continues to evolve, and I sense that people are moving away from a purely materialist and causal approach and rediscovering spirituality and faith. At the same time, we realize that even materialist philosophy has faith in the background—just of a different kind: faith in causality, in the randomness of events. And this faith in randomness, in my view, doesn’t work.


The world is extremely divided, and if we continue to grant absolute truth only to science and causality, we end up in a dead end—exactly where we are now. So the question is: what next? How do you think this continues? And what’s your “playbook” for our strategy?


Dalibor:

Since you mentioned the Regulatory Policy Institute—an Oxford think tank that builds on ideas from the Scottish Enlightenment—let me bring up one example we discussed there a year or two ago.


We talked about how the Enlightenment and the materialism that flowed from it are now going too far. At the institute I met one of today’s most prominent British philosophers, Iain McGilchrist, who also inspired me to write my book Life in the Age of Robots.


As a scientist, psychiatrist, and neurologist studying the human brain, McGilchrist tries to bring spirituality and faith back into public debate—without using words like “God,” which don’t resonate in a materialist society.


He revived the theory of the right and left hemispheres and argues that the right hemisphere represents holistic thinking that cannot be reduced to algorithms. It’s a capacity that evolution developed in us—but one whose activity cannot be described or imitated purely algorithmically.


That has major consequences. If we want to further develop AI, we must ask how we relate to it—what we consider it to be, what powers we grant it, and what role we allow it to play in society.


If we reject McGilchrist’s theory and remain pure materialists trying to emulate all human thinking with algorithms, we would suppress precisely the part of the brain he—along with others—considers crucial for our survival.


I think as a society we’re hitting the ceiling of what materialism can deliver, and the situation—ideologically and philosophically—is starting to turn.


Why Algorithms Will Never Replace Human Consciousness


Karel:

So the “other” hemisphere is the hemisphere of intuition and feeling. And from my experience—when a person opens up to intuition, they can perceive and see things they feel are true, even without strict scientific proof. If one follows that feeling, it often turns out right over time—perhaps for unexpected reasons. That, I think, lies outside the purely materialist paradigm of randomness and causality.


Dalibor:

It’s a bit more complicated. The two-hemisphere theory isn’t new—first references appear in the 19th century and it was revived in the 1960s. The current scientific consensus rejects the theory as it was popularly presented then.


The brain doesn’t operate on a simple emotion-vs-rationality principle. The hemispheres are to some extent complementary, yes—but McGilchrist speaks of them in a different context. It’s not about emotion and intuition as commonly understood; it’s about modes of perceiving reality.


Animals have hemispheres too—most species with brains have them split in two. Take a cat: when it hunts, like other predators it uses reductionist thinking, for example to estimate the distance to its prey. It works like a biological computer, evaluating situations based on experience and instinct.


At the same time, it senses the entire environment—risks that might appear (like another predator hunting it) and opportunities that might not repeat. That broader sensitivity is ensured by the right hemisphere.


And that’s exactly the part we cannot algorithmically describe or imitate. It evolved, and we cannot yet decompose consciousness in a way that lets us translate it into algorithms.


By the way, the same is said by Professor Vladimír Mařík, one of the leading Czech experts on AI. I recently shared a stage with him at the Faith in the Age of Robots conference, where he spoke on this. He’s the author of two books on AI, and the second focuses on consciousness. His main thesis: consciousness cannot be reduced to algorithmic thinking. And he clearly says AI is, at its core, only algorithmic thinking—nothing more. Not everyone agrees, which is why the debate is worth having.


Karel:

At least with today’s computers, AI is a purely computational process—zeros and ones. If there’s no error in the system, it’s fully deterministic—no randomness. Only with quantum computers might a degree of randomness enter, since quantum processes work with probabilities. But to me—no matter how amazing and powerful AI looks—it’s still mathematical computation.


Dalibor:

There are various views. Of course, when things get out of control and we can no longer verify what an algorithm does, it becomes more complicated. Alan Turing showed there is no general way to know in advance whether a given algorithm will halt with a result—whether it produces something or gets stuck in a loop.


Karel:

So the output can be so complex that the only way to evaluate it is to actually run it in practice.


Dalibor:

Exactly. My point was there’s a certain element of “chance” in that unpredictability.
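Turing’s point can be made concrete with a few lines of Python. The sketch below (a minimal illustration, not from the conversation) iterates the famous Collatz rule: every step is fully deterministic, yet whether the loop halts for every starting value remains an open problem, so the only general way to find out for a given number is exactly what Karel describes: run it and see.

```python
def collatz_steps(n: int) -> int:
    """Iterate the Collatz rule until n reaches 1, counting steps.

    Every step is fully deterministic: halve an even number,
    send an odd n to 3*n + 1. Whether this loop terminates for
    EVERY starting value is a famous open problem, so there is
    no known way to predict halting here short of running it.
    """
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# "Run it and see" is the only general strategy, which is the
# unpredictability the conversation points to.
for start in (6, 27, 97):
    print(start, "halts after", collatz_steps(start), "steps")
```

Every starting value ever tested does halt, but no proof covers them all; determinism, as the conversation notes, does not imply predictability.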



How an Analyst Became a Guide to the World of Artificial Intelligence


Karel:

What was your path from finance—from being a financial analyst predicting global market developments—to artificial intelligence?


Dalibor:

I wouldn’t put it quite like that. If we agree AI is programs and algorithms, then I started long ago. I’m also a product of two worlds—my father was a bohemian, an opera singer and artist; on my mother’s side I come from academic science.

My grandfather was a structural engineer who designed some of the most notable buildings of his time—like Ještěd and the Nusle Bridge.


So I was technical and materialist from the start. Until university I had a purely technical view of the world and rejected things beyond it.


In the 1980s I had my first programmable Hewlett-Packard calculator. It was difficult to get one then because of embargoes on US technology imports. Still, I managed—and I was thrilled. In high school I worked on digitizing geodetic surveys, and the calculator let me convert measurements into data. I cooperated with a Swiss company that was a global leader in measuring instruments. That was a huge experience—my first taste of how digitization increases efficiency and changes how we work.


Later I studied mathematics—fascinated by the beauty of mathematical theorems, how science and logic let us understand the world’s structure. I realized it goes beyond a purely technical, mechanical view.


Then I started working—first on expert systems for investing. That brought me into investment. I covered various sectors—telecom, oil, digital tech—and in each, after some time, I began to understand it differently. Something would “click” and I’d see connections that weren’t just materialist or causal.


That’s when you start perceiving the invisible—something you can’t fully explain but that helps you get better outcomes. It’s similar in art—say in singing. You engage the subconscious or parts of the brain a logic-only materialist can’t activate.


I found this applies to investing, too. We didn’t do short-term algorithmic trading; we made medium- and long-term trend forecasts—first in telecom and later in digital tech generally. And I realized human perspective—maybe even a spiritual dimension—lets you forecast much better.


Then two important things happened. First, in my personal life I hit the limits of healthcare. I had health issues and doctors literally said, “We don’t know.” I discovered the solution existed—but outside the materialist system. I was lucky to find experts who thought differently—holistically—and that helped me profoundly. Once you experience that firsthand, it changes your worldview.


Second, I began to see the extremes of the materialist approach—in tech and in finance. Hidden dogmas started appearing, pushed further under the banner of materialism. At some point I said: this isn’t the way.


So I started writing—and my book Life in the Age of Robots is the result of that effort to open minds. I wanted to share insights I used to reserve for investors or corporate executives with a broader audience.


Where Creeping Technological Totalitarianism Begins


Karel:

In your book you formulated a “ten commandments” for life in the age of robots—how to live with AI while keeping freedom and personhood.


I see everything has two sides. Technology can be a tool of totalitarianism, taking our freedom and exerting absolute control—which would be a disaster. A technological dictatorship could be eternal—unlike a human dictator, technology doesn’t die.


On the other hand, technology can be our ally and shield protecting freedom—say through asymmetric encryption, a beautiful mathematical principle that allows us to preserve objective information and privacy in the digital world.
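Asymmetric encryption deserves a concrete picture. Below is a minimal textbook-RSA sketch in Python (tiny primes for readability; real systems rely on vetted cryptographic libraries and far larger keys), showing the asymmetry Karel refers to: anyone can encrypt with the public key, but only the holder of the private exponent can decrypt.

```python
# Textbook RSA with tiny primes (illustration only; real systems
# use audited libraries and keys of 2048 bits or more).
p, q = 61, 53                # two secret primes
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # Euler's totient: 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent (Python 3.8+): e*d = 1 mod phi

message = 65                         # any number smaller than n
ciphertext = pow(message, e, n)      # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)    # only the holder of d can decrypt

assert recovered == message
print("public:", (e, n), "cipher:", ciphertext, "recovered:", recovered)
```

The private key never has to leave its owner, which is what lets individuals keep verifiable information and privacy in the digital world without a central gatekeeper.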


So—if you had to pick the most important points from your AI decalogue to focus on, so we can take the positive from technology—what would they be?


Dalibor:

I think we’re largely aligned here. The first point I stressed in my book is the health of the human brain.


When I was writing and talking to people, the frequent question was: “Where will AI help us and where will it harm us?” The publisher even wanted the title to reflect a “both sides” view. I focused instead on where it could harm us. There are apocalyptic scenarios—not to be underestimated. Then there are scenarios of technological control and totalitarianism, which I find perhaps even worse than apocalypse. Because living in slavery is, to me, worse than extinction.


Worse still is creeping totalitarianism. It’s insidious because we gradually get used to it. It begins subtly—by disturbing our subconscious. That’s a problem because our conscious mind is perhaps only five to ten percent of brain activity. The rest is subconscious, outside our control.


If AI started to interfere with these deeper layers—thus disrupting holistic thinking—we could lose control over our lives. I’m not saying we ever had full control. External influences always existed—once the church, later marketing. Marketing, like organized religion, plays to the right hemisphere, to the subconscious.


But if there were a major breakthrough in AI enabling manipulation of the human brain on a level previous civilizations never reached, I think that would be among the greatest risks—if not the greatest.


Because we wouldn’t know about it. If we know totalitarianism exists or apocalypse threatens, we can defend ourselves and prepare. But if it’s something we cannot recognize by its nature—and only a small group knows and can manipulate the rest—that’s the greatest possible risk for humanity.


Why We Need to Remain Imperfect


Karel:

So how do we… I don’t want to say “defend ourselves,” but how do we perceive reality so we can prevent it? So we don’t succumb—and ideally, if possible, so that no one even attempts it?


Dalibor:

Someone will try—that’s clear. To me it’s crucial to define who we want to be—what it means to be human.


Yuval Noah Harari raises this too. He says we should move toward a “godlike human,” homo deus. His books contain many stimulating ideas, especially about the history of homo sapiens. But I don’t fully share his view of the future. He speaks of homo deus; I proposed sapiens roboticus. The key is sapiens—that we remain rational humans who don’t submit to systems we deem intellectually superior.


How to achieve that? I think—and this connects with our discussion of the Enlightenment and spirituality—that we need certain axioms. I don’t want to use the word “dogma,” but it tends that way. We must axiomatically place the human above technology.

For example, in Christianity—and not only there—it is said that humans are made in the image of God. I long wondered why this is emphasized—what is so unique about humans? Maybe it didn’t make as much sense then, but now, as we build technology, it gains a whole new meaning.


If we as a society don’t agree on basic values—or axioms—that place the human above technology, we risk losing our way. One such axiom should be: the survival and development of the species homo sapiens.


Karel:

Exactly—while keeping a certain independence. Not by implanting chips in the brain or anything irreversible. For me it’s simple: any technological aid is fine if it’s reversible. We must not interfere irreversibly with the brain. That would cross the line.


Dalibor:

This is thin ice. It’s different when someone is ill—that’s understandable. But we could reach a point where someone declares these technologies will save us. That, to me, is the real danger.


The current trend toward technocracy and data centralization breaks barriers never broken before. Think human body function, organ replacements, implants, even artificial organ creation—that’s one area.


Another concerns immunity and vaccination. The idea can be good, but when… It’s not only about abuse, not even deliberate abuse—it’s about becoming dependent on technology the way computers depend on Windows updates. If we didn’t get the next update, we couldn’t function. I’m saying this as an extreme example, not that we’re there—but that’s the philosophy.


If we view the world—and health—reductionistically, through the lens of individual diseases and diagnoses, the logical solution is to prepare for specific problems, which might mean vaccination.


Karel:

Or increasing immunity and a healthy lifestyle.


Dalibor:

Exactly. And that’s where the holistic view comes in—like yours.


Medicine finds it hard to define “healthy life” precisely, but as humans we can have an intuitive sense of what it means.


Karel:

Versus the reductionist approach—solving each aspect separately. The problem is we don’t know the cumulative effect. Fixing each part in isolation can have consequences we can’t predict.


Dalibor:

That’s always been true with technology. Today we’re fighting fire with fire. In the 20th century we had cars, chemicals, pollution—then we did desulfurization, catalytic converters, and so on. Not necessarily bad—this is technological progress. Without making mistakes and then improving, we wouldn’t move forward. There’s no “perfect solution.” It’s evolutionary.


But we dogmatically assume the next technology will fix the problems of the current one. That’s the main risk. If it turns out the new tech doesn’t solve but worsens problems, the system—which I wouldn’t call a market economy anymore even if it came from one—can start to shake and potentially collapse. It rests on a dogmatic assumption that innovation always solves our problems.


I’ll add a personal experience. During COVID I realized something fundamental. I’d always been a tech enthusiast—the best gadgets, computers, audio. Later I did IPOs mainly for mobile operators worldwide. I saw capital flows bringing real benefits. We even had analyses showing mobile network expansion raised African GDP by 5%. “Fantastic—this truly helps people,” I thought.


But by the end of the last decade—especially during COVID—I saw a shift. Technology is less and less about a free market. We see it with mobile operating systems—not just control but imposition. Companies don’t just say, “Buy a new phone.” They say, “To use our banking services, you must have a certain OS version or device.” That device contains technologies the user can’t easily remove—and if you want to function in society, you must adapt. Without them you can’t use a bank, call an Uber, or pay online. Everything interconnects.

Another key moment: governments became part of the game. I noticed this 15 years ago in London when social networks like Twitter emerged. The government ran campaigns on social media—with public money—to promote their use. It wasn’t a purely spontaneous process but a deliberately managed communication channel.


As time went on, big transformations—like green tech and clean energy—were no longer driven by markets or demand, but ideological and political goals. To be clear: I don’t downplay the problems addressed; human impact on ecosystems is vast. But I think there are other, less technocratic paths that may be just as effective—or more so. Since COVID, technocratic solutions have been pushed extremely hard, often at the expense of alternatives that might deliver the same outcomes more effectively.



Karel:

And above all—there’s always the assumption that the technocratic solution will work. But because these are new things, they by their nature involve many variables beyond our sight. It’s logical to expect they can’t work perfectly.


COVID opened my eyes most—seeing how easily science can be abused. A crazy experience. Looking back, there’s something fascinating about it.


We started as small tribes, then larger communities, and finally a globalized, massively interconnected society. Then technology arrived. And what happened? Massive manipulation. We now have technologies like deepfakes that are so perfect you can’t tell truth from fake. You can’t be sure of anything. On the surface it’s scary, but there’s a positive effect too: if a thoughtful person wants to be sure whom they’re dealing with and what to trust, they must meet in person.


That brings us back to personal contact, human communication, and community. Meanwhile, economic pressures push people to be active, cooperate, and build communities. Paradoxically—in the age of maximum globalization—people refocus on the local, on close relationships and surroundings. I see that as extremely positive. We can return to greater humanity—natural, not technological. Humanity linked to nature, lived experience, and mutuality. If we can do that—treat technology as a tool, not a master—we can connect it with the human side. That’s where I see great hope.


Dalibor:

Exactly. One argument I often make lately is that AI will accelerate development—but that also means it will lead us into dead-ends faster. Or into points where we realize things that would otherwise take much longer to grasp. That’s why it’s crucial for people to think—to keep their critical and, to a degree, emotional, but mainly natural thinking.


That’s what I aimed to stimulate in my book. I know not everyone likes thinking. If we give people too many stimuli, they often flee to something simpler, entertaining, comfortable. But evolution is about overcoming obstacles. We are homo sapiens—thinking beings. Our task isn’t to go around obstacles but to overcome them with thought. The “sapiens” signifies human thinking, not robotic. AI will push us forward—showing the limits of knowability and our own limits. If it’s well programmed by people of good intent, it might even reveal its own risks. Still—I don’t believe in the illusion of total good. Passive individuals waiting for good to arrive won’t change much.


Karel:

It can’t be otherwise. To even perceive what is good, one must be able to recognize what is bad. We live in a world of duality—and by experiencing the bad, we can truly appreciate the good. The question is how to make sure people recognize the bad, understand it, appreciate the contrast, but don’t linger there too long. To minimize the impact and more easily open to the good.


Dalibor:

That depends on how we define good and evil. A recurring idea—even Elon Musk mentions it—is that people must undergo a degree of suffering. It’s part of religious traditions—people should overcome obstacles and the reward must be deserved. If we start seeing technology, including AI, as a way to skip obstacles and reach goals without effort, we stop functioning naturally. That’s why I devote so much to natural evolution in my book. Preserving evolutionary principles means preserving homo sapiens.


We have technological means, but we’re a living species. We should function like other organisms—with the added layer that we’re sapiens. As Harari notes, we can share abstract ideas, stories, and beliefs—about God or even the value of Bitcoin. The key is that we can agree on what we believe and collectively recognize those values. That sets us apart from other species. Now we differ in another way—we created a technology that can “think.” But we must make sure we never place it above us.


Karel:

Right. And it’s no coincidence ancient civilizations had rites of passage into adulthood. The danger of modern civilization is the pursuit of absolute comfort—no worries, no discomfort. That’s not how it works. I’m convinced we must expose our children to discomfort—sports, challenges, effort. Effort is crucial. Let me stress this again—it’s one of the key ideas: we cannot want a comfortable life full of certainties. That’s the worst possible “objective function”—the most dangerous path we can take.




How Comfort Can Become a Trap for Civilization


Dalibor:

Let me put it bluntly—that comfortable life with certainties is exactly what big tech companies promise. It’s their main selling point. And that’s where the deceit lies.


A similar debate happened in the 1920s. Karel Čapek delved deeply into it—not only in his famous works, but also in essays and shorter texts. He wrote exactly about this. I quoted him several times in my book. Some criticized me for citing him too much—but I think he was brilliant here. He saw the essence even when technology was only starting to shape the modern world.


So the question is: if technology won’t give us real comfort or certainty, what does it give us? As someone who spent a life in tech, I ask myself this again and again.


Karel:

To fully enjoy life’s beauty. And part of that beauty is effort—sweat, exertion. That’s part of it. It’s important to accept discomfort. I’m convinced a shared human vision could be to learn to accept discomfort and pain with an open mind—and even joy. To treat it as a great school that helps us grow. If we can do that, it won’t seem so terrible. On the contrary—it opens the positive side of life, the ability to truly enjoy it rather than comfortably survive.


Dalibor:

We might not be typical. After surgery I didn’t take painkillers because I thought it’s something one must go through. But the world operates differently.


Today the economy plays a key role—money, finance, markets. A market economy should rest on balance: a person exerts effort, overcomes discomfort, and then gains a reward—comfort or profit. That’s natural. But here we hit a critical problem—or breaking point. People like Elon Musk talk openly about it: the economy as we know it is starting to fail. It’s seizing up.


As we said—there are many regulations, and technologies that disrupt or destroy the economy. Whoever creates the best technology creates a monopoly. Others disappear, and one platform dominates other markets. Huge profits flow one way while the rest of the system weakens.

I think the solution is to rethink the very principle of the economy and try to create something I call an “economy of broader rationality.” It will be a different type of economy—but still one where human work and individual contribution have real value for most people. If not, we reach a state where the economy becomes a privilege of a small group and others get “crumbs” in the form of universal support—and that’s not sustainable.


Think of it this way: social order has always rested on faith, military power, and the economy. In the 20th and early 21st century, the economy clearly dominated. But if it starts to fall apart, if it’s overregulated and constantly needs redistribution to maintain balance, we’re in a dangerous situation. We have a society built on a system that’s falling apart under our hands. I won’t speculate too much, but I think the return of spirituality—or even the rise of wars—may be a direct result of an economy that no longer functions as it could and should.


When the Economy Starts to Collapse in on Itself


Karel:

That basically matches theories of civilization cycles. Every prosperous civilization and growing economy eventually brings side effects—bureaucracy, division, rising conflicts. They often lead to some kind of collapse. That collapse can even be war.


Dalibor:

Exactly—Joseph Schumpeter said as much. Miroslav Bárta also offers an excellent view of civilizational development.


Schumpeter fascinated me because his ideas became popular in tech circles—even Silicon Valley. He spoke of cycles—around 70–80 years—in which the economy exhausts itself, stops being truly market-driven, the system corrupts, and needs “cleansing.” The problem today is we’d probably all agree the system needs cleansing—but what if those who should be cleansed are the very ones controlling the process? That puts us in a very difficult situation. It may not end in a small disruption and redirection—but a deeper crisis, a collapse affecting not just the economy but society itself.



Karel:

Into some new totality? Because if the “cleansing” is directed by those who control the system, they’ll, of course, do it to tighten their grip.


Dalibor:

That’s what anyone tries—those with the means seek to control the system. But I’m speaking mainly about the economic and financial system built on certain assumptions—like the priority of material well-being for consumers. If a large part of consumers decided to live differently—to step out of the consumer system and prefer another way of life—the entire economic system, including banking, would be under huge pressure.


Several trends are already underway—at least two. First, people are becoming aware of this. A return to spirituality is happening to some degree. Second, technology is concentrating, meaning many people cease to play a meaningful role in the economy.


I recall many debates—often with like-minded people—about bureaucracy, e.g., in the Czech context. Lots of officials, dysfunctional systems; people say, “Let’s clean it up.” Fair enough—but then comes the key question: what will those people do? If we clean the system and technologies simultaneously remove jobs for large parts of the population, how does society function? That’s a deeper philosophical question spanning all civilizations. Not simple.


I remember the first AI conference in Rome, where representatives of the Catholic Church, Judaism, and Islam met. This dilemma was intensely discussed. One speaker noted that older civilizations, e.g., in the Islamic world—in Turkey—faced similar issues. When a new technology appeared that could fundamentally disrupt society, they decided—even despite its potential and abundance—not to use it in that form.


Karel:

What technology was that?


Dalibor:

I’m not sure—perhaps textiles—I can’t recall precisely…


There are always two levels: abundance and risk. After the industrial revolution, the 20th century brought not just abundance, but new risks—nuclear weapons and accidents. Later we realized we’re drastically impacting the biosphere—today an openly acknowledged topic. Not just ecosystems, but also interventions in human immunity—not only through vaccines but various treatment protocols.


A Czech hospital head physician once told me that more than half of those who die there don’t die from the original illness—but because more and more drugs are added, and the last one kills them. I’m not saying everything is wrong—certainly not. But it shows how fragile the balance between progress and responsibility is.


Karel:

We need to minimize the use of such drugs and interventions—as much as possible.


Dalibor:

Which is why we need a reset—a systemic intervention. The current system is so intertwined that business, government, and investor interests overlap. Even spiritual or independent organizations that should be outside the frame can be influenced. It’s often hidden under “cooperation” and “addressing civilizational risks.” This narrative—that we face civilizational risks and must all cooperate—has become a tool of power. It’s called stakeholder capitalism. With it comes a unified agenda posing as global good. The side effect is that a real system reset can barely happen because everything is interlinked. Change then often happens only through disruption—just as you said.



The Role of Spirituality in an Age of Abundance


Karel:

On the economy of abundance—fascinating topic. I’m convinced some technologies already exist but are hidden—for example, the free-energy theories and the like. I don’t think that’s fiction but a real possibility. Imagine access to free energy. Much of today’s work would become unnecessary. And here’s the core—it’s a question of the philosophy of human activity. What is this about? That a person has meaning, does things that fulfill and inspire them, is creative. That can be anything—gardening, creating all kinds of value. It needn’t have economic significance in the sense of consumption or profit.


If we imagine a model where basic needs are met—even with free energy—then the key is that people are motivated to learn, develop, discover, and create. The danger I see is degeneration. If everything is free, the system collapses. People lose motivation, meaning, drive.


But I believe we can set it differently—ensure everyone’s basic needs while keeping motivation. For example: those who want support can get it—but they must do something: study, work on something. Keep motivation and meaning. That’s the path to prosperity and defense against degeneracy.


It’s also a natural continuation of our evolution—we used to fight for survival and scarcity; now we must learn to live with abundance without being destroyed by it. The spiritual path—knowledge and creativity—is endless. That’s the true world we live in—a fractal where we can keep discovering deeper and wider layers of being.


Dalibor:

And we’re not discovering something entirely new. Cultures long ago—perhaps in ancient Greece or Rome—reached similar conclusions. They didn’t have robots but used slave labor. The principle was similar—surplus enabling reflection on the world and meaning.


Karel:

But conflict always existed. If the situation isn’t monitored, sooner or later there’s a revolt. It wasn’t a long-term balanced system.


Dalibor:

Technology has never worked purely in our favor—and I think that’s not new. Čapek warned of it—in R.U.R., and especially War with the Newts and The Absolute at Large. The latter beautifully shows how limitless energy and technical progress link with religion, power, and governance.


Čapek understood this well. He warned that the modern world offers abundance—excess abundance. And as you said, a person needs what’s necessary for life—but not excessive abundance. I fully agree.

So the solution isn’t simple. It’s about motivation. Let me add a personal experience from the City of London. Over time, I formed a perhaps controversial view: a large part of the brain’s subconscious processes might not need to exist—and technology could take them over in many areas.


I remember the financial crisis period. Trading and research rapidly digitized. It lacked a holistic element, but the entire financial sector was digitizing work. What happened? Two things.


First, work was outsourced—to India, for example—creating a cushion against job losses in Europe or the US. Second, after the banking crisis, the number of regulators and regulations surged. So did the number of lawyers in each institution dealing with them. But what did that lead to? It was supposed to bring more competition and weaken the big players—but the opposite happened. Big players could afford vast legal teams—and thus strengthened their position. A system within the system emerged.

It works like this in finance, real estate, and elsewhere. You bring a strong ideology, build regulations on top of it, those attract people, and a new artificial economy appears. We can even legislate that in some areas people must work and robots can’t decide. That moves people from the real economy—where survival matters, e.g., growing food—into an artificial, closed system. I don’t think that’s sustainable.


Nature—or a higher power—will intervene. Artificial constructs will collapse and real problems will return. We should prepare for those—invest in people’s ability to solve real problems through their own work and effort.


The biggest problem, as we discussed, is that our brain—our thinking—could degenerate. That’s why I’m glad initiatives are emerging that lead people back to thinking, independent judgment, and community life. That’s how we keep the brain balanced—left and right hemispheres—and keep training it. Yet there’s a difference between artificial training and the real thing.

Our body has evolutionary processes—like adrenaline under real stress. The question is how much these can be transferred to artificial environments. We already do this partially—video games create adrenaline situations. But how far are we willing to go in creating artificial worlds that redirect natural bodily responses to pseudo-problems?


That’s a major risk. One fundamental risk, among others, is that an artificial system can be centralized. If the central idea proves wrong, we will have hacked ourselves—our instincts, bodily processes, natural reactions. Then we won’t be able to defend ourselves—and won’t even realize we’ve gone the wrong way.


Karel:

I think what you’re saying is an argument for real brain training. I believe it’s already happening—a return to greater naturalness. Together with technology, we can find a better relationship with nature—go for a walk in the forest, climb a rock, jump into the water. Returning to a more natural way of life is, in my view, the right path that answers all this.


Dalibor:

Exactly. But here’s the problem. If we decided to destroy or not use certain technologies, we’d return to a more natural life—but someone else could exploit that.


There’s also a part of the global population completely dependent on technology, without the option of the “natural life” we describe. We’re in a beautiful place now, with everything we need, but we must think of people in megacities, concentrated on small areas without such options.


I think this should gradually change. Population concentration and extreme urbanization should reverse. In some places it already is—during COVID people learned they could work outside cities and saw the advantages.


On the other hand, it goes against your argument that people should meet and share more. That’s another tension—cities make meeting easier, while dispersion increases distance—physical and social.


Karel:

Living completely in isolation is one thing, and having access to nature is another—they’re not mutually exclusive. And it doesn’t rule out community life.


But that’s another wonderful topic. We could talk for two days—or twenty-one. Let me close by opening one more of your themes—individual freedom. I see it as a core value we must protect like the apple of our eye, especially alongside technology, so it doesn’t begin to control us. A big warning—or a great lesson—was COVID. Some people said: “We want to save lives; freedom is irrelevant.” In my view, such attitudes could lead society down a completely wrong path—to total control and loss of personal freedom.


So finally—freedom. How do we protect it in connection with AI?


How to Protect Freedom in a Digital World


Dalibor:

A very difficult question—maybe for another twenty-one minutes.


To speak of freedom, we must first define it. We’re made of cells; cells have functions and are made of particles. None of that is entirely free—each part has a role in the whole. Likewise, we likely have roles in larger systems we don’t even know about. We can imagine being cells of a kind of global organism.


To make the debate sensible, let’s move it to free will. That’s something Yuval Harari denies. Materialists claim everything is deterministic—causal. And it’s not just semantics.


Various experiments are now underway that partly confirm this. Most people—even in poorer countries—have mobile phones, which enable massive data collection. Commercial companies use the data for targeted sales. Public infrastructures—like transport systems—want to know who wants to travel where and when, so they can operate more efficiently. So projects are arising worldwide showing that human behavior can be influenced—directly and indirectly, often very deeply.


The key question: where are freedom’s boundaries? Is it only when police knock on your door and arrest you? Or already when you’re influenced without knowing?


Karel:

I’d say transparency is crucial and decisive. If people are influenced in some way, fine—but it should be transparent, so everyone can freely decide whether to participate.


Dalibor:

And that’s the problem. I can’t recall the name, but a speaker at last year’s H21 conference said very aptly—you need to monitor people without their knowledge. Because once they know they’re monitored, they behave differently.


Karel:

For testing purposes?


Dalibor:

Not just testing. Cameras on streets and highways help catch serious criminals. So yes, everyone says: install cameras. Today they’re mounted every few hundred meters.


The problem is many people don’t realize what happens to the data. They see cameras on the highway but don’t care what exactly happens to the recordings. If camera systems were used only in edge cases and didn’t identify individual faces for commercial or manipulative purposes, fine.


But if the system also aims to detect offenders and has other uses, then to be effective it often requires people not to know—not to feel they're being watched. That’s the complication.


Karel:

Sure, but some criminals will escape. That can happen. I’d rather accept that than lose freedom and freeze everything in the name of safety. That’s not how it works.


I see it clearly—we cannot seek perfection. The world will never be perfect—and that’s fine. Our free will is the key element we must protect.


Dalibor:

Agreed, but society needs order. Without order, we’d revert to species we claim we’ve surpassed. Today’s main debate is whether that order should be technocratic or theocratic.


These days theocracy is deemed bad and technocracy good—but from a holistic perspective, it’s not so clear. The real question is whether we as humans are willing to accept certain shared principles—moral ones, for example—and treat them essentially as dogmas. We don’t need to prove everything scientifically, nor have a logical proof for all things. If we accept these principles across society—in education and daily life—we’ll trust each other more. Because we know what to expect from others: what they believe, what values they recognize. Then we don’t need to control everything or argue as much.


But we need agreement on these principles. I don’t want to say “impose,” but agreement is key. The question is what to do when someone refuses to agree. That’s fine—the system won’t be perfect—but we should strive for the broadest possible social consensus.

I see that as the essence of Western civilization—what brought us to freedom. And freedom is one of the main reasons the West has thrived globally. Unfortunately, many people—even experts—now think tightening the technocratic screws will help us advance technology faster or gain advantages. Short term, maybe—short-term effects differ from long-term ones. But it always comes at the expense of the future—risking that, over the long run, such a system stops working.


I won’t make big predictions, though I spent most of my life making them. But I believe systems heading toward hard technocracy—as seen in China—may gain short-term advantage, but won’t be sustainable long term.


Karel:

And slavery is the worst imaginable form of society. For me, the worst possible life scenario.


Dalibor:

When I talk to US investors, they often say: “Yes, we see all this, but the reality is Chinese tech is competing with us.” If we ignore it, saying our system is better, China might gain dominance—militarily and economically. Before our system returns to former dominance, it could collapse.


The pressure in recent years—especially since COVID—is enormous. It’s a global competitive pressure where players don’t cooperate but seek unilateral advantages. That’s natural from an evolutionary perspective—nature doesn’t work on perfect cooperation—but here it’s extremely dangerous. It’s one of the West’s main challenges: how to face the Chinese model that may offer short-term technological and economic advantages but poses great long-term risk.


So the question is: how to protect ourselves so that a short-term advantage of one system doesn’t destroy ours before it adapts? If it did, it could lead to the end—or a fundamental reconstruction—of our civilization. We don’t want to be victims of that change.


I’m optimistic, though, because McGilchrist’s ideas—spiritual and philosophical principles—are spreading in China too. That’s good. Perhaps China will realize that hard technocracy isn’t the only solution. That could help us—so we could focus on true growth and transformation rather than fearing a competitor who could destroy our civilization.


Karel:

To close, I’d like to express a wish I think we share: that humanity opens to the core ideas of freedom, cooperation, and the spiritual dimension of life. If we manage to transform our economy so it’s not overly restrictive, I believe our next step as a species can unfold on a galactic level. We have a chance to connect with other civilizations and enrich ourselves with their experience. Once we do our homework here on Earth and pass through this birth of human civilization, another level awaits us—another “game” in the fractal of existence.


Dalibor:

That’s a big thought.


Karel:

It would also solve the motivation questions.


Dalibor:

It would indeed solve many things—in a grand sense. And I think it would even surpass Elon Musk—he’s trying to get to Mars; we’d go straight to the galaxy.


Karel:

Exactly. Today, hyperspace travel isn’t just fantasy. Physics is gradually heading there.


Dalibor:

Absolutely. But I also think it’s important to start with each individual. Freedom isn’t just an abstract concept—it’s about each person truly wanting it and understanding what it means. One must be able to answer what free will and personal responsibility represent. Humanity has known these principles for hundreds or thousands of years—we bear consequences for our actions, even if they don’t manifest immediately. It’s not that if we do something wrong, the system instantly “disconnects” us. The world is more complex, with delays. Once a person realizes this and starts reflecting, they understand that in some situations it’s perfectly fine to think materialistically—solve a specific problem with focus, like a cat hunting a mouse that can’t be distracted by everything around.


But the spiritual component of our consciousness must remain present. Just as the right hemisphere never sleeps, our consciousness must remain active—awake or asleep. It keeps us alive and connected to the meaning of existence. Everything else follows from that.

If we accepted pure determinism and considered ourselves machines without will, governed purely by causality, we wouldn’t deserve freedom.


Karel:

Yes, I agree. We are not determinists. I believe we are beings with a deeper purpose than the purely materialist view of life offers. And I’d like to recommend to everyone watching us the book Life in the Age of Robots, which develops many of these themes in depth. Dalibor, thank you very much for a highly stimulating and inspiring discussion.

 
 
 
