I just finished reading Life 3.0: Being Human in the Age of Artificial Intelligence by physicist Max Tegmark. It is a book that thinks in centuries and civilizations, asking what happens when intelligence is no longer bound to biology and begins to redesign itself. The Massachusetts Institute of Technology professor lays out futures that range from utopia to collapse, from benevolent control to human irrelevance.
Meanwhile, a recent study titled “How People Use ChatGPT” by OpenAI researchers examines how millions of people around the world actually use these systems. What it reveals is far less cinematic. AI is not primarily being used to govern societies or replace humans. It is being used to write messages, answer questions, and assist with small decisions.
The contrast is striking. Artificial intelligence is not arriving in dramatic form. It is already here, folded into everyday routines. The gap between these imagined futures and lived reality is where the real story of AI begins.
Imagined Futures
Tegmark sketches twelve possible futures shaped by advanced artificial intelligence: libertarian utopia, benevolent dictator, egalitarian utopia; gatekeeper, protector god, enslaved god; conquerors, descendants, zookeeper; 1984, reversion, and self-destruction.
Taken one by one, they read like speculative fiction; taken together, they feel eerily familiar. None of these futures are entirely new. Each reflects an existing human system. Markets, states, religious authority, imperial expansion, technological collapse—these have all appeared before, in different forms.
What artificial intelligence introduces is not a new set of ideas, but a new scale. Systems that once operated within limits—geographic, institutional, or human—can now extend far beyond them. A market optimized by AI does not simply become more efficient; it becomes more pervasive. A system of control does not merely persist; it becomes harder to escape. Even the internal monologue, once the final private sanctuary of the self, risks becoming a curated dialogue as we increasingly bounce our unformed thoughts off a digital mirror.
This is why these scenarios feel both distant and recognizable. They are not predictions so much as projections. They take the structures already embedded in society and extend them forward under conditions of vastly increased capability.
The question, then, is not which of these futures will happen in isolation. It is which tendencies are already present in the systems we are building today. The data that informs them, the incentives that guide them, and the institutions that deploy them all carry existing assumptions about value, efficiency, and power.
The imagined futures of artificial intelligence are not separate from the present. They are extensions of it.
Lived Reality
If these futures frame how we anticipate artificial intelligence, the study by OpenAI researchers shows how it is actually being lived. Drawing on large-scale real-world usage, the picture that emerges is markedly different from the scenarios Tegmark outlines.
Most interactions are not about work, governance, or high-stakes decision-making. They are personal. The study finds that personal, nonwork interactions now account for more than 70 percent of all ChatGPT usage. These are often acts of emotional and social labor: drafting a difficult apology, finding the right words for a eulogy, or navigating the friction of friendship. We are beginning to outsource the most delicate parts of our social choreography to a statistical model. The majority of use falls into asking, doing, and expressing—basic functions that support ordinary cognition rather than replace it.
This shows how artificial intelligence enters society—not as a system that immediately transforms institutions, but as a tool that becomes embedded in daily life. It assists before it replaces. It suggests before it decides.
Yet this gradual integration carries its own implications. When small acts of thinking are repeatedly delegated—writing an email, clarifying an idea, choosing between options—the relationship between user and system begins to shift. The issue is not a dramatic loss of control, but the quiet formation of dependence.
What appears trivial at the individual level can accumulate into structural change. Patterns of reliance shape expectations. Expectations shape design. Over time, systems adapt not only to assist human cognition, but to anticipate and guide it. This suggests a looming cultural stratification: a world in which unmediated, independent thought becomes a specialized luxury, while the rest of society operates on an algorithmic “autopilot” that feels increasingly natural.
This is how larger transformations begin—not through sudden rupture, but through repeated, everyday use.
Built Gradually
The contrast between imagined futures and lived reality is not a contradiction. It is a sequence. The dramatic scenarios Tegmark outlines do not emerge fully formed. They are built gradually, through patterns that begin at a much smaller scale.
Artificial intelligence is not waiting in the future. It is already present in the ordinary moments where thinking is assisted, decisions are supported, and expression is mediated. These uses may seem minor, but they establish the habits through which more powerful systems will operate.
What matters, then, is not only what artificial intelligence might become, but how it is being used now. The accumulation of small delegations—of thought, judgment, and expression—shapes the conditions under which larger systems will be accepted and trusted.
The future of AI will not arrive all at once. It is being constructed quietly in the everyday choices we make about what we keep—and what we hand over.
Read more Stories on Simpol.ph