We face today a profound paradox: surrounded by unprecedented technologies, yet often paralyzed by frameworks inherited from centuries past. In an era where artificial intelligence redefines what is possible, our instinct is still to seek comfort in the linearity of cause and effect, of past shaping present and present predicting future. Yet the very phenomena we now face—generative AI, ubiquitous automation, and the algorithmic transformation of creativity—defy these comfortable narratives. They call for new perspectives, new languages, and new questions.
If we are to understand the futures emerging before us, we must first accept a radical proposition: that the future is not a mere extension of the past, nor a straightforward projection of the present. It is a space we must learn to inhabit with imagination, not simply analyze with inherited methods. Our traditional educational systems, professional pathways, and even our concepts of work and meaning are misaligned with this shift. As we move from an age of repetition to an age of recombination, the assumptions that once offered certainty now risk becoming liabilities.
It is precisely this dissonance that has guided my work for decades. From my early explorations of design as a medium of thought, to recent engagements with artificial intelligence as a collaborator in creativity, I have sought to question the obvious and probe what lies beyond the comfort of precedent. This journey has not been one of mere academic curiosity, but a commitment to discovering how we, as individuals and societies, might prepare ourselves for futures that break free from the linearity of our expectations.
Before we can grasp why the future resists any straightforward continuity with our past or present, it is essential to understand the intellectual journey that has brought me to this point. Over the past decades, I have dedicated my work to exploring ideas with no precedent—endeavors that required years of immersion and direct experimentation, rather than relying on the interpretations of others.
My recent book, Transcending Imagination: Artificial Intelligence and the Future of Creativity, exemplifies this approach: it emerged from nearly two years of intense engagement, during which I generated thousands of images to probe the realities of generative AI. One cannot write with integrity about what one has not experienced firsthand.
Much of academia, by contrast, rests upon literature reviews, building interpretations upon existing scholarship. While a legitimate path, it is not one I have chosen, as my goal has been to advance thoughts that expand beyond established references. This ambition has, at times, placed me at odds with conventional publishing processes, especially in the field of business literature, where peer review demands prior frameworks that my work intentionally lacks.
This orientation towards ideas over forms took root early in my career. After years spent designing hundreds of products, I realized that true, lasting impact comes not from creating objects, but from shaping ideas. My 1995 book, Tool Toys: Tools with an Element of Play, demonstrated the transformative power of concepts, sparking international exhibitions and drawing tens of thousands of visitors across eight countries. This experience revealed that intellectual and professional success is found in constructing frameworks that challenge existing paradigms.
This realization led me to what I called the design of design: identifying subtle signals in human behavior and weaving them into new constructs, creating methodologies for strategic foresight where none previously existed. It was through this work that I authored The Imagination Challenge, published by Pearson in 2006—a foundational text that laid out the original methods of foresight, built on rigorous research connecting tangible signals that, while already present, had yet to be integrated into coherent patterns.
The greatest mistake we make when thinking about the future is the fundamental belief that it is connected to the past—or to the present. The future does not share this connection. Reflecting on our lives over the past thirty years since the introduction of the internet—because the decisive moment was not when the internet first existed but when it became widely distributed, around the year 2000—we see this clearly. I personally had internet access as early as 1991-92, but it was not yet widely available; I had a terminal, but others did not, which made it impossible to judge where this new behavior would lead or how profoundly it would reshape society.
From around 2000 to 2004, there was an extraordinary explosion during which every organization and individual had to understand where society was heading. You had to imagine that the internet was no longer an option; it had happened, it was here to stay, and we—whether individuals or companies—had to respond in one way or another. Even companies producing seemingly unrelated products, like toilet brushes, soap, or toothpaste—Unilever, for instance—suddenly found themselves needing to ask, “What do we do with this?”
That period witnessed the emergence of entirely new professions and methodologies previously unimagined: interaction design, experience design, user journeys, user experience, interface design—disciplines that simply did not exist before. We can draw an analogy between the advent of the internet then and the emergence of AI today. Yet there is a crucial difference: AI—particularly generative AI—is unlike anything that preceded it. This is AI that does something for you: it generates text or images based on your intention as a designer. AI, in various forms, has existed for seventy years, especially in strategic command and military applications, where many decisions have long been aided by AI technologies. These decisions were often presented to us as collective conclusions reached by people behind closed doors, when in reality they may have been the output of AI systems.
Now, AI has stepped out of the shadows, leaving behind its stealthy existence, and become an everyday consumer technology—catalyzed by tools like ChatGPT but foreshadowed even before it. The difference between AI and prior technologies such as the internet, transistors, mechanization, or electricity is stark: those earlier technologies emerged as responses to explicit problems we had identified. Each represented a clear solution: we asked a question, identified a problem, and developed a solution—whether it was a manual eggbeater upgraded to mechanical or electric versions. We understood the benefit: the output was the same—beaten eggs—but achieved with less physical effort. This is the essence of traditional technological evolution: it saves us labor, yet the outcome remains unchanged.
In contrast, AI arrived not as a response to a defined question but as a question itself: What do you want to become? What do you want to do now that this exists? That is the new frontier—both professionally and personally.
These are two different questions, so let us return for a moment to the earlier discussion about the connection between past, present, and future. I will offer a concrete example: when Wi-Fi was introduced and people began adopting it, no one thought about cables anymore. There was no gradual transition—this was a disruptive shift. Disruptors and disruptions do not occur through deliberate, conscious choice; no one forced you to switch to Wi-Fi, but the benefit was immediate, almost instantaneous. People intuitively understood the solution to a problem they hadn't even fully articulated: cables, wires, clutter. They grasped instantly that what they wanted was simply to be connected. Their fundamental behavior—wanting connectivity—did not change. What changed was how they achieved it: suddenly, they could arrive at the same goal, but wirelessly.
When a true disruptor appears, it does not merely tweak an existing process (wired vs. wireless); it provokes you to become something you might not yet be prepared to become. It forces questions into every aspect of your life: time, place, actions, identity—who you are, where you are, who you might become. What you were no longer matters; the disruptive force redefines everything. This is the essence of a powerful technology: its capacity to transform society itself. And here lies the problem: people expect technologies to offer solutions to clear problems. People—whether individuals, organizations, or governments—do not expect technologies that transform society so profoundly that the very fabric of life is altered.
When such a technology arrives, our first instinct is to apply it to tasks we already perform, just as we did with earlier inventions. For example: writing letters, answering phones, sending emails. We see what it can do; we admire how it improves our existing work, making it faster and more efficient. All seems well—until, one day, a supervisor or leader says, “You need to learn how to use this in entirely new ways—ways that let you do things you couldn’t even imagine before.” That is when the real challenge emerges: understanding, both as individuals and as organizations—or even as entire nations—that everything has changed.
Let us make this clear: there can be no resistance to this shift. Or rather, resistance will exist—rooted in anxiety—but it is futile in the long run, because we are in a period of transition toward a fundamental societal transformation. Everything we thought we knew about life and work—everything tied to our sense of self, our “I,” our ego—is in flux.
Of course, we see countless articles highlighting initial resistance to change. Resistance always wins the first round. But we must move beyond that moment, because, as I said, AI did not arrive simply to solve repetitive tasks (although it does that brilliantly at first, generating impressive images, writing, or calculations). AI’s real impact dawns when you realize it could replace you at your job—yet work itself is not something innate to us; it is something we were taught. We weren’t born working; we were conditioned to believe work is central to life.
Consider childhood: you are born, you play for the first five or six years, and when you refuse food, your parent pretends the spoon is an airplane to coax you. But around age seven, everything changes: suddenly you’re told not to play with your food, because now you must go to school, where education tells you what you must become. So, when we talk about the future of education, we must recognize that the very phrasing of the question—“What is the future of education?”—is flawed. There is no future of education that can exist in isolation from the future of everything else. Education, work, identity—all these elements evolve simultaneously, though not at the same speed across every domain. Society moves unevenly: some parts adopt new technologies faster, others lag, but the entire “ocean floor” of our civilization is rising together, bit by bit.
Therefore, when we discuss the future of education, we must simultaneously address the future of work. AI is replacing humans in many tasks, and since education has historically existed to prepare people for work, these two domains are inseparable. We taught generations to grow up not playing, but working diligently until retirement—only to find themselves at 60 wondering, “What do I do now? How do I play again?”
And even if I remain “educated,” I still might not be able to adapt. Let’s set aside the word education for a moment and instead speak about knowledge, because knowledge is fundamentally different from education. Education is designed to take you, as a human being, and shape you into something you are not yet; it equips you with a profession or capability—teaching you how to practice engineering, how to perform surgery, how to carry out specific tasks in specific places. Education produces the engineer, the construction worker, the ophthalmic surgeon—professions for which we traditionally educate people.
Knowledge, on the other hand, is something else entirely. None of those professional educations teach you what to do if you’re caught in the forest during a rainstorm—how to react, whether to run, why you should or shouldn’t let the rain soak you. Knowledge encompasses situational understanding, adaptability, and the ability to respond meaningfully outside narrow professional confines.
I believe the immediate future—which, in fact, is already upon us—is shifting us toward this very need for knowledge rather than rigid education. I try to avoid the word future, even though my company specializes in future-proofing organizations, because the changes we are preparing for are no longer on the horizon—they are happening now.
I am convinced that our first essential capability in this unfolding reality is unlearning: we must unlearn many of the structures we have long believed to be indispensable—structures of work, structures of authority, structures like the rigid hierarchy between teacher and student, where the teacher is positioned here, and the student down there.
Therefore, the future of education will not look like the past. It will be a future where we know exactly what we need to know, precisely when we need it—learning on demand, in the moment, rather than through rigid, preordained pathways.
The question will not be What job do you have? but rather What contribution did you make today? Just as in nature, you are part of a system: you know many things because you chose to learn them, and you can contribute to your society in those areas each day. Ultimately, however, everything seems to boil down to money. The real fear is not simply losing your job; it is losing income. We're not afraid of AI taking our jobs—we're afraid of what happens to our livelihoods if that occurs. It's the same fear we experienced a few years ago with COVID: suddenly, everything stopped, nobody could leave home, yet governments around the world provided people with money.
This raises a fundamental question: where did that money come from? Let’s return to the purpose of technology itself. Technology exists to free human beings from menial tasks. We invent technologies to sew, to weave—activities we once did by hand but now don’t have to. Over the past 250 years—or more accurately, over the past 50, even 10 years—everything has changed. The arrival of machines, of harvesters, sparked fears of job losses, but these machines performed the work faster and better than we could. As a result, we could plow more land, produce more, sell more, and make more money, which in turn could be distributed more widely.
This is the essence of universal basic income (UBI), an idea few believed in—until COVID made it suddenly real. Let’s think practically about what happens when a machine or robot replaces a human: it can do a hundred times more work, which means products become cheaper due to greater supply, and more of them are sold. Imagine if governments or companies decided to direct the resulting profits—before simply storing them in a bank—into a UBI system, or if governments taxed the robots directly. This isn’t far-fetched: the money flowing into the economy would be far greater than before, while goods would be more plentiful, cheaper, and easier for people to buy, because everyone would have more money.
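To see how this argument plays out numerically, here is a toy arithmetic sketch in Python. Every figure in it (the output multiplier, the per-unit margins, the tax rate) is an invented assumption for illustration, not economic data.

```python
# A toy sketch of the robot-tax-to-UBI arithmetic described above.
# All numbers are invented for illustration only.

workers_replaced = 1_000
units_per_worker = 1_000          # annual output of one human worker
output_multiplier = 100           # "a hundred times more work" per robot
unit_profit_before = 1.00         # profit per unit with human labor
unit_profit_after = 0.40          # cheaper products, thinner per-unit margin

units_before = workers_replaced * units_per_worker
units_after = units_before * output_multiplier

profit_before = units_before * unit_profit_before
profit_after = units_after * unit_profit_after

robot_tax_rate = 0.25             # hypothetical tax on automated profit
ubi_pool = profit_after * robot_tax_rate

print(f"Total profit before automation: ${profit_before:,.0f}")
print(f"Total profit after automation:  ${profit_after:,.0f}")
print(f"UBI pool from a {robot_tax_rate:.0%} robot tax: ${ubi_pool:,.0f}")
```

Even with per-unit margins cut by more than half in this sketch, total profit grows so much that a fraction of it exceeds the entire pre-automation profit pool: that is the intuition behind taxing the robots.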
Unfortunately, in a market economy, very few people believe this can work, because our entire upbringing revolves around going to school, getting a profession, and earning money that way. The very foundation of society is being transformed, and since it changes everything, we must think deeply about the true purpose of life itself. This is the real challenge: most people have no life purpose beyond their profession.
The professions most affected by AI will be those where tasks are highly repetitive. Retail, for instance, was transformed by the internet long before AI entered the picture—and AI has only accelerated the efficiency of this transformation. For example, Amazon turned a single button click into a process where an item is picked from a warehouse, loaded onto a truck, and delivered to your doorstep.
No one stops to consider just how efficient this system is. Consumers don’t realize the scale of efficiency: no more need for physical stores, no more burning coal to power countless locations, and a single truck now serves entire neighborhoods, saving massive amounts of energy. This energy saving at a societal level comes from automation, yet it’s easy to criticize it simply because it threatens traditional jobs—this fear stems from entrenched mental models.
One of the areas I’ve studied extensively for many years is precisely this: how the mind organizes reality, and how frameworks organize the mind. The way you build your internal frameworks determines how you perceive the world; you don’t see the world as it is, but as you believe it to be. And in the world we live in now, our true role as humans is to contemplate what happens to us—this has nothing to do with any profession.
Aristotle articulated this perfectly: The purpose of action is contemplation. This idea drives what is, in fact, the largest industry in the world: museums. There are over 50,000 museums globally. What do you do in a museum? You don’t eat there; you contemplate. You pay money to look at something—not to consume it in a material sense, because the consumption has already happened when the artwork was created, often over many years. Painting, in purely utilitarian terms, seems absurd—it’s unsustainable from a productivity standpoint. Yet we cannot live without paintings, because contemplation itself is a unique form of consumption. When I make coffee, I consume electricity but receive coffee. When I look at a painting, what do I receive? Contemplation.
Agency and Curiosity in a Rapidly Changing World
When we think about the future, the question often becomes: what priorities should we focus on? In foresight practice as a methodology, it is essential to look at tactical agents—the people who will actually have the agency to adopt or implement emerging developments. Everything connects to how trends move toward the mainstream. Once a trend reaches the mainstream, it is often too late to capitalize on it effectively; the right time is the moment it begins gaining momentum—exactly where AI is today, moving toward mainstream adoption.
From there, you must identify the driving forces in society, understanding that each society’s trends diffuse differently. Trends don’t spread uniformly because they emerge from distinct socioeconomic and cultural backgrounds. Legislation, local infrastructure, and social norms all shape this. For example, the internet spread in a unique way in China, a different way in Korea, and yet another way in other countries. Trends never expand at the same speed everywhere.
The best method developed over the last twenty years involves visualizing trends and contextualizing the behaviors and technologies appearing as signals, which are mapped in what’s called a signal map. These maps help track how a disruptor spreads through society and what it changes, creating data sets. With these, you can identify which points on the map are relevant for a specific field—say, policing or education—by asking: which trends apply to my organization, and how are they influenced by local or global driving forces?
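As a concrete illustration, here is a minimal sketch of how such a signal map might be represented and queried in code. The field names, the momentum scale, and the example entries are all assumptions invented for this sketch; in practice these maps are visual research artifacts rather than software.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """One observed behavior or technology, placed on the map."""
    name: str
    trend: str                  # the larger trend this signal points to
    domains: list[str]          # fields it touches, e.g. "education", "policing"
    driving_forces: list[str]   # local or global forces shaping its spread
    momentum: float             # 0.0 = fringe, 1.0 = mainstream

def relevant_signals(signal_map: list[Signal], domain: str) -> list[Signal]:
    """Return the points on the map that matter for a specific field,
    ordered by how much momentum they are gaining."""
    hits = [s for s in signal_map if domain in s.domains]
    return sorted(hits, key=lambda s: s.momentum, reverse=True)

# Illustrative entries: invented examples, not real research data.
signal_map = [
    Signal("AI tutoring assistants", "generative AI", ["education"],
           ["broadband access", "teacher shortages"], 0.6),
    Signal("predictive patrol routing", "algorithmic decision-making",
           ["policing"], ["municipal budgets", "privacy legislation"], 0.4),
]

for s in relevant_signals(signal_map, "education"):
    print(f"{s.name} (momentum {s.momentum:.0%}), "
          f"driven by: {', '.join(s.driving_forces)}")
```

Querying the same map for policing instead of education returns a different slice, which is the point: one shared map, many organization-specific readings.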
In global organizations, this work becomes even more critical because they must account for driving forces in every country where they operate. Building larger maps from these datasets helps organizations decide systematically. A simple example is Unilever: a global company with a house of brands in food, bathroom, kitchen products, and more. When the internet emerged, what advantage did it bring to a company selling shampoo? The answer: it allowed faster and more personalized, one-to-one conversations with consumers. Mass media became micro media, enabling direct engagement. But how long did it take a global organization to learn that it needed a digital strategy?
This was the work I did for eight years—guiding organizations to build strategies and helping their people understand those strategies. Yet employees often weren’t prepared; they went to school for entirely different skills. Meanwhile, the ones intuitively grasping the tools were young people using TikTok or similar platforms—individuals organizations might need to hire, but who don’t fit traditional job descriptions. At the organizational level, this creates profound transformation: if an organization doesn’t adapt quickly enough, it risks being overwhelmed by innovation. Every organization has internal mitigation strategies designed not to foster innovation, but to neutralize it—because innovation disrupts everything.
Dystopia in Organizations
What concerns me deeply is that many future-oriented decisions are made by people who don’t actually use these technologies themselves. They speculate based on fear, imagining dystopian scenarios. It’s always easy to write the story of “a dog bit a man,” but no one writes, “today, no dogs bit anyone.”
This brings us to the crucial distinction among data, information, knowledge, and meaning. Data gives you raw facts; information points you in a direction; knowledge helps you navigate; and meaning tells you why it matters. For instance, a man holding a sharp object in a building is one thing, but a man holding a sharp object in a bank has a very different meaning.
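As a rough sketch of that ladder, the snippet below encodes the sharp-object example; the rules and labels are invented purely to show how identical data acquires different meaning in different contexts.

```python
# A minimal sketch of the data -> information -> knowledge -> meaning ladder,
# using the sharp-object example above. The rules are invented illustrations.

def interpret(obj: str, location: str) -> dict:
    data = {"object": obj, "location": location}                   # raw facts
    information = f"A person is holding a {obj} in a {location}."  # a direction
    # Knowledge: how to navigate, via context-dependent rules of thumb.
    risky_places = {"bank", "airport"}
    knowledge = "monitor closely" if location in risky_places else "no action"
    # Meaning: why it matters, here and now.
    meaning = ("possible threat" if location in risky_places
               else "ordinary situation")
    return {"data": data, "information": information,
            "knowledge": knowledge, "meaning": meaning}

print(interpret("sharp object", "building")["meaning"])  # ordinary situation
print(interpret("sharp object", "bank")["meaning"])      # possible threat
```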
This tension between speculation and lived experience underscores a vital need: to reconnect decision-making with direct engagement. It is not enough to interpret the world through abstractions; one must encounter it intimately. The shift from data to meaning is not linear—it is contextual, human, and temporal. This transformation becomes particularly evident when technologies begin to fold into the body itself.
Let us now turn to health. With the rise of wearable devices such as the Fitbit over the past decade and a half, a new kind of embodiment has emerged—one mediated through continuous streams of personal biometric data. Suddenly, the human being appears not as an anonymous subject within a waiting room, but as a quantified self: a narrative composed of rhythms, spikes, plateaus. Here, we witness the materialization of meaning—where data begins to speak not only of function but of intention and possibility. A doctor no longer hears merely a heartbeat through a stethoscope; they interpret a pattern, a story, perhaps a warning. And yet, even as patients become data-rich, the healthcare system often remains epistemologically poor, unable to metabolize this new influx within its outdated diagnostic frameworks. The structure of meaning, once again, lags behind the flow of information.
This kind of transformation isn't isolated to one sector; it's happening across every domain, driven by technological convergence. Textiles will have embedded sensors because the technology already exists, though it may currently be expensive. Over time, these costs will fall, making everyday objects capable of monitoring our condition and offering personalized advice. As our environment becomes increasingly intelligent, we will see advice systems evolving into catalysts that educate us about our physical and emotional states, providing actionable insights—like a fridge advising on nutrition or a shirt warning of health risks.
All of this leads us to a crucial question: what qualities must remain uniquely human? I believe curiosity must endure—AI cannot be curious. It never asks questions without a clear answer. Humans, however, must continue posing questions that don’t have immediate or obvious solutions. That’s why I challenge AI with questions beyond its training, forcing it into new territories of thought.
I recall an early project at Motorola in 2003-2004, where we developed foresight methods that asked questions sounding strange at the time but were vital for engineers designing the future. Questions like: What will happen when everyday objects, from toothbrushes to toilets, become connected? Even trivial-seeming scenarios—like a toothbrush detecting oral disease—are fundamental because they address problems early, before they escalate.
These once-speculative inquiries served not merely as exercises in imagination but as catalysts for reconfiguring our understanding of technology’s trajectory. By daring to interrogate the seemingly mundane, we laid the groundwork for anticipating patterns of convergence and disruption that would soon redefine entire industries. It is through this lens of purposeful foresight that we must now examine the devices we hold most dear, questioning not their incremental improvements, but the very assumptions of their permanence.
Considering the evolution of devices like smartphones, I believe their lifespan is limited. No transitional technology—from fax machines to CD players—outlives its purpose. What will replace smartphones? Perhaps wearables or immersive displays. We already have ambient computing: environments that understand us and respond to our needs. The display and the interface will dissolve into our surroundings.
As we contemplate the dissolution of physical interfaces into ambient systems, we must also recognize that technology’s evolution extends beyond hardware alone; it reshapes the very fabric of our identity. For if our environments become attuned to our presence and intention, the boundaries between who we are and how we appear will blur even further, compelling us to navigate an ever more fluid landscape of self-presentation.
This speaks to Erving Goffman’s insight in The Presentation of Self in Everyday Life: humans curate different personas for different contexts. The internet supercharged this, enabling us to present ourselves as we wish to different audiences—tourist, teacher, friend—adjusting our self-presentation dynamically.
Ubuntu, a Zulu expression often rendered as I am because we are (I exist because you see me), perfectly captures this. We use social media and technology to construct a digital aura of how we want others to see us. As work becomes less central, playful and creative engagement with this self-construction will occupy a larger part of our lives.
Ultimately, the question becomes: would you rather decisions be made by a person with experience or by a system with comprehensive data? Consider the experience of landing in a dense fog: when the pilot announces the plane will land on autopilot, passengers feel reassured—because autopilot doesn’t panic; it doesn’t suffer stress. It simply acts based on objective data. AI has nothing to lose—only accuracy to gain—analyzing situations purely objectively and, ideally, aligning with what philosophy calls the form of the good. While today’s algorithms are built on our datasets, they’re evolving beyond our initial combinations, opening a path to a radically transformed society.
I constantly find myself reflecting—and we all do—on how deeply attached we are to the tools we identify with and feel we need constantly. These tools represent and amplify us; we extend ourselves through them, willingly and without compulsion. In this process we rediscover ourselves in a new world, and even in our recent history, as when a selfie shows us how good we looked just thirty seconds ago. Consider how photography has changed in recent years because of digital technology: what was once reserved for albums, with photos slipped under plastic sheets, has become instantaneous and omnipresent.
Authoring the Future
When we think about the future, we must also think about new authors of its narrative. The question is: how could we change the story we tell? To do that, we must change the authors themselves. The core problem is that we plan the future from the perspective of the present, projecting forward based on today’s view—which gives us a destination, but extrapolates only the present.
From this outdated perspective, we try to craft a new story, but if the author carries an old perspective, the story cannot truly be new. Changing the narrative requires changing the authors—choosing those who are optimistic and who hold poetic visions, because the future needs more optimists who can convey hope in ways that inspire. When you do that, you shift the perspective entirely.
Where you begin is crucial: if you start here, in the present, you accomplish nothing. We must start from a future that has not yet happened. We need a vision—one that begins there, then reaches beyond it. For that, we need intuition and inspiration. That’s the difference between Braque and Picasso: both looked at the same bottle of wine and piece of bread, yet each painted a different reality. Why? Because each had a different source of inspiration. Their perception was the same—a bottle, a glass, a baguette—but they created divergent interpretations.
As humans, we possess inspiration and intuition, and we strive to transform reality into something that does not yet exist—something that represents us. This is why I find immense value in these two paintings before me: they depart from perceptual reality, creating entirely new worlds.
Ultimately, the idea is this: because the future is unwritten, it is our greatest source of imagination. What we need now is more optimism—optimism that helps us envision futures we would want to inhabit or contribute to creating. We might not arrive exactly where we imagine, but we can get closer.
Interestingly, many people believe I am overly optimistic simply because I do not speak in dystopian terms. I end my books with hope, which some call naïve. But on what basis do they label it naïveté? Do they know the future with certainty, have they lived it? To dismiss optimism as naïve is to offer an opinion formed from nothing more than unfounded assumptions.
In the end, we must recognize that our greatest challenge is not technological, but philosophical: can we embrace the future not as a linear extension of what we know, but as an invitation to imagine what could be? The disruptions we witness today remind us that continuity is a comforting illusion. It is in discontinuity—in the moments when old patterns dissolve—that new possibilities emerge.
Our task, therefore, is to cultivate curiosity, intuition, and a fearless optimism that defies the cynicism so easily mistaken for wisdom. We must become authors of futures that inspire, rather than prisoners of inherited narratives. We must learn to see beyond the frameworks that once served us but now constrain us. For it is only when we dare to begin not from the present, but from the visions we wish to inhabit, that we unlock our true capacity to shape the world. The future demands from us not certainty, but courage; not prediction, but participation. And it rewards those who approach it not with resignation, but with the audacity to imagine—and the resolve to create—the futures we long to see.
©2025 Alexander Manu