Intelligence, Displacement, and the Question of Agency

I was fascinated by technology long before I understood what the word meant. I must have been six or seven years old. My parents' library had a shelf that felt different from the others-thicker books, stranger titles, covers that hinted at worlds just beyond reach. That's where I found Jules Verne. Twenty Thousand Leagues Under the Sea. Journey to the Center of the Earth. From the Earth to the Moon.

Verne wrote most of those books in the 1860s and 1870s, at a time when such submarines, space travel, and advanced machines were not just unavailable but almost unthinkable. And yet, there they were: fully imagined, described in technical detail, embedded in stories that assumed humans would build them sooner or later. What struck me, even then, was not the accuracy of the predictions, but the confidence behind them. The future, in those books, was not a threat. It was a continuation.

That early sense never left me. Long before I became interested in forecasting or long-term technological change, I internalized a simple idea: technology does not arrive out of nowhere. It grows out of human needs, human imagination, and human frustration with limits. And just as importantly, it always forces us to reorganize ourselves.

Which is why I've never been particularly alarmed by robots, automation, or artificial intelligence. Not because the changes aren't real-they are-but because none of this is new in the way we like to think it is.

If you step back far enough, machines have been "taking jobs" for as long as we've been inventing them. The agricultural revolution dramatically reduced the number of people needed to produce food. Where once nearly everyone worked the land, mechanization allowed a small percentage of the population to feed entire societies. That wasn't just an economic shift; it was a civilizational one. It freed human time and attention for cities, science, art, administration, and eventually industry.

The industrial revolution did the same thing at a different scale. Textile machines replaced hand weavers. Steam power replaced muscle. Assembly lines replaced craft production. Each transition came with fear, resistance, and very real suffering. But over time, societies adjusted. Not because they were forced to accept technology, but because they reorganized around it.

Even in our private lives, automation has been quietly reshaping human labor for generations. Washing machines automated one of the most time-consuming and physically exhausting household tasks in history. Dishwashers did the same. Vacuum cleaners, elevators, refrigeration, indoor plumbing-none of these were framed as "robots," but all of them replaced human effort with mechanical systems. They didn't eliminate work. They changed what work was worth doing.

Later came office automation. Calculators replaced rooms full of people once called "computers." Spreadsheets collapsed entire accounting departments into software. GPS eliminated the need to memorize geography. Email restructured communication. None of this sparked existential panic at the time, but in retrospect, each shift rewired how societies functioned.

This is the first thing that often gets lost in today's conversations about disappearing jobs: job disappearance is not a bug of technological progress. It's the mechanism. Tasks that can be standardized, optimized, and repeated eventually are. What replaces them is not always obvious in advance, and it is never evenly distributed, but the pattern itself is deeply familiar.

That doesn't mean the current moment is trivial. It isn't. What distinguishes this phase of technological change is not that machines are replacing human labor, but that they are touching parts of cognition we once assumed were exclusively ours. Previous waves automated muscles. This one automates fragments of thinking: pattern recognition, prediction, optimization, classification. That feels closer to the core of who we are, and so the reaction is stronger.

But even here, perspective matters. Tools that extend cognition are not unprecedented. Writing externalized memory. Printing democratized knowledge. Mathematics abstracted reasoning. Software accelerated calculation. Each time, humans worried that something essential would be lost. Each time, something different emerged instead.

The real question, then, is not whether jobs will disappear. They will. Some already are. Others will be transformed beyond recognition. The more important question is how organizations and societies interpret that change-whether they cling to familiar categories or rethink what they are actually building.

Which brings me to Tesla.

At some point over the last few years, I stopped thinking of Tesla as a car company. Not because it no longer makes cars, but because cars ceased to be the most interesting thing about it. What caught my attention wasn't quarterly sales figures or new models, but where the company was placing its long-term bets.

When Tesla decided to wind down legacy luxury models like the Model S and Model X, many observers read it as a retreat or a financial adjustment. I read it differently. Luxury, by definition, doesn't scale well. It emphasizes finish, customization, and emotional signaling. Automation emphasizes repetition, data, and learning curves. Those two logics don't sit comfortably together.

At the same time, Tesla was pouring resources into software, autonomous systems, and a private artificial intelligence venture designed to centralize learning and decision-making. Robotaxis were framed not as cars, but as fleets. Optimus, the humanoid robot, appeared not as a novelty but as a continuation of the same trajectory: intelligence first, physical form second.

The clearest evidence of this shift lies in how Tesla structures its engineering priorities. Traditional automakers optimize for manufacturing efficiency within a fixed design. Tesla optimizes for data collection within a learning system. Every car on the road feeds information back to a central neural network. Edge cases-unexpected pedestrian behavior, unusual weather conditions, road configurations the system hasn't seen before-become training material rather than isolated incidents.

This matters because autonomous systems improve through exposure, not through specification. You cannot write rules comprehensive enough to cover every scenario a vehicle might encounter. But you can build systems that learn from millions of encounters across thousands of vehicles simultaneously. Tesla's advantage, if it has one, is not better hardware or more sophisticated algorithms. It's more miles driven under more varied conditions, all feeding into the same learning infrastructure.
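For readers who prefer to see the idea in code, here is a minimal, hypothetical sketch of that feedback loop: a shared learner that keeps only the encounters the current model handled with low confidence and treats them as the next round of training data. The class names, fields, and threshold are invented for illustration; this is a sketch of the principle, not Tesla's actual pipeline.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Encounter:
    description: str         # e.g. "pedestrian crossing mid-block at night"
    model_confidence: float  # how confident the current model was, 0.0 to 1.0

@dataclass
class FleetLearner:
    threshold: float = 0.8
    training_set: List[Encounter] = field(default_factory=list)

    def ingest(self, e: Encounter) -> None:
        # Low-confidence encounters (edge cases) become training material.
        if e.model_confidence < self.threshold:
            self.training_set.append(e)

    def retrain(self) -> int:
        # Placeholder: a real system would update model weights here.
        harvested = len(self.training_set)
        self.training_set.clear()
        return harvested

# Many vehicles feed one learner, so improvement scales with fleet-wide
# exposure rather than with per-car rule-writing.
learner = FleetLearner()
learner.ingest(Encounter("unmarked construction zone", 0.42))
learner.ingest(Encounter("routine highway lane keeping", 0.97))
print(learner.retrain())  # -> 1 edge case collected for the next training run

The point of the toy is the asymmetry it makes visible: the rules never get longer, but the training set grows with every unusual mile driven.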

The Optimus robot extends this logic into a new domain. Humanoid form is not accidental. It allows the system to operate in environments designed for human bodies-stairs, doorways, standard tools-without requiring those environments to change. But the deeper continuity is architectural: the same approach to learning from distributed experience, the same emphasis on generalizable intelligence rather than task-specific programming, and the same assumption that capability emerges from accumulated data rather than predetermined rules.

This is why debates about Tesla's vehicle delivery numbers or manufacturing bottlenecks, while financially significant, miss the structural story. The company is not competing primarily on production volume or cost efficiency, though those matter. It is competing on the rate at which its systems improve-and on whether that rate is fast enough to stay ahead of rivals who are now investing heavily in similar approaches.

What makes this trajectory significant beyond Tesla itself is what it reveals about how companies reorganize around artificial intelligence. The shift is not from one product category to another, but from product-centric to system-centric thinking. Instead of asking "what can we build?", organizations increasingly ask "what can we teach a system to do, and how quickly can it learn to do it better?"

Seen through this lens, Tesla's strategy becomes easier to interpret. The company isn't pivoting away from cars toward something else. It's revealing that cars were never the point. They were simply the first large-scale deployment of a system designed to sense, decide, and act in the physical world.

A self-driving car is a robot with wheels. A humanoid robot is the same system expressed differently. The unifying element is not transportation or hardware, but the accumulation of experience-data, edge cases, failure modes-and the ability to learn from them faster than competitors.

This is why Tesla feels less like a manufacturer and more like an automation platform. And it's also why debates about whether it is "really" a car company tend to miss the larger signal. What we're witnessing is not a corporate identity crisis, but a preview of how organizations change shape when intelligence becomes a central input.

That shift has implications far beyond one company. If firms increasingly define themselves by the systems they build rather than the products they sell, then workers and societies face a similar challenge. Roles tied to specific outputs become fragile. Capabilities tied to interpretation, coordination, and judgment become more valuable.

This is where much of the anxiety around artificial intelligence is misdirected. The fear is framed in terms of replacement-humans versus machines-when the more accurate frame is reallocation. Machines absorb tasks that can be formalized. Humans are pushed, sometimes uncomfortably, toward roles that cannot.

Those roles tend to involve context: understanding situations that don't repeat cleanly, mediating between competing goals, making decisions under uncertainty, and dealing with other humans. They also involve responsibility. When systems act at scale, someone must decide what they should optimize for, what trade-offs are acceptable, and what errors are intolerable.

But there is a darker side to this reallocation that deserves attention. The roles that remain human are not automatically good roles. They may pay less, offer less security, and carry more emotional labor than the work they replace. The shift from factory floor to service counter, from secure employment to gig economy, from middle-class stability to precarious flexibility-these are not abstract reallocations. They are lived experiences, often marked by loss of dignity, community, and predictability.

The question is not just what humans will do, but under what conditions they will do it, and who captures the value generated by the systems that replace them. History offers no reassurance that technological progress distributes its benefits fairly or that displaced workers smoothly transition into better opportunities. More often, gains concentrate while disruption spreads, and the time lag between displacement and adaptation can span generations, not quarters.

This is where the confident assertion that "job disappearance is the mechanism" of progress requires a companion truth: mechanism and outcome are not the same thing. That jobs disappear is predictable. That societies manage the transition well is not. The difference lies in choices-about education, about safety nets, about how gains are shared and losses are buffered.

At the societal level, this means adaptation matters more than optimization. A society built around stable career paths and long institutional memory experiences disruption differently from one accustomed to reinvention. Neither is inherently superior, but they respond to technological acceleration in distinct ways.

This is why I find Bulgaria such an interesting lens through which to think about the coming decades-not as a success story, not as a prediction, but as a context.

Over the past several decades, Bulgaria has lived through repeated structural shifts: political, economic, and technological. Entire industries disappeared. New ones emerged. Career paths were interrupted, rewritten, or abandoned altogether. Stability was not assumed; it was negotiated. Skills were not inherited; they were reacquired.

The collapse of the socialist economy in the early 1990s forced an entire population to rethink assumptions about work, value, and security that had held for generations. State enterprises that once employed thousands shut down or were privatized within a few years. Engineers became taxi drivers. Teachers opened small shops. Academics learned new languages and left the country. The social contract that promised stable employment in exchange for modest prosperity simply ended.

What followed was not orderly transition but improvisation. People retrained, often multiple times. They learned to navigate markets that did not exist a decade earlier. They built small businesses without institutional support. They emigrated, sent remittances home, and returned when opportunities shifted. This was not a planned adaptation. It was survival that eventually became a form of competence.

A generation later, Bulgaria experienced a different kind of disruption: technological. Internet penetration grew rapidly. Software outsourcing became a major industry. Digital tools reshaped commerce, communication, and access to information, all within a population that had already learned, through necessity, not to depend on continuity.

The result is a society where reinvention is not exceptional. Young professionals expect to change careers, often radically. Education is understood as continuous rather than front-loaded. Older workers, having already navigated multiple disruptions, approach new technologies with wariness but not paralysis. The assumption is not that institutions will protect you, but that you will need to adapt, and that adaptation is possible because it has been necessary before.

This creates a particular form of resilience-not the resilience of strength, but the resilience of flexibility. Bulgarians are not more skilled or better educated on average than their Western European counterparts. But they are, in many cases, more comfortable with uncertainty. They have seen systems collapse and rebuild. They have watched entire sectors vanish and new ones emerge. They know that expertise can become obsolete and that security is provisional.

This experience shapes how artificial intelligence is likely to be received. When automation threatens jobs, it does not violate an implicit social contract, because that contract was already renegotiated by force. When career paths become less predictable, it is an intensification of a familiar pattern, not a betrayal of expectations. When institutions prove inadequate to manage change, it confirms what many already suspected rather than shattering a trust that never fully existed.

None of this makes the coming disruptions easy or painless. Job loss hurts regardless of whether it is expected. Economic insecurity creates stress and limits life possibilities. The psychological toll of constant adaptation is real and should not be romanticized. But there is a difference between a society encountering disruption for the first time and one encountering it again. The second time, people know they can survive it, even if they do not know how.

There are real costs to constant adjustment, and they should not be minimized. But there is also a form of resilience embedded in it-a readiness to reorganize when external conditions shift. In an era where technologies evolve faster than institutions, that readiness matters.

What other societies might learn from this is not a specific policy or program, but a mindset. Institutions that treat stability as the norm and disruption as the exception will struggle when disruption becomes frequent. Education systems built around single-career preparation will fail students who face multiple transitions. Safety nets designed for temporary unemployment will prove inadequate when entire sectors contract permanently.

By contrast, frameworks that assume change-that build retraining into standard practice, that separate healthcare and benefits from specific employers, that treat career transitions as normal rather than catastrophic-may prove more durable. These are not uniquely Bulgarian solutions. But they are solutions that emerge more naturally in contexts where continuity cannot be taken for granted.

The broader lesson is not that some societies will "win" the age of artificial intelligence while others lose it. History rarely unfolds that neatly. The more realistic outcome is uneven adaptation: periods of strain followed by new equilibria, shaped by culture, policy, and collective choices.

Technology does not dictate those choices. It constrains them, accelerates them, and sometimes exposes their weaknesses, but it does not remove human agency. The same tools that automate tasks can expand access, reduce friction, and create new forms of coordination-if societies decide to use them that way.

The critical variable is not the technology itself but the social infrastructure around it. Who owns the systems? Who decides what they optimize for? How are gains distributed? What happens to those who cannot adapt quickly enough? These are political questions, not technical ones, and they will be answered through conflict, negotiation, and institutional experimentation, not through algorithms.

This is where optimism and pessimism both miss the mark. The future will not be utopian, because technology does not erase human selfishness, short-sightedness, or inequality. But it will not be dystopian either, because humans are more adaptable, more inventive, and more resilient than our fears suggest. The outcome will be messy, uneven, and contested-much like every previous technological transition.

When I think back to those Jules Verne books, what stands out is not the machines themselves, but the assumption that humans would learn to live alongside them. The stories were not about surrendering to technology, nor about mastering it completely. They were about coexistence-sometimes awkward, sometimes transformative, always unfinished.

That, to me, remains the most realistic frame for thinking about automation and intelligence today. This is not the end of work, nor the end of human relevance. It is another phase in a very old process: the continual renegotiation of what humans do, what machines do, and how societies distribute the benefits and burdens between them.

The future, as always, will not be decided by algorithms alone. It will be shaped by how we adapt to them-collectively, unevenly, imperfectly, and with more continuity than we often expect.
