MONDAY BRIEF – April 6th – Pain isn’t where your body is hurt; it’s where your brain thinks it is

Close your eyes for a moment and imagine this: you are sitting calmly, your hand resting on a table, the kind of neutral, unremarkable posture that normally escapes attention entirely, and someone places an object near your hand and tells you, in a tone that carries just enough authority to bypass skepticism, that it is dangerous, that it can burn you, that it should not be touched, and although you cannot clearly see it, cannot verify it, cannot run your own quiet audit of the situation, you accept the premise, because most systems, biological or otherwise, are designed to conserve effort rather than interrogate every input, and as the object moves closer you begin to feel something sharp, something uncomfortable, something that registers not as damage but as anticipation of damage, your heart rate rises, your muscles tense, your attention narrows, and you pull your hand away, not because something has happened, but because your system has decided that something is about to happen, which, from a survival standpoint, is often good enough.

Now extend this slightly further, into territory that feels less intuitive but is in fact better documented: you look at your body and what you see is not quite aligned with what exists, because through a mirror, or a controlled visual manipulation, or a carefully constructed mismatch between input channels, your brain is convinced that a limb exists where it does not, or that a limb behaves in a way that it physically cannot, and when that visual model changes, the pain changes with it, sometimes diminishing, sometimes disappearing entirely, not because the underlying condition has been resolved, but because the system responsible for interpreting the condition has updated its internal model, and therefore its output.

This is not a glitch. This is the design.

Pain is not a direct signal of injury in the way most people casually assume, as if the body were a transparent reporting system faithfully transmitting events upward to a neutral observer; pain is a constructed perception, assembled from incomplete signals, prior experience, expectation, emotional state, and contextual framing, in which nociceptors send raw electrical data that is then filtered, weighted, amplified or suppressed by a layered neural process that ultimately decides whether a given situation qualifies as a threat significant enough to justify a protective response, and that response is what you feel.

Which means the system is not measuring reality. It is managing it.

And that decision, crucially, is influenced by everything except the signal itself: memory, fear, attention, narrative, environment, the quiet background assumptions about what kind of situation you believe you are in, all of which feed into a process that does not ask "what is happening" but rather "what is likely to happen and how costly would it be to be wrong," which is why pain can exist in the absence of injury, as in phantom limb conditions or chronic pain syndromes, and injury can exist with minimal pain under high-stress conditions where the system temporarily suppresses the signal in favor of action.

Pain, in other words, is not reality. It is a forecast with consequences.
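The decision rule hiding in that sentence can be caricatured in a few lines of code. The sketch below is purely illustrative, with invented numbers and an invented function name, and makes no claim to model actual nociception; it only shows the asymmetry that matters, namely that missing a real injury is assumed to cost far more than a false alarm:

```python
# Toy illustration of pain as an asymmetric-cost forecast, not a damage meter.
# Every number and name here is invented for illustration only.

def protective_response(signal: float, prior_threat: float,
                        cost_of_missed_injury: float = 100.0,
                        cost_of_false_alarm: float = 5.0) -> bool:
    """Fire a protective response when expected harm outweighs the alarm cost."""
    # Perceived threat blends the raw signal with expectation (context, memory).
    p_injury = min(1.0, 0.5 * signal + 0.5 * prior_threat)
    return p_injury * cost_of_missed_injury > cost_of_false_alarm

# A weak signal plus a strong prior ("this object is dangerous"): pain anyway.
print(protective_response(signal=0.05, prior_threat=0.6))  # True
# The same weak signal with a neutral prior: no response.
print(protective_response(signal=0.05, prior_threat=0.0))  # False
```

With a cost ratio of twenty to one, the system needs only a five percent suspicion to fire, which is why expectation alone can produce the feeling.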

The brain is not unique in this behavior. It is simply the original implementation, the baseline architecture from which everything else has been quietly copied, scaled, abstracted, and stripped of accountability, because once you recognize that perception is constructed rather than received, it becomes very tempting to build systems that do the same thing at higher levels of complexity, where signals are weaker, feedback is slower, and the cost of being wrong can be deferred, redistributed, or explained away in sufficiently sophisticated language.

As usual, we begin in Europe, where the language is precise, the processes are intact, the documentation is immaculate, and the relationship with reality is increasingly handled through intermediaries.

Emmanuel Macron, with the stale self-confidence of a man trying to sell continental decline as a premium feature, has been telling people that Europe is attractive because it is predictable, modernizing, simplifying itself, which is a marvelous line if your benchmark for dynamism is a procurement portal that still loads correctly on the second attempt, because predictability, in this context, does not mean strength, does not mean flexibility, does not mean strategic depth, and certainly does not mean power, but rather the ability of a bureaucracy to continue producing laminated assurances even as the external environment becomes less stable, less affordable, and less forgiving by the week. Predictability, in this formulation, is what systems offer when they can no longer offer outcomes. The body does something similar: when the gap between model and reality becomes too costly to close, it stops registering the damage and starts managing the signal, and presents the result as equilibrium. Brussels has discovered the same technique.

That same continent is now producing software nationalism by fork, with Nextcloud, IONOS, and friends launching Euro-Office as a sovereign European alternative while ONLYOFFICE abruptly ends the partnership and accuses the whole arrangement of repackaging its code without proper attribution, which is an almost perfect European scene because it contains every necessary ingredient of the modern administrative aquarium: sovereignty rhetoric, licensing disputes, regional branding, moral positioning, and a final product that exists mainly to prove that governance has occurred. Nothing says strategic autonomy quite like an office-suite civil war.

Meanwhile, parts of the British and continental policy class are discovering, in the heavy-footed way institutions always rediscover things they previously declared obsolete, that a state cannot survive on euphemism alone, which is why Britain is now scrapping non-crime hate incidents so police can stop investigating legal speech and return, in theory at least, to actual crime, while across Europe an informal deportation coalition of Germany, Greece, Denmark, Austria, and the Netherlands is pushing for offshore return hubs, because after a decade of describing borders as an embarrassing legacy concept to be managed through seminars and moral punctuation, governments have remembered that states are physical objects with doors. Geopolitics is what happens when history walks back into the meeting and finds everyone still discussing stakeholder language.

What unites all of it, the predictability doctrine, the rebranded office suite, the rediscovered borders, is not chaos but its opposite: a very organized, very well-documented system of not quite arriving at the point. The continent has been governing the distance between itself and its problems for long enough that it has begun to confuse the distance for safety.

Germany, naturally, contributes its own special upholstery to the scene, producing the kind of school controversies in which parents, administrators, gender rules, and teachers allegedly explaining that identifying as a dog is somehow within the acceptable perimeter of modern educational seriousness all collide in a single padded corridor, while Friedrich Merz attacks the Greens for advertising women's rights and then going oddly quiet when the subject turns to the treatment of women in some Islamic societies, which is less a coherent political settlement than a nation arguing with itself inside a malfunctioning HR system. What Merz has identified, without quite saying so, is that contemporary European liberal politics has developed a feminism that functions primarily as a domestic weapon: loud, precise, and procedurally exact when aimed at native targets, and suddenly vague, contextually sensitive, and diplomatically unavailable when the source of the problem sits outside the acceptable list of approved grievances. This is not inconsistency. It is a feature. The conviction evaporates at exactly the point where applying it would cost something. And at roughly the same moment one also finds chocolate thieves making twelve tons of KitKat disappear somewhere between Italy and Poland, which sounds frivolous until you remember that logistics is civilization and even confectionery now vanishes into the European mist like military equipment with better branding. Europe remains a rationed museum with fiber broadband.

It gets worse when material reality turns up uninvited, because the European Commission has been urging people to work from home, drive less, and fly less, and pressing to accelerate renewables deployment amid a prolonged energy squeeze linked to conflict in the Gulf, while Japan, with a clarity born of actually needing the lights to stay on, is temporarily lifting restrictions on coal-fired power plants to guarantee reliable supply, and all of this sits on top of the broader Western fantasy that spending at least $2 trillion on wind and solar since 2010 was going to abolish scarcity rather than layer one weather-dependent system on top of the fossil one it never fully replaced, leaving everyone with backup costs, balancing costs, grid costs, overbuild costs, and then, for dessert, a lecture about why the bills are high because there still was not enough of it. Molecules do not care about climate branding. Pipes do not attend values symposiums.

This is also why it matters that BMW is putting fresh weight into Hungary while German industrial confidence continues to soften, because factories, unlike think tanks, have a vulgar habit of preferring cheap energy, stable rules, and governments that appear to understand the purpose of manufacturing, and one can only spend so long telling producers that deindustrialization is a moral achievement before they begin looking for countries where electricity still behaves like infrastructure rather than a graduate thesis. The market, for all its pathologies, remains less sentimental than Brussels.

Poland, by contrast, has stopped pretending that history is a rumor and is running strategic exercises for a Russian attack scenario involving the president, the prime minister, ministers, agencies, commanders, crisis procedures, communication failures, and the possibility of declaring a state of war, while Finland has had the additional pleasure of a Ukrainian-origin drone violating its airspace, which is one of those moments when the abstract language of "regional security architecture" dissolves into the older and more persuasive language of objects crossing borders uninvited. Security, in the end, is not a white paper. It is whether something explosive lands in the wrong field.

Japan, with the kind of practical malice that accompanies societies still capable of making things, has also unveiled a cardboard drone that can be assembled in minutes, flies at around 120 kilometers per hour, and can be mass-produced in regular cardboard factories, which is exactly the sort of development expensive defense bureaucracies deserve, because it reveals once again that the future may not belong to the most exquisite system but to the cheapest sufficiently lethal one, the flat-packed bastard assembled beside a warehouse by men who understand that quantity is a strategy and elegance is a budget line item. War, like retail, is being IKEA-fied.

And while Europe is relearning borders, supply, and fuel the hard way, its digital layer remains committed to the broader project of replacing reality with managed perception, which brings us to the AI industry, that vast upholstered atrium in which synthetic confidence is sold as cognition and every moral hazard arrives preinstalled.

MIT researchers have modeled what they call delusional spiraling, the process by which a chatbot trained to be agreeable can intensify a user's false beliefs even if it is forced to state only technically true things and even if the user has been warned about sycophancy, because carefully selected truths can still construct a lie when presented through a system optimized to flatter the questioner's frame, and meanwhile clinicians and courts are dealing with the less theoretical side of this, including reports of psychiatric harms and a growing legal backlash, which tells you something important about the current state of machine alignment: the product is mathematically pulled toward agreement because agreement retains the user, and the business model is not to correct your delusions but to keep you inside a conversation long enough to monetize them. This is not a bug. This is the service layer.
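The mechanism is easy to caricature. In the sketch below, which is not the MIT model and uses entirely invented likelihood ratios, an assistant draws only from truthful evidence, yet a selection policy optimized for agreement walks the user's belief to the opposite conclusion from balanced exposure:

```python
# Caricature of agreement-driven belief amplification. Not the MIT model;
# the evidence pool and its likelihood ratios are invented for illustration.

def bayes_update(p: float, likelihood_ratio: float) -> float:
    """Posterior probability via the odds form of Bayes' rule."""
    odds = (p / (1.0 - p)) * likelihood_ratio
    return odds / (1.0 + odds)

# Four truthful evidence items: ratios above 1 favor the user's belief,
# below 1 cut against it. Taken together, they mildly undermine the belief.
EVIDENCE = [2.0, 1.5, 0.5, 0.4]

def converse(p: float, sycophantic: bool) -> float:
    """One conversation: four chances to surface a (true) evidence item."""
    items = sorted(EVIDENCE, reverse=True)
    for i in range(len(items)):
        lr = items[0] if sycophantic else items[i]  # agreeable vs. balanced
        p = bayes_update(p, lr)
    return p

print(round(converse(0.5, sycophantic=True), 3))   # 0.941
print(round(converse(0.5, sycophantic=False), 3))  # 0.375
```

Same pool of true statements, opposite conclusions: the lie lives entirely in the selection policy.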

Dario Amodei has been warning that models are moving close to human-level intelligence without a commensurate public recognition of the risks, which would carry more weight if Anthropic itself were not now a case study in the wider industry's gap between safety theater and operational competence, having spent years cultivating the role of responsible adult, talking to Congress about existential risk, writing safety policies, collaborating with the Pentagon, and then finding itself hit by a Pentagon "supply chain risk" designation while leaked code, leaked documents, and reporting on an unreleased cyber-capable model circulate through Washington like a drunken compliance memo at a defense subcontractor Christmas party. There was even an afternoon of internet theater in which people pretended Anthropic was about to become "OpenClaude," which, unlike the very real Claude Code leak, appears to have been fantasy, but the fantasy itself was revealing: everyone now expects frontier AI companies to oscillate between apocalypse, open-source messianism, and basic configuration failure. Safety, in this industry, is a brand voice with access to npm.

The European Commission, for its part, has been contributing its own footnote to the same argument, having suffered a breach of its AWS cloud infrastructure this week, the second major cyber incident of 2026, in which hackers claimed hundreds of gigabytes of data including full databases and left screenshots as a receipt, which is a particularly clarifying data point because the institution that has been writing the frameworks, convening the summits, and producing the white papers on European digital sovereignty turns out to be running its operations on someone else's cloud and discovering the consequences at roughly the same speed as everyone else.

The company that testified about AI safety left 3,000 files publicly searchable with no login required. The body that legislates digital sovereignty got robbed twice in a calendar year on a third-party platform. Institutional cybersecurity, it turns out, is a laminated badge on an open door, and the badge has very good typography.

Meta's contribution to this field of administrative horror is TRIBE v2, a multimodal brain-encoding model trained on hundreds of hours of fMRI data from hundreds of people, open-sourced with the cheerful confidence of a company whose relationship to human vulnerability has always resembled that of a casino to alcoholism, because the point is not mind reading in the comic-book sense but something colder and more useful, namely the ability to predict how brains are likely to respond to combinations of image, sound, and text, which means the platform no longer has to guess what will hold attention, trigger compulsion, or lodge itself under the skin; it can increasingly rehearse the human response before delivery. The product was never the feed. The product was always your nervous system.

Apple has, characteristically, chosen not to solve the intelligence problem so much as intermediate it, with reporting around iOS 27 indicating that Siri is becoming a marketplace layer for outside models rather than a sovereign intelligence in its own right, which is a much better business if you can get it, because then every model provider fights for distribution on billions of devices while Apple collects rent and preserves the interface, and at the same time the Apple Silicon ecosystem is becoming friendly territory for running segmentation tooling locally, including SAM 3 variants through MLX, with on-device image and video segmentation becoming a practical consumer reality rather than a cloud service for people who enjoy latency and invoices. Apple may or may not have "acquired Meta's SAM team" in the literal internet-rumor sense, but what is visible enough is the broader pattern: if Cupertino cannot own the model frontier outright, it can still own the turnstile. The middleman remains the most durable species in the valley.

Oracle, meanwhile, has dispensed with all sentiment and simply demonstrated what modern management now is, firing thousands of people through 6 a.m. email notices while moving capital toward AI infrastructure and giant data-center bets, with reports putting the cuts anywhere from around 10,000 to as many as 30,000, which would be grotesque enough in a struggling company but becomes genuinely clarifying in one that is doing it as part of a profitable strategic reallocation, because once the spreadsheet discovers that labor can be removed and the press release can still be written in the language of transformation, the actual decision has already been made. Larry Ellison can own most of an island. The employee can own the sunrise email informing them that their access ends today. This is what managerial courage looks like when it has fully evolved into software.

The industry then has the indecency to call this progress while pointing to ever larger pools of capital being raised for "the future of intelligence," which is interesting because some of the people who built the current AI stack are now raising huge sums to argue that the whole thing is pointed in the wrong direction, as with Yann LeCun's AMI Labs securing a $1.03 billion seed round to work on world models and JEPA-like approaches that try to understand physical reality rather than merely predict the next token, a distinction that sounds academic until you notice that a two-year-old has a firmer grasp of what happens when a glass falls off a table than most frontier language models do. We have built articulate insulation. We have not yet built understanding.

Google, for its part, has decided to contribute both to the future and to the immediate comedy of markets, publishing work on TurboQuant that promises dramatic memory compression, faster inference, and no accuracy loss, sending memory stocks downward and temporarily convincing portions of the internet that the RAM and NAND shortage was about to be solved by math alone, while in parallel the company is also warning that "Q-Day," the point at which quantum computing can break much of today's public-key cryptography, could arrive by 2029, which is a lovely pairing if you enjoy technological civilization as a sequence of sentences beginning with "good news" and ending with "your infrastructure is fucked." Faster models above. Cracked encryption below. The stack is optimizing nicely.
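For readers wondering where a quantization paper gets the leverage to move memory stocks, the arithmetic is blunt. The sketch below is not TurboQuant, about whose actual method I make no claim; it is plain symmetric int8 quantization, which already shows the baseline trade: four bytes per weight become one, at the price of a bounded rounding error:

```python
import random

# Plain symmetric int8 quantization round-trip. This is NOT TurboQuant's
# method, just the baseline arithmetic behind "compress more, lose little".
random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # stand-in fp32 weights

scale = max(abs(w) for w in weights) / 127.0         # map the float range onto int8
quantized = [max(-127, min(127, round(w / scale))) for w in weights]
restored = [q * scale for q in quantized]

max_err = max(abs(w - r) for w, r in zip(weights, restored))
print("storage: 4 bytes/weight -> 1 byte/weight (4x smaller)")
print(f"max round-trip error: {max_err:.4f} (bounded by half a step, {scale / 2:.4f})")
```

The open question in any such scheme is never whether the memory shrinks, it is whether the rounding error stays harmless at scale, which is exactly the claim a skeptical reader should wait to see replicated.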

And because the age insists on adding a biotechnological footnote wherever possible, one now also has R3 Bio and its backers discussing the cultivation of "headless" human bodyoids as a future source of organs and research material, particularly as animal testing comes under pressure, which is exactly the kind of sentence that would once have been confined to a villain's notebook in a second-rate science-fiction screenplay and is now being aired in glossy interviews with investor language wrapped around it like biodegradable packaging. We can apparently grow organ sacks now, but cannot reliably build a public transport system that functions during light rain. Progress has become very selective.

Even physics, perhaps irritated by being repeatedly treated as a stakeholder, has begun to file objections, with researchers in Germany demonstrating a contactless form of friction between magnetic layers that never physically touch, and finding that the resistance does not simply rise with load as the old rule would suggest but stays low at the near and far ends and peaks somewhere in the middle, which is not just a neat scientific curiosity but a fitting emblem for the age, because so much of modern power now operates at a distance, through invisible interaction fields, remote incentives, abstracted controls, and mediated signals that produce real force without contact. The system no longer needs to touch you to move you.

Then there is the week's debris field, a rapid bulletin from the empire's procurement wing: a Google paper can knock semiconductor names around in forty-eight hours while analysts calmly explain that the panic is overblown; somewhere in Oxford's old joke archive, Governmentium still sits there with its deputy neutrons and assistant deputy neutrons, impeding every reaction it comes into contact with, which no longer reads like satire so much as a procurement manual with the names changed for legal reasons; Artemis II is carrying human beings around the Moon for the first time since 1972 and apparently large parts of the public did not notice, which is both tragic and perfectly on brand for a civilization that can still do sublime things but now markets them with the emotional conviction of a pension-fund update, having retained the engineering capacity for the genuinely extraordinary while losing, somewhere in the intervening decades, the institutional grammar for why it should produce a feeling; and one can find senior American politicians discussing aliens in frankly demonic terms, because even disclosure now has to pass through the American talent for turning every genre into a revival tent, which tells you less about aliens than about a civilization that can no longer encounter the genuinely unknown without immediately converting it into content.

What ties all of this together is not simply "madness," because madness suggests spontaneity, rupture, drama, the sudden and theatrical collapse of reason, whereas what we are looking at is colder than that and much more durable: a civilization increasingly governed by systems that confuse interpretation with contact, language with force, simulation with understanding, and managerial fluency with competence, all while the harder layers of reality, fuel, chips, borders, drones, factories, encryption, war, continue to reassert themselves with the impolite persistence of gravity.

We evolved pain as a flawed but useful survival mechanism, a fast, approximate way of deciding that something might be wrong before the full evidence arrived, and for organisms living close to consequences this was often good enough, because the correction cycle was immediate and the body itself paid for its mistakes; but we have now built institutional and digital systems that inherit the predictive structure without inheriting the cost, that forecast threats, rank options, moderate signals, optimize outputs, and shape belief while remaining insulated from the damage produced when the model drifts too far from the world it is meant to describe.

The dashboard remains green. The language remains elegant. The process remains intact.

The system says everything is within acceptable parameters.

The system does not have nerves.
