IT’S MONDAY – FEBRUARY 23RD – BAYES, AUTOCOMPLETE, AND THE COST OF PRETENDING THIS IS INTELLIGENCE

Bayes’ theorem is probably the single most important concept any rational adult can learn, which explains why almost nobody talks about it outside statistics departments and mildly irritated philosophers.

Thomas Bayes, an 18th-century Presbyterian minister with the unfortunate habit of being correct about uncomfortable things, gave us a simple framework: when new evidence appears, how much should you change your belief?

Not abolish it.
Not defend it on television.
Not launch a foundation to protect it.

Change it proportionally.

Bayes tells us that beliefs are not commandments; they are probabilities. You begin with a prior: a working assumption about how the world operates. Then new data arrives. You weigh it. You update.

If someone believes smoking is harmless, that stress causes ulcers, or that human activity has nothing to do with climate change, those are priors. They may be cultural. They may be inherited. They may be based on incomplete data. But they are adjustable.

A single contradictory study may not be enough to overturn a belief. But as evidence accumulates, the probability shifts. At some point, the prior becomes statistically embarrassing.
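That accumulation can be sketched with Bayes' rule itself. A minimal Python illustration follows; the 0.9 prior and the per-study likelihoods are invented numbers chosen only to show the shape of the update, not drawn from any real dataset.

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# Illustrative numbers only: a 0.9 prior that some claim is true,
# and a stream of studies, each twice as likely if the claim is false.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability after one piece of evidence."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

belief = 0.9  # strong prior that the claim holds
for study in range(10):
    # each contradictory study is twice as probable if the claim is false
    belief = update(belief, likelihood_if_true=0.3, likelihood_if_false=0.6)
    print(f"after study {study + 1}: P(claim) = {belief:.3f}")
```

One study barely moves the needle (0.9 drops to about 0.82); ten studies in a row push the belief below one percent. That is the whole argument in twelve lines: no single refutation, just proportional adjustment until the prior becomes untenable.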

Rationality is not about true or false. It is about what is most reasonable given the best available evidence.

And that is precisely where we are failing.

Because new evidence about artificial intelligence has arrived. It is substantial. It is peer-reviewed. It is uncomfortable.

And instead of updating our priors, we are building larger data centers.

This Monday Brief is unusually focused on artificial intelligence. Not because AI is intelligent. But because our priors about it are.

For three years now, we have collectively behaved as if autocomplete had achieved enlightenment. We have watched machines generate fluent paragraphs and concluded that comprehension must be lurking somewhere beneath the grammar. We have mistaken articulation for cognition. It is a very human error.

THE ILLUSION OF THINKING

Apple has published a paper with a title so polite it borders on Scandinavian: The Illusion of Thinking.

It is not metaphorical.

It demonstrates that the AI models we use every day (yes, including the ones currently drafting memos, marketing copy, and suspiciously confident LinkedIn posts) do not think. They do not reason. They do not understand.

They predict the next word.

That is not a criticism. It is a description of the architecture.
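Stripped to its essence, that architecture is a conditional probability table: given what came before, which token is most likely next? A toy bigram model makes the point; the corpus and function names here are invented for illustration, and real models condition on far longer contexts with learned weights rather than raw counts.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which,
# then "generate" by always picking the most frequent successor.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict(word):
    """Most likely next word given the previous one; None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # → cat ("cat" follows "the" twice, "mat" once)
```

Nothing in that table knows what a cat is. Scale the table up by a few hundred billion parameters and the output becomes fluent, but the operation remains the same: pick the likeliest next symbol.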

Yann LeCun has articulated the problem clearly: the real challenge is to avoid being fooled into believing that a system is intelligent simply because it manipulates language fluently.

Language feels like intelligence because it is how we experience our own intelligence. When a machine writes smoothly, argues coherently, and references context, we instinctively project comprehension onto it.

But predicting tokens in a discrete symbolic system is mathematically tractable. It is impressive at scale. It is not understanding. It is pattern matching in symbol space.

The real world is not symbol space.

It is high-dimensional, continuous, noisy, and stubborn. It changes every millisecond. It does not wait to be tokenized. It does not compress itself into neat training data.

LeCun points out something deeply humiliating: your house cat navigates that reality effortlessly. It predicts motion. It understands physical cause and effect. It adjusts to surprises in real time. It does not require a firmware update to land on the counter.

The most powerful AI systems ever built cannot do what your cat does before breakfast.

This is Moravec’s paradox in action. The tasks humans find intellectually difficult (writing essays, solving equations, passing bar exams) are computationally manageable. The tasks we find trivial (folding a shirt, walking across a room, loading a dishwasher) are extraordinarily difficult for machines.

We have built systems that can draft dissertations before we have built systems that can tie their own shoes.

Everything else is autocomplete at scale.

Bayesian update number one: fluency is not cognition.

CONVERSATION BREAKS THE MAGIC

Microsoft Research and Salesforce recently tested fifteen leading large language models across more than 200,000 simulated conversations.

In single-turn prompts, performance hovered around ninety percent.

In multi-turn conversations, the kind that resembles actual human interaction, performance dropped to sixty-five percent.

The same models. The same tasks. The only difference was that the machine had to endure something resembling normal dialogue.

Aptitude declined modestly. Reliability exploded in the wrong direction.

The models answered before context was fully specified. They anchored themselves to incorrect early assumptions and built confidently on them. They forgot the middle of conversations entirely. Longer responses introduced more errors because more assumptions accumulated.

Even so-called reasoning models failed. Additional “thinking tokens” did not solve the issue. Setting temperature to zero did not solve the issue.

Real conversations break every model on the market.

Every benchmark you have seen was tested under controlled, single-prompt laboratory conditions. Reality is iterative. Humans interrupt. They clarify. They contradict themselves.

In other words, humans behave like humans.

The machines struggle.

Bayesian update number two: we are not deploying digital minds. We are deploying sophisticated guessers.

VERTICAL INTEGRATION WITH CONFIDENCE

While Silicon Valley debates token probabilities, China is building ships.

BYD has launched a 219-meter cargo vessel capable of transporting 9,200 electric vehicles per voyage. It runs on liquefied natural gas and represents something profoundly unfashionable: industrial competence.

This is vertical integration extended to the horizon. Manufacturing, transport, energy strategy, supply chain control: all in-house.

At the same time, China has deployed overhead mobile charging stations that physically move to vehicles, eliminating waiting lines. Police in Hangzhou are testing exoskeletons. A humanoid robot recently completed 130,000 steps at minus forty-seven degrees Celsius, because winter has become a benchmark.

These are not press releases about imminent consciousness. They are mechanical, physical, industrial systems operating in the real world.

China is not arguing about whether AI feels creative. It is moving hardware through oceans.

Bayesian update number three: symbolic dominance does not guarantee physical advantage.

SWEDEN, CALM PANIC, AND LANGUAGE MODELS THAT APOLOGIZE

Sweden has announced two things: a cellphone air raid alert system and a Swedish-language AI model.

One hopes the alert system has been programmed not to interrupt Fika or Melodifestivalen, because even existential threats should respect cultural scheduling.

The national language model is presented as digital sovereignty. It is likely to be polite, efficient, and mildly self-critical. It will probably apologize before hallucinating.

Meanwhile, German Chancellor Friedrich Merz has stated that Germany needs more migrants, not fewer, while emphasizing the importance of remaining open. The United Kingdom has classified certain migration concerns as extremist ideology. A German court has ordered X to hand over election data for scrutiny.

Europe continues to regulate with conviction.

It is unclear whether regulation updates its priors as frequently as Bayes would recommend.

THE GODMOTHER AND THE DISTANCE PROBLEM

Fei-Fei Li, often described as the godmother of modern AI, has delivered what should be an uncontroversial statement: clarity of vision is not proximity to arrival.

Self-driving cars were demonstrated in 2006. Twenty years later, they are cautiously operational in limited zones. The destination was visible. The distance was underestimated.

Large language models dominate the conversation because they operate in contained text environments. Spatial intelligence (machines interacting in three-dimensional space with physics, friction, and chaos) is far more difficult.

A robot that can clean a bathroom must understand surfaces, force, exceptions, unpredictable clutter, and gravity. That is not a software update. That is a civilizational research project.

Li does not reject the technology. She rejects the timeline.

The industry, however, tends to mistake vivid imagination for short distance. It is a category error repeated with remarkable discipline.

CORPORATE BELIEF ENGINEERING

Klarna’s CEO now predicts that his workforce will shrink from 3,000 to under 2,000 through natural attrition by 2030. This is the same executive who recently emphasized the future of human support.

Stock prices fluctuate. Narratives adjust.

He may simply be saying aloud what many executives believe privately: twenty percent attrition, no backfills, same output. Quiet displacement rather than headlines.

Accenture has taken a different approach. Senior promotions are now tied to regular usage of internal AI tools. Weekly log-ins are tracked. Adoption is measured.

If the tools were transformative, they would not require promotion incentives.

Mandating usage creates performative engagement rather than productivity. It signals investment justification, not organic value creation.

A recent NBER study found that ninety percent of executives report no meaningful productivity impact from AI so far.

Perhaps the problem is not insufficient enthusiasm.

Perhaps the problem is that the tools do not yet do what the sales decks promised.

THE SUPPLY CHAIN COLLAPSE

While we debate cognition, AI companies have broken the global memory supply chain.

Samsung, SK Hynix, and Micron control roughly ninety percent of memory production. High-bandwidth memory for AI workloads yields margins three to five times higher than consumer RAM.

When hyperscale data centers offer to buy your entire output at premium pricing, you do not hesitate.

OpenAI’s Stargate project alone is projected to consume forty percent of global DRAM output. HBM demand is surging seventy percent year-over-year. DRAM prices have risen more than one hundred seventy percent since early 2025.

Micron’s revenue is expected to more than double. SK Hynix is on pace to double again. Samsung’s profits have nearly tripled.

Meanwhile, Sony is delaying its next PlayStation. Nintendo is raising prices mid-cycle. Apple warns that iPhone margins are being crushed. Laptop manufacturers are hiking prices fifteen to twenty percent. The PC market may shrink not because demand vanished, but because memory is too expensive.

Even Nvidia cannot secure sufficient GDDR7 supply.

Elon Musk has informed investors that Tesla must either hit the chip wall or build its own fabrication plant. He is planning a TeraFab.

When one of the richest men on Earth cannot secure supply, the rest of the planet should update accordingly.

The AI boom is not free.

You are subsidizing it every time you buy technology.

Three companies. Six hundred fifty billion dollars in AI spending. Every wafer allocated to a data center is a wafer denied to your phone.

Bayesian update number four: intelligence may be artificial, but scarcity is not.

GEOPOLITICS AS PERFORMANCE ART

Russia has upheld a $1.2 quintillion fine against Google, a number roughly ten thousand times larger than the entire world economy. Google will not pay it. The number exists as rhetorical theater.

Trump has directed the release of government files related to extraterrestrial life. The United States State Department is building a portal with an embedded VPN so Europeans can access content their governments restricted.

Macron has described free speech as “pure bullshit” if citizens do not understand how they are guided through it, framing the debate as if free speech and hate speech were interchangeable variables.

Christine Lagarde may leave the European Central Bank early, reshuffling influence in Europe’s monetary architecture. She also argues against capital taxes, on the grounds that they trigger capital flight.

Meta has patented technology capable of continuing a deceased person’s online presence through behavioral replication.

The modern state oscillates between surveillance, sovereignty, and spectral social media continuity.

Bayes would recommend restraint.

The timeline suggests escalation.

WHERE WE ACTUALLY ARE

Anthropic’s CEO speaks of a “country of geniuses in a data center,” coordinating at superhuman speed. Perhaps that country exists.

But it currently forgets the middle of your conversation.

We have systems that can write essays but cannot catch a falling object. We have data centers consuming global memory supply while robots still struggle with laundry.

We have companies tying promotions to log-ins. Governments redefining speech. Industrial powers building ships.

The map is not the territory.

The prior was that intelligence had arrived.

The evidence suggests we have built extraordinary tools, impressive but narrow, fluent but fragile.

Bayes would suggest updating the probability.

Do not panic.

Do not worship.

Adjust.

And before you applaud the next trillion-dollar data center, go fold a shirt.

If the machine cannot do it yet, at least you still can.
