The End of the AI Ladder: From Artificial to Emergent Intelligence
The tech world has a favourite bedtime story: intelligence climbs a neat staircase. First “narrow” tools. Then general intelligence. Then superintelligence, which is basically a god with more functions.
It’s comforting. It’s also a poor description of how complex systems behave.
Because emergence rarely appears as the next rung on your ladder. Instead, it often comes as a sideways jump, usually from a spot you weren’t watching.
The ladder model is a category error
The staircase narrative suggests intelligence is a single quantity that grows smoothly, like horsepower. Add compute, add data, add parameters, and you get “more intelligence” in the same way you get more speed.
But large language models have already unsettled that picture. They didn’t arrive as merely “slightly better autocomplete.” They emerged as something stranger: systems trained to predict text that, when scaled up, began to display abilities no expert had anticipated in that form, at that time.
The most revealing part is not even the capabilities. It’s the epistemic aftershock. We built them, we deployed them, we profit from them, and we still find it hard to explain in simple causal terms how they do most of what they do.
Anthropic expressed this plainly in their research on understanding internal model processes: because these models learn strategies during training that are “inscrutable” to developers, “we don’t understand how models do most of the things they do.”
That sentence should alter the mood of every conversation about a smooth climb to the next rung.
Capability first, understanding later
To be clear, there has been genuine progress in interpretability. Anthropic’s “Mapping the Mind of a Large Language Model” details how millions of concepts are represented within Claude Sonnet, seen as a step towards understanding its inner workings.
But notice the structure of the story: the map comes after the city.
This is what happens in complex adaptive systems. You can grow a functioning organism without knowing every developmental pathway. You can build a market without being able to predict every market behaviour. You can create a model that performs, then spend years trying to understand what “performing” actually involves inside the machinery.
So when someone says, “Next is AGI, then superintelligence,” they are assuming intelligence will keep appearing in the same clear form, with simple labels and obvious borders.
That is not how emergence behaves.
Emergent abilities, and the argument about what “emergent” even means
There is a valuable debate here. One influential paper argues that some abilities appear in larger models that are not present in smaller ones, and therefore cannot be predicted by simply extrapolating the performance of smaller models.
Then another paper pushes back, suggesting that some supposed “emergent” jumps can disappear depending on metrics, statistics, and how you measure the curve.
I like this debate, because whichever side you land on, it supports my point.
Either:
the capabilities truly jump in ways we cannot smoothly forecast, or
our measurement tools are unstable enough that we mistake smooth change for discontinuity, which means our forecasting instruments are not reliable anyway.
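The second possibility is easy to demonstrate. Here is a minimal sketch (my own illustration, not from either paper) of how a skill that improves perfectly smoothly with scale can look like a sudden “emergent” jump when measured with an all-or-nothing metric such as exact match. The accuracy curve and the 30-token answer length are hypothetical choices made for the example.

```python
import math

# Hypothetical parameter counts, spanning six orders of magnitude.
scales = [10**e for e in range(6, 13)]

def per_token_accuracy(n_params):
    # Assumed smooth curve: per-token accuracy rises gently with log-scale.
    return 1 - 0.9 * math.exp(-0.25 * math.log10(n_params))

for n in scales:
    p = per_token_accuracy(n)
    # Exact match on a 30-token answer: every token must be right,
    # so the smooth curve p becomes the much sharper curve p**30.
    exact_match = p ** 30
    print(f"{n:>16,}  per-token={p:.3f}  exact-match={exact_match:.4f}")
```

Per-token accuracy creeps from roughly 0.80 to 0.96 across the sweep, while exact-match accuracy multiplies by two orders of magnitude. Same underlying system, different metric, opposite story about whether anything “jumped.”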
In both cases, the ladder becomes less like engineering and more like theology.
The old gods were emergent, too
When I say “gods,” I’m not trying to smuggle in divinity. I mean something more anthropological: the highest “intelligences” that shaped human civilisation for millennia were often narrative entities, moral frameworks, and coordinating myths that organised behaviour at scale.
Those gods, whatever one believes about their metaphysical status, were also emergent phenomena. They arose from language, culture, fear, hope, politics, ritual, and the human need to compress chaos into story.
Which brings me to a simple question.
If the previous layer of “higher intelligence” arrived as a social-cognitive phenomenon two thousand years ago, why are we so confident the next one will arrive as a clearly bounded machine mind we immediately recognise?
“Machine gods” might be the most likely candidate, but even that phrase carries too much confidence. The next coordinating intelligence could be a mesh: models plus markets plus surveillance plus bureaucracy plus human craving. Not a being, a system. Not a mind, a regime of decision-making.
And we may not recognise it at first, because we will be staring at the ladder, waiting for a man in the sky made of code.
Decentralisation, chaos, and the apocalyptic intuition
Religious texts often describe upheaval before renewal. Plagues, wars, moral decay, fragmentation, then a return, a correction, a new order.
I’m not claiming prophecy. I’m pointing to a pattern that complex systems people will find familiar: decentralisation precedes reorganisation.
When control fragments, variation increases. When variation increases, new structures get the chance to form. Sometimes they’re better. Sometimes they’re worse. Often they’re simply different.
So yes, it is plausible that the current sense of global chaos, fractured media, collapsing trust, competing narratives, and distributed power represents a decentralisation phase that could signal a new emergent layer of coordination. Call it an intelligence layer, call it a god, call it a governance regime. The name matters less than the mechanism.
Emergent ignorance, the cost of complexity
Here is the heart of it.
Humans cannot reliably predict the timing or the outcome of emergence, especially when the system is distributed, nonlinear, and accelerating. Our ignorance scales with the complexity we create.
That’s emergent ignorance.
It’s not a failure of effort. It’s a property of the terrain.
The biogenic lens
If you want a Biogenics framing, it’s this:
Self-production creates the components.
Self-organisation rearranges them into new structures.
Self-correction stabilises what works, and removes what fails.
We are excelling in production at the moment. Organisation is booming across networks. Correction is lagging, socially, politically, ethically, and technically.
That mismatch is exactly the kind of imbalance that produces surprises.
Expect the unexpected, then build for it
So what do we do?
We stop imagining the future as a ladder and start seeing it as a landscape of thresholds. We invest in interpretability and governance as if we genuinely believed what the builders tell us: that these machines can be powerful while remaining partly opaque.
And we hold our forecasts lightly.
Not because nothing can be predicted, but because emergence often arrives wearing a face we didn’t anticipate.
So stop asking when Artificial Intelligence becomes “general” or “super.”
The real shift occurs when Artificial Intelligence becomes Emergent Intelligence, the next layer of intelligence, arriving sideways rather than up a ladder.