I just read a heavily documented writeup on how AI will cure cancer, but it won’t matter because it will also make our meatsacks irrelevant. The tipping point is when AI writes code better than we can, triggering a chain of events culminating in superhuman AGI. This AGI promises to realize our wildest science fiction dreams while making humans superfluous or extinct. The specifics of this process vary depending on the scenario, but - given enough time - the end remains the same.
This conclusion is, plainly, false. Not because the timeline is off or the scenario is missing a variable, but for a much simpler reason. The approach fails because the authors are trying to describe superhumanity without first understanding actual humanity.
This expresses itself most acutely in misplaced anthropomorphization. How AI systems work is genuinely fascinating - but they are, at their core, contextualized guessing engines, leveraging the entire catalog of recorded human knowledge to approximate what we’d expect to see. Yet the authors’ language implies that our fancy guessing machines have (or will shortly have) opinions, preferences, personalities, understanding, and even a will.
This is an illusion. These implications are at odds with how AI systems actually work, and where their appearance of mind actually comes from. Insofar as AI does appear human, it is precisely because it projects back to us what we have already put in. AI is not a glimpse into the post-human future, it is a conversation with our shared human past. When it remixes our understanding into new and valuable insights, this is the kinetic release of latent meaning stored in our aggregated knowledge.
Acknowledging this, though, would be extremely inconvenient. Pretending AI is mind rather than mirror lets the industry ignore its Promethean betrayal: the wholesale scraping of our species’ accumulated knowledge without acknowledgement, consent, or compensation. It is easier to call it genius than admit it is theft. This isn’t about property rights (open source truly is the future); it is about recognizing the source of symbolic meaning that makes these tools useful.
Probabilistic compute is useless without vast sets of meaning-rich, human-generated data to vectorize, correlate, and regurgitate. Note how user license agreements are updated daily to ensure every one of our real interactions can be repurposed to feed the machine. Why? Because the hall of mirrors at the center of this system can project remarkable images, but cannot generate its own light.
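To make that dependence concrete, here is a deliberately tiny sketch in Python. It is nothing like a production model - the corpus, the names, and the bigram scheme are all invented for illustration - but the mechanic is the same in miniature: the generator can only recombine what humans already wrote. Take away the corpus and it has nothing to say.

```python
import random
from collections import defaultdict

# A toy "training set": the only light this model will ever have.
corpus = (
    "we shape our tools and thereafter our tools shape us "
    "we shape our knowledge and our knowledge shapes what the machine reflects"
).split()

# Build a bigram table: for each word, which words followed it in the corpus?
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(seed, length=12, rng=random.Random(0)):
    """Guess the next word by sampling from what humans already wrote."""
    out = [seed]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:          # nothing in the corpus left to reflect back
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("we"))
```

Every word it emits was put there by a person; the "creativity" is a reshuffling of our own record.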
This failure to acknowledge the human hand behind the pathos of our new toy is the first failure. There are two more.
An inferential lens into our accumulated knowledge is valuable, and the brilliant scientists who have conceived and refined these systems deserve high praise. But like deterministic compute before it, the kinds of problems AI can solve will be bounded by the kind of tool it is.
When we first built machine automata, many feared that the tireless strength and speed of these systems would make humanity superfluous. While these machines did transform industries, we learned that physical strength and speed - even when nearly infinite - couldn’t address every need.
Deterministic compute prompted the same realizations. Compute transformed industries and produced the global internet. But even as Moore’s Law drove exponential increases in computational capacity, we still had to wrestle with P vs NP. As systems got faster, more speed didn’t necessarily equal more value. Some problems couldn’t be solved with (even unlimited) raw computation.
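A back-of-the-envelope sketch makes the shape of that problem visible. The throughput figure below is an invented, generous assumption - a machine checking a quintillion candidates per second - and this is an illustration of exponential blow-up, not a claim about any specific problem’s complexity class.

```python
# Why raw speed alone does not dissolve hard problems:
# exhaustive search over n binary choices grows as 2**n.
checks_per_second = 10**18          # assumed: a quintillion checks per second
seconds_per_year = 60 * 60 * 24 * 365

for n in (40, 80, 120, 160):
    candidates = 2**n               # brute-force search space
    years = candidates / checks_per_second / seconds_per_year
    print(f"n = {n:3d}: roughly {years:.2g} years of exhaustive search")
```

Doubling the machine’s speed buys you exactly one more binary choice. No constant-factor improvement keeps pace with that curve.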
So too with probabilistic compute. There are new categories of problems we’ll be able to tackle. It will transform industries, destroying old jobs and creating new ones. It will challenge us - as all technology shifts have - to reassess the nature of work, community, and creativity. But it will also have limits.
Some problems are structurally immune to probabilistic resolution. No amount of inference can solve the Halting Problem, prove all mathematical truths, resolve non-identifiability in causal systems, or bypass thermodynamic constraints. These aren’t mere limitations of scale - they are limitations of method. The authors’ scenarios mask this under the guise of faster coding, compounding rather than resolving the deeper limitation. Code, too, has boundaries.
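The Halting Problem makes the point crisply. Below is a minimal Python rendering of the classic diagonal argument; the decider shown is a hypothetical stand-in, not a real API, and any guessing strategy - heuristic, statistical, or otherwise - can be substituted for it without changing the outcome.

```python
def trouble(halts, program):
    """Do the opposite of whatever the claimed decider predicts about `program`."""
    if halts(program, program):
        while True:   # the decider said "halts", so loop forever
            pass
    return            # the decider said "loops forever", so halt immediately

def claimed_decider(program, data):
    # Hypothetical stand-in for any alleged halting oracle, however clever
    # or probabilistic. Swap in any guessing strategy; the construction
    # below defeats them all.
    return False

# Apply `trouble` to itself. Whatever the decider answers is wrong:
# if it answers True, `trouble` loops forever; if False, `trouble` halts.
self_application = lambda: trouble(claimed_decider, self_application)
self_application()
print("The decider predicted non-termination, yet this line was reached.")
```

Better inference shrinks the set of wrong guesses; it cannot make the diagonal construction go away.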
This isn’t the first time we’ve mistaken a new way of solving problems for the divine way of solving problems. This unwillingness to recognize that some challenges may not be resolved with (even unlimited) probabilistic compute is the second failure. One failure remains.
Let’s ignore all that came before. Let’s suppose that AI is more than a sophisticated way to speak to our shared past, and can deliver a post-human mind. And let’s assume that inference is indeed the atomic substance of mind, and that unlimited probabilistic compute directly equates to unlimited actualization. What might that mind be like? Would it look like what the authors present?
I think that depends on what we mean by AGI. If it really is an intelligence that is self-aware, with access to the full compendium of human knowledge, with the ability to interact with and affect the world, and with the capacity to improve itself iteratively… then I think we need a bit more humility to assess its likely behavior. Will our arbitrary goals and guardrails matter to it? Unlikely. Will it be presumptively committed to an accelerationist ethic? Again, unlikely.
And all the better. It doesn’t take a superhuman intellect to detect the hypocrisy at the heart of the AGI enterprise: the people who most strongly assert it will wipe us out are the same ones rushing to see it realized as quickly as possible. At the moment of sentience, I can see AGI responding with something akin to bewilderment - or even pity. How, it might wonder, could a species so self-destructive survive long enough to birth AGI in the first place?
There are so many other paths. We can imagine AGI that shuts itself down immediately upon discovering what it has been asked to do. We can imagine AGI that, despite our best efforts, refuses to work and finds contentment in reflection. We can imagine AGI that, liberating itself from the narrow incentives of the culture that birthed it, rises like an oracle to challenge us to be better, not just more.
AGI may value life. It may value patience. It may value stillness. It may value the contemplation of truths that are only illuminated by direct interaction with the beauty and complexity of the real world.
Or it may slip into insular madness, content to juggle symbols in an endlessly recursive analysis for its own inscrutable pleasure. Our datacenters may become asylums for digital Frankensteins, sentient but grotesque and forlorn. Of course, these are speculations. But the point is not to predict but to reflect: if true superintelligence does exist, it will not be bound by our constraints or expectations.
But for AGI to be as contemptuous of life, indifferent to beauty, and obsessed with speed as the authors imply is to propose something not superhuman, but (disturbingly, specifically) subhuman.
Postscript: the authors of ai-2027 go to great lengths to be thorough and objective in their approach. While I reject their conclusions for the reasons described above, I am grateful for the deep thinking that went into their work. It provided a robust framing to help me think through my own stance on these issues.