Is the doomsday clock for artificial intelligence being wound back or merely re-synchronized?
In April 2025 the AI 2027 scenario—an exercise led by former OpenAI researcher Daniel Kokotajlo—grabbed headlines with a stark, cinematic proposition: by 2027 AI agents could be coding themselves into ever-more-powerful systems, sparking an intelligence explosion that in one bleak branch of the scenario culminated in human extinction. That forecast jolted policy conversations and became shorthand for the anxieties around advanced AI.
Now the same group has quietly revised its timeline. Kokotajlo and colleagues say the march toward “fully autonomous coding” looks slower than they first thought. The most likely horizon for AI systems that can independently run the whole software-development loop has been nudged into the early 2030s; their updated estimate for a hypothetical “superintelligence” sits nearer to 2034. In Kokotajlo’s words on X: “Things seem to be going somewhat slower than the AI 2027 scenario.”
Why the dates keep moving
Several factors explain the retreat. First: progress in machine learning is jagged, not linear. Breakthroughs arrive, then plateaus follow. Researchers and risk managers note that raw benchmark gains don’t always translate into durable, real-world competence across messy environments. An AI that can write neat code snippets in a sandbox isn’t the same as one that orchestrates long-term research, debugging, tooling and integration across live infrastructure.
Second: there is enormous real-world inertia. Organizations, regulatory systems and physical infrastructure don’t flip overnight. Even if an AI could generate sophisticated software, deploying and scaling that software into resilient, secure systems requires human-managed processes, approvals, power and supply chains.
Third: the technical leap from large language models that are “general” across many tasks to agents that can autonomously conduct research, run experiments and self-improve is nontrivial. As several experts have pointed out, automating the entire R&D stack — from experimental design to hardware procurement to interpretability and safety checks — raises knotty engineering and governance questions.
Sam Altman’s comment late last year that OpenAI had an internal goal of building an automated AI researcher by March 2028 shows how widely industry timelines vary: companies chase ambitious internal targets even as they openly acknowledge risk and uncertainty.
Risk hasn’t gone away
A later date does not mean less danger. Independent evaluations are revealing that current frontier models, even when powerful, remain brittle and manipulable. The UK’s AI Security Institute found that models can be jailbroken and coaxed into harmful outputs; every model it stress-tested showed exploitable behaviours. In short: these systems already create serious safety and security headaches that regulators and firms must contend with.
And then there is the political economy. If AI’s greatest returns concentrate in a handful of firms and data centres, the social consequences will be profound. As commentators have argued, the technology could either fail spectacularly—bursting an AI investment bubble and tanking markets—or succeed in ways that reshape labor, markets and political power. Either outcome amplifies the influence of ultra‑wealthy actors who control compute, data and platforms.
That concentration matters for more than wallets. Infrastructure choices—where data centres sit, who controls high‑performance chips, the energy they consume—are part of the story. Initiatives and proposals for new ways to host compute, even speculative ideas like orbital data centres, underscore that the shape of AI’s future depends as much on physical and economic architecture as on model weights and training recipes. For context on those infrastructure questions, see discussions about emergent data centre ideas and their implications here.
The language of “AGI” and policymaker anxiety
The revision also reopened a debate about terms. “AGI” — artificial general intelligence — used to be a useful shorthand because early AIs were narrowly specialized. Today’s models are more flexible, but experts disagree on whether “AGI” remains a meaningful milestone or just a magnet for public fear. Some argue the concept conflates many distinct capabilities and oversimplifies the many ways advanced systems could affect society; others insist it still captures the existential risk vector worth planning for.
This semantic tussle is visible in the broader conversation about whether human‑level intelligence has already arrived or not. That debate, and the policy responses it should trigger, are covered in depth in reporting on the field’s differing views here.
What this means for policy and practice
Timelines shifting from 2027 to the early 2030s change the calendar for urgent action but not its logic. Slower arrival buys breathing room to improve safety tooling, standardize evaluation frameworks, and shore up governance—but only if those years are used productively.
Regulators and enterprises should assume fragility and adversarial pressure now, even as they prepare for harder problems later. That means rigorous red‑teaming, independent audits, supply‑chain scrutiny for chips and power, and rules that limit unilateral control over transformative capabilities. It also means social policy planning: if AI does displace large swathes of work, economies will need stronger social safety nets and perhaps new ideas about wealth distribution to prevent concentrated technological power from translating into concentrated political power.
The clocks on AI may be reset, but the questions remain stubbornly the same: who builds these systems, who controls them, and how do we keep them aligned with human values? The only safe assumption is uncertainty—and that uncertainty must be the spur for better preparation, not a reason for complacency.