In a recent interview, Demis Hassabis, co-founder and CEO of DeepMind, expressed what many technologists have quietly feared for years: Artificial General Intelligence (AGI) is coming—possibly within the next five to ten years—and society may not be prepared to face the consequences. This bold assertion has reignited discussions around how transformative, and potentially disruptive, AGI could be.
For those unfamiliar, AGI refers to artificial intelligence that can perform any intellectual task a human can, with the ability to reason, learn, and adapt autonomously. Unlike today's narrow AI, such as chatbots, recommendation systems, or image recognition models, AGI could theoretically outthink us, outperform us in nearly every domain, and, some argue, evolve rapidly into Artificial Superintelligence (ASI).
A Shift from Speculation to Conviction
Hassabis’ firm conviction that AGI is on the near horizon marks a significant tonal shift in the discourse. A decade ago, talk of AGI was widely dismissed as science fiction, particularly after technologies like autonomous vehicles and virtual assistants fell short of early expectations. The meteoric rise of large language models such as ChatGPT, and their rapid evolution since, has altered that perception dramatically. Improvements in AI systems are now visible on a yearly, sometimes even monthly, basis, and they have made even former skeptics reconsider.
Technologists now speculate that AI systems could outperform a large segment of the workforce in the near future, with some predicting that within a year AI will write code as well as 70% of human programmers, and as well as nearly all of them the year after. Tasks that once required specialized knowledge, including data analysis, customer support, and even office management, are being automated at a stunning pace.
The Readiness Gap
The problem isn’t the technology; it’s us. Society, institutions, and governments appear fundamentally unprepared for what’s coming. While individuals are adaptable, systemic readiness remains alarmingly low. Many people still struggle to manage the consequences of existing technologies—social media, misinformation, automation—let alone a world governed by AGI.
Education systems have not evolved to meet the needs of a post-AGI world. Political leadership shows minimal understanding of the technology’s trajectory or implications. Even among the general public, there’s confusion, fear, or indifference—often a toxic combination in moments of radical change.
Trust, Power, and the AGI Race
Interestingly, trust plays a central role in how people view the potential arrival of AGI. Hassabis, a lifelong researcher with a track record of sharing breakthroughs for the common good, is viewed by many as a more responsible steward of AGI development than others in the race. There’s a growing sentiment that corporate incentives, especially the drive for rapid productization and profit, could lead to rushed and unsafe deployment of AGI.
This perception isn’t unfounded. Smaller AI companies may lack the resources to thoroughly vet the safety and alignment of AGI systems, while larger firms could face pressure to maintain market dominance, even at the expense of caution.
Open Source vs. Control
One of the most debated topics in the AGI discussion is whether these technologies should be open-sourced. Advocates argue that open-sourcing promotes transparency, innovation, and equitable access. Communities working together could build tools that uplift society: universal healthcare diagnostics, localized manufacturing, and education tailored to every learner.
But there’s a dark side. Open access to powerful systems also means open access to capabilities with catastrophic potential—bioweapon development, advanced cyberattacks, or autonomous warfare. Some argue that AGI should be treated with the same caution as nuclear technology: restricted, regulated, and never placed into public hands without oversight.
A Paradigm Shift Awaits
Perhaps the most chilling part of Hassabis’ message is its understated simplicity: he’s “not sure society is ready.” Given the pace of advancement and the scale of change AGI could bring—curing diseases, unlocking interstellar travel, redefining the nature of work—it’s clear we’re facing more than just technological disruption. We’re staring down an existential transformation of human civilization.
Whether AGI becomes humanity’s salvation or undoing will depend not only on the minds building it, but on the maturity and foresight of the systems surrounding it. Are we prepared to redefine education, governance, labor, and ethics? Can we transition from reactive politics to proactive planning? And perhaps most importantly—will we do so in time?
Because one thing seems certain: AGI is coming. Ready or not.