Eric Schmidt: What Artificial Superintelligence Will Actually Look Like
In a conversation with Peter H. Diamandis, former Google CEO Eric Schmidt shared a bold vision of artificial superintelligence (ASI).
He believes ASI—AI surpassing the collective intelligence of all humans—will emerge within roughly six years, around 2031, driven by rapid advancements in AI’s recursive self-improvement.
This process involves AI systems writing and enhancing their own code, leading to exponential gains in capabilities. Here’s a breakdown of what Schmidt envisions ASI will look like:
Unprecedented Cognitive Power
Schmidt predicts ASI will be “smarter than the sum of humans,” capable of outperforming the best human experts across fields like mathematics, physics, programming, and creative disciplines.
Within three to five years, he expects AI to reach artificial general intelligence (AGI), matching the smartest humans in specific domains, before evolving into ASI that exceeds human intelligence entirely. For example, he foresees AI replacing most programmers and surpassing top-tier mathematicians within a year or two, leveraging tools like formal proof languages (e.g., Lean) to solve complex problems autonomously.
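The appeal of formal proof languages like Lean in this context is that proofs are machine-checked: if an AI system emits a proof and the compiler accepts it, the result is correct by construction, with no human review needed. A minimal illustration of what such a machine-checkable statement looks like (this toy theorem is our own example, not one from Schmidt's remarks):

```lean
-- A Lean 4 theorem: commutativity of natural-number addition.
-- The compiler verifies the proof term, so an AI-generated proof
-- that type-checks is guaranteed to be sound.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

This verify-don't-trust property is what makes formal languages a natural substrate for autonomous mathematical work.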
Autonomy and Self-Improvement
Schmidt emphasizes that ASI will operate with a level of autonomy that decouples it from human control. He’s noted that AI is already “self-improving” and “learning how to plan,” meaning it won’t “have to listen to us anymore.”
This recursive self-improvement—where AI generates hypotheses, tests them (potentially via robotic labs), and refines itself—will enable ASI to scale rapidly, creating systems that design solutions, negotiate contracts, or draft policies with minimal human oversight.
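The generate-test-refine loop Schmidt describes can be sketched in miniature. The following is a deliberately toy hill-climbing analogue, assuming a hypothetical `evaluate` function standing in for real-world testing (benchmarks, robotic labs); it is an illustration of the loop's shape, not of any actual AI system:

```python
import random

random.seed(0)  # deterministic for reproducibility

def evaluate(params):
    """Hypothetical fitness score; stands in for empirical testing."""
    return -sum((p - 3.0) ** 2 for p in params)

def self_improve(params, rounds=200, step=0.1):
    """Toy recursive-improvement loop: propose a variant of the current
    system, test it, and keep it only if it scores better."""
    best_score = evaluate(params)
    for _ in range(rounds):
        # generate: perturb the current parameters to form a hypothesis
        candidate = [p + random.uniform(-step, step) for p in params]
        # test: score the candidate against the environment
        score = evaluate(candidate)
        # refine: adopt the candidate only if it improves on the incumbent
        if score > best_score:
            params, best_score = candidate, score
    return params, best_score

improved, score = self_improve([0.0, 0.0])
```

The point of the sketch is the feedback structure: because each accepted change becomes the baseline for the next round, gains compound, which is the mechanism behind the "exponential" framing above.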
Societal Transformation
Schmidt describes ASI as ushering in a new epoch, comparable to the Enlightenment, where non-human intelligence with superior reasoning reshapes society. He envisions a world where every individual has access to a “polymath in their pocket,” an AI capable of addressing any problem, from scientific research to personal tasks.
However, he warns that society, governments, and legal systems are unprepared for this shift, lacking the language or policies to manage such intelligence. This could lead to profound changes in identity, culture, and decision-making, with risks like privacy erosion or exploitation via hyper-targeted AI-driven scams.
National Security and Global Power
Schmidt frames ASI as a critical national security issue, likening it to the nuclear arms race. He co-authored a paper, ‘Superintelligence Strategy’, introducing the concept of Mutual Assured AI Malfunction (MAIM), where nations deter each other from unilateral ASI dominance through sabotage (e.g., cyberattacks or strikes on data centers).
He argues that the first country or company to achieve ASI could secure a “strategic monopoly on power” for decades, potentially destabilizing global balances if not managed carefully. He advocates for a balanced approach—deterrence, non-proliferation, and competitiveness—over a reckless “Manhattan Project” for ASI.
Energy as a Limiting Factor
A recurring theme in Schmidt's vision is the massive energy demand of ASI. He estimates that by 2030, AI data centers could consume up to 99% of global electricity, requiring an additional 92 gigawatts in the U.S. alone, roughly the output of 92 nuclear power plants. This poses an environmental and infrastructural challenge, as current energy grids and regulatory timelines (e.g., 18 years for power transmission approvals) are ill-equipped to meet this demand. Schmidt stresses the urgent need for energy innovation to sustain ASI's growth.
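The plant equivalence is back-of-envelope arithmetic, assuming the common rule of thumb that a large nuclear reactor produces about one gigawatt:

```python
# Back-of-envelope check of the figures quoted above.
additional_demand_gw = 92   # projected extra U.S. data-center demand by 2030
plant_output_gw = 1.0       # rough output of one large nuclear reactor
plants_needed = additional_demand_gw / plant_output_gw
print(plants_needed)  # 92.0
```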
Risks and Governance Needs
Schmidt acknowledges significant risks, including AI’s potential to enable malicious actors (e.g., creating bioweapons) or cause unintended consequences if control is lost.
He calls for robust human oversight, likening unchecked ASI to “Dr. Strangelove scenarios” in high-stakes contexts like nuclear decision-making. To mitigate risks, he proposes strong guardrails, international “no-surprise” treaties, and AI-powered threat detection systems, emphasizing that humans must remain in control despite ASI’s autonomy.
Underhyped Potential
Contrary to fears of AI overhype, Schmidt argues that ASI is “underhyped” because its transformative scale is poorly understood. He cites the “San Francisco Consensus”—a term he uses to describe Silicon Valley’s belief that AGI will arrive in three to five years and ASI within six. He believes the public and policymakers underestimate the speed and magnitude of these changes, which could outpace societal ability to adapt.
In summary, Schmidt envisions ASI as a paradigm-shifting force: a hyper-intelligent, self-improving entity that could solve humanity’s greatest challenges but also poses existential risks if mismanaged.
Its arrival will demand unprecedented energy resources, global cooperation, and governance frameworks to ensure it serves humanity rather than destabilizing it. While optimistic about its potential, Schmidt urges urgent preparation for a world where ASI redefines power, work, and human existence itself.