OpenAI CEO Sam Altman Sounds Alarm on Superintelligence, Calls for IAEA-Style Global AI Body
He emphasizes that AI could surpass human capabilities in business, research, and innovation, but must be guided by collective societal responsibility.
- OpenAI CEO Sam Altman warns of a superintelligence tipping point at the India AI Impact Summit, urging global coordination on AI governance.
- He calls for an international body akin to the IAEA to oversee AI safety.
Sam Altman, CEO of OpenAI, delivered the keynote at the India AI Impact Summit (also called the AI Impact Summit or India AI Summit) in New Delhi on February 19, 2026.
In his address, Altman delivered a stark warning about the rapid acceleration toward superintelligence, describing it as potentially arriving in the near term and posing a “governance emergency.”
He predicted that early versions of true superintelligence could emerge within a couple of years, and that by the end of 2028 more of the world’s intellectual capacity could reside in data centers than in humans. Such systems, he said, would outperform human CEOs in running major companies and top scientists in original research. This represents a profound shift: AI could enable breakthroughs, but it also poses risks such as the design of new pathogens or the concentration of centralized power.
Key quotes and points from the talk include:
- “On our current trajectory, we believe we may be only a couple of years away from early versions of true superintelligence.”
- “By the end of 2028, more of the world’s intellectual capacity could reside inside data centers than outside of them.”
- Advanced AI could lead to scenarios where “a single AI company could control more wealth than entire governments” or enable catastrophic misuse, but he stressed democratization of AI as “the only fair and safe path forward” to avoid monopoly risks and ensure broad human flourishing.
- Altman urged urgent global regulation and international coordination, proposing an international body modeled on the International Atomic Energy Agency (IAEA) — the UN’s nuclear watchdog — to oversee AI development, set safety standards, coordinate across borders, and “rapidly respond to changing circumstances.”
- He emphasized safeguards against risks (e.g., biosecurity threats from advanced models), while expressing confidence in humanity’s adaptability to job displacement and economic changes.
- Altman praised India’s progress in AI adoption (noting over 100 million ChatGPT users there) and highlighted the country’s potential to lead in responsible deployment.
The speech aligned with broader summit themes, including calls from other figures (e.g., UN Secretary-General António Guterres) for inclusive global AI governance that extends beyond a handful of countries or companies. It drew significant online attention, with reactions including alarm over the compressed timelines, criticism of OpenAI’s past lobbying against certain regulations, and praise for pushing proactive oversight.
Overall, Altman’s remarks underscored OpenAI’s view of superintelligence as imminent, transformative, and requiring unprecedented international structures to manage its upside (e.g., scientific and economic gains) while mitigating downsides amid accelerating progress.