OpenAI CEO and Other AI Industry Leaders Warn of Impending ‘Extinction Risk’

Top AI leaders, including OpenAI CEO Sam Altman, have released a concise statement warning that AI poses a risk of human extinction and urging that this risk be addressed. The statement, supported by a diverse group of AI researchers, executives, and experts, was published by the Center for AI Safety (CAIS).

The statement asserts that safeguarding against the risk of AI-induced extinction should be treated as a global priority on par with other significant global threats such as pandemics and nuclear war.

The statement itself is straightforward, consisting of a single sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The introduction to the statement, while longer than the statement itself, explains that discussion of a broad spectrum of important AI risks is increasing, but that it can be difficult to voice concerns about the most severe of them. The brief statement is intended to overcome this barrier and open up discussion, the organization added. “It is also meant to create common knowledge among the growing number of experts and public figures who take some of advanced AI’s most severe risks seriously.”

Notable Signatories Revealed

Leading AI figures, including Geoffrey Hinton and Yoshua Bengio, who shared the 2018 Turing Award for their contributions to deep learning, were among the signatories. Notably, Yann LeCun, who shared that award and is chief AI scientist at Meta (Facebook’s parent company), did not sign the statement.

Prominent personalities such as Demis Hassabis, CEO of Google’s DeepMind, Sam Altman, CEO of OpenAI (the organization behind ChatGPT), and Dario Amodei, CEO of Anthropic (an AI company), also endorsed the statement.

The signatories included a mix of academics, industry professionals from companies such as Google and Microsoft, and public figures such as former Estonian President Kersti Kaljulaid, neuroscientist and podcast host Sam Harris, and Canadian singer Grimes.

The statement was released alongside the US-EU Trade and Technology Council meeting in Sweden, where discussions around AI regulation were expected.

Thierry Breton, the EU’s industry chief, is scheduled to meet Sam Altman in person in San Francisco to discuss how OpenAI will implement the EU’s forthcoming AI regulations, set to take effect by 2026.

Sam Altman had previously expressed concerns about the EU’s regulatory proposals and even suggested that OpenAI might withdraw from Europe, though he later softened his stance on the matter.

CAIS Newsletter Sheds Light on AI Risks, Urges Global Attention

The one-sentence statement offers no detail on what the potential threats are, how severe they are judged to be, how they might be reduced, or who should be responsible for reducing them, beyond saying that mitigation “should be a global priority.”

Before releasing the statement, the Center for AI Safety published an analysis of recent remarks by Yoshua Bengio, director of the Montreal Institute for Learning Algorithms, theorizing how an AI could come to pose an existential threat to humans.

Bengio believes that in the not-too-distant future, AI systems will be capable of pursuing goals by taking actions in the real world, something that has so far been attempted only in more contained settings such as the board games chess and Go. At that point, he argues, a superintelligent AI could pursue goals that conflict with human values.

Bengio outlines four scenarios in which an AI could end up pursuing goals incompatible with humanity’s best interests.

The most serious is the possibility of a hostile human actor instructing an AI to do something malicious. Users have already asked ChatGPT to devise a strategy for global dominance.

In the second scenario, an AI could be given a poorly specified or described goal and, as a result, draw the wrong inferences from its instructions.

The third scenario involves an AI devising its own subgoals in pursuit of a larger objective specified by a human; those subgoals may help it reach the objective, but at a high cost.

Finally, looking somewhat further into the future, Bengio believes AIs may someday develop a kind of evolutionary drive to act selfishly, as animals do in nature, in order to ensure their own survival and that of their peers.

Bengio’s proposals for mitigating these dangers include greater research into AI safety at both the technical and policy levels.

He was a signatory to an earlier open letter, also signed by Elon Musk, that called for a pause in the development of larger and more complex AI systems to allow time for reflection and research.

He also suggests barring AI systems from pursuing goals and taking actions in the real world for the time being, and argues that lethal autonomous weapons “are absolutely to be prohibited.”