
AI could make humans extinct, say top experts and CEOs in stark warning

There have been conflicting views on the risks artificial intelligence poses to humanity, with some going as far as to suggest that AI could make humans extinct. Surprisingly, that latter view is shared by many leading experts in artificial intelligence – including the CEOs of both OpenAI and Google DeepMind …

It’s the sort of statement you’d normally expect from conspiracy theorists living in their mom’s basement, but this one comes with impeccable credentials. The list of signatories reads like a Who’s Who of tech generally, and AI science in particular.

Signatories include renowned academics, and – tellingly – both Sam Altman, CEO of OpenAI, and Demis Hassabis, CEO of Google DeepMind.

The warning comes in the form of a single sentence:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

A preamble does state that the statement is intended to open discussion, but also says that a growing number of experts do genuinely think the stakes could be this high.

AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.

The NY Times notes that the signatories also include two of the biggest names in AI.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern A.I. movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s A.I. research efforts, had not signed as of Tuesday.)

The statement follows an earlier open letter calling for a six-month pause on the development of more advanced generative AI models, whose signatories included Apple cofounder Steve Wozniak.

That letter said that current AI development is out of control, and may pose “profound risks to society and humanity.”

The latest statement has been described as a “coming out” for AI experts who have been expressing their concerns privately, but until now have been afraid to do so publicly for fear of ridicule. The statement provides safety in numbers and in reputation: it would take an even braver person to dismiss fears expressed by so many luminaries.

What’s your view? Could AI represent an existential threat to humanity? Please share your thoughts in the comments.

Image: Google DeepMind/Unsplash


You’re reading 9to5Mac — experts who break news about Apple and its surrounding ecosystem, day after day.


Ben Lovejoy is a British technology writer and EU Editor for 9to5Mac. He’s known for his op-eds and diary pieces, exploring his experience of Apple products over time for a more rounded review. He also writes fiction, with two technothriller novels, a couple of SF shorts, and a rom-com!
