- An AI expert survey puts the average estimated risk of AI causing human extinction at 5%.
- While not minuscule, this figure suggests most experts consider extreme existential dangers remote.
- There is little expert consensus on the threat level going forward, given uncertainties about future AI capabilities.
Annihilation just 5% risk
A sweeping new survey of over 2,700 AI experts suggests the risk of advanced artificial intelligence wiping out humanity is relatively low. Researchers asked participants to gauge the likelihood of catastrophic outcomes from AI, including human extinction.
Nearly 60% of respondents put the threat of AI-driven annihilation at around 5%. While not an insignificant risk, this indicates most researchers believe extreme dangers remain remote.
Conducted across institutions
The study was conducted by researchers at institutions including Oxford and the University of Bonn.
Author Katja Grace said the findings reveal that serious researchers “don’t find it strongly implausible that advanced AI destroys humanity.” However, she added that beyond a “general belief in a non-minuscule risk,” there is little consensus on how large the threat is.
Lots of debate
The existential threat posed by AI has stirred vigorous debate recently among tech luminaries. Some, like Google Brain co-founder Andrew Ng, dismiss the most dire forecasts.
But other experts, including OpenAI leader Sam Altman, warn of potentially catastrophic consequences from rapid AI advancement if not properly regulated.
This difference of perspective recently prompted AI pioneer Yann LeCun to accuse some industry voices of stoking AI fears to advance regulatory agendas. The new survey shows expert opinion spread across a wide spectrum.
Moving forward, policymakers shaping the AI landscape must weigh measured risks against potentially transformative benefits.