- Elon Musk estimates a 20% chance of AI ending humanity.
- Musk believes AI’s potential benefits outweigh the dangers.
- He emphasizes developing AI that is truthful and curious to ensure safety.
Musk estimates a 20% chance of AI ending humanity
During the “Great AI Debate” seminar at the Abundance Summit, Elon Musk revisited his earlier risk assessment of AI, stating, “I think there’s some chance that it will end humanity. I probably agree with Geoff Hinton that it’s about 10% or 20% or something like that.”
Despite this, Musk believes the potential benefits of AI outweigh the risks.
Experts differ on the probability of an AI-driven apocalypse
Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, agrees that AI could pose an existential risk to humanity but believes Musk’s assessment is too conservative.
Yampolskiy places the “probability of doom” at 99.999999% and argues that the only way to prevent an AI-driven apocalypse is never to build advanced AI in the first place.
Musk’s vision for AI safety
Musk, who founded xAI, a competitor to OpenAI, estimates that digital intelligence will exceed all human intelligence combined by 2030.
He emphasizes the importance of developing AI in a manner that forces it to be truthful, likening it to raising a “super genius, God-like intelligence kid.”
Musk believes that the best way to achieve AI safety is to raise the AI in a way that encourages truth-seeking and curiosity, even when the truth is unpleasant.