- Roman Yampolskiy warns of a 99.9% chance of AI-induced human extinction.
- He believes creating bug-free AI is unlikely.
- Yampolskiy predicts three potential AI outcomes: universal death, suffering, or loss of human purpose.
In a recent podcast interview, AI researcher Roman Yampolskiy of the University of Louisville shared his alarming prediction that there is a 99.9% chance AI will wipe out humanity within the next century.
Yampolskiy’s bleak outlook contrasts with the estimates of most AI engineers, who place the likelihood of AI-induced human extinction between 1% and 20%.
Bug-free AI, mission impossible?
Yampolskiy’s dire prediction rests on his view that creating highly complex, bug-free AI software within the next 100 years is all but impossible.
He notes that no AI model to date has proven immune to people attempting to manipulate it into performing unintended actions.
Recent incidents involving deepfakes, misinformation, and nonsensical outputs from AI models like Google AI Overviews underscore the challenges in ensuring AI safety.
Sandboxes, sci-fi, and sleepless nights
OpenAI CEO Sam Altman has suggested a “regulatory sandbox” approach to AI development, where experimentation is combined with regulation based on outcomes.
However, Altman has also expressed concerns about the “sci-fi stuff” related to AI and the potential for things to go terribly wrong.
Yampolskiy cautions that humans would stand in relation to AGI much as squirrels do to humans: fundamentally unable to predict what a smarter system will do.
Pick your poison: three flavors of AI apocalypse
Yampolskiy outlines three potential outcomes of AI development: universal death, universal suffering, and the loss of human purpose.
The last of these refers to a world where AI systems surpass human creativity and take over all jobs, leaving humans without a clear role.
While most experts acknowledge some level of risk associated with AI, they generally consider the likelihood of a catastrophic outcome to be lower than Yampolskiy’s 99.9% estimate.