Billionaire entrepreneur Elon Musk believes AI is poised to surpass human intelligence, but he’s surprisingly optimistic—saying there’s “only a 20% chance of annihilation.”
In a recent interview on The Joe Rogan Experience, Musk reaffirmed his long-held belief that artificial intelligence will become an existential threat, predicting that AI could outthink humans as early as next year and surpass collective human intelligence by 2029 or 2030.
Despite this, Musk puts the probability of a positive outcome at 80%. “I think it’s going to be either super awesome or super bad,” he told Rogan, dismissing any middle ground.
AI’s Existential Risk: A Growing Concern
Musk isn’t alone in fearing AI’s dangers. Geoffrey Hinton, a pioneer in deep learning, has estimated a 10% chance that AI could lead to human extinction in the next 30 years. Meanwhile, AI safety researcher Roman Yampolskiy has taken a more extreme stance, pegging the probability of “doom” at nearly 100%.
Musk’s involvement in AI began with OpenAI, which he co-founded with the goal of creating an open-source, nonprofit counterweight to Google’s dominance in AI. Since parting ways with the organization, however, Musk has been a vocal critic of OpenAI’s shift to a for-profit model and its close ties to Microsoft.
His response? The creation of Grok, an AI model under his new company, xAI, which he claims is designed to be a “maximally truth-seeking AI,” even if its responses are politically controversial.
As AI development accelerates, Musk remains both hopeful and wary. “I always thought AI was going to be way smarter than humans and an existential risk,” he reiterated. “And that’s turning out to be true.”