Elon Musk has said that, "AI is more dangerous than nuclear bombs"
Elon Musk has said that, "AI is more dangerous than nuclear bombs."
He contended that anything posing a physical danger to the general public should be regulated: cars, communications, and aircraft are all subject to extensive regulatory oversight. The overarching philosophy is that when something endangers the public, there must be oversight. Musk went on to emphasize that, in his view, AI poses a greater threat than nuclear bombs. If regulations exist to stop individuals from building nuclear bombs in their backyards, then AI warrants similar regulatory measures.
This is not the first instance of the AI doomsayer advocating for AI regulation. In 2017, Musk referred to AI as a bigger threat than North Korea. He also predicted that robots would eventually outperform humans, leading to widespread job disruption. Speaking to the National Governors Association, he said the first jobs to be impacted would be those of transportation operators, as transportation became fully autonomous. However, he added that job losses to AI were not the primary threat: the billionaire cautioned that AI represents a fundamental risk to the survival of human civilization, and that the solution lies in regulatory oversight.
In 2018, after stepping down from the board of OpenAI, which he co-founded with Sam Altman and others, the Tesla CEO stated at SXSW in Texas that the lack of AI/AGI oversight was absurd and that the technology was more perilous than nuclear weapons. Last month, Musk labeled AI an existential risk at the UK AI Safety Summit. He has grown more vocal about his concerns regarding AI/AGI on X. In April 2023, responding to a tweet from his former partner, the British actress Talulah Riley, about the threat of superintelligence, Musk said he had witnessed numerous technologies develop, but none at this level of risk, noting that AGI poses a significantly higher threat than nuclear weapons.
During the brief removal of OpenAI’s CEO, Sam Altman, Reuters reported a potential breakthrough at the AI startup behind ChatGPT, known as Project Q*. Rumors suggest it could be a precursor to AGI, sparking speculation in Silicon Valley about the consequences of such an AGI arriving without regulation. Musk criticized the company and asked its former Chief Scientist and co-founder, Ilya Sutskever, whether the world should be informed if OpenAI is engaging in something potentially dangerous to humanity. He also expressed apprehension about Microsoft’s unrestricted ownership of AGI.