University of Oxford professor says the probability of extinction by AI is 'close to zero'
Per The Independent
While AI has sparked fears of potential doomsday scenarios, an Oxford University professor has pushed back on these claims, saying the probability of human extinction by AI is "close to zero."
The professor, Sandra Wachter, who teaches technology and regulation at the University of Oxford, also said that the claim that AI could cause human extinction was just a "publicity stunt."
Wachter: “What we see with this new open letter is a science fiction fantasy that distracts from the issue right here right now. The issues around bias, discrimination and the environmental impact."
Wachter's remarks were in response to a recently released letter from the Center for AI Safety, a San Francisco-based nonprofit, whose signatories reportedly included AI experts and pioneers.
Wachter: “The whole discourse is being put on something that may or may not happen in a couple of hundred years. You can’t do something meaningful about it as it’s so far in the future."
The professor said it would be better to focus on how AI could replace jobs than to worry about the "Terminator scenario." Wachter also highlighted the environmental impact of AI, saying it takes 360,000 gallons of water daily to cool a mid-sized data center.
Wachter: “It’s a publicity stunt. It will attract funding."
Toward the end of February, a Monmouth University poll reportedly found that 55% of Americans believed AI could pose a risk to humanity.
Only 9% of participants thought AI would do more good than harm for humanity, while 46% believed it would be equally good and bad.
In the letter, the Center for AI Safety said that mitigating the risk of extinction from AI should be a global priority. More than 300 executives, researchers, and engineers working on AI were involved.
Other News:
- Over half of Americans think AI poses a risk to humanity
- Center for AI Safety says the global priority should be the risk of extinction by AI