Everyone’s favourite billionaire and Tesla boss, Elon Musk, has issued a stark warning about the future dangers of artificial intelligence.
The SpaceX owner stated AI is more dangerous than nuclear warheads and called for a regulatory body to oversee the development of superintelligence.
Musk made the announcement during the South by Southwest tech conference in Austin, Texas on Sunday, March 11.
As you may – or may not – be aware, this isn’t the first time Musk has made alarming predictions about artificial intelligence. He’s previously called AI far more dangerous than North Korea.
Whether you believe him or not – or even if you’re sitting on the fence – others have called his predictions fear-mongering.
According to CNBC, Facebook founder Mark Zuckerberg said Musk’s doomsday AI scenarios are unnecessary and ‘pretty irresponsible’.
However, a defiant Musk has branded those who dismiss his warnings ‘fools’, saying:
The biggest issue I see with so-called AI experts is they think they know more than they do and they think they’re smarter than they actually are.
This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.
I’m really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me. It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.
Musk pointed to machine intelligence playing the ancient Chinese strategy game ‘Go’ to demonstrate rapid growth in AI’s capabilities.
For example, the London-based company DeepMind – acquired by Google in 2014 – developed an artificial intelligence system, AlphaGo Zero, which learned to play the game without any human intervention.
It learned entirely from randomised games against itself, as the company announced in a paper published in October last year.
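To give a feel for what ‘learning from play against itself’ means, here’s a toy sketch – emphatically not AlphaGo Zero’s actual method, which combined deep neural networks with Monte Carlo tree search. This simplified example uses plain tabular Q-learning via self-play on single-pile Nim (players alternate taking 1–3 stones; whoever takes the last stone wins). The game, parameters, and function names are all illustrative assumptions:

```python
import random

# Toy self-play sketch (NOT DeepMind's algorithm): tabular Q-learning on
# single-pile Nim. Piles that are multiples of 4 are losing for the player
# to move, so a well-trained policy always moves to a multiple of 4.
N, ACTIONS = 12, (1, 2, 3)   # illustrative pile size and legal takes
ALPHA, EPSILON = 0.5, 0.3    # learning rate and exploration rate (assumed)
Q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, N + 1)}

def best_action(s):
    # Greedy move: the action with the highest learned value at pile size s.
    return max(Q[s], key=Q[s].get)

random.seed(0)
for _ in range(20000):
    s = random.randint(1, N)  # start each self-play game from a random pile
    while s > 0:
        # Epsilon-greedy: mostly play the current best move, sometimes explore.
        a = random.choice(list(Q[s])) if random.random() < EPSILON else best_action(s)
        nxt = s - a
        # Zero-sum update: taking the last stone is worth +1; otherwise the
        # move is worth the negation of the opponent's best reply next turn.
        target = 1.0 if nxt == 0 else -max(Q[nxt].values())
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = nxt

print([best_action(s) for s in (5, 6, 7)])  # prints [1, 2, 3]: each leaves a pile of 4
```

Both ‘players’ share one value table, so every game improves the policy from both sides – the same basic idea, at a vastly smaller scale, behind learning without human game records.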
One of Musk’s greatest concerns is that AI’s development could outpace our ability to manage it safely. He continued:
So the rate of improvement is really dramatic. We have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face and the most pressing one.
I’m not normally an advocate of regulation and oversight — I think one should generally err on the side of minimizing those things — but this is a case where you have a very serious danger to the public.
It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important.
I think the danger of AI is much greater than the danger of nuclear warheads by a lot and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane.
And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.
I don’t know about you, but given the intelligence this man has, I’m slightly concerned.
If, like many of us, you’re concerned about what the future holds – or you just love a good conspiracy – chances are you’ll have heard of David Icke.
Check out this documentary from when UNILAD paid him a visit:
The robots are coming – you’ve been warned!