Humans Will Not Be Able To Control Superintelligent Artificial Intelligence, Study Shows
A new study has warned that it will become impossible to predict the actions of superintelligent artificial intelligence (AI), raising questions over whether humans may eventually lose control.
Research conducted at the Max Planck Institute for Human Development's Center for Humans and Machines and published in the Journal of Artificial Intelligence Research has found that in order to accurately predict what an individual AI is going to do, scientists would have to run an exact simulation of the system – a feat that will only grow more difficult as AI systems become more advanced.
Much of the study was based around a question first explored by Alan Turing, known as the ‘halting problem’, which asks whether it is possible to tell in advance whether any given program will eventually stop or run forever. The researchers applied this idea to hypothetical ‘containment algorithms’ – programs that would simulate a superintelligent AI’s behaviour and shut it down if it were about to cause harm.
The problem is that Turing proved in 1936 that no single algorithm can decide, for every possible program, whether it will ever halt – and the study argues that a universal containment algorithm runs into the same barrier. Circuit breakers designed to ‘halt’ an AI if it tried to cause harm may therefore be powerless against the far more sophisticated AI systems of the future, and designing individual containment algorithms tailored to specific systems would only become more difficult as computers grow more intelligent.
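Turing’s 1936 argument can be sketched in a few lines of code. The sketch below is illustrative only – the names `would_halt`, `make_contrarian` and `pessimist` are not from the study – but it shows the diagonal trick at the heart of the result: any claimed universal halting checker can be fed a program built to do the opposite of whatever the checker predicts.

```python
def make_contrarian(would_halt):
    """Given a claimed universal halting checker, build a program the
    checker must mis-predict (Turing's diagonal argument, sketched)."""
    def contrarian():
        if would_halt(contrarian):
            while True:        # predicted to halt -> run forever instead
                pass
        return "halted"        # predicted to loop -> halt immediately
    return contrarian

# Try a checker that claims every program loops forever:
pessimist = lambda prog: False
c = make_contrarian(pessimist)

# The pessimist predicts c never halts, yet c() returns at once –
# so the checker is wrong about at least one program. A checker that
# answered True would be wrong the other way round.
assert c() == "halted"
```

The same contradiction applies to a containment algorithm that tries to decide whether an arbitrary AI program will cause harm: whatever it decides, a program can be constructed that falsifies the decision.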
Iyad Rahwan, a researcher at the Max Planck Institute, told Business Insider that ‘the ability of modern computers to adapt using sophisticated machine learning algorithms makes it even more difficult to make assumptions about the eventual behavior of a superintelligent AI’, suggesting that future AI systems may be impossible to rein in with failsafes.
Rahwan is concerned enough about the results of the study that he has suggested AI programs should not be created unless it’s for a specific, well-understood purpose, as creating AI for its own sake may make it harder to limit its capabilities as the system develops.
Even more worryingly, superintelligent AIs capable of outsmarting humans aren’t as far off as we might think. In fact, they could already be here. According to lead study author Manuel Cebrian, ‘there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it’.
So as businesses and people across the world race to embrace AI, maybe it’s worth taking a second to think about whether all-powerful machines are something we actually want to be creating – especially now that we know it’s unlikely they will ever be stopped.