
AI Neural Networks Are Now Smart Enough To Know When They Shouldn’t Be Trusted

by : Hannah Smith on : 25 Nov 2020 16:39

The idea of all-knowing, uncontrollable artificial intelligence (AI) is scary, but if we’re lucky, it looks like AI might just be able to save us from itself.

A team of computer scientists from the Massachusetts Institute of Technology (MIT) say they have created AI neural networks that are able to determine whether they can be trusted, and to alert humans when they might be about to get something wrong.

The deep learning networks are being trained to mimic our brains in order to spot specific patterns, many of which may be too complex for humans themselves to understand. The idea is that AI-based systems will eventually be able not only to make predictions and decisions, but also to let us know how confident they are in those decisions.

As per Science Alert, the team behind the new networks say that this will help ensure greater accuracy, and with AI already being introduced in areas like autonomous driving and medical care, making sure AI can tell whether it's got things right could genuinely be a matter of life or death.

Alexander Amini of the MIT Computer Science and Artificial Intelligence Laboratory told MIT News:

We’re starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences.

Any user of the method, whether it’s a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision.

The system was tested by getting an AI program to judge the depth of objects in an image while also producing an estimate of its own certainty. The test found that the program was able to tell when its estimates were likely to be wrong, and to flag when it was shown data unlike anything it had been trained on.

The technique behind this self-assessment is known as Deep Evidential Regression, and the scientists say that as more high-quality data becomes available, the ability of such systems to accurately score their own trustworthiness will only improve. It's not the first time safeguards have been built into neural networks, but the team says this is the quickest and most efficient way for an AI to analyse its own confidence levels.
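
To give a rough sense of how such a system scores itself: in Deep Evidential Regression, the network's final layer outputs the parameters of a probability distribution over its answer rather than a single number, and those parameters can then be converted into uncertainty scores. What follows is a minimal, illustrative sketch in PyTorch; the class name EvidentialHead and the surrounding setup are a simplified assumption for illustration, not MIT's actual code.

import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    # Hypothetical final layer: maps features to the four parameters
    # (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma distribution,
    # as in the Deep Evidential Regression formulation.
    def __init__(self, in_features):
        super().__init__()
        self.linear = nn.Linear(in_features, 4)

    def forward(self, x):
        gamma, log_nu, log_alpha, log_beta = self.linear(x).chunk(4, dim=-1)
        nu = F.softplus(log_nu)              # constrain nu > 0
        alpha = F.softplus(log_alpha) + 1.0  # constrain alpha > 1
        beta = F.softplus(log_beta)          # constrain beta > 0
        return gamma, nu, alpha, beta        # gamma is the prediction itself

def uncertainties(nu, alpha, beta):
    # Two flavours of doubt: aleatoric (noise inherent in the data) and
    # epistemic (the model's uncertainty about its own prediction).
    aleatoric = beta / (alpha - 1)
    epistemic = beta / (nu * (alpha - 1))
    return aleatoric, epistemic

A system built this way could, for instance, refuse to act whenever the epistemic score climbs above some threshold, which is essentially the 'don't trust me on this one' warning described above.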

The research has not yet been peer-reviewed, but Amini is confident the technology could help improve the real-time safety of AI-assisted technology, saying that 'any field that is going to have deployable machine learning ultimately needs to have reliable uncertainty awareness'.

So if a real-life Skynet does end up trying to destroy us all, at least it’ll be able to warn us that it’s going to do it.

Credits

Science Alert: Artificial Intelligence Is Now Smart Enough to Know When It Can't Be Trusted