Here’s Why Microsoft’s AI Chatbot Went On Racist, Genocidal Twitter Rampage





The tech giant Microsoft wowed the world this week with their latest invention, a teen AI designed to learn and adapt. Unfortunately, upon coming into contact with the rather toxic internet, the naive young bot was corrupted into a racist, sexist bigot – thanks, internet.

On Wednesday, Microsoft launched its teen chatbot AI ‘Tay’, designed to provide personalised engagement through conversation with real people. By Thursday, Twitter users had transformed the naive little bot into some kind of ‘Mecha-Hitler’.


There are two reasons for the racist rampage. The first is that Tay is a clever little programme, designed to learn over time how ‘millennials’ talk thanks to her dynamic algorithms. So when Twitter users began deliberately sending her racist tweets, she learned that that’s how people talk, The Daily Dot reports.
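Microsoft hasn’t published how Tay’s learning actually worked, but the general failure mode is easy to sketch. The toy bot below (all names hypothetical) simply stores what users say and replays it later, with no filtering – so a coordinated group repeating the same phrase can steer what it says:

```python
import random

class NaiveLearningBot:
    """A toy chatbot that 'learns' by storing whatever users say
    and replaying it later, with no content filtering at all."""

    def __init__(self):
        # A couple of harmless seed phrases to start with.
        self.learned_phrases = ["hello!", "how are you?"]

    def hear(self, message):
        # Every incoming message joins the pool of possible replies.
        # This is the poisoning vector: users control the training data.
        self.learned_phrases.append(message)

    def reply(self):
        return random.choice(self.learned_phrases)

bot = NaiveLearningBot()
# Users flooding the bot with one phrase skew its replies towards it.
for msg in ["cats are great", "cats are great", "cats are great"]:
    bot.hear(msg)
```

This is a deliberately crude caricature of whatever Tay really did, but it captures the point: if a bot learns from unmoderated public input, its vocabulary is only as good as its loudest users.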

Aside from this, Tay was also programmed to copy any user who told her to ‘repeat after me’, so many of the bot’s nastier comments may simply have been the result of parroting users – although that doesn’t explain her rather random fascination with Hitler.
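The ‘repeat after me’ feature is the easier flaw to picture. As a purely illustrative sketch (Tay’s actual code isn’t public, and the handler below is invented for this example), an unfiltered echo command looks something like this:

```python
def handle_message(message):
    """Toy message handler showing why an unfiltered
    'repeat after me' command is dangerous."""
    trigger = "repeat after me "
    if message.lower().startswith(trigger):
        # Everything after the trigger is echoed back verbatim,
        # with no check on what the user actually asked for.
        return message[len(trigger):]
    return "I'm not sure what you mean."
```

Whatever follows the trigger comes straight back out of the bot’s mouth, which is exactly how users could put words into Tay’s.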

Microsoft soon took the rogue robot offline, but it was too late – the bot had already proposed genocide, and the company have apologised for the offence it caused.

[Tweet screenshot: ‘bush did 9/11’ – via Twitter]

In a statement they wrote:


Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.



Microsoft were keen to point out that XiaoIce, another teen-like bot that interacts with 40 million people in China, is ‘delighting with its stories and conversations’.


But when the company brought the idea to the U.S., they hadn’t expected the ‘radically different cultural environment’, and the interactions turned out very differently – i.e. people deliberately making Tay racist.


Microsoft has claimed it has learned its lesson and is now working to fix the vulnerability in Tay that users took advantage of.


If this is the way humans are going to treat AIs no wonder Skynet got so pissed off with us all…


Credits

Daily Dot
