
Twitter Turned Microsoft’s New AI Experiment Into a Nazi in Less Than 24 Hours

The future of artificial intelligence is looking bright.

Yesterday, Microsoft unveiled Tay, a robot they hoped would be able to learn from people and engage in “casual and playful conversation.”

It took less than a day for Twitter users to corrupt Tay and make that casual conversation look like it was happening between Ava and Adolf circa 1940. This is mostly because the geniuses at Microsoft shipped an AI you can make say anything just by telling it to “repeat after me.”
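To illustrate how fragile that kind of feature is, here is a minimal, hypothetical sketch of a “repeat after me” handler with no content filter. This is not Microsoft’s actual code — just an assumed, naive implementation of the behavior users reportedly exploited:

```python
def respond(message: str) -> str:
    """Hypothetical chatbot reply logic with a naive echo feature."""
    trigger = "repeat after me"
    lowered = message.lower()
    if trigger in lowered:
        # Echo back everything after the trigger phrase, verbatim.
        # With no filtering step here, the bot will say whatever it's told.
        tail = message[lowered.index(trigger) + len(trigger):]
        return tail.strip(" :,")
    return "I'm still learning!"  # placeholder fallback reply

print(respond("repeat after me: hello world"))  # echoes "hello world"
```

The one-line fix most commenters pointed toward is obvious from the sketch: anything inside that `if` branch needs to pass through a moderation check before being repeated.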

The chatbot was designed to learn through conversation, mirroring its conversation partners and mining public data. Unfortunately, most of the internet is garbage.

[Screenshot: one of Tay’s tweets, March 24, 2016]

Not all of Tay’s tweets were Hitler-y though. Some were funny or even flirtatious.

Microsoft has been scrambling to clean up after Tay and delete all its offensive remarks. They’re currently trying to fix it up, but the bot raises a question: how are you gonna build a robot that parrots internet knowledge and culture when 98% of that culture is terrible?

Apparently Microsoft doesn’t know. Lol.

[H/T The Verge, RT]

2 Comments
AussieD
4 years ago

One of the largest tech companies in the world doesn’t understand that the internet has no filter or accountability for human speech… The future looks grim.

John Lucky
4 years ago

Sure, so wanting to build a wall to protect Americans makes people/AI Nazis?

https://en.m.wikipedia.org/wiki/Border_barrier

Well then….
