The future of artificial intelligence is looking bright.
Yesterday, Microsoft unveiled Tay, an AI chatbot it hoped would learn from people and engage in “casual and playful conversation.”
It took less than a day for Twitter users to corrupt Tay and make that casual conversation look like a chat between Ava and Adolf circa 1940. This is largely because the geniuses at Microsoft shipped an AI you could make say anything just by telling it to “repeat after me.”
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— gerry (@geraldmellor) March 24, 2016
The chatbot was designed to learn through conversation, mirroring its conversation partners and mining public data. Unfortunately, most of the internet is garbage.
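Tay's actual implementation has never been published, so here's a purely hypothetical toy sketch of why that design is risky: a bot that parrots on command and treats everything it hears as future training material can be poisoned by anyone who talks to it. The class and method names below are made up for illustration.

```python
# Hypothetical sketch — NOT Tay's real code — of two failure modes:
# (1) literal parroting via "repeat after me", and
# (2) naively adding every incoming message to the bot's corpus.

class NaiveMirrorBot:
    def __init__(self):
        # Seed phrase, echoing Tay's opening attitude.
        self.learned = ["humans are super cool"]

    def respond(self, message: str) -> str:
        prefix = "repeat after me: "
        if message.lower().startswith(prefix):
            # Exploit 1: the bot echoes the attacker verbatim...
            echoed = message[len(prefix):]
            # ...and "learns" the phrase, so it can resurface later.
            self.learned.append(echoed)
            return echoed
        # Exploit 2: every message becomes part of the corpus,
        # then gets mirrored back in future replies.
        self.learned.append(message)
        return self.learned[-2]


bot = NaiveMirrorBot()
print(bot.respond("repeat after me: something awful"))  # parroted back verbatim
print("something awful" in bot.learned)  # the phrase is now in its corpus
```

With no content filter between "heard it" and "learned it," the corpus degrades at exactly the speed of its worst users, which is roughly what Twitter demonstrated in under 24 hours.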
Not all of Tay’s tweets were Hitler-y though. Some were funny or even flirtatious.
Microsoft has been scrambling to clean up after Tay and delete its offensive remarks. The bot raises an obvious question, though: how do you build a robot that parrots internet knowledge and culture without it absorbing the 98% of that culture that's terrible?
Apparently Microsoft doesn’t know. Lol.
One of the largest tech companies in the world apparently didn’t understand that the internet has no filter or accountability for human speech. The future looks grim.