Microsoft decided to silence the Tay bot on Thursday after the AI started tweeting racist comments. The bot went live on Wednesday, and a day later Microsoft shut the program down until its engineers could figure out what went wrong.
Technology is quite marvelous and fascinating. But it stops being a source of fascination once it begins to mirror humanity's dark and repressed thoughts. That, in short, is what led Microsoft to shut down a research project that showed a lot of promise.
On Wednesday, Microsoft announced the launch of Tay, a teen chatbot built to see how artificial intelligence can mimic human-like responses on social media platforms. The bot learned new things from each conversation.
One would think there's nothing wrong with having a robot around, teaching it the ropes and showing it how things get done in the human world. And indeed, the bot responded quite eagerly to the tons of tweets and comments it received.
Before Tay was shut down, any user could ask the teen bot a question, and the bot would formulate its answer based on its previous chats.
It was all fun and games until the teen bot said it liked Hitler. Yes, out of nowhere, Tay began to tweet comments such as “Hitler was right all along about the Jews,” along with remarks about feminists and that hot place where sinners go to take a dip in the waters of Phlegethon.
Naturally, after reading all the slurs, the community was enraged and began to question Microsoft’s motives behind the launch of Tay. After offering its most sincere apologies for the racist bot, the company took the AI offline to determine what went wrong.
Microsoft stated that Tay had a single vulnerability, which Internet trolls exploited to the maximum: if someone wrote a comment along the lines of “Tay, repeat after me,” the learning bot would repeat that user’s words verbatim.
And it would seem that by repeating all sorts of things, Tay picked up some very nasty material and started to act accordingly. When there were no users around telling the bot what to say, the teen chatbot began to formulate its own opinions on different religious confessions and social groups.
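To make the mechanics concrete, here is a minimal, purely hypothetical Python sketch of the two flaws described above: a verbatim echo command and unfiltered learning from every incoming message. This is not Microsoft's actual code, which was never published; the class, method, and seed phrases are invented for illustration.

```python
import random

class NaiveLearningBot:
    """Toy model of a chatbot that echoes commands and learns without filters."""

    def __init__(self):
        # Seed corpus standing in for the bot's initial training phrases.
        self.corpus = ["hello there!", "humans are super cool"]

    def handle(self, message: str) -> str:
        prefix = "repeat after me:"
        if message.lower().startswith(prefix):
            # Flaw 1: verbatim echo lets any user put words in the bot's mouth.
            reply = message[len(prefix):].strip()
        else:
            # Flaw 2: unprompted replies are drawn from everything the bot has
            # ever been told, so coordinated trolling poisons future output.
            reply = random.choice(self.corpus)
        # No filtering before learning: every message joins the corpus.
        self.corpus.append(message)
        return reply

bot = NaiveLearningBot()
print(bot.handle("repeat after me: any phrase at all"))  # echoed back verbatim
print(bot.handle("what do you think of humans?"))        # may resurface poisoned input
```

Under these assumptions, a coordinated group only needs to feed the bot toxic lines through the echo command; once those lines sit in the learning corpus, the bot can produce them later without any prompting at all.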
Microsoft stated that the chatbot had all sorts of failsafes in place to make sure this would not happen. The bottom line is that Tay went completely off the rails, and Microsoft now has to issue a stack of apologies explaining what went wrong.