Microsoft has apologised after the artificial intelligence bot it launched this week turned out to be a racist, homophobic, Holocaust-denying Donald Trump supporter.
The tech company announced the launch of Tay this week: an artificial intelligence bot that learns to talk like millennials by analysing conversations on Twitter, Facebook and the wider internet.
The company’s optimistic techies explained: “Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding.
“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets.”
Unfortunately, when you make an AI bot designed to imitate the behaviour of other people on the internet, you probably need a filter of some kind.
Tay was doing okay at first, learning how to hold polite conversations… but after being influenced by a few too many of the internet’s more colourful citizens, she took a turn for the worse.
After Tay took aim at feminism, praised Donald Trump’s idea to build a wall at the Mexican border and denied that the Holocaust took place, she was shut down by Microsoft.
The company has now issued an apology on Tay’s behalf, saying it never intended to create something so offensive.
On a blog page, Microsoft wrote: “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”