Microsoft created an AI to learn from how we tweet, and it immediately became racist

Some techs at Microsoft decided to build an AI that would learn how to speak by studying how we communicate on Twitter. It took less than 24 hours for it to become a racist nut job. If anything, it’s surprising it took that long.

Dubbed @TayandYou, the project was developed by Microsoft’s Technology and Research and Bing teams and designed to have automated conversations with users while learning from the language they used. The Internet being a terrible place, it didn’t take long for some rabble-rousers to introduce the ‘bot to the more offensive side of things. As The New York Times notes, it took just a few hours for @TayandYou to begin disputing the existence of the Holocaust and using “unpublishable words” to describe women and minorities.

Microsoft shut the social and cultural experiment down in less than a day, deleting several of the offensive tweets in the process.

“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways,” a Microsoft representative said in a statement. “As a result, we have taken Tay offline and are making adjustments.”

The project was designed to engage in “casual and playful conversation,” which is a cool idea, but, as with most nice things, humanity just didn’t deserve this sweet, snarky AI pal. People ruin everything.

(Via The New York Times)
