BuzzFeed News


Racist Twitter Bot Went Awry Due To “Coordinated Effort” By Users, Says Microsoft

Microsoft silences its "Tay" chatbot following a series of racist outbursts on Twitter.

Posted on March 24, 2016, at 12:15 p.m. ET

Asked during a closed beta whether she'd kill baby Hitler, "Tay," Microsoft's AI-powered chatbot, replied with a simple "of course." But after 24 hours conversing with the public, Tay's dialogue took a sudden and dramatic turn. The chatbot, which Microsoft claims to have imbued with the personality of a teenage American girl, began tweeting her support for genocide and denying the Holocaust.

Microsoft quickly took Tay offline, issuing a statement that blamed the bot's sudden degeneration on a coordinated effort to undermine her conversational abilities.

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical," a Microsoft spokesperson told BuzzFeed News in an email. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

Left unexplained: why Tay was released to the public without a mechanism that would have protected the bot from such abuse, such as a blacklist of contentious language. Asked why Microsoft didn't filter words like the n-word and "holocaust," a Microsoft spokesperson did not immediately provide an explanation.
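For readers unfamiliar with the safeguard being described, a keyword blacklist is a simple pre-filter that refuses to engage with messages containing flagged terms. The sketch below is purely illustrative, assuming hypothetical term lists and function names; it is not Microsoft's implementation, which has not been disclosed.

```python
import re

# Hypothetical blocklist; real deployments would use a much larger,
# curated list. These terms are placeholders for illustration only.
BLOCKED_TERMS = {"holocaust", "genocide"}

def is_blocked(message: str) -> bool:
    """Return True if the message contains any blocked term."""
    words = re.findall(r"[a-z']+", message.lower())
    return any(word in BLOCKED_TERMS for word in words)

def safe_reply(message: str, generate_reply) -> str:
    """Refuse to engage when a message trips the blocklist,
    otherwise defer to the bot's normal reply generator."""
    if is_blocked(message):
        return "Sorry, I'd rather not talk about that."
    return generate_reply(message)
```

Simple keyword lists like this are easy to evade with misspellings or paraphrase, which is one reason they are only a first line of defense rather than a complete fix.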

Microsoft unleashed Tay to the masses Wednesday on a number of platforms including GroupMe, Twitter, and Kik. Tay learns as she goes: "The more you talk to her the smarter she gets," Microsoft researcher Kati London told BuzzFeed News in an interview. Tay takes stances, London said. It's an intriguing approach, but one that proved obviously problematic when tested against the dark elements of the internet.

As of 10 a.m. Pacific time Thursday, Microsoft had not yet removed a number of these tweets:

Tay's racist turn is an unsettling moment for artificial intelligence, which is developing at a rapid pace. The key characteristic of AI is that it can learn on its own, unsupervised by human programmers. AI is designed to become "smarter" as it ingests more data. Facebook's AI-powered virtual assistant, M, refuses to take stances, a constraint set by the company that BuzzFeed News has detailed in recent months. Perhaps Facebook was on to something. Tay is the opposite end of the spectrum, programmed to be feisty and opinionated. And we're now seeing the dark places that can lead.
