Researchers Accused Google And “60 Minutes” Of Spreading AI “Disinformation”

AI is not “some mysterious, magical, autonomous being,” one critic said.

Still from "60 Minutes" segment with Google CEO Sundar Pichai

Artificial intelligence researchers are calling out both CBS and Google for overhyping AI after the network aired a high-profile 60 Minutes interview with Google CEO Sundar Pichai on Sunday. In the segment, correspondent Scott Pelley claimed that an AI program made by Google had, all by itself, learned a language it had never seen before, and Pichai referred to AI tech as a “black box” that even the people who worked in the field didn’t fully understand.

“Of the AI issues we talked about, the most mysterious is called ‘emergent properties,’” Pelley said in the segment. “Some AI systems are teaching themselves skills that they aren’t expected to have. How this happens is not well understood.”

The segment then cuts to a video of an AI program created by Google that the network does not identify, showing a user asking the program questions in Bengali, a language spoken in Bangladesh and India, and the program responding with answers in both Bengali and English. The software in question is PaLM, the same underlying tech that powers Bard, Google’s recently released AI chatbot.

Pelley said that the program “adapted on its own” after it was asked questions in Bengali, a language he claimed it was not trained to know. 

“We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali,” James Manyika, a Google vice president also interviewed by 60 Minutes, said on the segment. “So now, all of a sudden, we have a research effort where we’re now trying to get to a thousand languages.”

One AI program spoke in a foreign language it was never trained to know. This mysterious behavior, called emergent properties, has been happening – where AI unexpectedly teaches itself a new skill.

Twitter: @60Minutes

In popular Twitter threads, two prominent AI researchers questioned these claims. Margaret Mitchell, a researcher and ethicist at AI startup Hugging Face who formerly co-led Google’s AI ethics team, pointed out that PaLM is, in fact, trained on Bengali, according to a paper published by Google’s own researchers. The paper says that Bengali made up 0.026% of PaLM’s training data.

“By prompting a model trained on Bengali with Bengali, it will quite easily slide into what it knows of Bengali: This is how prompting works,” Mitchell tweeted. It is not possible, she added, for AI “to speak well-formed languages that you’ve never had access to.”

A CBS spokesperson did not respond on the record to BuzzFeed News’ requests for comment.

Google first showed off PaLM at its annual conference for developers last year. Pichai himself demonstrated the software’s ability to understand and respond to questions in Bengali onstage.  


“What is so impressive is that PaLM has never seen parallel sentences between Bengali and English,” Pichai said at that event. “It was never explicitly taught to answer questions or translate at all. The model brought all of its capabilities together to answer questions correctly in Bengali, and we can extend the technique to more languages and other complex tasks.”

Jason Post, a Google spokesperson, told BuzzFeed News that the company had never claimed that it didn’t train PaLM in Bengali. “While the PaLM model was trained on basic sentence completion in a wide variety of languages (including English and Bengali), it was not trained to know how to 1) translate between languages, 2) answer questions in Q&A format, or 3) translate information across languages while answering questions,” Post said in a statement. “It learned these emergent capabilities on its own, and that is an impressive achievement.”

Emily M. Bender, a University of Washington professor and researcher who wrote a Twitter thread about the 60 Minutes segment, took issue with Manyika’s comments. The program’s ability to translate “all of Bengali,” Bender told BuzzFeed News, is an “unscoped, unsubstantiated claim.” 

“What does ‘all of Bengali’ actually mean?” Bender tweeted. “How was this tested?” She also wrote that Manyika’s statement ignored or hid the fact that Bengali texts are in the training data.

Bender tweeted that the term “‘emergent properties’ seems to be the respectable way of saying AGI,” which stands for artificial general intelligence, a hypothetical technology that can learn on its own and perform tasks better than humans. “It’s still bullshit,” she said.

Mitchell was equally blunt on Twitter. “Maintaining the belief in ‘magic’ properties, and amplifying it to millions (thanks for nothin @60Minutes!) serves Google's PR goals,” Mitchell tweeted. “Unfortunately, it is disinformation.”

Several other people in the tech space also publicly criticized CBS and Google: 

that @60Minutes interview was irresponsible. both @sundarpichai and the reporter should have known better than peddle absolute nonsense thanks @emilymbender, for tirelessly correcting records

Twitter: @Abebab

This is just such a disaster on every count, it’s so embarrassing for 60 minutes to have bought this hook line and sinker without ever having consulted a credible AI scholar or tech journalist

Twitter: @bcmerchant

In a separate Medium post, Bender called upon companies like Google to take “into consideration the needs and experiences of those your tech impacts” instead of positioning AI “as some mysterious, magical, autonomous being.”

Bender told BuzzFeed News that misleading AI coverage could have deleterious effects. “When tech leaders muddy the waters about how the technology actually works and invite us to believe it has mysterious ‘emergent’ properties, that makes it harder to create appropriate regulation,” Bender said. “It is essential in this moment that we hold companies accountable for the technology they put into the world and not allow them to displace that accountability to the so-called AI systems themselves.”
