Humans Have Always Been Listening To Your Voice Recordings. Why Don’t Tech Companies Just Tell Us That?

Recent revelations could finally force tech giants to explain how they use our data.

Voice assistants seem simple. You say something like “Alexa” or “Hey Siri” followed by a request and then — bam! — your Discover Weekly starts playing over the speaker. But a human being could be listening to everything you say — and the tech companies want to hide those people from you.

As we found out yesterday, Facebook paid outside contractors to transcribe voice memos from users who turned on chat transcription in the Messenger app. The company is the latest in a string of tech giants, including Amazon, Google, Apple, and Microsoft, caught sending users’ audio to third-party firms for analysis.

To experts, the news might not have come as a revelation. Among people who understand the mechanics of artificial intelligence–powered software, it’s well known that humans review verbal commands to improve speech recognition. But among the general public, there was a sense of shock: The big platforms are tapping into our microphones, and they aren’t telling us.

Those reports seem to have produced changes. Today, Microsoft updated its privacy policy to explicitly say that humans review user content, after Motherboard made the practice public. Facebook, meanwhile, told Bloomberg it paused human review “more than a week ago.” The Irish Data Protection Commission has also opened an inquiry into the social media giant’s practices.

But if it seems like the tech companies aren’t being clear with consumers, that’s because, according to privacy and artificial intelligence experts, they aren’t.

Most folks buying Google Homes and Echos from a mall kiosk aren’t aware that humans might hear their recordings. That’s partly because of the products’ “just like that!” marketing, but largely because Amazon, Google, Apple, Microsoft, and Facebook haven’t clearly told consumers what they do with their voice and video information. None of those companies’ data policies state that what we say and do in front of our voice assistants, internet-connected cameras, and messaging apps can be shown to strangers employed by the companies or their contractors.

That’s bad regardless of whether one should expect humans to review voice commands to help train machine learning algorithms at scale.

“I certainly expected that these companies would be feeding recordings to a team of annotators — but that’s only because I have a background in AI,” Jeremy Gillula, the tech policy director at the Electronic Frontier Foundation, told BuzzFeed News. “I don’t expect people to understand that. Companies should be making [human review] far more explicit — in their packaging materials, in their help pages, basically everywhere they talk about this product. They should say, ‘yes, honest-to-god humans can listen to what you say,’” he said.
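For readers without that AI background, here is a rough, purely illustrative sketch of the human-in-the-loop pipeline Gillula is describing: a speech model transcribes a clip, and the recordings it is least confident about are routed to human annotators, whose corrected transcripts become new training data. The function names and the confidence cutoff below are hypothetical, not any company’s actual system.

```python
import random

CONFIDENCE_FLOOR = 0.6  # illustrative cutoff; real systems tune this per model

def needs_human_review(confidence):
    """The clips a speech model is least sure about are the most useful
    to label, so low-confidence transcriptions get routed to annotators."""
    return confidence < CONFIDENCE_FLOOR

review_queue = []
for clip_id in range(100):
    confidence = random.random()  # stand-in for the model's transcription confidence
    if needs_human_review(confidence):
        review_queue.append(clip_id)

# Annotators listen to the queued audio and correct the machine transcript;
# the corrected (audio, text) pairs are then fed back as training data.
print(f"{len(review_queue)} of 100 clips queued for human transcription")
```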


Asking for the weather forecast or setting a timer hardly seems like a privacy violation. But wake word detection is far from perfect: devices with voice assistants keep their microphones on at all times, listening for the trigger phrase, and a false activation often captures audio that wasn’t intended for them.
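To see why, consider a minimal, hypothetical sketch of an always-on wake word loop. The scoring function and threshold below are stand-ins for the small on-device models these assistants actually run, but the failure mode is the same: any audio frame that clears the threshold starts a recording, whether or not the user said the wake word.

```python
import random

WAKE_THRESHOLD = 0.8  # tradeoff: lower catches more wake words but misfires more often

def wake_score(frame):
    """Stand-in for the on-device model that scores how much each
    audio frame sounds like the wake word."""
    return random.random()

def listen_forever(frames):
    """The microphone never stops sampling; a score above the threshold
    triggers recording, intended or not."""
    for i, frame in enumerate(frames):
        if wake_score(frame) >= WAKE_THRESHOLD:
            # A false trigger (say, "Alexandra" misheard as "Alexa") captures
            # audio the user never meant to send, and some fraction of
            # uploaded clips may later be played for human reviewers.
            print(f"frame {i}: wake word detected; recording and uploading clip")

listen_forever(range(30))  # 30 frames of simulated ambient audio
```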

Reports in the last six months revealed that human reviewers heard audio snippets mistakenly captured by Amazon’s Echo smart speakers, spoken home addresses picked up by Google Assistant, and private discussions recorded by Apple’s Siri.

After the stories were published, Amazon added the ability to opt out of human review in the Alexa app, but confusingly named the setting “Help Improve Amazon Services and Develop New Features.” Google introduced a similar opt-out, but buried it three menus deep in the Home app; the company also suspended human review in Europe, where it may have been worried about the implications of GDPR. Apple temporarily halted its Siri contractor program and said it would let customers opt out of human review in the future.

Contractors aren’t just listening to our smart speakers. Whistleblowers have also claimed that third parties watched live footage streamed from the homes of Ring security camera customers, listened to personal conversations run through Skype’s translation service, and, most recently, heard audio calls between friends over Facebook Messenger.

All the while, consumers have had little knowledge about what tech companies are collecting and sharing about them.

“For a long, long time, when we bought a product, we expected the product to work for us, that it was ours, and that we controlled it. The tech community has taken a different approach,” Gillula said. “Now, when you’re using a product, yes, the product will do what you want it to do, but [a company] is also going to use what you do with it to improve future products — and that’s about the extent of what they say in their terms of service.”

Dokyun Lee, an assistant professor at Carnegie Mellon University researching machine learning, told BuzzFeed News that Facebook and others should clearly explain how human involvement can improve the technology. “Of course, many companies would risk people opting out, but a robust system like this would be beneficial for earning users’ trust,” Lee said.

Instead, Facebook’s data policy says this: “Our systems automatically process content and communications you and others provide to analyze context and what’s in them for the purposes described below.” Nowhere does the policy explain that humans can access your audio. In fact, “systems” and “automatically” could easily be taken to imply that a robot, machine, or software is processing the content, and that language is precisely where this data policy, and others like it, fail their users.

Even Amazon’s Alexa setting for declining human review doesn’t clearly say that humans do the reviewing. Its new opt-out page explains that voice commands are “manually reviewed to help improve our services,” and that “only a small fraction of voice recordings are manually reviewed,” which doesn’t explain much at all.

“Opt out is basically worthless,” said ACLU senior policy analyst Jay Stanley. “Studies show that most people leave defaults the way they are. What Amazon is saying is that your privacy only matters if you’re savvy enough to opt out. It should be opt-in, instead.”

Stanley, Gillula, and other privacy experts are calling for strong privacy laws that would create uniform expectations about the use of personal data. Today, Democratic Rep. Seth Moulton of Massachusetts proposed a bill called the Automatic Listening Exploitation Act (ALEXA for short) that would penalize companies if a smart device recorded audio without the consumer triggering it with a wake word.

Such legislation would compel companies to add disclosures to their products, instead of using privacy as a PR tool. Before the Consumer Electronics Show in January, Apple bought a billboard above the conference with “What happens on your iPhone, stays on your iPhone” in giant text. In April, Facebook CEO Mark Zuckerberg said, “the future is private.” A month later, Google CEO Sundar Pichai wrote an op-ed arguing that privacy shouldn’t be a luxury good. But in light of recent disclosures, those public commitments look like mere posturing.

Perhaps Silicon Valley — and especially Apple, which has sought to market itself as a privacy-forward tech company — should heed the advice of Steve Jobs, the late Apple founder and CEO, who said in 2010, “Privacy means people know what they’re signing up for. In plain English, and repeatedly, that’s what it means. Ask them. Ask them every time. Make them tell you to stop asking if they get tired of your asking them. Let them know precisely what you’re going to do with their data.”
