Over the holidays, PopSugar’s Twinning app experienced a sudden viral resurgence. The app, which uses facial recognition to tell you what celebrities you resemble, dominated Facebook, Twitter, and Instagram. Bored, housebound vacationers took untold numbers of selfies, uploaded them to the app, and posted the results for all to see.
Then, on New Year’s Eve, they learned those selfies were stored unprotected on Twinning’s servers and were “easily downloadable by anyone who knew where to look.”
The revelation immediately sparked other concerns about the app, namely its terms of service agreement, which granted PopSugar sweeping permissions over submitted images. As one business law site noted, “They can use the image to create new content unrelated to #twinning. They are — based on the language in the terms — likely even entitled to sell your images.”
Twinning’s privacy debacle was another in a seemingly endless string of such incidents arising from the opaque and sloppily Byzantine policies and practices that govern our digital information. In the final months of 2018, we learned about massive data security failures at companies from Facebook to British Airways. T-Mobile hacks exposed user info; a breach at the app Timehop compromised 21 million phone numbers and email addresses; glitches and breaches exposed the private information of 52.5 million Google+ users; and a cool 500 million Marriott Starwood hotel guests had their records compromised in a database leak that went undetected for four years. The most unsettling of these reports involved the ways our data moves around the internet without our explicit knowledge. In late December, a New York Times report detailed Facebook’s data-sharing partnerships, which included deals with several massive foreign companies that allowed for the sharing of users’ contact lists and address books. One of the most shocking claims revolved around Facebook partner contracts that allegedly allowed Netflix and Spotify to “read, write, and delete users’ private messages.”
This raucous parade of privacy missteps has stoked a growing collective outrage about tech companies playing fast and loose with personal information we have assumed they would properly secure and protect from misuse. There have been Senate hearings, Twitter protests, angry defections, pointed screeds. We’re mad and getting madder.
But are we sure we know why? Certainly, we're aware that some companies are bed-shittingly poor stewards of our personal information, that our data has been left vulnerable, or sold, or misused. But do we really understand what that means for our safety and security online — and when it actually matters?
There are clearly times when we don't, when any effort to understand what actually happened to our personal data is subsumed by knee-jerk outrage — or apathy.
Take that New York Times Facebook privacy exposé. A number of the partnerships reported by the paper — data-sharing alliances with Chinese and Russian companies — seem to confirm long-held beliefs that Facebook’s obsession with growth and scale has indeed come at the expense of user privacy. But other outrage-inspiring details were oversold. The Times said Netflix and Spotify were allowed to read and write private user messages. But as Slate’s Will Oremus noted, this access wasn’t particularly untoward. It was “about allowing Facebook users to read, write, and delete their own Facebook messages from within Netflix and Spotify once they linked their accounts and logged in.” In other words, it wasn’t blanket, do-whatever-the-hell-you-want access to private information. Similarly, the permissions listed in the Twinning app’s terms of service, though they might seem excessive and invasive (and perhaps are), are actually fairly common. Take a look at Instagram’s.
It’s this kind of nuance that’s so often lost in the fallout from privacy breaches. That’s understandable, because this stuff is frustrating and confusing, top to bottom. Terms of service agreements are ludicrous rat’s nests of legalese and business and engineering terminology. The companies that create them are vast and complex, their inner workings full of dizzying trade secrets. And their businesses change in ways that can quickly transform a blanket liability protection, written when a technology had only a few imaginable and largely reasonable uses, into blanket protection for unsettling new uses that we never imagined.
And even if we did read the fine print on the services we use, many of the most troubling overreaches happen largely out of sight. In December, the German mobile security initiative Mobilsicher released a report detailing how Android apps, including Tinder and Grindr, quietly transmit sensitive data about people’s religious affiliation, dating preferences, and health to Facebook through a complex process of linking and cross-referencing advertiser IDs from mobile devices. The report notes that while Facebook didn’t conceal this practice, most app developers weren’t aware of it, likely because they didn’t scrutinize Facebook’s software development kit terms of service as closely as they should have. Even more confusing? Facebook views this sort of data collection as an industry-standard practice, not an incursion on privacy.
The result of this broader privacy reckoning is a bizarre, low-grade anxiety about the safety of our digital lives, with no concrete way to see the larger picture. Reported data abuses that sound scandalous — like Spotify and Netflix’s message-reading partnership — may very well not be; meanwhile, mundane-sounding decisions like granting an academic researcher API access for a Facebook quiz app can result in a global political data scandal like Cambridge Analytica.