In the middle of its Tuesday morning I/O presentation, Google played an audio recording of what was essentially an AI-powered crank call. In it, the latest version of Google’s Assistant calls a hair salon and books an appointment — on its own. During the call, Assistant deftly works with the salon receptionist to find an appointment time, peppering its questions and remarks with realistically placed umms and ahhs. It confirms the date and, after a few pleasantries, ends the call with the receptionist seemingly unaware they'd just conversed with a computer.
It was a classic moment of Silicon Valley keynote magic — a display of technology that feels equal parts unbelievable, transformative, and terrifying. On Twitter, journalists attending the keynote prefaced their tweets about the demo with “no joke” to preempt the eye-rolling and incredulous replies they expected. “Humans quickly becoming expensive API endpoints,” Chris Messina, a Silicon Valley product designer, quipped after the announcement. An offhanded joke, but one that pokes at the unease that comes with seeing a banal task that typically requires a human carried out by a computer pretending to be one. Or the unease of seeing your wants and needs reduced to simple computer input — your haircut as grist for the machine learning mills.
That same sinking feeling applies to much of what Google unveiled at I/O this year. Many of the tech giant’s presentations on Tuesday centered on perception and prediction, inviting users to offload more and more of their daily tasks — from booking appointments to composing email — to various Google programs. The company’s argument is straightforward: Leave it to the machines; you’ll be happier, more productive, and free to devote your time to the stuff that really matters. It's the crux of Google’s latest celeb-filled ad campaign, which bears the tagline “Make Google Do It.” But left unspoken is what users might be sacrificing for such benefits. Google has always offered users a trade-off for its services: Let us know everything about you. We promise it’ll be worth your while. But the company's latest vision of the future offers users a more invasive trade: dominion over the choices you make — in some cases, control of your literal words and actions.
There’s Smart Compose, which uses artificial intelligence to scan emails for context and suggests ways to finish your sentences while you’re typing. It’s a way to save time, sure, but one that cedes choice of word and sentence composition to an algorithm. It’s a silly little thing, ridiculous even. But holy hell! A computer is telling you to write things and you are writing them.
And then there’s an update to Google’s Android operating system that will use machine learning to predict which apps you’re likely to use and allocate processing power to maximize your battery life. It will also predict how bright you like your screen, and another new feature called App Actions will recommend apps you might want to use, and specific ways to use them — not “Would you like to make a call?” but “Would you like to call Mom?” Google Photos will analyze your pictures and suggest ways to retouch them. Meanwhile, a revamped Google News will use heavy AI curation to serve up articles and videos it thinks you’ll be interested in, along with information it thinks you should have from outside your bubble. And then there’s the calendar robocaller — which quite literally speaks and interacts with people for you.
Even some of Google’s laudable “digital well-being” efforts seem to infantilize the user. Rather than address the recent scandals that have plagued YouTube — child exploitation content, radicalizing algorithmic conspiracy theory rabbit holes, whatever it is Jake and Logan Paul are doing — Google opted to debut new time-management features including one that suggests the user “take a break” if they’ve been streaming videos too long.
These predictive and assistive features all require users to opt in and, in most cases, operate within a set of established parameters. And many of the functions of the software are far from nefarious. For its Home Assistant, the company has instituted a feature users can turn on that requires children to ask questions politely (called "Pretty please") in order to get a response. As ever, of course, there’s a trade-off: We'll help teach your children good manners for you... just install this speaker that listens to their every word. Taken separately, the changes appear small and harmless, but the offloading of each minute task further acclimatizes the user to the Google ecosystem and, over time, breeds reliance.
This implicit transaction is nothing new — Google has long wanted to be your everything, which requires that it know most everything about you. Its Home and Assistant products are just the most recent attempts at fostering the kind of intimate relationship that makes that possible. But this sort of AI intimacy feels creepy. It’s invasive, infantilizing. Training AI on the minutiae of our daily lives in service of convenience seems, at first glance, like a not so unreasonable deal. But it’s hard not to feel like something larger is being sacrificed in all of this — the death of something...human by a thousand algorithmic cuts.
Which is one reason the company’s “Make Google Do It" ad blitz is so insidiously clever. At a moment when people all over the world are rethinking digital privacy and re-examining the power that Big Tech giants hold over them, Google has upended the very idea of agency.
“Make Google Do It” suggests we have agency. But increasingly it’s the other way around. Offloading human tasks to a computer feels empowering. It may well increase productivity and give us more time to do the things we love. But it requires sacrificing control of the many little things that make up our daily lives — our schedules, how we write our emails, which app to use next, and even when to call Mom. It’s hiding in plain sight, right there in the ad copy. “Make Google Do It” is most definitely a command — but it's a command from Google, not its users.