The tech world is awash with talk of artificial intelligence. The seemingly out-of-nowhere magical force is now powering everything from image recognition to virtual assistant chatbots; it’s on the lips of every tech executive within 10 feet of a microphone. Not surprisingly, AI was front and center last week at Google’s I/O conference, a massive gathering of some 7,000 developers and media all looking to Google for a peek at the future. Google CEO Sundar Pichai did little to temper that blue-sky enthusiasm, ending the closing AI portion of his keynote with a line that felt cribbed straight from a Star Trek script: “Things previously thought to be impossible may in fact be possible.”
AI is becoming an increasingly important feature of our daily lives, yet one of the more fascinating aspects of its rise is how poorly we understand what it actually is. If AI is truly going to change the world, it’s fair to first ask for its definition. And what better place to find it than Google I/O?
At least that seemed a reasonable expectation, until a first round of answers to the question “How would you define AI?” came back. Here’s a sampling:
“I would definitely interview someone else.”
“I’m not sure. I haven’t done anything with AI.”
“No thanks. Sorry. Good luck.”
“I’m actually on a call.”
“I don’t know anything about it.”
“It’s machine learning.”
“I don’t know. I’ll pass.”
“I work at Yahoo….”
Not everyone found themselves at such a loss. Dan Cernoch, Ticketmaster’s head of abuse prevention, described true AI as a computer replicating the functionality of a human brain. “We’re a long way away from that,” he said. A lot of what people are calling AI, Cernoch argued, is a lesser thing called “machine learning.” Indeed, he explained, many use AI as an umbrella term with machine learning lumped underneath. More on that in a bit.
Another attendee put it this way: “[AI is] where machines start becoming more intelligent than what they are programmed for.” Instead of simply spitting back information humans have fed it, AI can actually reason on its own. “It’s being able to understand things, versus being told,” he said.
Realizing that a definition of AI wasn’t going to come from the show floor, we sought out Google senior research scientist Greg Corrado for an expert opinion. “Artificial intelligence is the art and science of making machines intelligent,” Corrado explained. But that was too broad a definition, so he quickly refocused on machine learning (sorry, Mr. Cernoch), which he described as the biggest growth area in AI. “Rather than directly trying to program computers to be clever, we program computers to learn,” Corrado said.
The best way to explain machine learning, an abstract concept, is via concrete examples; Corrado began with image recognition. You can teach a computer to recognize images of certain things by feeding it lots of images in which those things are identified. Feed a computer lots of images of cats, for example, and the computer can learn how to recognize new images of cats.
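The cat example above can be sketched in a few lines of code. This is a deliberately toy illustration, not Google’s system: the “images” are just four-pixel brightness vectors, the data is synthetic, and the “learning” is the simplest possible rule (average the labeled examples for each class, then label new images by the nearest average).

```python
import numpy as np

# Toy supervised learning: each "image" is a 4-pixel brightness vector,
# labeled 1 for "cat" and 0 for "not cat". (Synthetic data for
# illustration -- real systems learn from millions of labeled photos.)
rng = np.random.default_rng(0)
cats = rng.normal(loc=0.8, scale=0.1, size=(50, 4))      # bright patches
not_cats = rng.normal(loc=0.2, scale=0.1, size=(50, 4))  # dark patches

X = np.vstack([cats, not_cats])
y = np.array([1] * 50 + [0] * 50)

# "Training" here just averages the labeled examples per class.
centroids = {label: X[y == label].mean(axis=0) for label in (0, 1)}

def classify(image):
    """Label a new image by its nearest class average."""
    return min(centroids, key=lambda c: np.linalg.norm(image - centroids[c]))

print(classify(rng.normal(0.8, 0.1, 4)))  # a new bright "cat"-like image -> 1
```

The key point survives even in this cartoon: no one wrote a rule saying what a cat looks like. The program inferred a decision rule from labeled examples, which is what “we program computers to learn” means in practice.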
The computer does this through something called a neural network, which Corrado said is designed to mirror the human brain. According to Corrado, the brain's billions of neurons all make tiny decisions based on small amounts of information, but working together they can perform advanced thinking tasks. “Intelligence is something that emerges out of the concerted action of these billions of individual neurons,” he explained.
Artificial intelligence has neurons too. “Instead of individual cells that are not that bright, we have individual mathematical functions that are not very bright,” Corrado said, describing the artificial neural network. “We build these functions on top of each other and they learn to do tasks all together. They learn to coordinate.”
Moving back to the image recognition example, Corrado explained that these artificial neurons will individually scan tiny patches of pixels in an image and make some judgment about them. “Is it all white stuff? Is it all dark stuff? Is there an edge? Which way is the edge pointing? That kind of very low-level image analysis,” he said. A large number of artificial neurons can scan an image and pass their conclusions to another set of neurons, which in turn make their own decisions based on the data they’re being fed. Eventually, after many layers of this, a neural network can determine whether it’s looking at a face, or a car or a truck.
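Corrado’s layered picture can also be sketched in code. In the toy sketch below, each “neuron” is a not-very-bright mathematical function (a weighted sum followed by a simple squashing step), and layers are stacked so that later neurons decide based on earlier neurons’ conclusions. The weights here are random stand-ins; in a real network they are learned from labeled data, and the layer sizes and label names are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def layer(inputs, weights, biases):
    """One layer of artificial neurons: a weighted sum of the inputs,
    then a simple nonlinearity (ReLU) -- each neuron's tiny decision."""
    return np.maximum(0.0, weights @ inputs + biases)

# A fake 4x4 grayscale image, flattened into 16 pixel values.
image = rng.random(16)

# Layer 1: 8 neurons making low-level judgments about pixel patches
# ("is this bright?", "is there an edge?").
h1 = layer(image, rng.normal(size=(8, 16)), rng.normal(size=8))

# Layer 2: 4 neurons combining those low-level conclusions.
h2 = layer(h1, rng.normal(size=(4, 8)), rng.normal(size=4))

# Final layer: one score per candidate label.
scores = rng.normal(size=(3, 4)) @ h2
labels = ["face", "car", "truck"]
print(labels[int(np.argmax(scores))])  # the network's best guess
```

With random weights the guess is meaningless; the point is the structure: no single function understands the image, but many simple functions feeding into each other can, once trained, settle on “face,” “car,” or “truck.”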
Machine learning works relatively well for image recognition. It also powers things like speech recognition and translation, where a similar approach extracts words from analyzed bits of sound; once the words are figured out, they can be run through a translation program. “It’s looking for signals in that audio stream to try to guess, ‘What letter should I output in order to transcribe this?’” Corrado said.
Asked how machine learning works for things like booking a movie ticket — a task Google’s AI-powered Google Assistant performed during Pichai’s keynote — Corrado explained that parts of that task were not done by AI. “When you build a whole product, there are all kinds of subsystems and it’s definitely not the case that machine learning does every little sub-piece,” he said. “For example, when you go and you look up movie times at local movie theaters, you want a direct, perfect retrieval of that information. You can write a program that does it correctly, and there’s no need to try to learn how to do it in some soft, squishy way. Machine learning is really best to fill in these kind of missing pieces where there’s some intuitive step.” AI, Corrado said, is better suited to understanding the language we use to ask for those tickets.
Further questions remained, but Corrado couldn’t stay a minute longer. Google’s AI-powered products weren’t going to build themselves, after all. At least not yet.