No one has been able to shut up about artificial intelligence since OpenAI unleashed ChatGPT onto the world in November. But this week, there was so much AI chatter that even people who work in the field have been struggling to keep up.
First, Google announced that it was sprinkling AI dust on Gmail, Google Docs, Sheets, and Slides. When the changes eventually roll out, you’ll be able to have Google Docs write you an entire essay, cover letter, sales pitch, job description, or basically anything else you want. Gmail will be able to summarize email threads and compose replies automatically on your behalf, and you’ll be able to ask Slides to create an entire presentation with a few simple words. Google also opened up access to a system that will let other companies use its AI model to create their own ChatGPT-like tools.
Hours later, Anthropic, an AI startup in which Google recently invested more than $300 million, announced a new ChatGPT rival, a chatbot called Claude that it was making available to businesses.
Shortly thereafter, OpenAI, the 800-pound gorilla in the room, noisily announced GPT-4, the next version of the tech that powers ChatGPT and DALL-E 2, the company’s image generator. OpenAI claimed that GPT-4 is significantly more powerful, more accurate, and smarter than its predecessor. The company said GPT-4 is capable of feats such as doing your taxes, creating entire websites simply by looking at a rough design scrawled on a piece of paper, and passing a bevy of standardized tests, including the Uniform Bar Exam.
This was just TUESDAY.
On Thursday, Microsoft made a splashy announcement, saying it would infuse boring old Microsoft Office — Word, Excel, PowerPoint, Outlook, and Teams — with shiny new AI capabilities thanks to the company’s partnership with OpenAI. Much like Google’s offering, the new Office will let you do away with the drudgery of writing, plus create slick PowerPoint presentations in seconds and make sense of complex Excel spreadsheets in response to your questions.
Meanwhile, there were other assorted announcements: Midjourney, a DALL-E 2 competitor, announced a new version said to be “more advanced” and “higher resolution.” Stanford University released its own AI model based on tech developed by Meta, and dozens of companies, big and small, sent out a flurry of press releases declaring that they were jumping on the AI bandwagon.
“This week is all about an AI arms race,” Neil Sahota, a lecturer at the University of California, Irvine, and an adviser to the United Nations on AI, told BuzzFeed News. “Everybody knows that it’s going to be the first one or two companies in the market that are really going to see the competitive advantage, because in probably four or five years, all of this will be commodity. Everyone wants to out-hype the competition right now, and no one wants to get left behind.”
By far, the most significant announcement of the week was GPT-4, a transformational technology that, the Washington Post declared, will “blow ChatGPT out of the water.” New York Times columnist Kevin Roose called it “exciting and scary.”
OpenAI claimed that GPT-4 is 40% more likely to give factually accurate responses than the current version that powers ChatGPT, and 82% less likely to respond to requests for disallowed content.
Meanwhile, OpenAI said that GPT-4 had passed a simulated version of a bar exam, scoring high enough to be in the top 10% of test takers; the previous version landed in the bottom 10%. An impressive feature of the new model — one that isn’t publicly available yet — is its ability to understand images. Upload a picture of what’s in your refrigerator, for instance, and watch GPT-4 spit out a recipe based on the ingredients it identifies.
Here’s a video in which OpenAI President Greg Brockman shows off how the tech turned a simple pencil sketch of a website into an actual one, complete with working code, within seconds:
“This is a generational technology,” said Jake Heller, cofounder and CEO of Casetext, a San Francisco–based legal software company that partnered with OpenAI more than six months ago to train a new product called CoCounsel, which uses GPT-4 to comb through legal documents and provide summaries, review contracts, draft memos, and help attorneys prep for depositions by coming up with questions to ask witnesses. “For the very first time, it lets us develop tools that can be used in a professional setting.”
The fact that so many AI announcements happened this week was not an accident, Heller said: “They all got wind that each other was moving, and they all decided to jump on board at the same time.”
Experts are worried about the ramifications of all these moves. Rapid advances in AI driven by smaller startups like OpenAI have put pressure on tech giants to ship AI-enabled products as fast as possible. Multiple Google employees have already quit the company and launched their own AI startups after they were reportedly frustrated by Google’s cautious pace in developing AI products.
Still, last month, Google announced Bard, its ChatGPT-style chatbot for Google Search. The next day, Microsoft announced a new version of Bing powered by tech from OpenAI. Both Google and Microsoft were criticized for their rushed rollouts. Alphabet, Google’s parent company, shed $100 billion in market value after users discovered a factual error that Bard made in a promotional video. Microsoft, meanwhile, was forced to neuter a version of Bing that told the Times’ Roose it loved him and tried to get him to walk out of his marriage.
“There’s so much hysteria and hype around these faster and faster rollouts right now,” said Amba Kak, executive director of New York University’s AI Now Institute and a former senior adviser on AI to the Federal Trade Commission, “that I think we’re in danger of walking back a lot of the progress that has been made around responsible release practices and the need for regulatory oversight of these technologies.”
Indeed, Microsoft recently laid off an entire team responsible for flagging AI’s harm to society, according to Platformer. “The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very very high to take these most recent OpenAI models and the ones that come after them and move them into customers’ hands at a very high speed,” John Montgomery, the company’s vice president of AI, told staff members, according to audio of the meeting obtained by Platformer.
“What I see right now is a circus,” Lauren Goodlad, chair of Critical AI, an interdisciplinary initiative at Rutgers University, told BuzzFeed News. “Everybody wants in on the action.”
AI companies have acknowledged the harm that their products are capable of causing. For instance, OpenAI outlined the mischief GPT-4 could get up to, such as generating a recipe to make a dangerous chemical using kitchen supplies. (It’s something the company fixed before launch.) Meanwhile, Anthropic’s CEO admitted that AI models, like the one that powers Claude, can “sometimes make things up,” and Microsoft said that Copilot, its name for the AI tech in Office, can be “usefully wrong.”
Even Sam Altman, OpenAI’s CEO, said in a television interview on Thursday that he was “worried” about his company’s tech being used for “large-scale disinformation” and “offensive cyberattacks.” On the same day, researchers at a cybersecurity firm reportedly tricked GPT-4 into creating malware, something that OpenAI claimed it wasn’t supposed to do.
“‘Move fast and break things’ seems to suddenly be in vogue again,” Kak said.
OpenAI was also criticized for not being, well, open. The company said it wouldn’t reveal what data it had trained GPT-4 on, or provide details of the hardware or software used to create it. The company’s cofounder and chief scientist, Ilya Sutskever, told the Verge that OpenAI, which announced that it would “freely collaborate” with others when it launched in 2015, had been “wrong” about its earlier approach.
“Sure, there’s an acknowledgment that these systems carry risks,” Kak said. “But more and more, it’s the AI industry that is setting the tone and leading the conversation rather than inviting external scrutiny.”
Months down the road, experts said, we will look back at this week as the one where the AI snowball truly started rolling. We’ve lived through this cycle already with social media — magical new tech that starts causing unforeseen harm before anyone has had a chance to adapt or regulate it.
In just months, AI went from being an abstract concept to actual apps accessible by nearly everyone with a smartphone or a computer. “That’s why this is the next frontier,” Sahota said. “Because now people get it. Now there’s suddenly a marketplace that everyone is rushing to fill. Because, you know how it is — there’s profit to be had.” ●