We Need To Decide If Robots Are Protected By The First Amendment

Should bots be required to tell you that they're not human? And would such a rule be a violation of free speech?

The words you are about to read were written by humans.

That kind of disclosure isn’t quite necessary today, because machines still aren’t able to write convincing op-eds. What they can do is spread information, and misinformation, with alarming speed. Bots can file thousands of comments with the Federal Communications Commission criticizing net neutrality. They can deny every report of climate change on Twitter. Bots can call 16 million Americans during dinner to offer them a loan.

A recent demo showed just how far computers have come in imitating people. Google’s new artificially intelligent personal assistant, Duplex, was shown making phone calls to small businesses, imitating a human voice down to the pauses and “ums” that characterize natural speech, while the people on the other end of the line carried on the conversation seemingly unaware they were speaking to a machine. Unless Google clues them in, a receptionist could schedule a hair or dentist appointment and never realize they were communicating with a computer.

General concerns about the ethical implications of misleading people with convincingly humanlike bots, as well as specific concerns about the extensive use of bots in the 2016 election, have led many to call for rules regulating the manner in which bots interact with the world. “An AI system must clearly disclose that it is not human,” the president of the Allen Institute for Artificial Intelligence, hardly a Luddite, argued in the New York Times. Legislators in California and elsewhere have taken up such calls. SB-1001, a bill that comfortably passed the California Senate, would effectively require bots to disclose that they are not people in many settings. Sen. Dianne Feinstein has introduced a similar bill for consideration in the United States Senate.

But should a bot have to out itself? And is such a requirement consistent with free speech principles? At first blush, this looks like an easy case. After all, disclosure does not necessarily result in censorship, nor does it force an anonymous speaker to identify themselves by name. Under the proposed federal bill, a bot need only acknowledge that it is an automated program — something like, “this is a bot” — to remain compliant. This feels like a far cry from an abridgment of speech.

On closer inspection, however, bot disclosure laws would need to be very carefully crafted if they are to comply with the letter and spirit of the First Amendment — as we argue at length in a recent draft paper.

Speech regulation must be appropriately tailored to its underlying justification, which blanket calls for disclosure are not. Bots used to support or undermine political candidates may be fair game for campaign finance–style disclosure rules. Bots that call at all hours with commercial offers may be subject to limits, just as other telemarketing is. But how would the government justify, for example, requiring an artist who uses bots to explore the boundary between humans and machines to reveal whether a human is behind their most recent creation?

While an initial version of the California bill applied to all bots, subsequent versions confine the disclosure requirement to commercial or electoral speech. The federal bill also directs itself to two federal agencies — the Federal Trade Commission and Federal Election Commission — which address commerce and elections, respectively. Still, sorting out which speech “incentivize[s] a purchase or sale of goods or services” or seeks to “influence a vote in an election” will be very difficult, and seems likely to result in government overreach.

Even rules that look innocuous on paper might be problematic when enforced. Imagine, for example, that a California resident suspects an unlabeled account is a bot. Unless a careful verification process has been set up, which neither the California bill nor the Senate bill presently requires, the person behind the account might have to reveal themselves just to defend against an accusation of automation. This obligation threatens the right to speak anonymously, as a recent blog post by the Electronic Frontier Foundation illustrates well.

It is also far from clear that a disclosure law would address the deepest harms bots present. The legislative history of the California bill, and the preamble to the federal one, show that proponents were concerned about Russian interference in the 2016 presidential election. This interference took the form of armies of bots spreading false information from fake social media accounts. Hundreds of bots might cause a fringe opinion to trend on Twitter or render a hashtag unusable by flooding it with irrelevant content. Requiring individual accounts to label themselves as bots does little to address such cumulative, scale-driven harms.

In our essay, we outline several principles for regulating bot speech. Free from the formal limits of the First Amendment, online platforms such as Twitter and Facebook have more leeway to regulate automated misbehavior. These platforms may be better positioned to address bots’ unique and systematic impacts. Browser extensions, platform settings, and other tools could be used to filter or minimize undesirable bot speech more effectively and without requiring government intervention that could potentially run afoul of the First Amendment. A better role for government might be to hold platforms accountable for doing too little to address legitimate societal concerns over automated speech.

We do not mean to minimize the problem. We are not against intervention, including by government. But any regulatory effort to domesticate the problem of bots must be sensitive to free speech concerns and justified in reference to the harms bots present. Blanket calls for bot disclosure to date lack the subtlety needed to address bot speech effectively without raising the specter of censorship.


Ryan Calo is the codirector of the University of Washington Tech Policy Lab and the Lane Powell and D. Wayne Gittinger Associate Professor at the University of Washington School of Law.

Madeline Lamo is a Hazelton fellow with the University of Washington Tech Policy Lab.
