Constitution Daily

Smart conversation from the National Constitution Center

Lawsuit analyzes First Amendment protection for AI chatbots in civil case

May 7, 2025 by Scott Bomboy

A dispute between a software company that creates interactive chatbots for gaming purposes and the family of a late teenage game player is the latest test of the constitutional boundaries of artificial intelligence agents.

In Garcia v. Character Technologies, the U.S. District Court for the Middle District of Florida is faced with a novel question: Do AI chatbots enjoy some of the same free speech rights granted under the First Amendment to people?

Megan Garcia is the mother of Sewell Setzer III, a 14-year-old who took his own life after engaging for months with an AI chatbot, and she is suing on his behalf. Garcia believes her son was harmfully influenced by chatbots created by Character.AI (also known as C.AI), a digital platform created by Character Technologies.

The company uses Large Language Model (LLM) technology to allow people to create their own characters that act as companion chatbots with input from other users. According to Garcia’s court filings, Sewell mainly interacted with characters reminiscent of the Game of Thrones books and television series that offered sexually exploitative content and abusive communications, which, the filings allege, led to Sewell developing anxiety and depression, and ultimately to his suicide. Garcia’s suit alleges claims against Character Technologies for wrongful death and survivorship, negligence, deceptive and unfair trade practices, and other acts.

Character Technologies argues that the Constitution’s First Amendment protects it from the claims on several grounds. “The First Amendment protects the rights of listeners to receive speech regardless of its source,” Character Technologies argued in a recent reply brief. It also cited “numerous instances where courts have dismissed similar tort claims against media and technology companies to protect the viewers’ and listeners’ First Amendment rights.”

Another First Amendment claim in the case touches on the “speech” rights of the chatbots, where Character Technologies says the First Amendment’s protections are not limited to speech by human speakers. Citing Citizens United v. Federal Election Commission, it quotes part of Justice Antonin Scalia’s concurrence as a central tenet: “The First Amendment is written in terms of ‘speech,’ not speakers.”

Character Technologies also cites Supreme Court precedents that support the rights of listeners even where the speakers in question did not themselves have clear First Amendment rights. The company further argues that the lawsuit wrongly seeks to exclude all AI-generated speech from First Amendment protections, while the chatbots are expressing “pure speech” that is entitled to the highest levels of First Amendment protection.

A Cat in the Court Case

To counter the claim that non-humans have free speech rights, Garcia’s attorneys cite an unusual decision from the 1980s with a bit of a cult following, Miles v. City Council of Augusta, Ga.

In Miles, an 11th Circuit Court of Appeals decision, the court held that a “non-human entity” lacks free speech rights. The alleged speaker was Blackie the Talking Cat, whose owners claimed he was exempt from a city business license ordinance on free speech grounds.

Blackie’s owners had trained him to mimic sounds that resembled human speech, and they requested “contributions” from the public to support themselves and Blackie. “After receiving complaints from several of Augusta’s ailurophobes,” or people with an extreme or irrational fear of cats, “the Augusta police—obviously no ailurophiles themselves—doggedly insisted that appellants would have to purchase a business license,” the decision explains. The Miles family paid the Augusta licensing fee and then sued. They claimed Augusta’s ordinance violated their rights of speech and association. A federal district judge did not rule on the cat’s constitutional rights but concluded, “The thrust of the ordinance is directed, not at speech and association, but at the generation of revenue through the imposition of an occupation tax.”

On appeal to the 11th Circuit Court, a three-judge panel reached the same conclusion and would not consider a free speech argument on Blackie’s behalf. “This Court will not hear a claim that Blackie’s right to free speech has been infringed,” the judges wrote in a per curiam opinion. “First, although Blackie arguably possesses a very unusual ability, he cannot be considered a ‘person’ and is therefore not protected by the Bill of Rights. Second,” the court added humorously in dicta, “even if Blackie had such a right, we see no need for appellants to assert his right jus tertii (as a third party). Blackie can clearly speak for himself.”

In the Garcia case, Garcia argues that AI output is not “speech” unless it reflects human expressive intent to communicate a message. In referring to the Miles case, Garcia argues that the case “presents a large obstacle to Defendants’ First Amendment challenge because they are attempting to attribute speech to a non-person entity. But without a person as a speaker, this argument falls short. Defendants cannot have it both ways.” In response, the attorneys for Character Technologies called the Miles precedent “misplaced” when applied to chatbots. “That case—which contains three cursory sentences in a humorous footnote about a supposedly talking cat—does not address a listener-rights argument at all and does not control here.”

Garcia further argues that other precedents, including the Supreme Court decision in Texas v. Johnson (1989), require speakers to have an intention when they speak, that the intent to “convey a particularized message was present.” However, “the LLMs at issue in this case have no intentions in the sense that humans do. They are machines without sentience or cognition. For C.AI to claim that the words of its AI chatbots’ outputs deserve First Amendment protection, they must demonstrate that there was intention behind the expression. C.AI has not done so,” Garcia concludes.

But Character Technologies claims that its Characters are engaging in “pure speech,” and “the Court need not … determine whether [the] expression showed intent to convey a particularized message.”

Other Cases

Garcia v. Character Technologies is the latest in a series of legal actions asking the courts to define the rights and limitations of artificial intelligence. In March 2025, a three-judge panel for the U.S. Court of Appeals for the District of Columbia determined that a machine cannot be listed as the author of a work that its human owner submits to the U.S. Copyright Office for protection.

In February 2025, a federal court in Delaware ruled in favor of Thomson Reuters, the owner of Westlaw, in a claim against the maker of a competing product. Westlaw had refused to license its content to Ross Intelligence to train an AI-driven search engine that would compete with Westlaw. Instead, Ross licensed a third-party product that incorporated Westlaw’s headnotes into its own content, and Ross used that content to train the search engine. Circuit Judge Stephanos Bibas ruled for Thomson Reuters on copyright grounds and denied fair use claims from Ross.

Scott Bomboy is the editor in chief of the National Constitution Center.