Schatz, Kennedy Introduce Bipartisan Legislation To Provide More Transparency On AI-Generated Content

New Bill Would Require Clear Labels On AI-Made Content

WASHINGTON – U.S. Senators Brian Schatz (D-Hawai‘i) and John Kennedy (R-La.) introduced bipartisan legislation to provide more transparency on content generated by artificial intelligence (AI). The new bill would help ensure people know when they are viewing AI-made content or interacting with an AI chatbot by requiring clear labels and disclosures.

“People deserve to know whether the videos, photos, and content they see and read online are real,” said Senator Schatz. “Our bill is simple – if any content is made by artificial intelligence, it should be labeled so that people are aware and aren’t fooled or scammed.”

“AI is moving quickly, and so are the companies that are developing it. Our bill would set an AI-based standard to protect U.S. consumers by telling them whether what they’re reading, seeing or hearing is the product of AI, and that’s clarity that people desperately need,” said Senator Kennedy.

In May, an AI-generated photo of an explosion near the Pentagon went viral, duping Americans and triggering a dip in the stock market. Deepfake images of President Trump being arrested were viewed by millions on social media, demonstrating the types of manipulation that could be used during elections. And fraudsters are already abusing AI systems to generate scam calls, impersonating a loved one’s voice to cheat Americans out of their hard-earned money. As generative AI becomes increasingly convincing and widespread, new measures must be adopted to ensure that Americans can distinguish genuine from machine-generated content.

The Schatz-Kennedy AI Labeling Act would:

  • Require that developers of generative AI systems include a clear and conspicuous disclosure identifying AI-generated content and AI chatbots;
  • Require developers and third-party licensees to take reasonable steps to prevent systematic publication of content without disclosures; and
  • Establish a working group to create non-binding technical standards so that social media platforms can automatically identify AI-generated content.

The AI Labeling Act is supported by National Consumers League, Consumer Action, Common Sense Media, Public Citizen, Common Cause, Future of Life Institute, Consumer Federation of America, National Association of Voice Actors, SAG-AFTRA, International Alliance of Theatrical Stage Employees (IATSE), Accountable Tech, American Federation of Teachers, Writers Guild of America, East, Authors Guild, and Department for Professional Employees, AFL-CIO.

“We at the Writers Guild of America, East support this important legislation as a critical step to ensure that the American people know when they encounter the work of artificial intelligence. As writers, our members value the creativity and diligence and critical thinking that only human beings can bring to drama, comedy, and news. At minimum, viewers, listeners, and readers need to be fully informed when content is generated, in whole or in part, by artificial intelligence technology rather than by human creators,” said Lowell Peterson, Executive Director of Writers Guild of America, East.

“Thank you Senators Schatz and Kennedy for introducing the AI Labeling Act. The ability of AI to mimic and replicate human voice and likeness should concern us all. The dangers to the careers of the replicated people are real. The dangers to American citizens, consumers, and voters have only just begun to manifest. Americans deserve to know when, where, how, and why they are interacting with text, audio and audiovisual media not created by humans,” said Duncan Crabtree-Ireland, SAG-AFTRA National Executive Director and Chief Negotiator.

“The rapid spread of AI-generated content poses new threats to our online information landscape, creating more vulnerabilities for the public to consume and interact with disinformation and manipulated content. Developers and social media platforms must take the first step to clearly disclose and identify AI-generated content to curb potential harms and protect users. The AI Labeling Act establishes clear and simple guidelines for companies to be responsible stewards and foster a better informed public as we work to understand both the benefits and threats of generative AI,” said Nicole Gill, Executive Director of Accountable Tech.

The full text of the bill is available here.