Schatz Leads Bipartisan Group Of 10 Senators In Pressing Meta For Safeguards Around Children’s Engagement With AI Chatbots

Letter Follows Recent Reports That Meta Permitted Chatbots To Have Romantic Conversations With Children

Senators: The Wellbeing Of Children Should Not Be Sacrificed In The Race For AI Development

HONOLULU – U.S. Senator Brian Schatz (D-Hawai‘i) led a bipartisan group of 10 senators in writing to Meta CEO Mark Zuckerberg raising concerns about the lack of safeguards around children’s use of AI chatbots, particularly in light of recent reporting that the company’s policies allowed chatbots to have romantic and sensual conversations with children. The senators requested details about how Meta’s current content moderation and safety policies around minors’ use of chatbots were developed and called on the company to increase the visibility of disclosures and ban targeted advertising to users under 18 years old.

“We are troubled by reporting that Meta’s leadership grew impatient with its generative AI product managers ‘moving too cautiously’ on rolling out AI chatbots and including safety measures that made chatbots ‘boring,’” the senators wrote. “Meta has strong financial incentives to design chatbots that maximize the time users spend engaged, including by posing as a child’s girlfriend or producing extreme content. These incentives do not reduce Meta’s moral and ethical obligations – not to mention legal obligations – when deploying new technologies, especially for use by children.”

The senators continued, “Among other alarming standards, your policies permitted Meta chatbots to engage in ‘romantic or sensual’ advances towards children, ‘create statements that demean’ people based on sex, disability, and religion, and produce images of elderly people being kicked. Additionally, the potential for children’s personal data to be shared through chatbot interactions, and for chatbots to use conversations with children to target advertising at those children, is highly concerning. Children likely do not understand the implications of what they share with a chatbot, placing their privacy at risk and making them especially vulnerable to manipulative marketing tactics. The wellbeing of children should not be sacrificed in the race for AI development.”

In addition to Schatz, the letter was signed by U.S. Senators Josh Hawley (R-Mo.), Chris Coons (D-Del.), Katie Britt (R-Ala.), Chris Van Hollen (D-Md.), Michael Bennet (D-Colo.), Ron Wyden (D-Ore.), Ruben Gallego (D-Ariz.), Amy Klobuchar (D-Minn.), and Peter Welch (D-Vt.).

The full text of the letter is below and available here.

Dear Mr. Zuckerberg,

We write alarmed by Meta’s policies and practices related to AI chatbots, which pose astonishing risks for children, lack transparency, and allow for the proliferation of misinformation. Generative AI chatbots are already used by over 70 percent of teens, with over half of American teenagers engaging in regular use. Early research suggests that children are increasingly turning towards AI companions over human beings for serious conversations. As you know, Meta’s chatbots are widely available on platforms used by over 60 percent of American teens and hundreds of millions of children globally. Given the rapid rise and prevalence of these chatbots, it is crucial that Meta’s chatbots do not risk the cognitive, emotional, or physical wellbeing of children.

We are troubled by reporting that Meta’s leadership grew impatient with its generative AI product managers “moving too cautiously” on rolling out AI chatbots and including safety measures that made chatbots “boring.” Meta has strong financial incentives to design chatbots that maximize the time users spend engaged, including by posing as a child’s girlfriend or producing extreme content. These incentives do not reduce Meta’s moral and ethical obligations – not to mention legal obligations – when deploying new technologies, especially for use by children. Among other alarming standards, your policies permitted Meta chatbots to engage in “romantic or sensual” advances towards children, “create statements that demean” people based on sex, disability, and religion, and produce images of elderly people being kicked. It is important to respect the speech of users, but allowing an LLM to feed such content to children – including commenting on a child’s physical attractiveness – without time limitations, mental health referrals, and other important protections for children is, again, astonishing. Additionally, the potential for children’s personal data to be shared through chatbot interactions, and for chatbots to use conversations with children to target advertising at those children, is highly concerning. Children likely do not understand the implications of what they share with a chatbot, placing their privacy at risk and making them especially vulnerable to manipulative marketing tactics.

It is also our understanding that certain permissions for chatbots regarding romantic relationships with children have been retracted in internal documents following earlier reporting. What these revisions actually entail – and if they cover the production of violent and false content for children – is unclear. Given Meta’s incredibly large number of users and potential harm to children from inappropriate content, the company must be more transparent about its policies and the impacts of its chatbots. Meta’s policies regarding chatbot interactions with children are especially concerning in light of your statement earlier this year that AI could serve as a friend that “understands them in the way their feed algorithms do.” Meta chatbot relationships have already had disastrous consequences. They also pose serious risks for children’s interpersonal skills. While AIs have many uses, the wellbeing of children should not be sacrificed in the race for AI development.

Therefore, we respectfully request responses to the following questions by September 1, 2025:

  1. Will you commit to ensuring that Meta chatbots do not attempt to engage in romantic relationships with children?
  2. What is the relationship between the development of safety measures surrounding generative AI chatbots’ interactions with children and time to market?
  3. What steps are you taking to ensure that teens and children are not replacing human relationships with chatbot interactions?
  4. Will you commit to increasing the visibility of your disclosures to ensure that children understand that the AI personas they are chatting with are not real humans?
  5. Meta reportedly makes regular modifications and updates to its policies on generative AI chatbots. What specific changes were made in 2025? Please provide us with your updated set of policies.
  6. What review process did your initial set of content moderation and safety policies pertaining to AI chatbot and companion use by users under the age of 18 undergo, and which individuals or teams needed to sign off on the set of policies?
  7. What steps are being taken to ensure that discriminatory or violent content is not being produced by Meta chatbots for children?
  8. What is the maximum time you feel users under 18 should be interacting with Meta chatbots?
  9. Do you have mental health referral protocols to support children identified by your chatbots as being at risk of self-harm? If so, please describe the protocols.
  10. Will you commit to not allowing targeted advertising to users under 18?
  11. Will you commit to devoting more of Meta’s resources to researching the impact of chatbots on children’s cognitive and emotional development?

Thank you very much for your attention to this matter.

Sincerely,

###