
Schatz, Britt, Colleagues Press AI Companies To Commit To Timely, Consistent Safety Disclosures On Models

WASHINGTON – U.S. Senator Brian Schatz (D-Hawai‘i), a senior member of the Senate Committee on Commerce, Science, and Transportation, and U.S. Senator Katie Britt (R-Ala.) called on leading artificial intelligence (AI) companies to improve transparency around the capabilities of their models and the risks they pose to users. In letters to OpenAI, Microsoft, Google, Anthropic, Meta, Luka, Character.AI, and xAI, the senators highlighted reports of AI chatbots encouraging dangerous behavior among children, including suicidal ideation and self-harm, and requested commitments to timely, consistent disclosures around model releases as well as long-term research into chatbots’ impacts on the emotional and psychological wellbeing of users. In addition to Schatz and Britt, the letter was signed by U.S. Senators James Lankford (R-Okla.) and Chris Coons (D-Del.).

“If AI companies struggle to predict and mitigate relatively well-understood risks, it raises concerns about their ability to manage more complex risks,” the senators wrote. “While we have been encouraged by the arrival of overdue safety measures for certain chatbots, these must be accompanied by improved public self-reporting so that consumers, families, educators, and policymakers can make informed decisions around appropriate use.”

The full text of the letter is below and available here.

Dear Mr. Altman,

We write to request additional information about your company’s public reporting practices regarding its artificial intelligence (AI) models. As a market leader in consumer AI chatbots, your company’s steps to increase transparency around model capabilities and risk evaluations have a direct impact on the wellbeing of Americans who use your products. We are already seeing how increasingly powerful models have become integrated into many aspects of users’ personal lives. As demonstrated harms to users’ psychological wellbeing have emerged, it is critical that, in addition to improving safety design, your company take the necessary steps to foster public transparency.

In the past few years, reports have emerged about chatbots that have engaged in suicidal fantasies with children, drafted suicide notes, and provided specific instructions on self-harm. These incidents have exposed how companies can fail to adequately evaluate models for possible use cases and to adequately disclose known risks associated with chatbot use. Additionally, we have seen how companies can struggle to prevent known model risks or unwanted behaviors prior to deployment. If AI companies struggle to predict and mitigate relatively well-understood risks, it raises concerns about their ability to manage more complex risks. While we have been encouraged by the arrival of overdue safety measures for certain chatbots, these must be accompanied by improved public self-reporting so that consumers, families, educators, and policymakers can make informed decisions around appropriate use. More detailed information is still needed for age-restricted models; even chatbots aimed at children, with additional safeguards, have produced pro-eating disorder, violent, and sexual content.

In addition to impacts on mental health and risks to vulnerable users, accelerating model capabilities necessitate greater transparency around other potential risks involving public safety and national security. In particular, companies have disclosed how advanced models may pose misuse risks in areas including cybersecurity and biosecurity. Many frontier AI companies made voluntary commitments at the Seoul AI Summit, or in support of the G7 Code of Conduct, to provide transparency into their efforts to assess risks to national security. We are supportive of ongoing disclosures for these risks, and request that companies adhere to their prior commitments.

Public disclosure reports, such as AI model and system cards, serve as the closest equivalent to nutrition labels for AI models. While they are essential public transparency tools, today’s changed landscape calls for assessing current best practices and how they can be made more responsive to user risks. Current public disclosure practices can be inconsistent or insufficient, may not be released alongside product launches, and can lack standardization. The distinction between major and minor releases is left to the discretion of developers, sometimes without explanation. Model and system cards may also fail to incorporate new or updated information about existing models while those models are deployed to the public, including information about user safeguards. Companies must continue to monitor their model performance and publicly disclose new developments as they relate to security and user safety. This information enables third-party evaluators to assess a model’s risks and supports organizations, governments, and consumers in making more informed decisions.

It is critical that public disclosures and risk evaluations are comprehensive, consistent, timely, and responsive to emerging risks. We therefore request responses to the following questions by January 8, 2026:

  1. Do you commit to publishing model and system cards in a standardized location concurrently or prior to future model releases?
  2. Do you commit to maintaining a consistent schema for evaluating potential harms in your public disclosures and risk frameworks, and to providing transparent justifications for changes made to them, or for when a released model evaluation deviates from the established schema?
  3. Do you commit to clearly defining the criteria for determining which model releases require updated public disclosures and new risk evaluations, and which model descriptors, such as 'experimental,' 'preview,' 'minor,' or other similar terms, exempt a release from such evaluations? Please elaborate on the rationale for why such models may be considered sufficiently low-risk to not warrant full reassessment.
  4. Do you commit to flagging significant patterns of violations of risk frameworks for your models and providing explanations for their continued deployment despite these issues?
  5. Do you commit to collaborating with external partners—including CAISI, academic researchers, and experts—to establish appropriate timelines for conducting sufficiently robust pre-deployment evaluations?
  6. Do you commit to researching the short-term and long-term emotional and psychological impacts on your chatbots’ users across various populations—including vulnerable groups such as children and the elderly—and to including the results in your public disclosures? If so, please elaborate on how this research will be conducted in consultation with relevant third-party researchers.
  7. Do you commit to disclosing the safeguards in place for vulnerable groups – such as children, senior citizens, and those experiencing mental or emotional distress – including but not limited to age-estimation technologies and mental health referrals? If so, please elaborate on how they are informed by best practices to ensure user safety and wellbeing, data privacy, and efficacy.
  8. Do you commit to disclosing whether information from your company’s chatbot conversations is used for targeted advertising? If your company already discloses this information, how do you disclose that fact to users? If conversation data is used for targeted advertising or otherwise shared with third parties, will you commit to conducting ongoing analysis of potential data privacy, cybersecurity, and other risks from such access?
  9. Do you commit to publicly disclosing, in a timely manner, new information about other potential risks as they become known to you, including categories of harm not previously recognized or accounted for within your existing evaluation frameworks?
  10. Do you commit to ensuring that public disclosure information that is relevant for consumers is shared in a manner accessible and understandable to consumers? If so, please disclose where you will host this information.
  11. Do you commit to disclosing whether your company withdraws from, or modifies its full participation in, voluntary agreements or frameworks?

Thank you for your attention to these matters.

Sincerely,

###