Opinion: Why we need the Protecting Kids on Social Media Act
Brian Schatz, a Democrat, represents Hawaii in the U.S. Senate; Tom Cotton, a Republican, represents Arkansas; Chris Murphy, a Democrat, represents Connecticut; Katie Boyd Britt, a Republican, represents Alabama.
Do you know which social media apps your children are on? Do you understand how those apps might harm them? Few parents, no matter how diligent or tech savvy, can confidently answer “yes” to those questions.
This should concern everyone. Kids on social media are vulnerable to bullies, predators and personalized algorithms that glue them to their screens and feed them addictive content that wreaks havoc on their mental health.
We are a bipartisan group of senators and parents who are concerned about the harmful effects of social media use on our children. On April 26, we introduced the Protecting Kids on Social Media Act, a bill to keep young children off social media, crack down on addictive algorithms that boost toxic content and empower parents to decide when their children are old enough to create an account.
Most social media companies claim not to allow anyone younger than 13 on their platforms, but often they rely on self-reporting methods that kids can easily circumvent. Thirty-eight percent of kids ages 8 to 12 say they use social media, according to a survey by Common Sense Media.
Teenagers report spending nearly nine hours every day in front of screens. More than half of adolescents describe their social media use as “nearly ubiquitous.” Four out of five kids report spending more time online than they intended, and more than 60 percent report trying and failing to quit social media — possible signs of dependency and addiction.
The technology industry bears much of the blame for keeping children glued to their screens. Social media platforms use powerful algorithms to hook users and keep them scrolling as long as possible. The financial incentive to addict the young is clear: The more time users spend on apps such as Facebook or Snapchat, the more money those companies make from advertising — and the more targeted, and thus more valuable, those ads become. To keep children hooked, these personalized algorithms often feed kids toxic content designed to provoke an emotional reaction, leaving them more depressed, anxious and upset. The results have been devastating: a generation of young Americans suffering from mental health issues.
The rise of smartphones and social media has coincided with a mental health crisis among the young, especially teen girls. Rates of depression among teenagers doubled from 2009 to 2019, according to the Centers for Disease Control and Prevention — and that was before kids were socially isolated during the pandemic. In 2021, nearly 3 in 5 teen girls reported feeling persistently sad or hopeless. Nearly 1 in 3 seriously considered suicide, an increase of nearly 60 percent from a decade ago. The reason is clear: Social media facilitates and fuels self-doubt, depression, bullying and other harms that are already common among young people.
The major social media companies are aware of the harmful effects their products have on young people. In March 2020, a comprehensive study of Instagram found that it created a “perfect storm” for teenagers’ mental health. The study was Instagram’s own internal research, and the company hid the results until a whistleblower revealed them in 2021.
Teen mental health is just one crisis fueled by social media. Child safety is another.
Every parent’s worst fear is that their child will come into contact with a predator. That fear can easily become a reality online. Federal law enforcement believes there are 500,000 predators online every day. Often, these predators pose as minors and contact their victims through chatrooms and instant-messaging features on social media. Because most social media sites rely on self-reporting to verify age and do not require parental consent for minors to join, they serve as hunting grounds where predators can conceal themselves and prey on children without their parents’ knowledge.
We wish social media companies could cure these problems on their own. But these companies have proved unable or unwilling to protect kids on their platforms. It falls to Congress, then, to offer sensible new safeguards for kids’ use of social media.
The Protecting Kids on Social Media Act would require social media companies to verify the age of users and prevent children younger than 13 from using their platforms. Further, the bill would require the companies to obtain the consent of a parent or guardian before allowing teenagers younger than 18 onto their platforms. And it would prevent the companies from using personalized algorithms to promote content to underage users, cracking down on addiction and “rabbit holes” of dark and disturbing content.
To assist social media companies in this undertaking, the bill would establish a federal pilot program that uses information in existing government databases to verify age and parent or guardian relationships. This would create an optional, streamlined method for complying with the bill’s requirements, without requiring users to hand over their private information to a private company.
But the bill’s expectations are clear: Social media companies must keep children safe and parents informed. If tech companies fail in this duty, our bill empowers the Federal Trade Commission and state attorneys general to bring civil actions against the offending platforms.
The Protecting Kids on Social Media Act would not solve every problem of the digital age, but it would curb some of the worst abuses of social media companies that prioritize profit over our kids’ safety and well-being. Congress can no longer sit on the sidelines.