
Schatz, Speier, Schakowsky Demand Answers From Facebook On Widespread Misinformation On Platform, Inconsistent Moderation

Lawmakers: Facebook’s Inconsistent Moderation Enforcement Appears To Be Financially and Politically Motivated

WASHINGTON – U.S. Senator Brian Schatz (D-Hawai‘i), along with U.S. Representatives Jackie Speier (D-Calif.) and Jan Schakowsky (D-Ill.), today slammed Facebook for its failure to protect its users from widespread misinformation on its platform. The lawmakers demanded answers on the inconsistent enforcement of Facebook’s content moderation policies, which provide deference to right-wing content and fail to prevent the dissemination of politically biased, divisive misinformation.

“It appears that Facebook is failing to protect consumers from misinformation by inconsistently enforcing its own policies for its own financial and political benefit,” the lawmakers wrote in a letter to Facebook CEO Mark Zuckerberg. “Facebook’s published policies for content moderation are only as effective as its willingness to enforce them consistently. Unfortunately, various reports indicate that Facebook has not done that, and instead, chooses to apply different moderation rules across the platform in deference to right-wing content. Far from being biased against one political perspective, it appears these decisions are actually propelling right-wing voices, including those peddling misinformation.”

The full text of the letter follows and is available here.


Dear Mr. Zuckerberg,

Millions of Americans turn to Facebook to stay informed. Although Facebook does not always create the content its consumers share and read on its platforms, it is responsible for developing and enforcing policies for that content. Unfortunately, it appears that Facebook is failing to protect consumers from misinformation by inconsistently enforcing its policies for its own financial and political benefit.

Companies like Facebook can host information provided by a third party with nearly complete immunity from liability as a publisher or speaker. The law does not require that platforms adopt particular content moderation policies or that they remain “neutral” when deciding what types of content they allow on their sites. As a result, Facebook has enormous power to publish content from around the world, which underscores the importance of Facebook delivering to consumers the services it promises.

We should all be able to agree that, because of this immunity that shields Facebook from liability, Facebook has a heightened obligation to be straight with its customers about how it will moderate content on its site and to be consistent about how it does so. At a minimum, Facebook must evenly enforce the company’s published policies.

Facebook’s published policies for content moderation are only as effective as its willingness to enforce them consistently. Unfortunately, various reports indicate that Facebook has not done that, and instead, chooses to apply different moderation rules across the platform in deference to right-wing content. Far from being biased against one political perspective, it appears these decisions are actually propelling right-wing voices, including those peddling misinformation.

In addition to reports of the deliberately inconsistent enforcement of Facebook’s moderation policies, we also understand that the company has intervened and overturned credible fact-checkers’ decisions to protect right-wing viewpoints and speakers. And, although Facebook has developed tools to prevent the dissemination of divisive information, media reports suggest the company does not always use them because of how they might suppress revenue or arouse political criticism. These choices, if true, suggest a troubling pattern, particularly when coupled with the fact that Facebook’s platform design is financially motivated to keep its consumers engaged with the platform, a feature that further fuels extreme content and misinformation.

Facebook’s reported failures to follow its own policies harm not only its users, but also society as a whole. Even the company’s recent civil rights audit highlights inconsistent moderation as having negative impacts on vulnerable groups. Furthermore, Facebook’s poor enforcement of its Community Standards has resulted in women politicians enduring misogynistic and hateful content at far greater levels than their male counterparts, sowing distrust in women’s leadership and authority. These deliberate actions appear to have eroded public trust in accurate and factual information, deepened societal polarization, and threatened open democratic discourse—all of which are at odds with Facebook’s purported mission of giving users “the power to build community and bring the world closer together.”

This is unacceptable. Facebook should quickly and consistently address violations of its Community Standards, no matter the political motivation of a user. Although the First Amendment limits the ability of the government to dictate what information platforms host on their sites, Congress still has an important role to play in ensuring platforms are accountable to consumers and transparent about the products they offer. Please answer the following questions on how Facebook is ensuring that all Americans receive accurate information by October 9, 2020:

  1. Does Facebook ever waive its own moderation policies against users who violate Facebook’s Community Standards, and does it intervene on behalf of a user to reverse a moderation decision? If so, under what circumstances, and how? How does Facebook track those instances? Why are statistics related to the waiving and non-enforcement of the Community Standards not part of Facebook’s quarterly transparency reports?
  2. What recent research or assessments has Facebook conducted to determine whether its products and services, including its use of algorithms, lead to or exacerbate political polarization or societal division? How does Facebook intend to revise its policies in light of any relevant findings to mitigate those impacts?
  3. Facebook stated that it is committed to putting most recommendations from its Civil Rights Audit “into practice.” Has Facebook evaluated how its application of different standards to right-wing misinformation may impact the platform, including the civil rights of its broader user base?
  4. How does Facebook evaluate the performance of third-party fact checkers, including those with known political viewpoints, and does it make such evaluations publicly available? What notice do consumers receive if one of Facebook’s fact checking partners makes an error or revises its conclusion?
  5. What tools does Facebook have that would help reduce divisiveness on its platform? How does Facebook utilize these tools? Are there any reasons for Facebook not to use them?
  6. Capturing user attention and time is key to the business model of Facebook’s platforms and products. Has Facebook conducted any financial analysis on the impact of its moderation policies? How does enforcing Facebook’s moderation policies against widely shared misinformation financially impact Facebook?
  7. How does Facebook provide notice to its users that it may have promoted and/or recommended content that was later found to be in violation of Facebook’s Community Standards or terms of service?
  8. How does Facebook use research and data, including from internal and external sources, in the refinement and application of its moderation policies? Please provide the most recent CrowdTangle trend report and any other relevant Facebook-funded research that reflect data on news organizations’ content performance.
  9. Will the Facebook Oversight Board provide more clarity on Facebook’s moderation enforcement related to illegal or harmful content, including content that has been amplified or suppressed, on Facebook’s platforms or products? Will decisions of the Board establish precedent for future violations? What assurances can Facebook offer to Congress that it will not interfere in the decisions made by the Board?

Sincerely,

###
