Posted by Guest on July 09, 2019 in Blog

Following the 2016 elections, Facebook has been the subject of scrutiny for its role in providing a platform to spread misinformation. In particular, Facebook has faced criticism for showing over 3,000 political ads purchased by the Russia-based Internet Research Agency (IRA) to American citizens. From 2015 to 2017, IRA posts on Facebook were shared over 30 million times. Over the past year, a team of outside auditors drawn from civil rights organizations has been working with Facebook staff to evaluate Facebook’s policies on a number of civil rights issues, ranging from hate speech to voting rights, and to recommend how Facebook should improve the way it handles civil rights, particularly efforts to disrupt democracy such as election and census suppression attempts. However, while these policies look good on paper, there are still unanswered questions about how they will be implemented and whether they will be effective. In particular, it is doubtful that Facebook’s artificial intelligence content moderators can accurately make the nuanced moderation decisions Facebook wants to implement, and it is unclear whether Facebook will hire additional human moderators to improve the quality of moderation.

On June 30th, Facebook pledged to take action to prevent bad-faith actors from using its platform to influence the outcome of the 2020 elections and the 2020 Census. Since the 2016 election, Facebook has implemented several new policies designed to prevent voter suppression, including a new policy banning ads that tell users not to vote and banning statements of political exclusion. Facebook defines political exclusion as statements that certain protected groups should not be able to participate in the political process, including calls for protected groups to be prevented from voting or running for office. The auditors have created a list of recommendations that Facebook has committed to implementing for both the elections and the census.

     I. Expanded Protections Against Voter/Census Interference

In 2018, Facebook introduced new language to its anti-voter suppression policy, so that the following behaviors are now officially prohibited:

  1. Misrepresenting how to vote (including statements about being able to vote via an app);
  2. Misrepresenting voting logistics or requirements;
  3. Misrepresenting whether a vote will be counted;
  4. Threatening violence in regard to voting, voter registration, or the outcome of an election.

Facebook has now committed to releasing a similar policy prohibiting misrepresentation and threats related to the 2020 Census.

     II. Proactive Detection of Interference

Currently, Facebook detects up to 65% of the hate speech it removes using proactive detection technology, a term for automated computer programs that use machine learning to identify patterns in online posts. Posts flagged by proactive detection will not be automatically removed, however. First, they will be sent to human reviewers, who check whether the flag is accurate. Facebook has stated that, to ensure review quality, over 90% of U.S.-based reported posts will be reviewed by U.S.-based reviewers.
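
To make that flow concrete, the short Python sketch below illustrates the general idea of a flag-then-review pipeline. It is not Facebook’s actual system: the keyword scoring is a hypothetical stand-in for a trained machine-learning classifier, and the threshold and queue names are invented for illustration.

    from dataclasses import dataclass

    # Illustrative "flag, don't auto-remove" pipeline. The keyword scoring
    # below is a stand-in for a trained machine-learning classifier.

    @dataclass
    class Post:
        post_id: str
        text: str
        author_country: str

    FLAG_THRESHOLD = 0.5  # hypothetical confidence cutoff for flagging

    def score_post(post: Post) -> float:
        """Return a rough probability that a post misrepresents how to vote."""
        suspicious_phrases = ["vote via app", "vote by text", "polls closed early"]
        hits = sum(phrase in post.text.lower() for phrase in suspicious_phrases)
        return min(1.0, 0.6 * hits)

    def route_post(post: Post, us_queue: list, global_queue: list) -> None:
        """Flagged posts are queued for human review, never removed outright.
        U.S. posts are routed to U.S.-based reviewers."""
        if score_post(post) >= FLAG_THRESHOLD:
            (us_queue if post.author_country == "US" else global_queue).append(post)

    us_queue, global_queue = [], []
    route_post(Post("p1", "Skip the lines and vote via app today!", "US"), us_queue, global_queue)
    print(len(us_queue))  # 1: flagged and waiting for a human reviewer

The key design point, as described in the audit report, is that the automated system only queues content for review; a person makes the final removal decision.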

     III. Dedicated Project Teams

Facebook has created a dedicated team for the 2020 U.S. Elections, including personnel from “product, engineering, threat intelligence, data science, policy, legal, operations, and other support.” Facebook has said they will have a similar team in place for the 2020 Census by Fall 2019.

In addition to the dedicated teams, the auditors report that Facebook plans to have a rapid response team of personnel on-site in a centralized location to strategize and respond as the election unfolds, based on a model they found effective during the 2018 Midterm Elections.

     IV. Election- and Census-Specific Training

The auditors state that Facebook has committed to providing training about voting rights and the significance of the Census. This training will be conducted by outside consultants. The voting rights training will occur in Fall 2019, and the Census training will occur in early 2020.

     V. External Partners

Facebook has said they will partner with outside voting rights organizations, civil rights advocates, and census protection groups to discuss their concerns about voter and census suppression. Because the methods bad-faith actors use change rapidly, Facebook plans to engage in an ongoing discussion about how best to protect the elections and the census.

     VI. Promoting Participation

Lastly, Facebook has said they will display reminders for important election dates, including voter registration deadlines. They will also work with their partners to discuss how to effectively use Facebook’s platform and tools for promotion.

Facebook hasn’t said they will provide reminders about the 2020 Census timeline, but they have said they will run workshops for their census advocacy partners about how they can use Facebook’s platform and products to promote the census.

While Facebook’s public commitment to improving their policies is appreciated, Facebook has left certain important questions unanswered.

For example, will Facebook’s bans on racial appeals and political exclusion related to voter suppression also be applied to the 2020 Census? Historically, racial and ethnic minorities, such as Arab Americans, have been undercounted, and the citizenship question looms large in the conversation around the 2020 Census. While the Supreme Court ruled on June 27th that the citizenship question will not be on the census form under the Department of Commerce’s current reasoning, the ultimate fate of the question has not been determined. The context the Hofeller documents have given to the citizenship question, including the implication that the question was intended to suppress the political power of minorities, makes discussing race in connection with the census more important than ever.

The new ban on political exclusion is among several improvements that Facebook has promised which hinge on the strength of their content moderation teams. In their progress report, Facebook repeatedly referenced their “proactive detection technology,” a form of artificial intelligence (AI) that flags questionable content and sends it to human moderators for review. In theory, this significantly reduces the burden on human users to see and report content that violates Facebook’s policies. However, in practice, there is evidence that Facebook’s content moderation AI is much more effective at removing spam than personal harassment. Moreover, given ongoing discussions about bias in machine learning, the (in)competence of AI at moderating human culture, and civil rights organizations’ stated concerns in the audit report about Facebook overlooking intent when moderating posts, AI’s promise may be significantly overstated. Can AI truly understand when someone is quoting hate speech to condemn it rather than promoting the original hate? Can AI recognize implied hate or dog-whistles?

Facebook has argued these potential issues are no cause for concern because human moderators will always review posts before they are finally removed. However, this raises another question: are the content reviewers in a position to do this job well? Over the past several years, former content reviewers have repeatedly described a dehumanizing work environment that traumatizes and radicalizes the people doing the reviewing. In the report, the auditors state that Facebook has said they will improve their content moderation policies and provide additional training on census and voting rights issues, which is a step in the right direction toward curbing hate speech.

But if these new policies are introduced to content reviewers who are worked beyond their capacity, exposed to traumatizing material, and consistently undervalued, how much will they actually accomplish?

As the 2020 elections and the 2020 Census draw nearer, it is critical to take social media’s influence on the political process seriously. Facebook’s investment in taking on critical issues of online hate speech, civil rights violations, and disinformation is important, but it still has work to do to ensure its new policies are implemented effectively. In the meantime, civil rights proponents and organizations will continue the vital work of sounding the alarm on platform and rights abuses that undermine our democracy.

This post was guest authored by Summer 2019 Ph.D. fellow Emma Drobina. 

Disinformation and the Census Series 

Lies, Damned Lies, and Disinformation

An Introduction to Disinformation, Interference, and the 2020 Census

Misinformation and the Census

Phishing, Spoofing and the 2020 Census

Technological Roadblocks to Responding to the Census
