Posted by Guest on June 14, 2019 in Blog

No discussion of the American political process today is complete without a mention of fake news, junk news, or – most broadly – disinformation. Disinformation is false information spread with the intent to deceive or harm. It often travels alongside misinformation: false information spread without that intent, sometimes even with good intentions.

Neither disinformation nor misinformation is a new phenomenon. Lies, rumors, and distorted truths have been staples of propaganda for centuries, and American politics has been no exception. Going all the way back to the American Revolution, revolutionaries ransacked the house of the Massachusetts lieutenant governor in protest after the Boston Gazette reported his support for the Stamp Act – when he actually opposed it. That is to say, people have grappled with disinformation for longer than the United States has existed. Social media, however, allows disinformation to spread further and faster than ever before.

For several reasons, it is difficult to formulate hard-and-fast rules on what is or isn’t disinformation.

  1. The facts of a situation must be judged on a case-by-case basis. One false article does not make the next article false, nor does one accurate article guarantee the next is accurate.
  2. Multiple interpretations of the same situation may be perceived to be the truth by different people.
  3. Opinion pieces should not be classified the same way as biased presentations of fact.

However, we can make some generalizations about what disinformation looks like. According to the Oxford Internet Institute, there are five main indicators of disinformation:

  1. bias: reporting that presents opinion as fact or is heavily skewed towards a single ideology.
  2. highly emotional language: this may include misleading or “clickbait” headlines, personal attacks on people the author disagrees with, or exaggerated expressions of outrage, such as words with strong negative connotations like “evil” or images meant to convey anger or disgust.
  3. a lack of professionalism: for example, news outlets that hide who wrote and edited their articles, or that fail to issue corrections, are not upholding journalistic standards.
  4. poor fact-checking: reliance on a single source or failure to verify that sources are accurate.
  5. imitating the appearance of mainstream news: intentionally designing a website to mimic mainstream news sites, often in an attempt to present opinion as fact.

Despite their poor record on accuracy, articles with these characteristics are more likely to be shared. Researchers at New York University found that each highly emotional word in a post – words like “hate,” “shame,” and “evil” – increased its diffusion through social media by roughly 20%. Along the same lines, a study from MIT found that false news is 70% more likely to be shared and spread online than true news.

However, the sources of this material are surprisingly concentrated: according to research by the Knight Foundation, 65% of disinformation comes from just 10 websites. This is supported by the NYU Stern Center for Business and Human Rights’ 2019 report on disinformation, which describes a system in which a handful of key websites form concentrated nodes of disinformation that fake accounts then spread across social media platforms like Facebook and Twitter.

All of this comes at a time when Americans’ trust in traditional news media is near historic lows. While Gallup reports that trust has been recovering since its all-time low of 32% in 2016, it remains far below the all-time high of 72% recorded in 1976, and trust levels vary widely by party identification. This lack of trust makes some people more inclined to seek out alternative sources of news, often on social media, which exposes them to high levels of disinformation.

What is new is the audience. Social media websites reach far more people than any newspaper: Facebook alone claims over 1.38 billion daily users, more than 600 times the daily circulation of USA Today. Social media enables easy, fast sharing of information across the globe, and over 62% of Americans get at least some of their news from it. Furthermore, people tend to trust those who are socially close to them, so they may be taken in by disinformation posted by friends, family, or even celebrity icons when they would be suspicious of the same article coming from a stranger.

Disinformation on social media is not restricted to any one topic, website, or format. It appears in advertisements, in automated posts from fake accounts, and in posts created and shared by users who genuinely believe false information. It is generated by foreign operatives interfering with the political process and by domestic actors on both sides of the aisle. A scan of social media will turn up disinformation on climate change and vaccinations, faked campaign promises for a dry Alabama claiming to come from Roy Moore, and claims that George Soros is sponsoring a caravan of migrants who plan to invade the United States.

The sheer breadth of these topics makes one thing clear: disinformation pervades online discourse. Somewhere between one in every 25 and one in every 33 accounts created on Facebook is fake, frequently generated automatically by a script or bot. Because bots can create hundreds of accounts quickly, the total number of fake accounts is enormous – in a single year (October 2017 to September 2018), Facebook deleted over 2.8 billion of them. And that is just fake accounts: according to a study from the Oxford Internet Institute, roughly 25% of the content real users shared on Facebook and Twitter about the 2018 US midterm elections was “junk news,” or deliberately deceptive reporting.

That’s not to say that social media sites have done nothing to combat disinformation. In early 2019, YouTube, Pinterest, and Facebook all took steps to limit the spread of anti-vaccination content on their platforms. Even identifying which accounts are fake or automated is a move against disinformation, given the major role bots play in its spread. Beyond that, social media sites use a variety of approaches. They may block searches for terms associated with high levels of disinformation – Pinterest, for example, prevents users from searching anti-vax keywords. Most platforms run algorithms aimed at removing bots; some also use algorithms to detect posts containing disinformation and either mark them for removal or reduce how often they are recommended. Facebook and other sites also employ human moderators, who review user-submitted reports and make the final call on a post’s accuracy.

Though it has been around for some time, disinformation is something we can all learn to recognize and work to combat.

Over the next few months, AAI will be hosting a series of blog posts discussing this timely topic and sharing resources on what people can do to push back against disinformation. We can fight disinformation if we work together to spot it, report it, and uplift truthful messages in our daily lives.


Emma Drobina is a Summer 2019 PhDX Fellow at the Arab American Institute. 
