4 Tips to Help Spot Disinformation About COVID-19

Tuesday, March 24th, 2020

This article, written by GQR Project Associate Caroline Grace, was originally posted on Medium on March 23, 2020.

Many Americans are turning to social media for answers to their anxious questions about COVID-19. This has left platforms grappling with how to combat the significant amount of disinformation gaining traction online. Platforms are implementing new policies, such as redirecting users to reliable information and resources like the WHO and the CDC; taking down content about fake cures and conspiracy theories; and banning ads that monetize coronavirus fears.

Despite these efforts, several loopholes in these policies are still allowing bad actors to spread disinformation about the coronavirus. Even pointing out these tactics carries risk, because doing so could provide a road map for those who want to spread disinformation. But until the platforms do a better job of controlling bad actors and the spread of disinformation, it is important to alert the platforms, the public, and policymakers to the danger so that they can be on guard and, where needed, adopt policy changes.

As a reminder, we do not recommend that users try to engage with or counter these and other false claims when they see them; any online engagement with such content only amplifies those narratives further. Always consider carefully whether repeating a rumor or falsehood helps or hurts efforts to stop its spread. This Wall Street Journal article has additional helpful guidance.

Those looking to detect or suppress disinformation should be aware that bad actors are using the following four loopholes to work around the platforms' policies for fighting disinformation.

Intentional misspellings

Searches for some misspellings of keywords relating to COVID-19 do not trigger the platforms' prompts pointing users to reliable information and sources. While content using the correct spelling of “coronavirus” or “COVID-19” will direct users to accurate information and official health organization accounts on Facebook, Twitter, or Instagram, very slight misspellings will not. Instead, these near-miss spellings surface COVID-19-related content without any warning, and misspelled searches may lead users to posts containing racist and xenophobic content, conspiracy theories, and potentially harmful fake cures. Some of these misspellings are generating tens or hundreds of thousands of social media mentions each week.
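
For researchers or platforms trying to catch these near-miss spellings, one possible approach, sketched here purely as an illustration rather than any platform's actual method, is to flag terms that fall within a small edit distance of the official keywords. A minimal Python sketch, assuming a hypothetical watchlist and threshold:

```python
# Illustrative sketch only: flag terms that are near-misses of COVID-19 keywords.
# The keyword list and distance threshold are hypothetical assumptions, not drawn
# from any platform's actual policy or tooling.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance computed with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

KEYWORDS = ["coronavirus", "covid-19", "covid19"]  # hypothetical watchlist

def flag_near_miss(term: str, max_distance: int = 2) -> bool:
    """Return True if `term` is within `max_distance` edits of any watched keyword."""
    term = term.lower()
    return any(edit_distance(term, kw) <= max_distance for kw in KEYWORDS)

if __name__ == "__main__":
    for word in ["coronavirus", "coronaviris", "cov1d-19", "weather"]:
        print(word, flag_near_miss(word))
```

The two-edit threshold and the keyword list are arbitrary assumptions; real monitoring would need to tune both against the misspelled variants actually circulating.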

Using the Stories feature on Facebook and Instagram

Facebook and Instagram have a Stories feature, where users can post content for up to 24 hours before it is archived. At this point, users viewing disinformation in Stories are not redirected to official health information and receive no warning that they may be viewing disinformation, as they would for content posted in the core feeds.

Memes and screenshots

Disinformation in memes and screenshots is notoriously difficult for researchers to track, because images are not machine-readable and are therefore harder to search. If users post memes or screenshots containing disinformation about COVID-19, such as graphics promoting fake cures or conspiracy theories, the platforms are not, at this point, removing them. Facebook, for example, is not removing some memes containing disinformation and does not appear to have a formal process for monitoring this type of content. This kind of content is also visually engaging and often travels faster and further on social media, which makes it a critical vulnerability in the platforms' policies.
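
One way researchers sometimes make image text searchable, offered here only as an assumption about possible tooling rather than anything the platforms describe doing, is to run optical character recognition (OCR) over image posts and then keyword-match the extracted text. A minimal sketch using the open-source Tesseract engine through the pytesseract and Pillow packages:

```python
# Illustrative sketch only: extract text from meme/screenshot images so it can be
# keyword-searched. Assumes the Tesseract OCR engine plus the `pytesseract` and
# `Pillow` packages are installed; the keywords and file path are hypothetical.

from PIL import Image
import pytesseract

KEYWORDS = ["cure", "coronavirus", "covid"]  # hypothetical watchlist

def extract_text(image_path: str) -> str:
    """Run OCR on an image file and return the recognized text, lowercased."""
    return pytesseract.image_to_string(Image.open(image_path)).lower()

def contains_keywords(image_path: str) -> bool:
    """Flag an image whose embedded text mentions any watched keyword."""
    text = extract_text(image_path)
    return any(kw in text for kw in KEYWORDS)

if __name__ == "__main__":
    # Hypothetical example file; replace with a real meme or screenshot image.
    print(contains_keywords("example_meme.png"))
```

OCR also misses heavily stylized or distorted meme text, which is part of why this content remains so hard to track at scale.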

Private conversation spaces

Conversations taking place in private groups and on messaging platforms such as WhatsApp are another major vulnerability. India, for example, WhatsApp's largest market, is experiencing high levels of COVID-19 health disinformation on the app. Disinformation that starts in private WhatsApp groups can spread widely and penetrate other social media. Rumors promoting false COVID-19 cures and protective measures give users a false sense of security about their vulnerability to the virus. Facebook groups, which are often built around communities and interests, provide easy avenues for targeting specific audiences. Because information in these groups is often shared by like-minded sources, friends, or family, readers may grant it a higher level of trust. And in these private spaces, Facebook is not redirecting users to reliable information and sources the way it does in its search results and news feed.

COVID-19 is an unprecedented and fast-moving crisis. There is some evidence that the major social media platforms are trying to weed out harmful content, but these examples underscore the need for the platforms to do much more.