Monday, December 2nd, 2019

This article by GQR VP Jiore Craig was originally published by Campaigns & Elections on December 2, 2019.

As we head into 2020, disinformation tactics are morphing at light speed and the threat is greater than ever. Yet there’s also danger of campaigns chasing this threat down too many digital rabbit holes and becoming paranoid and reactive without a clear sense of how to best protect themselves. While there’s no simple vaccine, there are some common-sense rules campaigns should follow to protect themselves and avoid becoming immobilized by fear and confusion.

One thing campaigns can bet on is that disinformation tactics this cycle will be markedly more sophisticated and better disguised than in 2016. Disinformation peddlers have had non-stop opportunities to refine their slimy craft over the past three years. In addition to the scores of U.S. elections since 2016, there have been dozens of major national elections each year in other countries – at least 45 this year alone.

From Russian meddling in Madagascar to waves of disinformation about Brexit and the approaching U.K. election, there’s no shortage of testing grounds. As fast as experts get a handle on tactics, smoke-screeners develop new ones, often at almost no cost. Disinformation innovations far outpace the slow, half-hearted steps social media platforms like Facebook and Google take to address the problem.

With no sign of meaningful regulation in sight, campaigns can count on innumerable forms of disinformation in 2020. Certain wide-reaching disinformation themes, like false accusations of voter fraud, illustrated in the recent Kentucky gubernatorial election, are all but guaranteed. Tactics like using phony local news pages to push misleading information already exist in places like Michigan. Deep fakes, trolls, and conspiracy theories will confuse and confound as well. But by summer 2020, new disinformation tactics yet to be seen by even top experts will show up, too.

U.S. campaigns in 2020 will have their hands full countering such disinformation. But they need not despair, get paralyzed with fear, or waste scarce time and resources chasing every online threat. Instead, here are four common-sense tactics campaigns can apply to fight back effectively and efficiently against ever-changing disinformation:

1. Go on offense to build trust and inoculate against disinformation.
Disinformation relies on broken-down trust. A campaign actively building relationships with voters through robust digital and offline organizing programs and creative, well-resourced social media campaigns can simultaneously build trust with voters and win votes. The more voters trust a candidate and their campaign, the weaker disinformation is as a weapon for manipulating public opinion. Every physical or digital door-knock is a counter-punch to disinformation peddlers.

2. Don’t expect a cure-all.
Campaigns often look for magic technologies to address disinformation, believing some digital tool or dashboard will protect them. It won’t. Social listening dashboards, which typically take months to build, are often designed primarily for managing corporate brand reputation. When they do focus on listening for political disinformation, they prioritize patterns and tech from the last election, not the next one. Many social listening tools produce seemingly digestible metrics but fail to determine what portion of their data represents actual voters – or what to do about their findings. Get used to this reality: no social media platform or expensive monitoring tool alone will protect campaigns, and putting too much reliance on any of them makes campaigns more vulnerable.

3. Engage in smart social listening.
Campaigns should invest in protecting themselves from disinformation. But they must do so in a way that focuses on their own unique political and geographic context and produces actionable intelligence for wider campaign operations.

Smart social listening should focus on what matters to each campaign’s voter landscape. Just as campaigns must understand their electorates to effectively shape their messaging, place ads, knock on doors, or respond to news, they must know how their voters are using social media to determine where to focus online. For example, if only 5 percent of a campaign’s target voters are using Instagram, investing heavily in Instagram ads is a waste. Similarly, if a meme with a misleading claim about the candidate shows up on Instagram but nowhere else, a campaign focused on its voters will know it warrants monitoring — not a rapid-response press conference.

Smart social listening is the precondition for effective threat triage — figuring out which threats can be totally ignored, which require watchful waiting, and which require an all-hands-on-deck response.

Too often, campaigns waste precious time going down social media rabbit holes because they fear they may miss something important. They risk confirmation bias, believing every fake post is a mortal threat. And when campaigns respond to every threat they find, they risk amplifying pernicious attacks that might never have reached voters otherwise.

Similarly, when campaigns try to monitor every surface of the internet, they risk wasting money on threat detection rather than getting in front of voters with their own message. Smart social listening makes the voter, not the candidate or threat, the focal point of listening.

4. Don’t fight this alone.
Campaigns need not feel they’re alone in the fight against disinformation. There are many centers of anti-disinformation activity — democracy advocates, national security and cybersecurity experts, counter-extremists, privacy advocates, law enforcement, and social media platforms. Though many of these may not share a campaign’s electoral focus, they still can provide important help. Those who are worried about progressive goals, democratic sovereignty, citizen privacy, and platform accountability must find new ways to work together. We need a cross-industry approach to fighting online threats that spreads new anti-disinformation insights and technologies as quickly as bad actors innovate. Campaigns should work on building bridges to such groups, so they can benefit from anti-disinformation best-practices while staying focused on getting the most votes on Election Day.