The Information War Is On. Are We Ready For It?

Disinformation, misinformation, and social media hoaxes have evolved into high-stakes information war. But our frameworks for dealing with them have remained the same.

On August 1, 2018, the Senate Select Committee on Intelligence held a public hearing asking experts to testify on how foreign actors have used—and are using—social media to meddle in the American political process. The question of whether Russian entities interfered in American politics was not up for debate; that has already been firmly established. Nor was there any question about whether Russian influence operations are ongoing across social platforms: they are. Just 24 hours before the hearing, Facebook announced it had found numerous fake Pages masquerading as left-wing activists. Rather, the committee wanted researchers to state, on the record, facts that concretely establish what was learned about social media influence operations prior to, during, and following the 2016 presidential election. The senators also wanted to know: How do we prevent this from happening again?

Five experts testified, including me. We all agreed: this is an information war. These operations are ongoing and the adversaries will evolve.

In my testimony, I laid out that there is both a short-term threat—the hijacking of narratives in the upcoming 2018 election—and a set of significant long-term challenges. Crucially, tech platforms and government alike need to decide how to respond to information operations while preserving our commitment to free speech and the free flow of ideas. As Senator James Risch put it: “The difficulty is, how do you segregate those people [foreign adversaries] who are doing this from Americans who have the right to do this?”

Right now, the responsibility for solving this problem falls to the private platforms that control our public squares. That doesn’t appear to be working, because, regardless of how you feel about the tech platforms, eradicating misinformation while preserving free speech is a monumental challenge.

And now, government officials are grappling with their role in this battle. Before the hearing, Senator Mark Warner, the committee’s vice chairman, released a policy paper offering ideas for the regulation of tech while also addressing the government’s responsibility. His proposals were sweeping and touched on important issues, including specific technical problems related to computational propaganda, the impact on consumers, and the lack of clear roles and responsibilities borne by platforms and the government. Senator Ron Wyden, one of the authors of Section 230 of the Communications Decency Act, the legislation that has protected internet companies from being held liable for the information published on their platforms, was particularly forceful in the hearing as well, stating that “these pipes are no longer neutral,” and that Section 230 gave the platforms both “a shield and a sword”—and they’d ignored the sword.

But, ultimately, what the government—and the general public—is realizing is that while disinformation, misinformation, and social media hoaxes have evolved from a nuisance into high-stakes information war, our frameworks for dealing with them have remained the same. We discuss counter-messaging, treating this as a problem of false stories rather than as an attack on our information ecosystem. We find ourselves in the midst of an arms race, in which responsibility for the integrity of public discourse is largely in the hands of private social platforms, and determined adversaries continually find new ways to manipulate features and circumvent security measures. Addressing computational propaganda and disinformation is not about arbitrating truth. It’s about responding to information warfare—a cybersecurity issue—and it must be addressed through collaboration between governments responsible for the safety of their citizens and private industry responsible for the integrity of their platforms.

Malign narratives have existed for a very long time, but today’s influence operations are materially different: the propaganda is shared by friends on popular social platforms and efficiently amplified by algorithms, so campaigns achieve unprecedented scale. Adversaries leverage the entire ecosystem to manufacture the appearance of popular consensus. Content is created, tested, and hosted on platforms such as YouTube, Reddit, and Pinterest. It’s pushed to Twitter and Facebook, which have standing audiences of hundreds of millions, and targeted at the users most likely to be receptive. Trending algorithms are gamed to make content go viral, which often carries the added benefit of mainstream media coverage on traditional channels, including television. If an operation is successful and the content gets wide distribution, or the Page or Group gains enough followers, recommendation and search engines will continue to serve it up.

The Internet Research Agency, the Russian troll farm charged with interfering in the U.S. election, employed this playbook. Its operation began around 2013, continued through the 2016 election, and even increased on some platforms, such as Instagram, in 2017. The operation reached hundreds of millions of users across Facebook, Twitter, Vine, YouTube, Google+, Reddit, Tumblr, and Medium. Websites were created to push content about everything from social issues to concerns about war, the environment, and GMOs. Twitter accounts masqueraded as local news stations. WhiteHouse.gov petitions were co-opted. Facebook Events were promoted, and activists were contacted personally via Messenger, in an effort to take the operation into the streets.

The focus of the IRA campaign was to exploit social, and especially racial, tension. Despite YouTube’s claim that the content found on its platform was “not targeted to any particular sector of the US population,” the majority was related to issues of importance to the black community, particularly officer-involved shootings. Hundreds of thousands of Americans liked Facebook Pages with names like Blacktivist, Heart of Texas, and Stop All Invaders. The amount of explicitly political content that mentioned candidates was small, but it was unified in its negativity toward the candidacy of Secretary Clinton. In content that targeted the left, this included messages aimed at depressing turnout among black voters, or at painting Secretary Clinton in a negative light compared with Jill Stein or Senator Bernie Sanders. And nearly two years after the 2016 election, only the social networks that hosted this campaign are in a position to gauge its impact.

The IRA was not the only adversary to target American citizens online. The co-opting of social networks reached mainstream awareness in 2014 as ISIS established a virtual caliphate; the debate about what to do made it obvious that no one was in charge. That confusion continues even as the threat expands: The Wall Street Journal recently revealed that a private intelligence company, Psy-Group, marketed its ability to conduct similar influence operations to impact the 2016 election.

Social platforms have begun to take steps to reduce the spread of disinformation. These steps, several of which were inspired by prior tech hearings, are a good start. But as platform features and protections change, determined adversaries will develop new tactics. We should anticipate an increase in the misuse of smaller, less-resourced social platforms, and an increase in the use of peer-to-peer encrypted messaging services. Future campaigns will be compounded by the use of witting or unwitting people through whom state actors will filter their propaganda. And we should anticipate the incorporation of new technologies, such as AI-generated video (“deepfakes”) and audio, to supplement these operations, making it increasingly difficult for people to trust what they see.

This problem is one of the defining threats of our generation. Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. They take advantage of our commitment to freedom of speech and the free flow of ideas. The social media platforms cannot, and should not, be the sole defenders of democracy and public discourse.

In the short term, our government, civil society, political organizations, and social platforms must prioritize immediate action to identify and eliminate influence campaigns, and to educate the public ahead of the 2018 elections. In the longer term, it’s time for an updated global Information Operations doctrine, including a clear delegation of responsibility within the U.S. government. We should pursue the regulatory and oversight frameworks necessary to ensure that private tech platforms are held accountable, and that they continue to do their utmost to mitigate the problem in our privately owned public squares. And we need structures for cooperation between the public and private sectors; formal partnerships between security companies, researchers, and government will be essential to identifying influence operations and malign narratives before they achieve widespread reach.

Finally, we should agree that deciding how to fight an information war should not be a partisan issue. As Senator Kamala Harris stated during the hearing, we’re all part of the big American family: “What we have in common is a love of country and a belief that we as Americans should solely be responsible for the choosing of our leaders, and the fate of our democracy, and who will be the President of the United States. Someone else came into the house of this country and they manipulated us...they provoked us, and they tried to turn us against each other.” And just like in any family, we might not always like each other, but we can come together for the right cause. In this case, defending our democracy.
