LOL Something Matters

We’ve been told that facts have lost their power, that debunking lies only makes them stronger, and that the internet divides us. Don’t believe any of it.

I.

Ten years ago last fall, Washington Post science writer Shankar Vedantam published an alarming scoop: The truth was useless.

His story started with a flyer issued by the Centers for Disease Control and Prevention to counter lies about the flu vaccine. The flyer listed half a dozen statements labeled either “true” or “false”—“Not everyone can take flu vaccine,” for example, or “The side effects are worse than the flu”—along with a paragraph of facts corresponding to each one. Vedantam warned that the flyer’s message might be working in reverse. When social psychologists asked people to read it in a lab, they found the statements bled together in their minds. Yes, the side effects are worse than the flu, they told the scientists half an hour later. That one was true—I saw it on the flyer.

This wasn’t just a problem with vaccines. According to Vedantam, a bunch of peer-reviewed experiments had revealed a somber truth about the human mind: Our brains are biased to believe in faulty information, and corrections only make that bias worse.

This supposed scientific fact jibed with an idea then in circulation. In those days of phantom Iraqi nukes, anti-vaxxer propaganda, and climate change denialism, reality itself appeared to be in danger. Stephen Colbert’s neologism, truthiness—voted word of the year in 2006—had summed up the growing sense of epistemic crisis. “Truth comes from the gut,” Colbert boasted to his audience. “Anyone can read the news to you. I promise to feel the news at you.”

Back then it seemed as though America had slipped the moorings of her reason and was swiftly drifting toward a “post-fact age.” Scholar Cass Sunstein blamed the internet for this disaster: Online communities, he argued, could serve as “echo chambers” for those with shared beliefs. Then came Vedantam’s piece, with real-life data to support the sense that we all were flailing in a quicksand of deception and that the more we struggled to escape it, the deeper we would sink into the muck.

Writing in Slate last year, former professional fact-checker Jess Zimmerman remembered Vedantam’s article as “my first ‘lol nothing matters’ moment,” when she realized her efforts to correct the record might only make things worse. Another nothing-matters moment followed one week later, when Vedantam told WNYC about a different study. A pair of political scientists had given 130 students a mocked-up news report on a speech about the invasion of Iraq that described the country as “a place where terrorists might get weapons of mass destruction.” Half the subjects then read a correction to that news report, noting that the CIA had found no evidence of such weapons in Iraq. For students who were politically conservative, the correction didn’t work the way it should have; instead of making them more suspicious of the idea that Saddam Hussein had been hiding WMDs, it doubled their belief in it.

News about this research made its way to Slate, the Wall Street Journal, This American Life, la Repubblica in Rome, and several hundred other media outlets around the world. Sunstein cited the result—an “especially disturbing finding,” he declared—in his next book on the nature of extremism.

The study of corrected news reports, like the work on vaccine myths, helped provide a scientific framework for our growing panic over facts. Now we had a set of interlocking theories and experiments on which to hang the claim that truth was being vanquished from democracy—that the internet divides us, that facts will make us dumber, and that debunking doesn’t work. These ideas, and the buzzwords that came with them—filter bubbles, selective exposure, and the backfire effect—would be cited, again and again, as seismic forces pushing us to rival islands of belief.

Ten years on, the same scientific notions have now been used to explain the rise of Donald Trump. The coronation of the man who lied a thousand times, a champion of “alternative facts,” had brought us from the age of truthiness to the era of post-truth—2016’s word of the year. In a span of several weeks after Trump’s inauguration, Slate announced that “It’s Time to Give Up on Facts,” Rolling Stone declared “The End of Facts,” the New Yorker told us “Why Facts Don’t Change Our Minds,” and the Atlantic ran through “the facts on why facts alone can’t fight false beliefs.” These lamentations continued unabated throughout 2017. Just two weeks ago, Facebook said it would no longer flag phony links with red-box warnings, since pointing to a lie only makes it stronger. The truth, this move implied, does more harm than good.

But there’s a problem with these stories about the end of facts. In the past few years, social scientists armed with better research methods have been revisiting some classic work on the science of post-truth. Based on their results, the most surprising and important revelations from this research—the real lol-nothing-matters stuff—now seem overstated. It may be that the internet does not divide us, that facts don’t make us dumber than we were before, and that debunking doesn’t really lead to further bunk.

In fact, it may be time that we gave up on the truth-y notion that we’re living in a post-truth age. In fact, it may be time that we debunked the whole idea.

II.

We didn’t need some lab experiment to tell us that the truth is often unpersuasive and that it’s hard to change a person’s mind. But that’s not what the end-of-facts researchers were saying. Their work got at something far more worrisome: a fear that facts could blow up in all our faces and that even valid points might reinforce a false belief.

This is not a small distinction. If the truth were merely ineffective, then all our efforts to disperse it—through educational websites, debunking flyers, and back-and-forths on Facebook—would be at worst a waste of time. But what if the truth had a tendency to flip itself around? In that case, those same efforts might be tugging people in the wrong direction, pulling them apart. Even if the tugs were very slight, the effect could multiply in terrifying ways—a million tiny forces from a million tiny arguments that added up to a tidal wave of disagreement.

[Illustration: Obama is not a Muslim / Obama is a Muslim]

In 2007, an example of this boomerang phenomenon seemed to be unfolding in real time. Polls showed Americans were more likely to describe then–presidential candidate Barack Obama as a Muslim than a member of any other faith. A related set of smears had oozed across the country via Fwd: Fwd: emails, asserting that Obama joined a Christian church to hide his madrassa past, that he wouldn’t say the Pledge of Allegiance, and that he’d been sworn into the Senate on a copy of the Quran.

In the face of all this faulty information, journalists tried redoubling their focus on the facts. Two weeks before Vedantam wrote his Washington Post piece on the dangers of debunking, the Tampa Bay Times’ Bill Adair launched PolitiFact. Two weeks after, the Post’s Glenn Kessler started his weekly “Fact Checker” column, with its Pinocchio rating scheme. Yet the checkers sensed that certain lies about Obama were resistant to their efforts or were maybe even fueled by them. “The number of Americans who believe Obama is a Muslim has gone up,” a nonplussed Adair told NPR in March 2008. “It was 8 percent back in November. The latest poll, it’s up to 13 percent.”

How could this be happening? Norbert Schwarz, the psychologist whose work on dispelling myths about the flu vaccine had been described in Vedantam’s piece, thought he had the answer. Based on the data he’d collected with his postdoc Ian Skurnik, it seemed to him Obama’s denial of a Muslim past would only make the rumors worse.

Schwarz helped draft a memo to the Obama campaign, sharing this advice. By that point he’d joined a secret panel of advisers to the candidate, a group that included Sunstein as well as several winners of the Nobel Prize. This group—which would later be dubbed an “academic dream team”—had been formed to supply Democratic candidates with cutting-edge research on the psychology of messaging. “In no case should you say that Obama is not a Muslim, since repeating it will only cause a backlash,” Schwarz says he advised the campaign. Instead, Obama should emphasize the fact that he is a Christian and that he brings his family to church.

The dream team never got explicit feedback on this memo, but it did seem to Schwarz that the campaign was heeding his advice. In early 2008, Obama began to focus on pronouncements of his Christian faith and his devoted membership in Chicago’s Trinity United Church of Christ. This would backfire in spectacular fashion: In mid-March, a controversy erupted over unpatriotic sermons from that church’s pastor, Jeremiah Wright.

Schwarz recounts this wistfully, as “an interesting illustration of what can happen when you make the correct recommendation in a world that you have no control over.” In any case, after the election, the boomerang theory of debunking was established as a rule of thumb. In November 2011, a pair of cognitive psychologists in Australia, Stephan Lewandowsky and John Cook, published an eight-page pamphlet they called “The Debunking Handbook,” on the “difficult and complex challenge” of correcting misinformation. They cited work from Schwarz and Skurnik, among others, in describing several ways in which debunkings can boomerang or backfire. Arriving when it did, in the middle of the post-fact panic, their handbook satisfied a pressing need. Chris Mooney, author of The Republican War on Science, called it “a treasure trove for defenders of reason.” The liberal website Daily Kos said it was a “must read and a must keep reference.” Its text would be translated into 11 languages, including Indonesian and Icelandic.

“The existence of backfire effects” has “emerged more and more over time,” Lewandowsky told Vox in 2014. “If you tell people one thing, they’ll believe the opposite. That finding seems to be pretty strong.”

III.

If you tell people one thing, they’ll believe the opposite. This improbable idea had been bouncing around the academic literature for decades before Schwarz and others started touting it. The first hints of a boomerang effect for truth emerged in the early 1940s, as the nation grappled with a rash of seditious, wartime rumors. Newspaper fact-check columns, known as “rumor clinics,” sprang up in response to the “fake news” of the time—the claim, say, that a female munitions worker’s head exploded when she went to a beauty parlor for a perm. The rumor clinics spelled out these circulating falsehoods, then explained at length why they were “phony,” “sucker bait,” or “food for propageese.” But experts soon determined that these refutations might be dangerous.

[Illustration: Undocumented immigrants do not commit more violent crimes / Undocumented immigrants commit more violent crimes]

By January 1943, mavens at America’s “rumor-scotching bureau,” the Office of War Information, told the New York Times that debunkers could “make a rumor worse by printing it and denying it in the wrong manner.” Shortly thereafter, an Austrian émigré and sociologist named Paul Lazarsfeld published the results from his seminal study of Ohio voters. Lazarsfeld, who was based at Columbia University’s Office of Radio Research, found these voters had been awash in a “flood of propaganda and counterpropaganda” about the candidates running for president in 1940—but that they’d mostly filtered out the facts they didn’t care for. Like-minded voters tended to communicate only among themselves, he said, which in turn produced “a mutual strengthening of common attitudes,” to the point that even rival facts might only “boomerang” and reinforce their original views.

More examples of the boomerang effect turned up in the years that followed. In 1973, for instance, psychologists presented evidence that the social message of the TV sitcom All in the Family had backfired. The show’s creators aimed to skewer and rebut the attitudes of its central character, the bigot Archie Bunker. But when scientists surveyed high school students in a Midwest town, they found that the most prejudiced teenagers in the group were the ones most likely to be watching Archie every week. “The program is more likely reinforcing prejudice and racism than combating it,” the researchers concluded.

Another famous study, published in 1979, found a boomerang for environmental messages. Researchers in Arizona passed out flyers at a public swimming pool that featured one of three messages: “Don’t Litter,” “Help Keep Your Pool Clean,” or “Obey Pool Safety Rules.” The “Don’t Litter” message seemed to backfire and make the garbage problem worse: Half the people who received that flyer tossed it on the ground, as compared with just one-quarter of the people who’d received the other messages.

In a classic paper, also out in 1979, Stanford psychologists Charles Lord, Lee Ross, and Mark Lepper got at the related concept of motivated reasoning. For that study, which has since been cited thousands of times, they presented undergraduates with conflicting data on the efficacy of the death penalty. They found that the exact same information would be interpreted in different ways, depending on how the subjects felt before the research started. The net effect of their experiment was to make the students more convinced of their original positions—to polarize their thinking.

Thirty years later, as a fresh array of boomerang or backfire effects made its way to print, psychologists Sahara Byrne and Philip Solomon Hart reviewed the science in the field. Their paper cites more than 100 studies of situations where “a strategic message generates the opposite attitude or behavior than was originally intended.” The evidence they cite looks overwhelming, but as I sorted through the underlying literature, I began to wonder if some of these supposed boomerang effects might be weaker than they seemed.

Take the Archie Bunker paper. When the same psychologists ran their survey on a second group of people up in Canada, they did not find the same result. And going by subsequent research on the TV show, published in the 1970s, it seemed that Archie’s antics on All in the Family may have helped diminish prejudice, not increase it.

The study of the poolside flyers, which Byrne and Hart called “one of the most famous research examples of the boomerang effect,” also seemed a little flimsy. The original paper goes through three versions of the same experiment; where the first one seems to show a real effect, the others look like replication failures, with no clear evidence for backfire.

As I poked around these and other studies, I began to feel a sort of boomerang effect vis-à-vis my thinking about boomerangs: Somehow the published evidence was making me less convinced of the soundness of the theory. What if this field of research, like so many others in the social sciences, had been tilted toward producing false positive results?

For decades now, it’s been commonplace for scientists to run studies with insufficient sample sizes or to dig around in datasets with lots of different tools, hoping they might turn up a finding of statistical significance. It’s clear that this approach can gin up phantom signals from a bunch of noise. But it’s worse than that: When researchers go out hunting subtle, true effects with imprecise experiments, their standard ways of testing for significance may exaggerate their findings, or even flip them in the wrong direction. Statistician (and Slate contributor) Andrew Gelman calls this latter research hazard a “type-S” error: one that leads a scientist to assert, with confidence, a relationship that is actually inverse to the truth. When a scientist makes a type-S error, she doesn’t end up with a false positive result so much as an “anti-positive” one; she’s turned the real effect upside down. If she were studying, say, the effect of passing out flyers at a public pool, she might end up thinking that telling people not to litter makes them litter more, instead of less.

It’s easy to imagine how these type-S errors might slither into textbooks. A scientist who found an upside-down result might go on to make a novel and surprising claim, such as: If you tell people one thing, they’ll believe the opposite; or facts can make us dumber; or debunking doesn’t work. Since editors at top-tier scientific journals are often drawn to unexpected data, this mistake might then be published as a major finding in the field, with all the press reports and academic accolades that follow. Gelman, for his part, thinks type-S errors might not be the problem here—that the real issue could be that different people might respond to something like a “don’t litter” flyer in different ways in different contexts, for reasons researchers don’t understand. But no matter the underlying reason, in an environment where surprising data thrive and boring studies wither in obscurity, a theory based on boomerangs will have a clear advantage over other, more mundane hypotheses.
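
To make the statistical point concrete, here is a minimal simulation sketch in Python. Every number in it is invented for illustration (a hypothetical flyer that truly cuts littering by 2 percentage points, a 30 percent baseline littering rate, 40 people per group); it is not a reanalysis of any study described above. It simply shows that when a small, real effect is measured with a small sample, the results that happen to reach statistical significance exaggerate the effect and sometimes point the wrong way, which is what a type-S error means.

```python
# A toy demonstration of Gelman's "type-S" (sign) and "type-M" (magnitude) errors.
# All parameters below are invented for illustration; they are not data from
# the littering-flyer study or any other experiment discussed in the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

base_rate = 0.30      # hypothetical littering rate with no flyer
true_effect = -0.02   # the flyer truly cuts littering by 2 percentage points
n_per_group = 40      # a small, field-experiment-sized sample
n_experiments = 20_000

significant = 0
wrong_sign = 0
exaggeration = []

for _ in range(n_experiments):
    control = rng.binomial(1, base_rate, n_per_group)
    treated = rng.binomial(1, base_rate + true_effect, n_per_group)
    diff = treated.mean() - control.mean()
    _, p_value = stats.ttest_ind(treated, control)  # rough two-sample test on 0/1 outcomes
    if p_value < 0.05:
        significant += 1
        exaggeration.append(abs(diff) / abs(true_effect))
        if diff > 0:  # the estimate says the flyer *increased* littering
            wrong_sign += 1

print(f"experiments reaching significance: {significant / n_experiments:.1%}")
print(f"significant results with the wrong sign (type-S): {wrong_sign / significant:.1%}")
print(f"median exaggeration of the effect size (type-M): {np.median(exaggeration):.1f}x")
```

With these invented inputs, only a few percent of the simulated experiments reach significance at all; the ones that do overstate the true effect by about an order of magnitude, and a sizable minority of them flip its direction. A researcher who trusted one of those runs would confidently report that the flyer backfired.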

IV.

The first study highlighted by the Post’s Vedantam—the piece of research that helped kick off the modern wave of post-fact panic—is a mess of contradictions.

In late 2004 or early 2005, Ian Skurnik showed a set of undergrads the CDC’s poster about flu vaccine “facts and myths.” According to a data table from a draft version of the study posted on the website of co-author Carolyn Yoon, Skurnik found the students’ memories were very good when they were tested right away: They labeled the flyer’s “myths” as being true in just 3 percent of their responses. Thirty minutes later, though, that figure jumped to 13 percent. By that point, they’d grown foggy on the details—and the flyer’s message backfired.

[Illustration: CDC Flu Facts]

This made sense to Skurnik and his colleagues. He already knew from prior research that the more you hear a thing repeated, the more reliable it seems: Familiarity breeds truthiness. Now the study of the flyer suggested this effect would hold even when the thing you’ve heard before has been explicitly negated. Imagine a debunking like one shown on the CDC flyer: The flu shot doesn’t cause the flu. Over half an hour, Skurnik’s study argued, the word doesn’t fades away, while the rest of the message sounds ever more familiar—and thus more true.

[Illustration: the flu shot does not cause the flu / the flu shot does cause the flu]

His CDC flyer data suggested this all happens very quickly—that debunking can boomerang in minutes.

But that notion didn’t fit with data from another study from the same researchers. For that earlier experiment, published in the Journal of Consumer Research in 2005, Skurnik, Yoon, and Norbert Schwarz looked at how college students and senior citizens remembered health claims that were labeled either “true” or “false.” The team found no sign of backfire among the college students after 30 minutes or even after three days. (They did find a boomerang effect for older subjects.)

Meanwhile, the study of the CDC flyer never made its way into a peer-reviewed academic journal. (The research would be summarized in an academic book chapter from 2007.) Vedantam’s write-up for the Post, which claims the study had just been published in a journal, seems to have conflated it with the paper published two years earlier, saying the CDC flyer had been presented both to younger and older subjects and at both a 30-minute and three-day delay.

I asked Skurnik, who’s now an associate professor of marketing at the University of Utah, why his famous flyer study never ended up in print. He said that he and Schwarz had submitted it to Science, but the influential journal decided to reject it because the work had already been described by the New York Times. (I could find no such story in the Times.)

As Skurnik moved along in his career, he says, he allowed “that line of research to get on the back burner.” When others tried to reproduce his research, though, they didn’t always get the same result. Kenzie Cameron, a public health researcher and communications scholar at Northwestern’s Feinberg School of Medicine, tried a somewhat similar experiment in 2009. She set up her study as a formal clinical trial; instead of testing college undergrads as Skurnik, Yoon, and Schwarz had done, she recruited a racially diverse group of patients over the age of 50, selecting only those who hadn’t gotten vaccinated in the prior year. She mailed each of her subjects a version of the CDC flyer a week before they were due to come in for a checkup. Some of these flyers listed facts and myths in simple statements, others listed only facts, and still others gave specific refutations of the false information.

Cameron had her subjects tested on their knowledge of the flu vaccine on two occasions, once before they’d seen the flyers and again when they came in to see their doctors. She found that every version of the flyer worked: Overall, the patients ended up more informed about the flu vaccine. In fact, the version of the CDC flyer that was closest to the one that Schwarz and Skurnik used ended up the most effective at debunking myths. “We found no evidence that presenting both facts and myths is counterproductive,” Cameron concluded in her paper, which got little notice when it was published in 2013.

There have been other failed attempts to reproduce the Skurnik, Yoon, and Schwarz finding. For a study that came out last June, Briony Swire, Ullrich Ecker, and “Debunking Handbook” co-author Stephan Lewandowsky showed college undergrads several dozen statements of ambiguous veracity (e.g., “Humans can regrow the tips of fingers and toes after they have been amputated”). The students rated their beliefs in each assertion on a scale from 0 to 10, then found out which were facts and which were myths. Finally, the students rated their beliefs again, after a delay of either 30 minutes or one week. If Skurnik, Yoon, and Schwarz were right, then the debunkings would cause their answers to rebound in the wrong direction: If you tell people one thing, they’ll believe the opposite. But the new study found no sign of this effect. The students’ belief in false statements dropped from a baseline score of 6 down to less than 2 after 30 minutes. While their belief crept back up a bit as time went by, the subjects always remained more skeptical of falsehoods than they’d been at the start. The labels never backfired.

A second study from Ecker and Lewandowsky (along with Joshua Hogan), also out last June, found that corrections to news stories were most effective when they repeated the original misinformation in the context of refuting it. This runs counter to the older theory, that mere exposure to a lie—through a facts-and-myths debunking flyer, for example—makes it harder to unseat. The authors noted that the traditional logic of “effective myth debunking may thus need to be revised.”

In other words, at least one variation of the end-of-facts thesis—that debunking sometimes backfires—had lost its grounding in the data. “I’ve tried reasonably hard to find [this backfire effect] myself, and I haven’t been able to,” Ecker told me recently. Unless someone can provide some better evidence, it may be time to ask if this rule of thumb from social science could represent its own variety of rumor: a myth about how myths spread.

V.

Brendan Nyhan and Jason Reifler described their study, called “When Corrections Fail,” as “the first to directly measure the effectiveness of corrections in a realistic context.” Its results were grim: When the researchers presented conservative-leaning subjects with evidence that cut against their prior points of view—that there were no stockpiled weapons in Iraq just before the U.S. invasion, for example—the information sometimes made them double down on their pre-existing beliefs. It looked as though the human tendency to engage in motivated reasoning might be worse than anyone imagined. (Eventually this would form the basis for another section of “The Debunking Handbook.”)

[Illustration: Death panels are not part of the ACA / Death panels are a part of the ACA]

With an election looming in the fall of 2008, Nyhan and Reifler’s work went viral in the media. (The final version of their paper would not be published in an academic journal until 2010.) Vedantam wrote up their findings for the Post, and the story spread from there. It soon became the go-to explanation for partisan recalcitrance. “Perception is reality. Facts don’t matter,” wrote Jonathan Chait in the New Republic, linking up the new research to presidential candidate John McCain’s “postmodern” disregard for truth. “If [Nyhan and Reifler’s] finding is broadly correct,” Chait wrote, “then the media’s new-found willingness to fact-check McCain will only succeed in rallying the GOP base to his side.”

Political scientists were just as taken by the Nyhan-Reifler findings. A pair of political science graduate students at the University of Chicago, Tom Wood and Ethan Porter, found the study dazzling. “It really stood out as being among the most provocative possible claims” about the science of public opinion, Wood told me in a recent interview. He and Porter had been reviewing old research on how we’re more responsive to the facts that support our pre-existing points of view. The new paper took this idea a full step further. “It said that your factual ignorance could actually be compounded by exposure to factual information,” Wood says. The implications for democracy were calamitous.

By the time he and Porter had funding for their own study of this phenomenon, in 2015, the idea had grown in scope. Aside from all the media coverage, papers had by then been published showing that the facts could boomerang when Republicans were told that Obamacare’s “death panels” didn’t exist or that climate change could lead to more disease. And the original Nyhan-Reifler paper had become a “citation monster,” Wood says. “It’s four times as cited as any comparably aged paper from the same journal.”

He and Porter decided to do a blow-out survey of the topic. Instead of limiting their analysis to just a handful of issues—like Iraqi WMDs, the safety of vaccines, or the science of global warming—they tried to find backfire effects across 52 contentious issues. Their study would provide corrections of false statements from Hillary Clinton on the effects of gun violence, for instance, and from Donald Trump on the rate of crimes committed by undocumented immigrants. They also increased the sample size from the Nyhan-Reifler study more than thirtyfold, recruiting more than 10,000 subjects for their five experiments.

In spite of all this effort, and to the surprise of Wood and Porter, the massive replication came up with nothing. That’s not to say that their subjects were altogether free of motivated reasoning. The people in the study did give a bit more credence to corrections that fit with their beliefs; in those situations, the new information led them to update their positions more emphatically. But they never showed the effect that made the Nyhan-Reifler paper famous: People’s views did not appear to boomerang against the facts. Among the topics tested in the new research—including whether Saddam had been hiding WMDs—not one produced a backfire. “We were mugged by the evidence,” says Wood.

Meanwhile, Columbia University graduate students Andy Guess and Alex Coppock were chewing over a similar idea: If you tell people one thing, will they end up believing the opposite? Guess and Coppock had come across the 1979 study by Lord, Ross, and Lepper, which showed that adding facts to a discussion of the death penalty only curdles students’ disagreements. But when the grad students looked more closely at that old paper, they were appalled. “We realized it was not a properly randomized experiment,” says Guess.

“We thought it was BS,” says Coppock.

In 2014, the two of them updated the classic study using what they thought was better methodology. Where Lord, Ross, and Lepper tested 48 undergrads on their views about capital punishment, Guess and Coppock assessed that question with the help of 683 subjects recruited via the internet. For follow-up experiments, they tested how different kinds of evidence affected the views of another 1,170 subjects on the minimum wage, and 2,122 more on gun control. In none of these conditions did they find evidence that people grew more stubborn in their views when presented with disconfirming information.

Instead, the studies showed what Coppock calls “gorgeous parallel updating,” by which he means that people on either side of any issue will adjust their beliefs to better fit the facts. If boomerangs occur, he says, they’re the exception, not the rule. The backfire effect “is a truth-y hypothesis,” he told me. “It feels right, that arguing with idiots just makes them stupider.”

Guess also began to wonder about a third axiom of truthiness: Is it really the case that the internet divides us?

For all the influence of the echo chamber theory, Guess found there was not a lot of real-world data to support it. In 2015, he gained access to a potent data set from an online polling firm, which included three weeks’ worth of website tracking for almost 1,400 individuals, tagged with their demographic info and political affiliations. That meant Guess could test the echo chamber theory in the wild—and he found it didn’t hold. Other recent studies—one by Levi Boxell, Matthew Gentzkow, and Jesse M. Shapiro; another by Jacob L. Nelson and James G. Webster—have supported this result: News consumption on the internet does not appear to be as fractured as we thought.

It wasn’t that the standard work on “partisan exposure” had been wrong. Like-minded people do tend to congregate on social networks, said Guess, and they tend to gab about whatever suits their group. But this clumping up and screening out is not unique to online settings; it happens just as much when we get together in the offline world, watch TV, or scan headlines at the newsstand.

Nor are the basic facts about persuasion all that controversial. Yes, people do engage in motivated reasoning. Yes, it’s true that we prefer to cling to our beliefs. Yes, we do give extra credence to the facts we’ve heard repeated. But each of these ideas has also spawned a more extreme (and more disturbing) corollary—that facts can force the human mind to switch into reverse, that facts can drive us even further from the truth. It’s those latter theories, of boomerangs and backfires, that have grown in prominence in recent years, and it’s those latter theories that have lately had to be revised.

VI.

Even as new facts accumulate in the science of post-facts, the field will likely be slow to change its course. Norbert Schwarz, for one, has been a vocal critic of the replication movement in social psychology, comparing those who question old ideas to global warming denialists: “You can think of this as psychology’s version of the climate-change debate,” he told Nature in 2012, when doubts emerged about research into social priming. “The consensus of the vast majority of psychologists closely familiar with work in this area gets drowned out by claims of a few persistent priming skeptics.”

[Illustration: There were no WMDs in Iraq / There were WMDs in Iraq]

Skeptics of the boomerang effect have also run afoul of consensus thinking in their field. Guess and Coppock sent their study to the same journal that published the original Lord, Ross, and Lepper paper in 1979, and it was rejected. Then it was passed over four more times. “We’ve reframed it over and over,” Coppock says. “It’s never rejected on the evidence—they don’t dispute the data. It’s that they don’t believe the inference, that backlash doesn’t happen, is licensed from those data.” As a result, their work remains in purgatory, as a posted manuscript that hasn’t made its way to print. (Guess has only just submitted his paper re-examining the echo chamber theory; it’s now under review for the first time.)

Wood and Porter’s study also faced a wall of opposition during the peer review process; after two rejections, it was finally accepted by a journal just last week.

I asked Coppock: Might there be echo chambers in academia, where scholars keep themselves away from new ideas about the echo chamber? And what if presenting evidence against the backfire effect itself produced a sort of backfire? “I really do believe my finding,” Coppock said. “I think other people believe me, too.” But if his findings were correct, then wouldn’t all those peer reviewers have updated their beliefs in support of his conclusion? He paused for a moment. “In a way,” he said, “the best evidence against our paper is that it keeps getting rejected.”

While some colleagues have been reluctant to believe that backfire effects might be rare or nonexistent, there are some notable exceptions. Nyhan and Reifler, in particular, were open to the news that their original work on the subject had failed to replicate. They ended up working with Wood and Porter on a collaborative research project, which came out last summer, and again found no sign of backfire from correcting misinformation. (Wood describes them as “the heroes of this story.”) Meanwhile, Nyhan and Reifler have found some better evidence of the effect, or something like it, in other settings. And another pair of scholars, Brian Schaffner and Cameron Roche, showed something that looks a bit like backfire in a recent, very large study of how Republicans and Democrats responded to a promising monthly jobs report in 2012. But when Nyhan looks at all the evidence together, he concedes that both the prevalence and magnitude of backfire effects could have been overstated and that it will take careful work to figure out exactly when and how they come into play.

Nyhan has been a champion of the replication movement and of using better research methods. He’s written up the newer data on debunking, and the evidence against the echo chamber theory, for the New York Times. And he’s the one who pointed me to the work from Guess and Coppock, calling it “impressive and important.” In terms of reckoning with recent data, says Nyhan, “it would be ironic if I dug in my heels.”

Yet even if boomerangs turn out to be unusual, he says, there’s little cause for optimism. Facts are, at best, “sometimes mildly effective” at displacing grabby lies, and corrections clearly aren’t working “if the standard is getting rid of misperceptions in the world.”

Ullrich Ecker, the debunking expert who failed to reproduce Schwarz and Skurnik’s finding on the boomerang effect for facts and myths, agrees with Nyhan. “If there’s a strong motivation to hold on to a misconception, then often the corrections are ineffective. Whether or not they backfire, that’s up for debate,” he says. “But look, if it’s ineffective, that’s pretty much the same story as if there’s a small backfire effect.”

There’s a vast difference, though, between these two scenarios. In a world where fact-checking doesn’t work, we may get caught in knots of disagreement. In a world where facts can boomerang, those knots may tighten even as we try to pull away. One is frustrating to imagine. The other is horrifying.

Why, then, has the end-of-facts idea gained so much purchase in both academia and the public mind? It could be an example of what the World War II–era misinformation experts referred to as a “bogie” rumor—a false belief that gives expression to our deepest fears and offers some catharsis. It’s the kind of story that we tell one another even as we hope it isn’t true. Back then, there were bogie rumors that the Japanese had sunk America’s entire fleet of ships or that thousands of our soldiers’ bodies had washed ashore in France. Now, perhaps, we blurt out the bogie rumor that a rumor can’t be scotched—that debunking only makes things worse.

Or it could be that our declarations of a post-truth age are more akin to another form of rumor catalogued during the 1940s: the “pipe dream” tale. These are the stories—the Japanese are out of oil; Adolf Hitler is about to be deposed—we tell to make ourselves feel better. Today’s proclamations about the end of facts could reflect some wishful thinking, too. They let us off the hook for failing to arrive at common ground and say it’s not our fault when people think there really is a war on Christmas or a plague of voter fraud. In this twisted pipe-dream vision of democracy, we needn’t bother with the hard and heavy work of changing people’s minds, since disagreement is a product of our very nature or an unpleasant but irresolvable feature of our age.

It’s time we came together to reject this bogus story and the ersatz science of post-truth. If we can agree on anything it should be this: The end of facts is not a fact.