While discussing the federal budget late last month on “Fox News Sunday,” Treasury Secretary Steven Mnuchin said, “I think they should give the president a line-item veto.” Mnuchin was following up on a comment made two days earlier by President Trump as he reluctantly signed a $1.3 trillion omnibus spending bill. Vowing he’d never do it again, Trump said, “I’m calling on Congress to give me a line-item veto for all government spending bills.”
The president took no questions, but Mnuchin made his pitch in an interview with Fox News host Chris Wallace, who respectfully took issue with the Treasury secretary’s premise. “But that’s been ruled unconstitutional by the Supreme Court, sir.”
“Well, again,” Mnuchin replied, “Congress could pass a rule, okay, that allows them to do it.” Wallace held his ground. “No, no sir, it would [require] a constitutional amendment.”
This appeared to be a difference of opinion, but as the Washington Post revealed in a textbook example of quality fact-checking, it was really a difference of fact: Steve Mnuchin was mistaken. Two weeks earlier, Snopes, an online fact-checking site, was asked to arbitrate the assertion that President Obama had congratulated Vladimir Putin on his 2012 election victory -- something Trump was being roundly criticized for doing recently. The Snopes fact-check was another example of neutral and informed review. Obama had indeed congratulated Russia’s president. Snopes noted the different historical context at the time -- pointing out that the U.S.-Russia relationship was already being described as “contentious” -- while providing authoritative sourcing.
So it can be done. Yet it isn’t always. On March 1, a website called The Babylon Bee (tagline: “Your trusted source for Christian news satire”) published an obvious parody of the mainstream media headlined “CNN Purchases Industrial-Sized Washing Machine to Spin News Before Publication.” Taking no chances, the spoof was accompanied by a photograph of a large coin-operated washing machine. Yet Snopes somehow decided this story needed fact-checking to determine whether it was true, offering that “some readers missed” that it was satire and “interpreted it literally” to mean that CNN had actually purchased an oversize washing machine into which its reporters physically inserted printouts of stories.
It would be easy to dismiss this “fact check” as a waste of time, but not after the 2016 presidential campaign brought to the fore the concept known as “fake news” -- a term popularized by both Donald Trump and Facebook founder Mark Zuckerberg. The corrective produced its own set of conundrums. Seeking to bolster fact-checking, Facebook may be undermining it. Snopes’ conclusions are used by the social media site to flag “fake news.” This meant that once Snopes published its “fact check” on the CNN-washing machine send-up, anyone trying to share the original satire on Facebook was warned that it had been flagged as false. Meanwhile, The Babylon Bee was notified by Facebook that its piece “contains info disputed by (Snopes.com), an independent fact checker” and that if any of its future satirical pieces are flagged it “will see [its] distribution reduced” and its “ability to monetize and advertised [sic] removed.” If the original piece was meant to make people smile, Facebook’s threat was no laughing matter: The Babylon Bee reported an 80 percent drop in traffic. Facebook eventually unflagged the article, laying the blame on Snopes, but the episode raised profound questions: What is the basic purpose of fact-checking? How is it done properly? Can it ever be truly free of bias?
For example, is there merit to claims that fact checkers seem to overly focus on conservatives, especially the current president? After all, Americans’ obsession with “fake news,” “filter bubbles” and Facebook came about as a reaction to Trump’s election, so there is merit to asking whether the fact-checking renaissance is some kind of cathartic release. Among the six fact-checking organizations examined by RealClearPolitics, over one-third of the claims reviewed during the last five months mentioned “Trump” in the title or argument. The numbers varied significantly by organization. A full 65 percent of Washington Post fact checks -- and 62 percent of the New York Times’ -- dealt with Trump, compared to 23 percent of Politifact’s and 22 percent of those done by Snopes. It’s clear that some sites are more fixated on the president than others.
What about the argument that fact-checking too often veers into the territory of “opinion checking” -- investigating claims for which there is no definitive true or false? This is not a new concern and it, in turn, raises the question about bias among fact checkers. BuzzFeed’s Ben Smith addressed these questions in 2011 when he argued that “the new professional fact-checking class is … doing opinion journalism under pseudo-scientific banners, something that's really corrosive to actual journalism, which, if it’s any good, is about reported fact in the first place.” Smith’s concerns proved prescient, as this chart by the RealClearPolitics Fact Check Review illustrates.
As prominent journalistic outlets turn to fact checkers to arbitrate truth, to what degree are those fact checkers turning back to those same journalists as the source of that truth? Or taking at face value the statements of partisans? A few Second Amendment advocates asserted recently that the nationwide protest marches against guns were organized by the same progressive activists who rallied against Trump in the January 21, 2017, women’s marches. To examine this claim, Politifact interviewed march organizers, who denied any link. Nothing nefarious about that approach: It’s called basic reporting. But BuzzFeed revealed that one organizer of the West Coast women’s march also filled out the parade permit request for the big anti-gun march in Washington, D.C. This represents a higher level of fact-checking: going to original source material.
One might ask why it really matters what fact checkers do: Consumers can simply choose to ignore them if they don’t agree with their findings. The problem lies in how major platforms like Facebook are starting to outsource decisions about false and misleading news to these fact-checking organizations. This means, as The Babylon Bee learned, that the decision of a single fact checker can have very real consequences for the distribution of a given story or even the funding streams of its creator.
Yet no central database exists to track all the claims being investigated by the major fact-checking organizations. Nor is there yet any apparatus that puts them all in one place, evaluates the results, and codifies them. The result is that immense power to separate truth from fiction, “fake news” from real news, is being delegated without any real visibility into how these sites function: the processes they use to select which stories to fact-check, the investigative methods they employ, and the kinds of evidence they treat as authoritative verification.
That is the goal of RCP’s new Fact Check Review -- to codify this increasingly influential landscape into a single centralized database to allow quantitative and qualitative assessment of the fact-checking world.
Checking the Checkers
The RealClearPolitics Fact Check Review is an initiative to explore how the flagship fact-checking organizations operate in practice (as opposed to their self-reported descriptions), from their claim and verification sourcing to their topical focus to just what constitutes a “fact.” To answer these questions, we’ve created a centralized searchable database, updated weekly, that codifies key characteristics of all fact checks bearing on issues of civic and public concern published by six highly prominent fact-checking organizations: Factcheck.org, New York Times, Politifact, Snopes, Washington Post, and Weekly Standard. These fact checkers were selected due to their influence in the fact-checking landscape and the reliance of major internet platforms like Facebook on their decisions.
Each week we’ll review every fact check published the previous week on those six sites and compile a list of those that bear on civic and public concern, given their importance to the functioning of democracy. In practice, we define “civic and public concern” as any topic that relates to politics or American civil life. Issues involving past or present public officeholders or topics of national or international discussion are included under this heading, while run-of-the-mill urban legends are not. A fact check about how many votes a particular U.S. senator has missed would be included, but a fact check about reports of UFOs landing in Area 51 this week would not.
A single fact check can investigate multiple distinct claims (sometimes a dozen or more), each of which may have a different verdict, verification sources and other characteristics. Thus, instead of building our database at the level of the fact check, which would lump all of this together, we break each fact check into the set of claims individually investigated by the fact checker. Here’s an example from June 2017, when Politifact examined a statement made by Ivanka Trump about the so-called “skills gap” in the U.S. labor market by splitting it into two core claims it examined separately: the first was that there are 6 million unfilled jobs in the United States; the second was that the positions remained unfilled due in part to the skills gap.
For each claim, we record a variety of characteristics, including the fact checker’s summary of the claim, where it originally came from, the verdict, the sources the fact checker used to determine that verdict, whether it was a fact or an opinion being evaluated, and whether the fact checker stated that the verdict was based on a lack of information or on a belief that the claim, while technically true, might be misleading to some readers.
The full list of fields is available in our methodology section. For the original source of the claim and all sources used to verify or refute it, we further categorize them as Business (a commercial enterprise), Campaign (political campaign), Fact Checker (when citing a previous fact check), Government (governmental source of any kind), Media (a news article), Nonprofit (any nonprofit entity), Reference (a dedicated general reference source), Think Tank (any self-identified think tank), University (institution of higher education) or Other for sources that don’t fit into the above categories, such as when a source is referred to merely as “an expert” without additional information.
We evaluate each claim to determine whether it is a statement of fact that can be definitively proven true or false, as opposed to rhetorical hyperbole or outright opinion that falls outside the true purview of fact-checking. For example, Trump’s statement that Obama “did nothing” about Russian election interference might reasonably be interpreted by many readers as stating that Obama “did not do enough” rather than as a literal statement that Obama did not even speak a word about the interference to anyone in his administration. As Politifact itself noted (before assigning the statement a rating of “Mostly False”), “people can argue that the Obama administration didn’t do enough to immediately crack down on Russia for meddling in the election.”
We also record whether the fact checker specifically noted that the verdict was based on a lack of evidence rather than on the existence of evidence that bolstered, or disproved, the assertion. We also plan to note cases in which the fact checker slams a statement, even while acknowledging that it is technically true. For example, the Washington Post assigned four “Pinocchios” to a series of claims by Trump regarding “sanctuary cities” on the grounds that the president “neglects to provide crucial context.”
This example arose from the president’s March 10 radio address, in which he made numerous statements about Democratic officials, most of them in California, who have erected barriers to local law enforcement cooperating with federal Immigration and Customs Enforcement officers. The Post focused on two statements by the president, including this one: “Last week, the mayor of Oakland warned criminal aliens of a coming ICE enforcement action -- giving them time to scatter and hide from authorities. The mayor’s conduct directly threatened the safety of federal immigration officers and the law-abiding Americans in her community.”
The Post acknowledged that the mayor had done this. So why the four Pinocchios, a category the paper defines as “whoppers”? The fact checker asserted that Trump had omitted mitigating information, such as the detail that crime fell 1 percent in Oakland in 2017 over the previous year. But if the point of a fact check is to confirm or refute the factual details of a claim, assigning a claim four Pinocchios while affirming it to be factually correct would seem to reinforce concerns about subjectivity.
Collating & Classifying Conclusions
Using the interactive search feature of RealClearPolitics’ Fact Check Review, anyone can explore the data, such as narrowing down to just those claims mentioning a particular keyword or those sourced from a particular website. We knew from the start that just tossing a set of spreadsheets up on a file server would make it too difficult for most users to explore this dataset of more than 1,000 claims (and growing weekly), so our interactive dashboard allows you to investigate your own questions about the fact-checking landscape.
Despite our best efforts, no system is infallible and some of the determinations we make about claims are themselves subjective. Readers will find ratings with which they disagree. Our goal here is to generate informed scrutiny about the state of fact checking today and provide the tools to improve it.
As fact checkers increasingly act as the online world’s ultimate arbiters of truth -- with the power to directly influence the visibility of a given claim on major platforms such as Facebook -- we hope this resource will prove a valuable contribution to understanding the myriad decisions that go into deciding what is truth and what is destined to be banished to the digital memory hole.
The purpose of the RealClearPolitics Fact Check Review is to collate and classify the conclusions made by the major players in this marketplace. Although judging the accuracy or fair-mindedness of those calls is beyond our scope, we believe we have built a tool that can show trends while providing users the information they need to evaluate the nation’s most prominent fact-checking operations. We do plan each week to highlight trends we spot and to comment on instructive examples that arise.
The Review is intended to be a critical, self-reflective resource that provides quantitative assessments of the state of fact-checking today -- and one that will give the public, journalists, scholars and fact checkers themselves a better understanding of just how “truth” is decided.
RealClear Media Fellow Kalev Leetaru is a senior fellow at the George Washington University Center for Cyber & Homeland Security. His past roles include fellow in residence at Georgetown University’s Edmund A. Walsh School of Foreign Service and member of the World Economic Forum’s Global Agenda Council on the Future of Government.