Memo to the New York Times: Definitions of ‘Fake News’ Are Subjective

Originally Published in National Review

In 2007, the New York Times editorial board railed against Verizon for preventing an abortion-rights group from sending mass text messages. It warned against “the potential threat to free speech . . . as communications migrate from old-fashioned telephone lines, TV broadcasts and printing presses to digital networks controlled by unregulated private companies,” noting that “if newspapers were delivered over mobile phones, a company could simply cut them off because it did not like a particular article.” Though there was no government censorship, “our democracy is built on basic freedoms not being left to . . . individual companies.”

Sixty-two percent of Americans receive news from social media, and two-thirds of those receive it from Facebook. Yet now the Times demands that powerful digital networks cut off articles it does not like. In “Facebook and the Digital Virus Called Fake News,” the editors argue that “blocking misinformation will help protect the company’s brand and credibility” and, as a warning, cite the supposed financial consequences of Twitter’s failure to eliminate “hate speech.” Many traditional media outlets and prominent Democratic politicians, including Hillary Clinton and Barack Obama, joined the crusade against the supposed scourge of fake news on Facebook. Last week, the company partially acquiesced. Unfortunately, this will lead to more political tug-of-war over the platform, and it raises serious legal issues.

Facebook users can now flag articles as fake. Then certified “fact-checkers” will evaluate them. Facebook will add a “disputed by 3rd party Fact-Checkers” disclaimer to posts deemed false. Conservatives have long accused these fact-checkers of liberal bias, and their new powers exacerbate these concerns.

Fake news is just the latest political spat involving social media. Former Facebook contractors claimed they suppressed conservative articles from the trending list, and Twitter faces irreconcilable accusations of both political censorship and facilitating hate speech. Ultimately, though, these debates revolve around whether social media are neutral platforms.

Mark Zuckerberg’s answer to this question is evolving. In October, he insisted that Facebook was “a tech company, not a media company.” However, on Wednesday, he described the platform as neither a traditional media nor technology company, adding, “We feel responsible for how it’s used. We don’t write the news that people read on the platform. But at the same time we also know that we do a lot more than just distribute news.”

This view is at odds with that of Senator John Thune, who has argued that "any attempt by a neutral and inclusive social media platform to censor or manipulate political discussion is an abuse of trust and inconsistent with the values of an open Internet." In contrast, Vox's Timothy Lee has claimed that Facebook was never neutral because it makes "editorial" decisions, prioritizing trending stories based on "how many times is something shared," and "people like to share inflammatory stories." He suggested that Facebook place "reputable outlets" atop trending news. Similarly, New York Times CEO Mark Thompson referred to Facebook's "editorial choices and rankings" and complained that "popularity drives virtually everything" on the platform.

Lee's and Thompson's arguments strain credulity by redefining "editorial." Determining what counts as a reputable outlet or fake news requires subjective judgments on controversial issues. Popularity, by contrast, is a neutral criterion. Even if that metric benefits inflammatory stories, the platform made no editorial decision to promote them.

Subjectively editing content risks forfeiting important legal protections for social media. Section 230 of the Communications Decency Act gives "interactive computer services" such as Facebook immunity for user-created content. In contrast, publishers are generally liable for their reporters' and, in some cases, even their advertisers' content. Section 230's stated purpose is to "encourage the development of technologies which maximize user control over what information is received" so that platforms can "offer a forum for a true diversity of political discourse." Suppressing perceived hate speech or fake news on these platforms frustrates those goals.

This said, services do not lose their immunity for removing objectionable content, so Facebook can censor posts without consequences under Section 230. However, a platform loses its immunity if it becomes “more than a passive transmitter of information provided by others.” Last month, the Seventh Circuit Court of Appeals held that a website may have crossed that line if it “‘edited,’ ‘shaped,’ and ‘choreographed’ the content . . . it received” or “‘selected’ for publication every comment that appeared.” The court did not significantly elaborate on how to apply this standard. However, it demonstrates how making subjective choices to edit (such as appending a fake-news disclaimer) or prioritize (like manipulating trending news) user-created content could threaten a company’s immunity.

Facebook’s attorneys will likely thread the needle to maintain its immunity while fact-checking its users. However, the New York Times will keep finding new national crises of unacceptable speech for social-media companies to quell. Americans will never agree on the definition of “hate speech,” “fake news,” or the latest manufactured outrage. So long as the platforms are held responsible for their users’ content, they will remain embroiled in an inexorable political struggle.
