On algorithms and fake news


The biggest “algorithms in the news” story of the past couple of months has been whether Facebook, Twitter, and Google’s ad-targeting algorithms facilitated, however inadvertently, Russian interference in the 2016 United States Presidential election.  For those who have been living under a rock, hundreds of thousands of targeted advertisements containing links to fake political “news” stories were delivered to users of the three behemoths’ social media and web services.  Many of the ads were microtargeted, aimed at particular voters in particular geographic regions.

This story–which has been bubbling under the surface for months–came to the forefront this past week as executives from the three companies were hauled in front of a Congressional committee and grilled about whether they were responsible for (or, at the very least, whether they did enough to stop) the spread of Russian misinformation.  The Economist’s cover story this week is on “Social media’s threat to democracy,” complete with a cover image of a human hand wielding Facebook’s iconic “f” like a gun, smoke drifting off the end of the “barrel.”

This is a fascinating story precisely because it throws into such sharp relief the thorny ethical and policy considerations that arise in the age of algorithms.  Designing an algorithm requires making a set of implicit value judgments, even if developers rarely think of their work in those terms.

In the context of social media ads, the tough choices start at the highest level.  Should the algorithm be designed purely to deliver content that users will find most “relevant”–which, in the context of ads, means the ads they are most likely to click on?  If so, then the algorithm will be facilitating the creation of social media echo chambers.  It will analyze which ads and links a user clicked on in the past and deliver new ads based on that information, which by and large means ads similar to those the user previously clicked on.  That, in turn, homogenizes the content to which each user is exposed, and it helps atomize political discourse, leaving everyone in their own social and political bubbles, running into different perspectives less and less often as time goes by.
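To make that feedback loop concrete, here is a minimal Python sketch of a relevance-only ranker.  It is emphatically not how Facebook, Twitter, or Google actually score ads; it just assumes a toy setup in which each ad has a single topic and an ad’s predicted “relevance” is the share of the user’s past clicks on that topic.  Ranking on that score alone keeps serving more of whatever the user already clicked on.

    from collections import Counter

    # Toy ad inventory: each ad has an id and a single topic label.
    # (Purely hypothetical data for illustration.)
    ADS = [
        {"id": "a1", "topic": "immigration"},
        {"id": "a2", "topic": "gun control"},
        {"id": "a3", "topic": "immigration"},
        {"id": "a4", "topic": "local sports"},
    ]

    def relevance_score(ad, click_history):
        """Predicted 'relevance': the share of the user's past clicks on this ad's topic."""
        if not click_history:
            return 0.0
        clicks_by_topic = Counter(click_history)
        return clicks_by_topic[ad["topic"]] / len(click_history)

    def rank_ads(ads, click_history):
        """Rank purely by predicted relevance (the echo-chamber objective)."""
        return sorted(ads, key=lambda ad: relevance_score(ad, click_history), reverse=True)

    # A user whose click history is dominated by one topic...
    history = ["immigration", "immigration", "gun control"]
    for ad in rank_ads(ADS, history):
        print(ad["id"], ad["topic"], round(relevance_score(ad, history), 2))
    # ...is shown still more of that topic, and each new click pushes
    # the ranking further in the same direction.

Even this crude version exposes the design choice: a single objective (predicted clicks) is baked into the ranking, and any competing value, such as diversity of viewpoints or accuracy, would have to be added as an explicit counterweight.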

That’s bad enough on its own.  It’s made even worse, however, when the people purchasing ads are purposely attempting to deliver misinformation in an effort to interfere with the democratic process, and worse still when those people are agents of a hostile foreign power.  That happened on a massive scale during the most recent U.S. election.

Obviously, that’s not healthy for democracy, to say nothing of political discourse.  So should the algorithms instead be designed with a certain level of social responsibility?  Well, that presents problems of its own, because the line between social responsibility and censorship is tough to draw in the context of political ads.

Let’s say that we were to pass a law making it illegal for a social media service to deliver advertisements containing links to “misinformation.”  That sounds great in principle, but what counts as misinformation?  Do we count only stories that have been conclusively proven to be false?  That would get rid of some of the most obvious falsehoods, but would leave out many obviously false conspiracy theories, which are by their nature almost impossible to disprove (like the ones claiming that Hillary Clinton was running a child sex ring out of a DC pizza parlor).  It also might inadvertently sweep in links to stories intended to be satirical.
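To see how blunt a “conclusively proven false” standard would be in practice, here is a deliberately naive sketch in the same toy Python style, with made-up claim labels.  A filter that blocks only formally debunked stories lets unfalsifiable conspiracy theories sail through, and the moment the blocklist is padded defensively, satire starts getting swept in.

    # Hypothetical, deliberately naive filter: block an ad only if the claim
    # behind its link appears on a list of conclusively debunked stories.
    CONCLUSIVELY_DEBUNKED = {
        "pope-endorses-candidate",   # demonstrably, provably false
    }

    def allow_ad(linked_claim: str) -> bool:
        """Return True if the ad may run under the 'proven false only' rule."""
        return linked_claim not in CONCLUSIVELY_DEBUNKED

    # An unfalsifiable conspiracy theory has never been formally "disproven",
    # so it passes the filter: under-blocking.
    print(allow_ad("secret-ring-in-pizza-parlor"))   # True

    # If moderators pad the list to be safe, satire gets caught too: over-blocking.
    CONCLUSIVELY_DEBUNKED.add("satirical-headline")
    print(allow_ad("satirical-headline"))            # False

The hard part, of course, is not writing the filter; it is deciding what belongs on the list, which is precisely the definitional problem described above.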

In theory, the legal rules governing defamation could provide a reasonable and familiar set of standards.  But defamation is notoriously difficult to prove, particularly in the United States, where the protection of free expression in the First Amendment places limits on how far prohibitions on defamation can reach.  Similar obstacles would undoubtedly arise if courts were asked to enforce laws that would effectively require social media companies to police political advertisements and stories that purport to be “news.”

Constitutional practicalities aside, the key ethical issues we have to confront are: (1) whether we want the people who design and deploy algorithms deciding what information people see, and (2) if so, how they should decide when political free speech turns into propaganda, when propaganda turns into fake news, and when the falsity of an ad link becomes sufficiently obvious to warrant blocking it.

As with so many things in the world of law and emerging technologies, there are more questions than answers.  But with post-truth politics getting perilously close to becoming a permanent state of affairs, we need to start coming up with answers fast.  Nothing will make people lose faith in the benefits of A.I. faster than seeing algorithms ruin democracy.


Hat tip to my former law school classmate Diana Hickey van Houwelingen for suggesting I write a post on this topic.

One comment

  • Daniel Schiff

    Helpful overview, and telling of the kinds of issues we’ll face trying to moderate and regulate algorithms towards ‘social responsibility’ or even ‘neutrality.’ In the news space, this is exacerbated by the likelihood that malevolent users will game whatever moderation system we adopt.

    Policy is always a trade-off between Type I and Type II errors, but I think we have some breathing room here. Blocking blatantly disreputable sources and hostile government propaganda might be a start.

    Thanks for this piece.
    ~Daniel
