Click-Gap, paternalism, and tech giants’ relationships with their users

Abstract

The spread of misinformation and fake news raises important problems for our society and for our democracy. From the January 6 attack on the U.S. Capitol to vaccine hesitancy, from suppressing voter turnout to peddling conspiracy theories, we know that these problems are real and need to be taken seriously. While misinformation is not a new problem for democracy, it can spread more quickly and easily because of new media's design and popularity. Given these problems, it is encouraging that some technology companies are taking steps to reduce the spread of misinformation and fake news on the platforms they manage. Despite this seemingly positive development, some scholars have criticized certain interventions designed to combat the spread of misinformation and fake news as paternalistic. For example, a 2019 Facebook intervention called Click-Gap aimed to reduce the amount of low-quality content (including fake news and misinformation) that users see on their News Feed. Click-Gap has been criticized as an instance of epistemic paternalism because it was adopted (1) with the goal of improving the epistemic status of its users and (2) irrespective of what the company believed the wishes of its users to be. If interventions like Click-Gap are problematic because they are paternalistic, those of us interested in the ethics of technology would face a dilemma: either endorse technology companies treating their users paternalistically, or endorse their failing to act against the spread of misinformation and fake news on their platforms. Both options seem to me to be problematic. While paternalism may sometimes be permissible, I think we should be very hesitant to endorse a paternalistic relationship between technology companies and their users; the relationship does not seem to have the right sort of structure to be one in which paternalism might be appropriate, if it ever is. The second option seems, if anything, worse: surely technology companies should not stand by and change nothing about their platforms despite the spread of misinformation and fake news in those spaces. In this paper, I argue that Click-Gap and interventions like it are not paternalistic, contrary to the conclusion of other scholars. Further, I argue that the focus on paternalism is itself a red herring. While not just any intervention or strategy that purports to reduce fake news and misinformation is permissible, we should want technology companies to take user well-being seriously and to be able to take that well-being as a direct reason for action. Their doing so is neither paternalistic nor morally problematic, and it should not be criticized as such.