    Is the Attention Economy Noxious?

    A growing amount of media is paid for by its consumers through their very consumption of it. This new media is highly interactive and requires some form of computing for its operation (Logan, 2010). Examples include the services offered by Facebook, Instagram, Snapchat, and YouTube. As these examples suggest, much new media is funded primarily through advertising, which has been optimized using Big Data.

    Epistemic Paternalism Online

    New media (highly interactive digital technology for creating, sharing, and consuming information) affords users a great deal of control over their informational diets. As a result, many users of new media unwittingly encapsulate themselves in epistemic bubbles: epistemic structures, such as highly personalized news feeds, that leave out relevant sources of information (Nguyen forthcoming). Epistemically paternalistic alterations could be made to new media technologies to pop at least some of these bubbles. We examine one such alteration that Facebook has made in an effort to fight fake news and conclude that it is morally permissible. We further argue that many epistemically paternalistic policies can (and should) be a perennial part of the internet information environment.

    Algorithms and Autonomy

    Algorithms influence every facet of modern life. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, fairness, and democracy. This book examines these concerns by connecting them to the human value of autonomy. This title is also available as Open Access on Cambridge Core.

    What We Informationally Owe Each Other

    One important criticism of algorithmic systems is that they lack transparency. Such systems can be opaque because they are complex, protected by patent or trade secret, or deliberately obscure. In the EU, there is a debate about whether the General Data Protection Regulation (GDPR) contains a “right to explanation,” and, if so, what such a right entails. Our task in this chapter is to address this informational component of algorithmic systems. We argue that information access is integral to respecting autonomy, and that transparency policies should be tailored to advance autonomy. To make this argument, we distinguish two facets of agency (i.e., the capacity to act). The first is practical agency, the ability to act effectively according to one’s values. The second is what we call cognitive agency, the ability to exercise what Pamela Hieronymi calls “evaluative control” (i.e., the ability to control our affective states, such as beliefs, desires, and attitudes). We argue that respecting autonomy requires providing persons with sufficient information to exercise evaluative control and to properly interpret the world and their place in it. We draw this distinction out by considering algorithmic systems used in background checks, and we apply the view to key cases involving risk assessment in criminal justice decisions and K-12 teacher evaluation. An open access version of the chapter is available.

    Just Machines

    A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring or decision-making systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups defined by protected attributes). Central to this discussion are impossibility results, owed to Kleinberg et al. (2016), Chouldechova (2017), and Corbett-Davies et al. (2017), which show that classification parity and calibration are often incompatible. This paper argues that classification parity, calibration, and a newer measure called counterfactual fairness are unsatisfactory measures of fairness, offers a general diagnosis of their failure, and sketches an alternative approach to understanding fairness in machine learning.
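
    A minimal formal gloss of the two criteria at issue (the notation, and the reading of classification parity as error-rate balance, are illustrative assumptions rather than the paper's own):

        % Illustrative sketch; notation and the error-rate reading are assumed.
        % Classification parity, read as error-rate balance across groups:
        \Pr(\hat{Y} = 1 \mid Y = 0, A = a) = \Pr(\hat{Y} = 1 \mid Y = 0, A = a')
            \quad \text{(equal false positive rates)}
        \Pr(\hat{Y} = 0 \mid Y = 1, A = a) = \Pr(\hat{Y} = 0 \mid Y = 1, A = a')
            \quad \text{(equal false negative rates)}
        % Calibration: a score of s carries the same meaning in every group:
        \Pr(Y = 1 \mid S = s, A = a) = s
            \quad \text{for all scores } s \text{ and groups } a.
        % Per Kleinberg et al. (2016) and Chouldechova (2017): unless base rates
        % are equal across groups or prediction is perfect, no system satisfies
        % both conditions at once.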

    Is there a Duty to Be a Digital Minimalist?

    The harms associated with wireless mobile devices (e.g., smartphones) are well documented. They have been linked to anxiety, depression, diminished attention span, sleep disturbance, and decreased relationship satisfaction. Perhaps what is most worrying from a moral perspective, however, is the effect these devices can have on our autonomy. In this article, we argue that there is an obligation to foster and safeguard autonomy in ourselves, and we suggest that wireless mobile devices pose a serious threat to our capacity to fulfill this obligation. We defend the existence of an imperfect duty to be a ‘digital minimalist’: that is, a moral obligation to be intentional about how, and to what extent, we use these devices. The empirical findings already give us prudential reasons in favor of digital minimalism, but the moral duty is distinct from, and independent of, such prudential considerations.

    On the Duty to Be an Attention Ecologist

    Kantian Ethics and the Attention Economy

    In this open access book, Timothy Aylsworth and Clinton Castro draw on the deep well of Kantian ethics to argue that we have moral duties, both to ourselves and to others, to protect our autonomy from the threat posed by the problematic use of technology. Problematic use of technologies like smartphones threatens our autonomy in a variety of ways, and critics have only begun to appreciate the vast scope of the problem. The last decade has seen a flurry of books making “self-help” arguments about how we could live happier, more fulfilling lives if we were less addicted to our phones. But none of these authors treats the issue as one involving a moral duty to protect our autonomy.

    The Fair Chances in Algorithmic Fairness: A Response to Holm

    Holm (2022) argues that a class of algorithmic fairness measures, which he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because, in the relevant context, the Fairness Principle requires equalizing actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), whereas the performance parity criteria do not guarantee any such thing: the measures merely ensure that certain population-level ratios hold.
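
    A toy illustration of this gap (the example and its numbers are hypothetical, not drawn from the paper): a deterministic classifier can equalize a population-level ratio, such as accuracy, across groups even though each individual's chance of an accurate prediction is simply 0 or 1.

        # Hypothetical sketch: a population-level ratio (accuracy) is equalized
        # across groups, yet individual-level chances are not.
        # Each tuple is (group, true_label, predicted_label).
        people = [
            ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),  # group A
            ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 1),  # group B
        ]

        def accuracy(group):
            hits = [y == y_hat for g, y, y_hat in people if g == group]
            return sum(hits) / len(hits)

        # The population-level ratio holds equally in both groups ...
        assert accuracy("A") == accuracy("B") == 0.75

        # ... but the classifier is deterministic, so each individual's chance
        # of an accurate prediction is 1 or 0, never the group-level 0.75.
        for g, y, y_hat in people:
            print(g, "chance of an accurate prediction:", int(y == y_hat))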