
    The Market for User Data

    Policymakers today are far more alert than ever before to the myriad ways in which tech companies collect consumers’ data and share it with third-party data brokers and advertisers. We can attribute this new awareness to at least two major news stories from the past six or so years. The first came in 2013, when Edward Snowden, the former National Security Agency contractor, leaked highly classified materials that revealed the ways in which United States national security officials, with the indispensable cooperation of U.S. telecommunications companies, systematically monitored the telephone conversations and electronic communications of U.S. citizens and foreign nationals. The story triggered a series of rebukes from civil rights groups, consumer advocates, and foreign leaders around the world. It is not clear whether, or to what extent, the NSA or other government agencies have terminated those programs since Snowden’s revelation. The second came in early 2018, when another whistleblower revealed to journalists that researchers whom Facebook had allowed to collect and study tens of millions of users’ personal data had, in turn, shared those troves of personal data with Cambridge Analytica, a political consultancy firm. Cambridge Analytica promoted its access to this data to peddle “psychographic targeting” to political campaigns, including that of Donald Trump in 2016. This more recent revelation has exposed Facebook to what will likely be the largest fine ever imposed by the Federal Trade Commission (“FTC”).

    Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain, and the type of user seeking an explanation. However, “subject-centric explanations” (SCEs) focussing on particular regions of a model around a query show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations), in dodging developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure (“right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
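
    The contrast drawn here between decompositional explanations (taking the model apart) and pedagogical, subject-centric ones (learning a simple model of its behaviour from the outside, around a particular query) can be pictured with a minimal sketch. Everything below, including the stand-in black-box model, the perturbation scheme, and the surrogate, is an illustrative assumption rather than the authors’ method.

```python
# A minimal sketch of a pedagogical, subject-centric explanation: we never open
# the black box; we query it around one data subject's record and fit a simple
# weighted linear surrogate whose coefficients serve as the local explanation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in for an opaque decision system (purely illustrative).
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def explain_locally(model, subject, n_samples=1000, scale=0.5):
    """Fit a local linear surrogate around one subject's record."""
    # Perturb the subject's features and observe the model only from the outside.
    perturbed = subject + rng.normal(scale=scale, size=(n_samples, subject.shape[0]))
    scores = model.predict_proba(perturbed)[:, 1]
    # Weight samples by proximity, so the explanation stays subject-centric.
    weights = np.exp(-np.linalg.norm(perturbed - subject, axis=1) ** 2)
    surrogate = Ridge(alpha=1.0).fit(perturbed, scores, sample_weight=weights)
    return surrogate.coef_  # per-feature influence near this subject

subject = X_train[0]
print("local feature influences:", explain_locally(black_box, subject))
```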

    Computational Methods for Historical Research on Wikipedia’s Archives

    This paper presents a novel study of geographic information implicit in the English Wikipedia archive. This project demonstrates a method to extract data from the archive with data mining, map the global distribution of Wikipedia editors through geocoding in GIS, and conduct a spatial analysis of Wikipedia use in metropolitan cities.
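
    As a rough illustration of the pipeline described above (mine place references from the archive, geocode them, and aggregate edit activity by city for spatial analysis), the sketch below uses the third-party geopy library; the place names and counts are hypothetical stand-ins, not the paper’s actual data.

```python
# A minimal sketch, assuming place names have already been mined from the
# archive: geocode each city and aggregate edit counts for spatial analysis.
from collections import Counter
from geopy.geocoders import Nominatim  # third-party geocoder: pip install geopy

# Hypothetical output of the data-mining step: one place name per observed edit.
edit_locations = ["London", "Tokyo", "London", "New York", "Tokyo", "Tokyo"]
edits_per_city = Counter(edit_locations)

geolocator = Nominatim(user_agent="wikipedia-archive-study")
for city, n_edits in edits_per_city.items():
    loc = geolocator.geocode(city)  # network call to OpenStreetMap's Nominatim
    if loc is not None:
        print(f"{city}: {n_edits} edits at ({loc.latitude:.2f}, {loc.longitude:.2f})")
```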

    Consumer Subject Review Boards: A Thought Experiment

    The adequacy of consumer privacy law in America is a constant topic of debate. The majority position is that United States privacy law is a “patchwork,” that the dominant model of notice and choice has broken down, and that decades of self-regulation have left the fox in charge of the henhouse. A minority position chronicles the sometimes surprising efficacy of our current legal infrastructure. But the challenges posed by big data to consumer protection feel different. They seem to gesture beyond privacy’s foundations or buzzwords, beyond “fair information practice principles” or “privacy by design.” The challenges of big data may take us outside of privacy altogether into a more basic discussion of the ethics of information. The good news is that the scientific community has been heading down this road for thirty years. I explore a version of their approach here. Part I discusses why corporations study consumers so closely, and what harm may come of the resulting asymmetry of information and control. Part II explores how established ethical principles governing biomedical and behavioral science might interact with consumer privacy.

    The Case for Establishing a Collective Perspective to Address the Harms of Platform Personalization

    Personalization on digital platforms drives a broad range of harms, including misinformation, manipulation, social polarization, subversion of autonomy, and discrimination. In recent years, policy makers, civil society advocates, and researchers have proposed a wide range of interventions to address these challenges. This Article argues that the emerging toolkit reflects an individualistic view of both personal data and data-driven harms that will likely be inadequate to address growing harms in the global data ecosystem. It maintains that interventions must be grounded in an understanding of the fundamentally collective nature of data, wherein platforms leverage complex patterns of behaviors and characteristics observed across a large population to draw inferences and make predictions about individuals. Using the lens of the collective nature of data, this Article evaluates various approaches to addressing personalization-driven harms under current consideration. It also frames concrete guidance for future legislation in this space and for meaningful transparency that goes far beyond current transparency proposals. It offers a roadmap for what meaningful transparency must constitute: a collective perspective providing a third party with ongoing insight into the information gathered and observed about individuals and how it correlates with any personalized content they receive across a large, representative population. These insights would enable the third party to understand, identify, quantify, and address cases of personalization-driven harms. This Article discusses how such transparency can be achieved without sacrificing privacy and provides guidelines for legislation to support the development of such transparency.
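
    One way to picture the collective, population-level transparency described above is a simple third-party audit: given observations of which content was served to a representative panel, check whether exposure to a content category correlates with a protected attribute. The sketch below is illustrative only; the field names, records, and the "payday_loan" category are invented assumptions.

```python
# A minimal sketch of a population-level audit: measure how exposure to a
# potentially harmful content category differs across groups in a panel.
import pandas as pd

# Hypothetical panel data a third-party auditor might hold.
panel = pd.DataFrame({
    "user_id":   [1, 2, 3, 4, 5, 6],
    "age_group": ["18-25", "18-25", "18-25", "65+", "65+", "65+"],
    "shown_ad":  ["job_listing", "job_listing", "payday_loan",
                  "payday_loan", "payday_loan", "payday_loan"],
})

# Exposure rate to the harmful category, broken down by group; large
# disparities would flag a candidate personalization-driven harm.
exposure = (
    panel.assign(harmful=panel["shown_ad"].eq("payday_loan"))
         .groupby("age_group")["harmful"]
         .mean()
)
print(exposure)
```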

    Privacy, Vulnerability, and Affordance

    This essay begins to unpack the complex, sometimes contradictory relationship between privacy and vulnerability. I begin by exploring how the law conceives of vulnerability — essentially, as a binary status meriting special consideration where present. Recent literature recognizes vulnerability not as a status but as a state — a dynamic and manipulable condition that everyone experiences to different degrees and at different times. I then discuss various ways in which vulnerability and privacy intersect. I introduce an analytic distinction between vulnerability rendering, i.e., making a person more vulnerable, and the exploitation of vulnerability, whether manufactured or native. I also describe the relationship between privacy and vulnerability as a vicious or virtuous circle. The more vulnerable a person is — due to poverty, for instance — the less privacy they tend to enjoy; meanwhile, a lack of privacy opens the door to greater vulnerability and exploitation. Privacy can protect against vulnerability but it can also be invoked to engender it. I next describe how privacy supports the creation and exploitation of vulnerability in ways literal, rhetorical, and conceptual. An abuser may literally use privacy to hide his abuse from law enforcement. A legislature or group may invoke privacy rhetorically to justify discrimination, for instance, against transgender individuals who wish to use the bathroom consistent with their gender identity. And courts obscure vulnerability conceptually when they decide a case on the basis of privacy instead of the value that is more centrally at stake. Finally, building on previous work, I offer James Gibson’s theory of affordances as a theoretical lens by which to analyze the complex relationship that privacy mediates. Privacy understood as an affordance permits a more nuanced understanding of privacy and vulnerability and could perhaps lead to wiser privacy law and policy.

    Code and Prejudice: Regulating Discriminatory Algorithms

    In an era dominated by efficiency-driven technology, algorithms have seamlessly integrated into every facet of daily life, wielding significant influence over decisions that impact individuals and society at large. Algorithms are deliberately portrayed as impartial and automated in order to maintain their legitimacy. However, this illusion crumbles under scrutiny, revealing the inherent biases and discriminatory tendencies embedded in ostensibly unbiased algorithms. This Note delves into the pervasive issues of discriminatory algorithms, focusing on three key areas of life opportunities: housing, employment, and voting rights. It systematically addresses the multifaceted issues arising from discriminatory algorithms, showcasing real-world instances of algorithmic abuse, and proposing comprehensive solutions to enhance transparency and promote fairness and justice.

    Principles of Risk Assessment: Sentencing and Policing

    Risk assessment — measuring an individual’s potential for offending — has long been an important aspect of criminal justice, especially in connection with sentencing, pretrial detention and police decision-making. To aid in the risk assessment inquiry, a number of states have recently begun relying on statistically derived algorithms called “risk assessment instruments” (RAIs). RAIs are generally thought to be more accurate than the type of seat-of-the-pants risk assessment in which judges, parole boards and police officers have traditionally engaged. But RAIs bring with them their own set of controversies. In recognition of these concerns, this brief paper proposes three principles — the fit principle, the validity principle, and the fairness principle — that should govern risk assessment in criminal cases. After providing examples of RAIs, it elaborates on how the principles would affect their use in sentencing and policing. While space constraints preclude an analysis of pretrial detention, the discussion should make evident how the principles would work in that setting as well.
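
    For readers unfamiliar with RAIs, the sketch below shows in purely illustrative terms what a statistically derived instrument can look like: a model fit on historical records that outputs a risk score for a new case. The features, data, and model are invented assumptions, not any instrument discussed in the paper.

```python
# A toy "risk assessment instrument": a logistic model fit on hypothetical
# historical records that outputs a reoffense-risk score for a new case.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Invented historical records: columns are [age, prior_arrests]; the label
# marks whether the person reoffended within some follow-up window.
X = np.column_stack([rng.integers(18, 70, size=200),
                     rng.integers(0, 10, size=200)])
y = (X[:, 1] + rng.normal(size=200) > 4).astype(int)

rai = LogisticRegression().fit(X, y)

# The paper's validity principle asks whether scores like this are accurate and
# well calibrated for the population and decision in which they are deployed.
defendant = np.array([[25, 3]])
print("estimated risk score:", rai.predict_proba(defendant)[0, 1])
```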