    Averting Robot Eyes

    Home robots will cause privacy harms. At the same time, they can provide beneficial services—as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.

    The Hunt for Privacy Harms After Spokeo

    In recent years, due both to hacks that have leaked the personal information of hundreds of millions of people and to concerns about government surveillance, Americans have become more aware of the harms that can accompany the widespread collection of personal data. However, the law has not yet fully developed to recognize the concrete privacy harms that can result from what otherwise seems like ordinary economic activity involving the widespread aggregation and compilation of data. This Note examines cases in which lower federal courts have applied the Supreme Court’s directions for testing the concreteness of alleged intangible privacy injuries, and in particular how that inquiry has affected plaintiffs’ suits under statutes that implicate privacy concerns. This Note proposes that, in probing the concreteness of these alleged privacy harms, the courts, through the doctrine of standing, are engaging in work that could serve to revitalize the judiciary’s long-dormant analysis of the nature of privacy harms. It suggests that courts should look beyond the four traditional privacy torts to find standing for plaintiffs who bring claims against entities that collect and misuse personal information. This Note urges courts to make use of a nexus approach to identify overlapping privacy concerns sufficient for standing, which would allow the federal judiciary to more adequately address emerging privacy harms.

    Privacy Harms

    Privacy harms have become one of the largest impediments in privacy law enforcement. In most tort and contract cases, plaintiffs must establish that they have been harmed. Even when legislation does not require it, courts have taken it upon themselves to add a harm element. Harm is also a requirement to establish standing in federal court. In Spokeo v. Robins, the U.S. Supreme Court held that courts can override Congress’s judgments about what harm should be cognizable and dismiss cases brought for privacy statute violations. The caselaw is an inconsistent, incoherent jumble, with no guiding principles. Countless privacy violations are not remedied or addressed on the grounds that there has been no cognizable harm. Courts conclude that many privacy violations, such as thwarted expectations, improper uses of data, and the wrongful transfer of data to other organizations, lack cognizable harm. Courts struggle with privacy harms because they often involve future uses of personal data that vary widely. When privacy violations do result in negative consequences, the effects are often small – frustration, aggravation, and inconvenience – and dispersed among a large number of people. When these minor harms are done at a vast scale by a large number of actors, they aggregate into more significant harms to people and society. But these harms do not fit well with existing judicial understandings of harm. This article makes two central contributions. The first is the construction of a road map for courts to understand harm so that privacy violations can be tackled and remedied in a meaningful way. Privacy harms consist of various different types, which to date have been recognized by courts in inconsistent ways. We set forth a typology of privacy harms that elucidates why certain types of privacy harms should be recognized as cognizable. The second contribution is providing an approach to when privacy harm should be required. In many cases, harm should not be required because it is irrelevant to the purpose of the lawsuit. Currently, much privacy litigation suffers from a misalignment of law enforcement goals and remedies. For example, existing methods of litigating privacy cases, such as class actions, often enrich lawyers but fail to achieve meaningful deterrence. Because the personal data of tens of millions of people could be involved, even small actual damages could put companies out of business without providing much of value to each individual. We contend that the law should be guided by the essential question: When and how should privacy regulation be enforced? We offer an approach that aligns enforcement goals with appropriate remedies.

    Privacy, Visibility, Transparency, and Exposure

    This essay considers the relationship between privacy and visibility in the networked information age. Visibility is an important determinant of harm to privacy, but a persistent tendency to conceptualize privacy harms and expectations in terms of visibility has created two problems. First, focusing on visibility diminishes the salience and obscures the operation of nonvisual mechanisms designed to render individual identity, behavior, and preferences transparent to third parties. The metaphoric mapping to visibility suggests that surveillance is simply passive observation, rather than the active production of categories, narratives, and norms. Second, even a broader conception of privacy harms as a function of informational transparency is incomplete. Privacy has a spatial dimension as well as an informational dimension. The spatial dimension of the privacy interest, which the author characterizes as an interest in avoiding or selectively limiting exposure, concerns the structure of experienced space. It is not negated by the fact that people in public spaces expect to be visible to others present in those spaces, and it encompasses both the arrangement of physical spaces and the design of networked communications technologies. U.S. privacy law and theory currently do not recognize this interest at all. This essay argues that they should.

    Bounded-Leakage Differential Privacy

    We introduce and study a relaxation of differential privacy [Dwork et al., 2006] that accounts for mechanisms that leak some additional, bounded information about the database. We apply this notion to reason about two distinct settings where the notion of differential privacy is of limited use. First, we consider cases, such as in the 2020 US Census [Abowd, 2018], in which some information about the database is released exactly or with small noise. Second, we consider the accumulation of privacy harms for an individual across studies that may not even include the data of this individual. The tools that we develop for bounded-leakage differential privacy allow us to reason about privacy loss in these settings, and to show that individuals preserve some meaningful protections.
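
    For context, the baseline notion this work relaxes is the standard differential privacy guarantee of Dwork et al. (2006); a minimal statement of that guarantee (not the paper's bounded-leakage definition, which additionally accounts for a bounded auxiliary leak about the database) is:

    \[
    \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] \quad \text{for all neighboring databases } D, D' \text{ and all } S \subseteq \mathrm{Range}(M).
    \]

    Per the abstract, the bounded-leakage relaxation keeps a guarantee of this flavor while allowing the mechanism to additionally release some bounded information about the database exactly or with small noise.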

    Factors Influencing Perceived Legitimacy of Social Scoring Systems: Subjective Privacy Harms and the Moderating Role of Transparency

    Technological advancements in recent years have enabled the spread of automated decision-making (ADM) systems. Social scoring systems are a specific type of ADM system, using behavioral scores to encourage pro-social behaviors. Building on a survey following an experimental study, we present two structural equation models to determine the impacts of different levels of transparency on the perceived legitimacy of scoring systems, as well as on people’s intention to comply with the system. The models are built on well-established theories highlighting procedural justice and outcome favorability as key determining factors. Our results suggest that the determinants of perceived legitimacy are strongly shaped by the level of transparency. However, transparency elevates subjective privacy harms. Our findings add to the ongoing debate on the transparency of ADM systems by identifying a trade-off between eliminating the influence of outcome favorability on perceptions of legitimacy and increasing subjective privacy harms, which weaken people’s intention to comply.