    Privacy For Whom? A Multi-Stakeholder Exploration of Privacy Designs

    Privacy is considered one of the fundamental human rights. Researchers have investigated privacy issues across domains such as physical privacy, data privacy, privacy as a legal right, and privacy designs. In the Human-Computer Interaction field, privacy researchers have focused on understanding people’s privacy concerns when they interact with computing systems, designing and building privacy-enhancing technologies to help people mitigate those concerns, and investigating how people’s privacy perceptions and privacy designs influence their behaviors. Existing privacy research has overwhelmingly focused on the privacy needs of end-users, i.e., people who use a system or product, such as Internet users and smartphone users. However, as computing systems become increasingly complex, privacy issues within these systems have begun to affect not only end-users but also other stakeholders, and privacy-enhancing mechanisms designed for end-users can likewise affect multiple stakeholders beyond the users themselves. In this dissertation, I examine how different stakeholders perceive privacy-related issues and expect privacy designs to function across three application domains: online behavioral advertising, drones, and smart homes. I chose these three domains because they represent multi-stakeholder environments of varying complexity. In particular, these environments present opportunities to study technology-mediated interpersonal relationships, i.e., the relationship between primary users (owners, end-users) and secondary users (bystanders), and to investigate how these relationships influence people’s privacy perceptions and their desired forms of privacy protection. Through a combination of qualitative, quantitative, and design methods, including interviews, surveys, participatory design, and speculative design, I show how multi-stakeholder considerations change our understanding of privacy and influence privacy designs. I draw design implications from the study results to guide future privacy designs toward the needs of different stakeholders, e.g., cooperative mechanisms that aim to enhance communication between primary and secondary users. In addition, this methodological approach allows researchers to directly and proactively engage with multiple stakeholders and explore their privacy perceptions and expected privacy designs. This differs from what has been commonly used in the privacy literature and, as such, constitutes a methodological contribution. Finally, this dissertation shows that when applying the theory of Contextual Integrity in a multi-stakeholder environment, there are hidden contextual factors that may alter the contextual informational norms. I present three examples from the study results and argue that it is necessary to carefully examine such factors in order to clearly identify the contextual norms. I propose a research agenda to explore best practices for applying the theory of Contextual Integrity in a multi-stakeholder environment.

    Envisioning technology through discourse: a case study of biometrics in the National Identity Scheme in the United Kingdom

    Around the globe, governments are pursuing policies that depend on information technology (IT). The United Kingdom’s National Identity Scheme was a government proposal for a national identity system based on biometrics. These proposals for biometrics provide us with an opportunity to explore the diverse and shifting discourses that accompany the attempted diffusion of a controversial IT innovation. This thesis offers a longitudinal case study of these visionary discourses. I begin with a critical review of the literature on biometrics, drawing attention to the lack of in-depth studies that explore the discursive and organizational dynamics accompanying their implementation on a national scale. I then devise a theoretical framework for studying these speculative and future-directed discourses, based on concepts and ideas from organizing visions theory, the sociology of expectations, and critical approaches to studying the public’s understanding of technology. A methodological discussion follows in which I explain my research approach and methods for data collection and analysis, including techniques for critical discourse analysis. After briefly introducing the case study, I proceed to the two-part analysis: first, an analysis of government actors’ discourses on biometrics, revolving around formal policy communications; second, an analysis of media discourses and parliamentary debates around certain critical moments for biometrics in the Scheme. The analysis reveals how the uncertain concept of biometrics provided a strategic rhetorical device whereby government spokespeople were able to offer a flexible yet incomplete vision for the technology. I contend that, despite being distinctive and offering some practical value to the proposals for national identity cards, the government’s discourses on biometrics remained insufficiently intelligible, uninformative, and implausible. The concluding discussion explains the unraveling of the visions for biometrics in the case, offers a theoretical contribution based on the case analysis, and provides insights into discourses on the ‘publics’ of new technologies such as biometrics.

    IRISS (Increasing Resilience in Surveillance Societies) FP7 European Research Project, Deliverable 4.2: Doing privacy in everyday encounters with surveillance.

    The main idea of IRISS WP 4 was to analyse surveillance as an element of citizens’ everyday lives. The starting point was a broad understanding of surveillance, reaching beyond the narrowly defined and targeted (though encompassing) surveillance practices of state authorities, justified by the need to combat and prevent crime and terrorism. We were interested in the mundane effects of surveillance practices emerging in the sectors of electronic commerce, telecommunication, social media and other areas. The basic assumption of WP 4 was that being a citizen in modern surveillance societies amounts to being transformed into a techno-social hybrid, i.e. a human being inexorably linked with data-producing technologies, becoming a data-leaking container. While this “ontological shift” is not necessarily reflected in citizens’ understanding of who they are, it nonetheless affects their daily lives in many different ways. Citizens may entertain ideas of privacy, autonomy and selfhood rooted in pre-electronic times while at the same time acting under a regime of “mundane governance”. We began by enquiring about the use of modern technologies and, in the course of the interviews, focussed on issues of surveillance in a more explicit manner. Over 200 qualitative interviews were conducted in a way that produced narratives (stories) of individual experiences with different kinds of technologies and/or surveillance practices. These stories were then analysed against the background of theoretical hypotheses about what it means in objective terms to live in a surveillance society. We assume that privacy is no longer the default state of mundane living but has to be actively created; we captured this with the term privacy labour. Furthermore, we constructed a number of dilemmas or trade-off situations to guide our analysis. These dilemmas address privacy as a state or “good” that is traded in for convenience (in electronic commerce), security (in law enforcement surveillance contexts), sociality (when using social media), mutual trust (in social relations at the workplace as well as in the relationship between citizens and the state), and engagement (in horizontal, neighbourhood watch-type surveillance relations). For each of these dilemmas we identified a number of stories demonstrating how our respondents, as “heroes” of the narrative, solved the problems they encountered, pursued their goals, or simply handled a dilemmatic situation. This created a comprehensive and multi-dimensional account of the effects of surveillance in everyday life. Each of the main chapters focuses on one of these dilemmas.

    The Concealed Cost of Convenience: Protecting Personal Data Privacy in the Age of Alexa

    In today’s interconnected, internet-dependent, global information economy, consumers willingly, but often unwittingly, divulge to tech companies their personal and private data, frequently with little regard for its safekeeping or intended future use. Enter Alexa, Amazon’s voice-activated, natural-language-processing digital smart assistant. A sophisticated artificial intelligence (“AI”), Alexa insinuates itself into a user’s personal sphere, learns from and adapts to the surrounding environment, siphons personal information and data, and ultimately produces for the user a perfectly tailored, concierge experience. Convenience is the product. Data privacy is the cost. Over half of American consumers own an Alexa-enabled device or other AI-powered digital smart assistant. This rapid adoption of AI technology has created the potential for an untenable and unsustainable surveillance state in which private data brokers such as Amazon can control the flow of information and hold the individual consumer hostage. The existing U.S. legal framework, a sectoral regime heavily dependent upon the principles of “Notice and Choice” under the ineffectual oversight of the Federal Trade Commission, is ill-equipped to deal with the privacy issues presented by the AI-based data collection of smart assistants. The time for comprehensive federal data privacy reform is now. The states should not shoulder this burden. Instead, Congress must act to establish a uniform system of rules that will federally regulate the collection and retention practices of data brokers and safeguard the autonomy and data privacy of the individual.