    A Risk Management Process for Consumers

    Simply by using information technology, consumers expose themselves to considerable security risks. Because no technical or legal solutions are readily available, the only remedy is to develop a risk management process for consumers, similar to the process executed by enterprises. Consumers need to consider the risks in a structured way, and take action, not once, but iteratively. Such a process is feasible: enterprises already execute such processes, and time-saving tools can support the consumer in her own process. In fact, given our society's emphasis on individual responsibilities, skills and devices, a risk management process for consumers is the logical next step in improving information security.
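
    The paper argues for a process rather than a particular tool. Purely as a minimal, illustrative sketch of what one iteration of such a consumer-scale risk register could look like (the data fields and the likelihood-times-impact scoring are assumptions borrowed from enterprise practice, not taken from the paper):

        from dataclasses import dataclass

        @dataclass
        class Risk:
            asset: str        # what the consumer wants to protect
            threat: str       # what could go wrong
            likelihood: int   # 1 (rare) .. 5 (frequent), assumed scale
            impact: int       # 1 (minor) .. 5 (severe), assumed scale
            mitigation: str   # the action the consumer plans to take

            @property
            def score(self) -> int:
                # Classic likelihood x impact scoring from enterprise
                # risk matrices; an illustrative assumption here.
                return self.likelihood * self.impact

        def review(risks: list[Risk]) -> list[Risk]:
            """One iteration of the process: rank risks so the most
            severe are acted on first, then re-run periodically."""
            return sorted(risks, key=lambda r: r.score, reverse=True)

        register = [
            Risk("email account", "password reuse", 4, 5,
                 "enable two-factor auth, use a password manager"),
            Risk("photos", "phone loss", 2, 4,
                 "enable encrypted cloud backup"),
        ]
        for r in review(register):
            print(f"[{r.score:2d}] {r.asset}: {r.threat} -> {r.mitigation}")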

    SoK: Anti-Facial Recognition Technology

    The rapid adoption of facial recognition (FR) technology by both government and commercial entities in recent years has raised concerns about civil liberties and privacy. In response, a broad suite of so-called "anti-facial recognition" (AFR) tools has been developed to help users avoid unwanted facial recognition. The set of AFR tools proposed in the last few years is wide-ranging and rapidly evolving, necessitating a step back to consider the broader design space of AFR systems and long-term challenges. This paper aims to fill that gap and provides the first comprehensive analysis of the AFR research landscape. Using the operational stages of FR systems as a starting point, we create a systematic framework for analyzing the benefits and tradeoffs of different AFR approaches. We then consider both technical and social challenges facing AFR tools and propose directions for future research in this field.
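
    The framework is organized around the operational stages of an FR pipeline. A minimal sketch of that organizing idea (the stage names and approach labels below are illustrative assumptions, not the paper's exact taxonomy):

        from enum import Enum, auto

        class FRStage(Enum):
            # Illustrative FR pipeline stages; the paper's own
            # taxonomy may name or split these differently.
            IMAGE_COLLECTION = auto()    # photos captured or scraped
            PREPROCESSING = auto()       # face detection and alignment
            FEATURE_EXTRACTION = auto()  # embedding computation
            MATCHING = auto()            # comparison against a gallery

        # Each AFR approach is classified by the stage it disrupts
        # (approach labels are generic placeholders, not specific tools).
        AFR_APPROACHES = {
            "camera-blocking accessories": FRStage.IMAGE_COLLECTION,
            "face-detection evasion patterns": FRStage.PREPROCESSING,
            "adversarial perturbations on shared photos": FRStage.FEATURE_EXTRACTION,
            "poisoning of enrolled reference galleries": FRStage.MATCHING,
        }

        for approach, stage in AFR_APPROACHES.items():
            print(f"{stage.name:18s} <- {approach}")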

    The Effect of Developer-Specified Explanations for Permission Requests on Smartphone User Behavior

    In Apple’s iOS 6, when an app requires access to a protected resource (e.g., location or photos), the user is prompted with a permission request that she can allow or deny. These permission request dialogs include space for developers to optionally include strings of text that explain to the user why access to the resource is needed. We examine how app developers are using this mechanism and the effect that it has on user behavior. Through an online survey of 772 smartphone users, we show that permission requests that include explanations are significantly more likely to be approved. At the same time, our analysis of 4,400 iOS apps shows that the adoption rate of this feature by developers is relatively low: around 19% of permission requests include developer-specified explanations. Finally, we surveyed 30 iOS developers to better understand why they do or do not use this feature.
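
    The explanation strings in question are the usage-description entries of an app's Info.plist (e.g., NSLocationUsageDescription in the iOS 6 era). As a minimal sketch of how an analysis like the paper's could check a decoded Info.plist for such developer-specified explanations (the key list is partial and the file path is hypothetical; this is not the authors' actual tooling):

        import plistlib

        # Usage-description keys whose values are the developer-specified
        # explanations shown in permission dialogs (partial list).
        USAGE_KEYS = [
            "NSLocationUsageDescription",
            "NSPhotoLibraryUsageDescription",
            "NSContactsUsageDescription",
            "NSCalendarsUsageDescription",
            "NSMicrophoneUsageDescription",
        ]

        def explained_permissions(plist_path: str) -> dict[str, str]:
            """Return the usage-description keys an app sets, i.e., the
            permission requests that will carry an explanation."""
            with open(plist_path, "rb") as f:
                info = plistlib.load(f)
            return {k: info[k] for k in USAGE_KEYS if info.get(k)}

        # Example: print the explanations one app's developers provided
        # ("Payload/Example.app/Info.plist" is a hypothetical path).
        app_plist = "Payload/Example.app/Info.plist"
        for key, text in explained_permissions(app_plist).items():
            print(f"{key}: {text}")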

    Privacy-Privacy Tradeoffs

    Legal and policy debates about privacy revolve around conflicts between privacy and other goods. But privacy also conflicts with itself. Whenever securing privacy on one margin compromises privacy on another margin, a privacy-privacy tradeoff arises. This Essay introduces the phenomenon of privacy-privacy tradeoffs, with particular attention to their role in NSA surveillance. After explaining why these tradeoffs are pervasive in modern society and developing a typology, the Essay shows that many of the arguments made by the NSA's defenders appeal not only to a national-security need but also to a privacy-privacy tradeoff. An appreciation of these tradeoffs, the Essay contends, illuminates the structure and the stakes of debates over surveillance law specifically and privacy policy generally.

    Privacy as Product Safety

    Online social media confound many of our familiar expectations about privacy. Contrary to popular myth, users of social software like Facebook do care about privacy, deserve it, and have trouble securing it for themselves. Moreover, traditional database-focused privacy regulations on the Fair Information Practices model, while often worthwhile, fail to engage with the distinctively social aspects of these online services. Instead, online privacy law should take inspiration from a perhaps surprising quarter: product-safety law. A web site that directs users' personal information in ways they don't expect is a defectively designed product, and many concepts from products liability law could usefully be applied to the structurally similar problem of privacy in social software. After setting the scene with a discussion of how people use Facebook and why standard assumptions about privacy and privacy law fail, this essay examines the parallel between physically safe products and privacy-safe social software. It illustrates the value of the product-safety approach by considering another ripped-from-the-headlines example: Google Buzz.

    Societal Computing

    As Social Computing has increasingly captivated the general public, it has become a popular research area for computer scientists. Social Computing research focuses on online social behavior and on using artifacts derived from it to provide recommendations and other useful community knowledge. Unfortunately, some of that behavior and knowledge incurs societal costs, particularly with regard to Privacy, which is viewed quite differently by different populations and regulated differently in different locales. But clever technical solutions to those challenges may impose additional societal costs, e.g., by consuming substantial resources at odds with Green Computing, another major area of societal concern. We propose a new crosscutting research area, Societal Computing, that focuses on the technical tradeoffs among computational models and application domains that raise significant societal issues. We highlight some of the relevant research topics and open problems that we foresee in Societal Computing. We feel that these topics, and Societal Computing in general, need to gain prominence, as they will provide useful avenues of research leading to increasing benefits for society as a whole. This thesis consists of the following four projects that address the issues of Societal Computing.

    First, privacy in the context of ubiquitous social computing systems has become a major concern for society at large. As the number of online social computing systems that collect user data grows, concerns about privacy are further exacerbated. Examples of such online systems include social networks, recommender systems, and so on. Approaches to addressing these privacy concerns typically require substantial extra computational resources, which might be beneficial where privacy is concerned but may have a significant negative impact with respect to Green Computing and sustainability, another major societal concern. Spending more computation time means spending more energy and other resources, making the software system less sustainable. Ideally, we would like techniques for designing software systems that address these privacy concerns but are also sustainable: systems where privacy can be achieved “for free,” i.e., without having to spend extra computational effort. We describe how privacy can indeed be achieved for free, as an accidental and beneficial side effect of doing some existing computation, in web applications and online systems that have access to user data. We show the feasibility, sustainability, and utility of our approach and the types of privacy threats it can mitigate.

    Second, we aim to understand the expectations and needs of end-users and software developers with respect to privacy in social systems. Some questions we want to answer are: Do end-users care about privacy? What aspects of privacy matter most to them? Do we need different privacy mechanisms for technical and non-technical users? Should privacy settings and systems be customized based on users' geographic location? We have created a large-scale user study using an online questionnaire to gather privacy requirements from a variety of stakeholders, and we plan to conduct follow-up semi-structured interviews. This user study will help us answer these questions.

    Third, a related challenge is to make privacy more understandable in complex systems that may have a variety of user-interface options, which may change often. Our approach is to use crowdsourcing to find out how other users deal with privacy and which settings are commonly used, and to give users feedback on, for example, how public or private their settings are, which settings others typically choose, and where their settings differ from those of a trusted group of friends. We have a large dataset of privacy settings for over 500 Facebook users, and we plan to create a user study that uses this data to make privacy settings more understandable.

    Finally, end-users of such systems find it increasingly hard to understand complex privacy settings. As software evolves over time, bugs may be introduced that breach users' privacy. Further, system-wide policy changes could make users' settings more or less private than before. We present a novel technique that end-users can employ to detect changes in privacy, i.e., regression testing for privacy. Using a social approach to detecting privacy bugs, we present two prototype tools. Our evaluation shows the feasibility and utility of our approach for detecting privacy bugs, and we highlight two interesting case studies of bugs discovered using our tools. To the best of our knowledge, this is the first technique that leverages regression testing to detect privacy bugs from an end-user's perspective.
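
    The abstract does not detail the prototype tools, but the core "regression testing for privacy" idea can be sketched minimally: snapshot a user's settings as a baseline, then flag any later drift toward more public values. All names and the visibility ordering below are illustrative assumptions, not the thesis's actual design:

        import json

        # Visibility levels ordered from most private to most public;
        # an assumed model, not taken from the thesis.
        VISIBILITY_RANK = {"only_me": 0, "friends": 1,
                           "friends_of_friends": 2, "public": 3}

        def snapshot(settings: dict[str, str], path: str) -> None:
            """Record the current privacy settings as the expected baseline."""
            with open(path, "w") as f:
                json.dump(settings, f)

        def regression_test(settings: dict[str, str], path: str) -> list[str]:
            """Report settings that became MORE public than the baseline,
            e.g., after a software update or a system-wide policy change."""
            with open(path) as f:
                baseline = json.load(f)
            return [key for key, value in settings.items()
                    if VISIBILITY_RANK[value] > VISIBILITY_RANK[baseline.get(key, value)]]

        snapshot({"photos": "friends", "email": "only_me"}, "baseline.json")
        # Later run: a policy change silently made photos public.
        print(regression_test({"photos": "public", "email": "only_me"}, "baseline.json"))
        # -> ['photos']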