Exploring Societal Computing based on the Example of Privacy
Data privacy when using online systems like Facebook and Amazon has become an increasingly popular topic in the last few years. This thesis consists of the following four projects, which address issues at the intersection of privacy and software engineering.
First, little is known about how users and developers perceive privacy and which concrete measures would mitigate their privacy concerns. To investigate privacy requirements, we conducted an online survey with closed and open questions and collected 408 valid responses. Our results show that users often reduce privacy to security, with data sharing and data breaches being their biggest concerns. Users are more concerned about the content of their documents and personal data such as location than about their interaction data. Unlike users, developers clearly prefer technical measures like data anonymization and consider privacy laws and policies less effective. We also observed interesting differences between people from different geographies. For example, people from Europe are more concerned about data breaches than people from North America, and people from Asia/Pacific and Europe consider content and metadata more critical for privacy than people from North America do. Our results contribute to developing a user-driven privacy framework that is based on empirical evidence in addition to the legal, technical, and commercial perspectives.
Second, a related challenge is to make privacy more understandable in complex systems that offer a variety of user interface options, which may change often. As social network platforms have evolved, the ability for users to control how and with whom information is shared introduces challenges concerning the configuration and comprehension of privacy settings. To address these concerns, our crowdsourced approach simplifies the understanding of privacy settings by using data collected from 512 users over a 17-month period to generate visualizations that let users compare their personal settings to an arbitrary subset of individuals of their choosing. To validate our approach, we conducted an online survey with closed and open questions, collected 59 valid responses, and then conducted follow-up interviews with 10 respondents. Our results showed that 70% of respondents found visualizations using crowdsourced data useful for understanding privacy settings, and 80% preferred a crowdsourced tool for configuring their privacy settings over current privacy controls.
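The comparison at the heart of this approach can be pictured with a short sketch. What follows is a minimal illustration, not the thesis's implementation: the setting names, the four audience levels, and the encoding are all assumptions made for the example.

```python
from collections import Counter

# Hypothetical encoding of privacy settings: each user is a dict mapping a
# setting name to one of four audience levels. Neither the names nor the
# levels are Facebook's actual schema; they are assumptions for this sketch.
PRIVACY_LEVELS = ["public", "friends_of_friends", "friends", "only_me"]

def setting_distribution(crowd, setting):
    """Fraction of the chosen crowd using each level for one setting."""
    counts = Counter(user[setting] for user in crowd)
    total = sum(counts.values())
    return {level: counts.get(level, 0) / total for level in PRIVACY_LEVELS}

def compare_to_crowd(my_settings, crowd):
    """For each of my settings, report my level and how common it is
    among the crowd I chose to compare against."""
    return {
        setting: {
            "mine": my_level,
            "crowd_share": setting_distribution(crowd, setting)[my_level],
        }
        for setting, my_level in my_settings.items()
    }

crowd = [
    {"posts": "friends", "photos": "friends"},
    {"posts": "public", "photos": "friends"},
    {"posts": "friends", "photos": "only_me"},
]
print(compare_to_crowd({"posts": "public", "photos": "friends"}, crowd))
```

A visualization layer would then render each `crowd_share` value, letting a user see at a glance whether their choice is typical of the group they picked.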
Third, software evolution can introduce bugs that breach users' privacy. Further, system-wide policy changes could silently make users' settings more or less private than before. We present a novel technique that end-users can apply to detect changes in privacy, i.e., regression testing for privacy. Using a social approach for detecting privacy bugs, we present two prototype tools. Our evaluation shows the feasibility and utility of our approach for detecting privacy bugs, and we highlight two interesting case studies on the bugs discovered using our tools. To the best of our knowledge, this is the first technique that leverages regression testing for detecting privacy bugs from an end-user perspective.
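To make the regression-testing analogy concrete, here is a hedged sketch of the core check such a tool might run. `fetch_visible_fields` is a hypothetical stand-in for probing the platform from a test account; the actual prototype tools use a social approach and may work quite differently.

```python
# Sketch of a privacy regression check. fetch_visible_fields is a hypothetical
# probe: a real tool would query the platform as an unprivileged test account.

def fetch_visible_fields(profile_id, viewer="non_friend_test_account"):
    # Stubbed observation standing in for a live query of the platform.
    return {"name", "profile_photo"}

def check_privacy_regression(profile_id, baseline_visible):
    """Diff current visibility against a baseline the user once approved;
    anything newly visible is a potential privacy bug."""
    now_visible = fetch_visible_fields(profile_id)
    newly_exposed = now_visible - baseline_visible
    if newly_exposed:
        print(f"Potential privacy bug for {profile_id}: "
              f"newly visible fields {sorted(newly_exposed)}")
    return newly_exposed

baseline = {"name"}  # snapshot taken when the user last reviewed their settings
check_privacy_regression("user123", baseline)
```

As with code regression tests, the baseline encodes the last known-good state, so both software bugs and policy changes surface as unexpected diffs.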
Fourth, approaches to addressing these privacy concerns typically require substantial extra computational resources, which might be beneficial where privacy is concerned, but may have a significant negative impact with respect to Green Computing and sustainability, another major societal concern. Spending more computation time means spending more energy and other resources, making the software system less sustainable. Ideally, we would like techniques for designing software systems that address these privacy concerns but are also sustainable: systems where privacy can be achieved "for free", i.e., without spending extra computational effort. We describe how privacy can indeed be achieved for free, as an accidental and beneficial side effect of doing some existing computation, in web applications and online systems that have access to user data. We show the feasibility, sustainability, and utility of our approach and what types of privacy threats it can mitigate.
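One way to read the "for free" idea is that an aggregate a system must compute anyway can double as a privacy mechanism. The sketch below assumes a recommender that already counts distinct users per item and simply withholds aggregates covering fewer than k users; the threshold, names, and mechanism are illustrative assumptions, not the thesis's actual technique.

```python
# Illustrative "privacy for free": reuse an aggregate the recommender computes
# anyway (distinct users per item) and only release values covering >= k users.
# The threshold k and all names are assumptions, not the thesis's mechanism.

K = 5

def item_popularity(events):
    """The aggregation the system already needs: distinct users per item."""
    users_per_item = {}
    for user, item in events:
        users_per_item.setdefault(item, set()).add(user)
    return {item: len(users) for item, users in users_per_item.items()}

def safe_to_publish(events, k=K):
    """Privacy as a side effect: withhold aggregates covering fewer than k
    users, at no extra computational cost beyond the existing counts."""
    return {item: n for item, n in item_popularity(events).items() if n >= k}

events = [("alice", "song1"), ("bob", "song1"), ("carol", "song2")]
print(safe_to_publish(events, k=2))  # {'song1': 2}
```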
Finally, we generalize the problem of privacy and its tradeoffs. As Social Computing has increasingly captivated the general public, it has become a popular research area for computer scientists. Social Computing research focuses on online social behavior and using artifacts derived from it for providing recommendations and other useful community knowledge. Unfortunately, some of that behavior and knowledge incur societal costs, particularly with regards to Privacy, which is viewed quite differently by different populations as well as regulated differently in different locales. But clever technical solutions to those challenges may impose additional societal costs, e.g., by consuming substantial resources at odds with Green Computing, another major area of societal concern. We propose a new crosscutting research area, Societal Computing, that focuses on the technical tradeoffs among computational models and application domains that raise significant societal issues. We highlight some of the relevant research topics and open problems that we foresee in Societal Computing. We feel that these topics, and Societal Computing in general, need to gain prominence as they will provide useful avenues of research leading to increasing benefits for society as a whole.
A Risk Management Process for Consumers
Simply by using information technology, consumers expose themselves to considerable security risks. Because no technical or legal solutions are readily available, the only remedy is to develop a risk management process for consumers, similar to the process executed by enterprises. Consumers need to consider the risks in a structured way, and take action, not once, but iteratively. Such a process is feasible: enterprises already execute such processes, and time-saving tools can support the consumer in her own process. In fact, given our society's emphasis on individual responsibilities, skills and devices, a risk management process for consumers is the logical next step in improving information security.
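A brief sketch may help picture such an iterative process. It assumes the common scoring heuristic of likelihood times impact; the paper argues for a structured, repeated process rather than any particular formula, so everything below is illustrative.

```python
# Illustrative consumer risk register. The paper prescribes a structured,
# iterative process; the score = likelihood * impact heuristic below is a
# common convention assumed here, not taken from the paper.

RISKS = [
    # (risk, likelihood 1-5, impact 1-5, mitigation)
    ("password reuse across sites", 4, 4, "adopt a password manager"),
    ("phishing email", 3, 4, "enable two-factor authentication"),
    ("unpatched home router", 2, 3, "apply firmware updates"),
]

def review_cycle(risks, threshold=9):
    """One iteration: assess each risk, prioritize, and act above a threshold."""
    for name, likelihood, impact, mitigation in sorted(
        risks, key=lambda r: r[1] * r[2], reverse=True
    ):
        score = likelihood * impact
        action = mitigation if score >= threshold else "accept for now"
        print(f"{name}: score {score} -> {action}")

review_cycle(RISKS)  # meant to be rerun periodically, since the process is iterative
```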
SoK: Anti-Facial Recognition Technology
The rapid adoption of facial recognition (FR) technology by both government and commercial entities in recent years has raised concerns about civil liberties and privacy. In response, a broad suite of so-called "anti-facial recognition" (AFR) tools has been developed to help users avoid unwanted facial recognition. The set of AFR tools proposed in the last few years is wide-ranging and rapidly evolving, necessitating a step back to consider the broader design space of AFR systems and long-term challenges. This paper aims to fill that gap and provides the first comprehensive analysis of the AFR research landscape. Using the operational stages of FR systems as a starting point, we create a systematic framework for analyzing the benefits and tradeoffs of different AFR approaches. We then consider both technical and social challenges facing AFR tools and propose directions for future research in this field.
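To illustrate the kind of stage-based framework the paper builds, here is a small sketch. The four-stage pipeline and the example interventions are simplifying assumptions for illustration; the paper's actual taxonomy is richer.

```python
# Simplified FR pipeline with illustrative AFR interventions per stage.
# Both the stage names and the examples are assumptions for illustration;
# the paper's framework and taxonomy are more detailed.

AFR_DESIGN_SPACE = {
    "image collection": ["physical disguises", "camera-blocking accessories"],
    "face detection": ["adversarial patterns that suppress detection"],
    "feature extraction": ["imperceptible perturbations that corrupt embeddings"],
    "matching": ["seeding reference galleries with decoy or poisoned images"],
}

def print_design_space(design_space):
    """Enumerate, per pipeline stage, the class of AFR tool that can intervene."""
    for stage, tools in design_space.items():
        print(f"{stage}: " + "; ".join(tools))

print_design_space(AFR_DESIGN_SPACE)
```

Organizing tools by the stage they attack makes the tradeoffs comparable: interventions early in the pipeline tend to be user-controlled but conspicuous, while later-stage interventions depend on assumptions about the FR operator.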
The Effect of Developer-Specified Explanations for Permission Requests on Smartphone User Behavior
In Apple’s iOS 6, when an app requires access to a protected resource (e.g., location or photos), the user is prompted with a permission request that she can allow or deny. These permission request dialogs include space for developers to optionally include strings of text explaining to the user why access to the resource is needed. We examine how app developers are using this mechanism and the effect it has on user behavior. Through an online survey of 772 smartphone users, we show that permission requests that include explanations are significantly more likely to be approved. At the same time, our analysis of 4,400 iOS apps shows that the adoption rate of this feature by developers is relatively small: around 19% of permission requests include developer-specified explanations. Finally, we surveyed 30 iOS developers to better understand why they do or do not use this feature.
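A claim like "significantly more likely to be approved" typically rests on a comparison of approval proportions. The sketch below shows a standard two-proportion z-test on invented counts; the numbers are not the paper's data, and the paper may well have used a different test.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Pooled two-proportion z-test; |z| > 1.96 is significant at the 5% level."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented counts, NOT the paper's data: 300/400 requests approved when an
# explanation was shown vs. 240/400 approved without one.
z = two_proportion_ztest(300, 400, 240, 400)
print(f"z = {z:.2f}")  # ~4.53 here, well above the 1.96 cutoff
```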
Privacy-Privacy Tradeoffs
Legal and policy debates about privacy revolve around conflicts between privacy and other goods. But privacy also conflicts with itself. Whenever securing privacy on one margin compromises privacy on another margin, a privacy-privacy tradeoff arises.
This Essay introduces the phenomenon of privacy-privacy tradeoffs, with particular attention to their role in NSA surveillance. After explaining why these tradeoffs are pervasive in modern society and developing a typology, the Essay shows that many of the arguments made by the NSA's defenders appeal not only to a national-security need but also to a privacy-privacy tradeoff. An appreciation of these tradeoffs, the Essay contends, illuminates the structure and the stakes of debates over surveillance law specifically and privacy policy generally.
Privacy as Product Safety
Online social media confound many of our familiar expectations about privacy. Contrary to popular myth, users of social software like Facebook do care about privacy, deserve it, and have trouble securing it for themselves. Moreover, traditional database-focused privacy regulations on the Fair Information Practices model, while often worthwhile, fail to engage with the distinctively social aspects of these online services.
Instead, online privacy law should take inspiration from a perhaps surprising quarter: product-safety law. A web site that directs users' personal information in ways they don't expect is a defectively designed product, and many concepts from products liability law could usefully be applied to the structurally similar problem of privacy in social software. After setting the scene with a discussion of how people use Facebook and why standard assumptions about privacy and privacy law fail, this essay examines the parallel between physically safe products and privacy-safe social software. It illustrates the value of the product-safety approach by considering another ripped-from-the-headlines example: Google Buzz.
Emerging Adults' Views on Masspersonal Self-Disclosure and their Bridging Social Capital on Facebook
Masspersonal self-disclosure on social network sites entails new risks and benefits for bridging social capital, defined as social resources such as a connection to and investment in large and heterogeneous collectives, which are important to develop during the transition to adulthood in democratic societies. To better understand motivations and social capital consequences of masspersonal self-disclosure among emerging adults, this mixed-method study examined how U.S. college students view various topics of masspersonal self-disclosure and whether values embedded in their views contributed to their perceived bridging social capital, after accounting for their Facebook use and the diversity of their networks. A total of 208 (110 women, 95 men, 3 non-binary; mean age = 20.28) students completed online questionnaires while referring to their Facebook profiles. Qualitative analyses showed how valuing self-expression, alongside other-focused values, informed participants’ decision-making about masspersonal self-disclosure. Quantitative results showed that valuing self-expression more frequently across topics of self-disclosure predicted bridging social capital; however, the use of Facebook privacy controls and indicators of ethnic and political diversity in students’ networks did not. We discuss the importance of values in understanding emerging adults’ behaviors on social network sites, their generation of bridging social capital, and civic identity development.
Societal Computing
As Social Computing has increasingly captivated the general public, it has become a popular research area for computer scientists. Social Computing research focuses on online social behavior and using artifacts derived from it for providing recommendations and other useful community knowledge. Unfortunately, some of that behavior and knowledge incur societal costs, particularly with regards to Privacy, which is viewed quite differently by different populations as well as regulated differently in different locales. But clever technical solutions to those challenges may impose additional societal costs, e.g., by consuming substantial resources at odds with Green Computing, another major area of societal concern. We propose a new crosscutting research area, Societal Computing, that focuses on the technical tradeoffs among computational models and application domains that raise significant societal issues. We highlight some of the relevant research topics and open problems that we foresee in Societal Computing. We feel that these topics, and Societal Computing in general, need to gain prominence as they will provide useful avenues of research leading to increasing benefits for society as a whole. This thesis will consist of the following four projects that aim to address the issues of Societal Computing.
First, privacy in the context of ubiquitous social computing systems has become a major concern for society at large. As the number of online social computing systems that collect user data grows, concerns with privacy are further exacerbated. Examples of such online systems include social networks, recommender systems, and so on. Approaches to addressing these privacy concerns typically require substantial extra computational resources, which might be beneficial where privacy is concerned, but may have significant negative impact with respect to Green Computing and sustainability, another major societal concern. Spending more computation time results in spending more energy and other resources that make the software system less sustainable. Ideally, what we would like are techniques for designing software systems that address these privacy concerns but which are also sustainable — systems where privacy could be achieved “for free,” i.e., without having to spend extra computational effort. We describe how privacy can indeed be achieved for free — an accidental and beneficial side effect of doing some existing computation — in web applications and online systems that have access to user data. We show the feasibility, sustainability, and utility of our approach and what types of privacy threats it can mitigate.
Second, we aim to understand the expectations and needs of end-users and software developers with respect to privacy in social systems. Some questions we want to answer are: Do end-users care about privacy? Which aspects of privacy matter most to end-users? Do we need different privacy mechanisms for technical vs. non-technical users? Should we customize privacy settings and systems based on the geographic location of the users? We have created a large-scale user study using an online questionnaire to gather privacy requirements from a variety of stakeholders, and we also plan to conduct follow-up semi-structured interviews. This user study will help us answer these questions.
Third, a related challenge is to make privacy more understandable in complex systems that may have a variety of user interface options, which may change often. Our approach is to use crowdsourcing to find out how other users deal with privacy and which settings are commonly used, giving users feedback on aspects such as how public or private their settings are, which settings others typically use, and where a given user's settings differ from those of a trusted group of friends. We have a large dataset of privacy settings for over 500 users on Facebook, and we plan to create a user study that will use the data to make privacy settings more understandable.
Finally, end-users of such systems find it increasingly hard to understand complex privacy settings. As software evolves, bugs that breach users’ privacy may be introduced, and system-wide policy changes could make users’ settings more or less private than before. We present a novel technique that end-users can apply to detect changes in privacy, i.e., regression testing for privacy. Using a social approach for detecting privacy bugs, we present two prototype tools. Our evaluation shows the feasibility and utility of our approach for detecting privacy bugs. We highlight two interesting case studies on the bugs that were discovered using our tools. To the best of our knowledge, this is the first technique that leverages regression testing for detecting privacy bugs from an end-user perspective.