Supporting Location Privacy Management through Feedback and Control
Participation in modern, socially focused digital systems involves a large degree of privacy management, i.e. controlling who may access what information under what circumstances. Effective privacy management (control) requires that mobile systems' users be able to make informed privacy decisions as their experience and knowledge of a system progress. By informed, we mean that users are aware of the actual information flow. Moreover, privacy preferences vary across contexts, and it is hard to define a privacy policy that reflects the dynamic nature of our lives.
This research explores the problem of supporting awareness of information flow and designing usable interfaces for maintaining privacy policies ad hoc. We borrow from the world of Computer Supported Cooperative Work (CSCW) and propose to incorporate social translucence, a design approach that "supports coherent behaviour by making participants and their activities visible to one another". We use the characteristics of social translucence, namely visibility, awareness and accountability, to introduce social norms into spatially dispersed systems. Our research is driven by two questions: (1) how can artifacts from real-world social interaction, such as responsibility, be embedded into mobile interaction; and (2) can systems be designed in which both privacy violations and the burden of privacy management are minimized?
The contributions of our work are: (1) an implementation of Buddy Tracker, a privacy-aware location-sharing application based on social translucence; (2) the design and evaluation of the concept of real-time feedback as a means of incorporating social translucence in location-sharing scenarios; and finally (3) a novel interface for ad hoc privacy management called Privacy-Shake.
We explore the role of real-time feedback for privacy management in the context of Buddy Tracker. Informed by focus group discussions, interviews, surveys and two field trials of Buddy Tracker, we found that when using a system that provided real-time feedback, people were more accountable for their actions and made fewer unreasonable location requests. From these observations we develop concrete design guidelines for incorporating real-time feedback into information-sharing applications in a manner that ensures social acceptance of the technology.
Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence
New consent management platforms (CMPs) have been introduced to the web to
conform with the EU's General Data Protection Regulation, particularly its
requirements for consent when companies collect and process users' personal
data. This work analyses how the most prevalent CMP designs affect people's
consent choices. We scraped the designs of the five most popular CMPs on the
top 10,000 websites in the UK (n=680). We found that dark patterns and implied
consent are ubiquitous; only 11.8% meet the minimal requirements that we set
based on European law. Second, we conducted a field experiment with 40
participants to investigate how the eight most common designs affect consent
choices. We found that notification style (banner or barrier) has no effect;
removing the opt-out button from the first page increases consent by 22--23
percentage points; and providing more granular controls on the first page
decreases consent by 8--20 percentage points. This study provides an empirical
basis for the necessary regulatory action to enforce the GDPR, in particular
the possibility of focusing on the centralised, third-party CMP services as an
effective way to increase compliance.
Comment: 13 pages, 3 figures. To appear in the Proceedings of CHI '20, the CHI
Conference on Human Factors in Computing Systems, April 25--30, 2020,
Honolulu, HI, US.
Privacy policy analysis: a scoping review and research agenda
Online users often neglect the importance of privacy policies, a critical aspect of digital privacy and data protection. This scoping review addresses that oversight by delving into privacy policy analysis, aiming to establish a comprehensive research agenda. The study's objective was to explore the analytic techniques employed in privacy policy analysis and to identify the associated challenges. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews (PRISMA-ScR) checklist, the review selected n = 97 relevant studies. The findings reveal a diverse array of techniques, ranging from automated machine learning and natural language processing to manual content analysis. Notably, researchers grapple with challenges such as linguistic nuance, ambiguity, and complex data harvesting methods. Additionally, the lack of privacy-centric theoretical frameworks and the dearth of user evaluations in many studies limit their real-world applicability. The review concludes by proposing a set of research recommendations to shape the future research agenda in privacy policy analysis.
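To illustrate the kind of automated technique this literature surveys, a minimal keyword-based classifier can tag privacy-policy sentences with data-practice categories. The category names and keyword lists below are invented for illustration; real systems described in these studies typically use trained NLP models rather than keyword matching.

```python
# Hypothetical data-practice categories and trigger keywords.
CATEGORIES = {
    "first_party_collection": ["we collect", "we gather", "you provide"],
    "third_party_sharing": ["third party", "share with", "partners"],
    "data_retention": ["retain", "keep your data"],
}

def classify_sentence(sentence: str) -> list[str]:
    """Return every category whose keywords appear in the sentence."""
    s = sentence.lower()
    return [cat for cat, kws in CATEGORIES.items() if any(k in s for k in kws)]

print(classify_sentence("We collect your email and share it with partners."))
# → ['first_party_collection', 'third_party_sharing']
```

Even this toy version surfaces the challenges the review identifies: keyword matching cannot handle linguistic nuance or ambiguity, which is why the surveyed work turns to machine learning.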
Interactive privacy management: towards enhancing privacy awareness and control in internet of things
The balance between protecting user privacy while providing cost-effective devices that are functional and usable is a key challenge in the burgeoning Internet of Things (IoT). While in traditional desktop and mobile contexts, the primary user interface is a screen, in IoT devices, screens are rare or very small, invalidating many existing approaches to protecting user privacy. Privacy visualisations are a common approach for assisting users in understanding the privacy implications of web and mobile services. To gain a thorough understanding of IoT privacy, we examine existing web, mobile, and IoT visualisation approaches. Following that, we define five major privacy factors in the IoT context: (i) type, (ii) usage, (iii) storage, (iv) retention period, and (v) access. We then describe notification methods used in various contexts as reported in the literature. We aim to highlight key approaches that developers and researchers can use for creating effective IoT privacy notices that improve user privacy management (awareness and control). Using a toolkit, a use case scenario, and two examples from the literature, we demonstrate how privacy visualisation approaches can be supported in practice
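The five privacy factors named above could be captured in a simple notice structure that an IoT device exposes to a companion app or visualisation toolkit. The field names and example values here are illustrative assumptions, not part of any cited toolkit.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacyNotice:
    """One IoT data practice, covering the five factors from the text."""
    data_type: str            # (i) what data is collected
    usage: str                # (ii) what it is used for
    storage: str              # (iii) where it is stored
    retention_days: int       # (iv) how long it is retained
    access: list[str] = field(default_factory=list)  # (v) who may access it

# Hypothetical notice for a smart thermostat.
notice = PrivacyNotice(
    data_type="room temperature",
    usage="heating automation",
    storage="on-device",
    retention_days=30,
    access=["household members"],
)
print(notice.retention_days)  # → 30
```

A screenless device could publish such a record over its network API, letting a phone app render the visualisation on the device's behalf.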
Security Aspects in the Web of Data Based on Trust Principles: A Brief Literature Review
Within the scientific community there is broad consensus on defining "Big Data" as a global set formed through a complex integration that embraces several dimensions: research data, Open Data, Linked Data, Social Network Data, etc. These data are scattered across different sources, a mix that responds to diverse philosophies, a great diversity of structures, different denominations, and so on. Their management faces great technological and methodological challenges: the discovery and selection of data, their extraction and final processing, preservation, visualization, possibility of access, and degree of structuring, among other aspects, which together reveal a huge domain of study at the level of analysis and implementation across different knowledge domains. However, given the availability of data and its possible opening: what problems does opening the data face? This paper presents a literature review of these security aspects.
"Popcorn Tastes Good": Participatory Policymaking and Reddit's "AMAgeddon"
In human-computer interaction research and practice, policy concerns can sometimes fall to the margins, orbiting at the periphery of the traditionally core interests of design and practice. This perspective ignores the important ways that policy is bound up with the technical and behavioral elements of the HCI universe. Policy concerns are triggered as a matter of course in social computing, CSCW, systems engineering, UX, and related contexts because technological design, social practice and policy are dynamically entangled and mutually constitutive. Through this research, we demonstrate the value of a stronger emphasis on policy in HCI by exploring a recent controversy on Reddit: "AMAgeddon." Applying Hirschman's exit, voice and loyalty framework, we argue that the sustainability of online communities like Reddit will require successfully navigating the complex and often murky intersections of technical design and human interaction through a distributed participatory policymaking process that promotes user loyalty.
Toward Usable Access Control for End-users: A Case Study of Facebook Privacy Settings
Many protection mechanisms in computer security are designed to enforce a configurable policy. The security policy captures high-level goals and intentions, and is managed by a policy author tasked with translating these goals into an implementable policy. In our work, we focus on access control policies, where errors in the specified policy can result in the mechanism incorrectly denying a request to access a resource, or incorrectly allowing a requester access to a resource they should not have. Because correct policies matter, it is critical that organizations and individuals have usable tools for managing security policies. Policy management encompasses several subtasks, including specifying the initial security policy, modifying an existing policy, and comprehending the effective policy. The policy author must understand the configurable options well enough to accurately translate the desired policy into the implemented policy. Specifying correct security policies is known to be a difficult task; prior work has contributed policy authoring tools that are more usable than the prior art, and other work has shown the importance of the policy author being able to quickly understand the effective policy. Specifying a correct policy is difficult enough for technical users, and now, increasingly, end-users are being asked to make access control decisions about who can access their personal data. We focus on the need for an access control mechanism that is usable for end-users. We investigated end-users who are already managing an access control policy, namely social network site (SNS) users. We first looked at how they manage the access control policy that defines who can access their shared content. We accomplished this by empirically evaluating how Facebook users utilize the available privacy controls to implement an access control policy for their shared content, and found that many users have policies that are inconsistent with their sharing intentions.
Upon discovering that many participants claim they will not take corrective action in response to inconsistencies in their existing settings, we collected quantitative and qualitative data to measure whether SNS users are concerned with the accessibility of their shared content. After confirming that users do in fact care about who accesses their content, we hypothesized that we could increase the correctness of users' SNS privacy settings by introducing contextual information and specific guidance based on their preferences. We found that the combination of viewership feedback, a sequence of direct questions to audit the user's sharing preferences, and specific guidance motivates some users to modify their privacy settings to more closely approximate their desired settings. Our results demonstrate the weaknesses of ACL-based access control mechanisms, while also providing evidence that it is possible to improve the usability of such mechanisms. We conclude by outlining the implications of our results for the design of a usable access control mechanism for end-users.
Using Machine Learning to Improve Internet Privacy
Internet privacy lacks transparency, choice, quantifiability, and accountability, especially as the deployment of machine learning technologies becomes mainstream. However, these technologies can be both privacy-invasive and privacy-protective. This dissertation advances the thesis that machine learning can be used to improve Internet privacy. Starting with a case study that shows how the potential of a social network to learn the ethnicity and gender of its users from geotags can be estimated, various strands of machine learning technologies for furthering privacy are explored. While the quantification of privacy is the subject of well-known privacy metrics, such as k-anonymity or differential privacy, I discuss how some of those metrics can be leveraged in tandem with machine learning algorithms to quantify the privacy-invasiveness of data collection practices. Further, I demonstrate how the current notice-and-choice paradigm can be realized by automatic machine learning privacy policy analysis. The implemented system notifies users efficiently and accurately of applicable data practices. Further, by analyzing software data flows, users are enabled to compare actual to described data practices, and regulators can enforce those at scale. Machine learning technologies can also address the emerging cross-device tracking practices of ad networks, analytics companies, and others, notifying users of privacy practices across devices and giving them the choice they are entitled to by law. Ultimately, cross-device tracking is a harbinger of the emerging Internet of Things, for which I envision intelligent personal assistants that help users navigate the increasing complexity of privacy notices and choices.
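To make one of the metrics mentioned above concrete, a dataset is k-anonymous if every combination of quasi-identifier values is shared by at least k records. A minimal sketch, with records and quasi-identifiers invented purely for illustration:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size when records are grouped by their
    quasi-identifier values; the dataset is k-anonymous for this k."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Illustrative records with generalized zip codes and age ranges.
records = [
    {"zip": "130**", "age": "30-39", "condition": "flu"},
    {"zip": "130**", "age": "30-39", "condition": "cold"},
    {"zip": "148**", "age": "20-29", "condition": "flu"},
    {"zip": "148**", "age": "20-29", "condition": "asthma"},
]
print(k_anonymity(records, ["zip", "age"]))  # → 2
```

Metrics like this one can then feed a learning pipeline, e.g. as a label or score when estimating the privacy-invasiveness of a data collection practice.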