    A Semantic Framework for the Analysis of Privacy Policies

    Consent Verification Under Evolving Privacy Policies

    How Dangerous Permissions are Described in Android Apps' Privacy Policies?

    Google requires Android apps that handle users' personal data, such as photos and contact information, to post a privacy policy that comprehensively describes how the app collects, uses, and shares users' information. Unfortunately, while knowing why an app wants to access specific user information is considered very useful, the Android permissions screen does not provide this information. Accordingly, users have reported concerns about apps requesting permissions that seem unrelated to the apps' functions. To advance toward practical solutions that can assist users in protecting their privacy, a technique that automatically discovers the rationales for the dangerous permissions requested by Android apps, by extracting them from the apps' privacy policies, would be a great advantage. Before this is possible, however, it is important to bridge the gap between the technical terms used in Android permissions and the natural-language terminology of privacy policies. In this paper, we record the terminology used in Android apps' privacy policies to describe the usage of dangerous permissions. Our semi-automated approach employs natural language processing (NLP) and information extraction (IE) techniques to map privacy-policy terminology to Android dangerous permissions, linking 128 information types to the corresponding permissions. The resulting mapping produces semantic information that can then be used to extract the rationales for dangerous permissions from apps' privacy policies.
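
    To make the pipeline concrete, below is a minimal sketch in Python of keyword-based mapping from policy terminology to dangerous permissions. The mapping fragment and the helper permissions_mentioned are hypothetical stand-ins: the paper derives its 128 information types semi-automatically with NLP/IE rather than from a hand-written dictionary like this one.

```python
# Minimal sketch: map privacy-policy terminology to Android dangerous
# permissions and collect the sentences that mention each mapped term
# (candidate rationales). The mapping fragment is hypothetical.
import re

# Hypothetical fragment: information type -> Android dangerous permission.
TERM_TO_PERMISSION = {
    "contact": "android.permission.READ_CONTACTS",
    "address book": "android.permission.READ_CONTACTS",
    "location": "android.permission.ACCESS_FINE_LOCATION",
    "photo": "android.permission.READ_EXTERNAL_STORAGE",
    "microphone": "android.permission.RECORD_AUDIO",
    "call log": "android.permission.READ_CALL_LOG",
}

def permissions_mentioned(policy_text: str) -> dict[str, list[str]]:
    """Return, per permission, the policy sentences that mention a
    mapped information type."""
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    hits: dict[str, list[str]] = {}
    for sentence in sentences:
        lowered = sentence.lower()
        for term, permission in TERM_TO_PERMISSION.items():
            if term in lowered:
                hits.setdefault(permission, []).append(sentence.strip())
    return hits

policy = ("We collect your location to show nearby stores. "
          "We never upload your photos without consent.")
for permission, rationales in permissions_mentioned(policy).items():
    print(permission, "->", rationales)
```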

    Modeling of Personalized Privacy Disclosure Behavior: A Formal Method Approach

    In order to create user-centric and personalized privacy management tools, the underlying models must account for individual users' privacy expectations and preferences, and for their ability to control their information-sharing activities. Existing studies of user privacy behavior modeling attempt to frame the problem from the requester's perspective, which lacks the crucial involvement of the information owner and results in limited or no control over policy management. Moreover, very few of them take into consideration the correctness, explainability, usability, and acceptance of the methodologies for each user of the system. In this paper, we present a methodology to formally model, validate, and verify personalized privacy disclosure behavior based on an analysis of the user's situational decision-making process. We use the model checking tool UPPAAL to represent users' self-reported privacy disclosure behavior as an extended form of finite state automata (FSA), and perform reachability analysis to verify privacy properties expressed as computation tree logic (CTL) formulas. We also describe practical use cases of the methodology, illustrating the potential of formal techniques for the design and development of user-centric behavioral modeling. Through extensive experimental results, this paper contributes several insights to the area of formal methods and user-tailored privacy behavior modeling.
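
    As an illustration of the verification step, the sketch below performs a toy reachability analysis over a hand-written disclosure automaton in Python. The states, actions, and property are hypothetical; the paper itself encodes users' self-reported behavior as extended FSAs in UPPAAL and checks CTL formulas there (UPPAAL's E<> "some path eventually reaches" operator), rather than using a hand-rolled search like this.

```python
# Toy reachability analysis over a finite state automaton, in the
# spirit of the paper's UPPAAL-based verification. The disclosure
# states and transitions below are hypothetical.
from collections import deque

# Hypothetical disclosure FSA: state -> {action: next_state}.
TRANSITIONS = {
    "idle": {"request_received": "deciding"},
    "deciding": {"accept": "disclosed", "reject": "idle",
                 "ask_purpose": "reviewing"},
    "reviewing": {"purpose_ok": "disclosed", "purpose_bad": "idle"},
    "disclosed": {"share_onward": "third_party"},
    "third_party": {},
}

def reachable(start: str, target: str) -> bool:
    """Breadth-first search: does some action sequence reach `target`?
    This answers the CTL-style query E<> (state == target)."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if state == target:
            return True
        for nxt in TRANSITIONS[state].values():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# A hypothetical privacy property: data should never reach a third
# party. Reachability of "third_party" flags a violation.
print(reachable("idle", "third_party"))  # True -> property violated
```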

    Perennial semantic data terms of use for decentralized web

    In today’s digital landscape, the Web has become increasingly centralized, raising concerns about violations of user privacy. Decentralized Web architectures such as Solid offer a promising solution by giving users better control over the data in their personal ‘Pods’. However, a significant challenge remains: users must navigate numerous applications and decide which can be trusted with access to their data Pods. This often involves reading lengthy and complex Terms of Use agreements, a process that users find daunting or simply ignore, which compromises user autonomy and impedes the detection of data misuse. We propose a novel formal description of Data Terms of Use (DToU), along with a DToU reasoner. Users and applications specify their own parts of the DToU policy with local knowledge, covering permissions, requirements, prohibitions, and obligations. Automated reasoning verifies compliance and derives policies for output data. This constitutes a “perennial” DToU language: policy authoring occurs only once, and automated checks can then run continuously across users, applications, and activity cycles. Our solution is built on Turtle, Notation 3, and RDF Surfaces for the language and the reasoning engine, ensuring seamless integration with other semantic tools for enhanced interoperability. We have integrated this language into the Solid framework and conducted performance benchmarks. We believe this work demonstrates the practicality of a perennial DToU language and the potential of a paradigm shift in how users interact with data and applications on the decentralized Web, offering both improved privacy and usability.
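
    To make the idea concrete, here is a minimal sketch of checking an application's request against a DToU-like policy, using Python with rdflib and a SPARQL ASK query. The ex: vocabulary and the pod/grant structure are invented for illustration; the paper's actual language builds on Notation 3 and RDF Surfaces with a dedicated reasoner, not plain SPARQL.

```python
# Minimal sketch: does the pod's policy permit a given use of data?
# The vocabulary (ex:permits, ex:purpose, ...) is hypothetical.
from rdflib import Graph

POLICY = """
@prefix ex: <http://example.org/dtou#> .

ex:myPod ex:permits [
    ex:dataCategory ex:ContactData ;
    ex:purpose      ex:Research
] .
"""

QUERY = """
PREFIX ex: <http://example.org/dtou#>
ASK {
    ex:myPod ex:permits ?grant .
    ?grant ex:dataCategory ex:ContactData ;
           ex:purpose      ex:Advertising .
}
"""

g = Graph()
g.parse(data=POLICY, format="turtle")
allowed = bool(g.query(QUERY).askAnswer)
print("advertising use of contact data permitted:", allowed)  # False
```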

    The Conflict Notion and its Static Detection: a Formal Survey

    The notion of policy is widely used to enable flexible control of many systems: access control, privacy, accountability, databases, services, contracts, network configuration, and so on. One important feature is the ability to check these policies for contradictions before the enforcement step. This is the problem of conflict detection, which can be addressed at different steps and with different approaches. This paper presents a review of the principles for conflict detection in the related security policy languages. The policy languages, the notions of conflict, and the means of detecting conflicts vary widely, which makes the different principles difficult to compare. We propose an analysis and comparison of the five static detection principles we found in reviewing more than forty papers from the literature. To ease the comparison, we develop a logical model with four syntactic types of systems covering most of the examples in the literature. We provide a semantic classification of the conflict notions and are thus able to relate the detection principles, the syntactic types, and the semantic classification. Our comparison shows the exact link between logical consistency and the conflict notions, and that some detection principles are subject to weaknesses if not used under the right conditions.
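
    One of the simplest static principles in this space, modality-conflict detection, can be illustrated in a few lines of Python. The Rule format and wildcard semantics below are hypothetical simplifications of the richer policy languages the survey covers.

```python
# Minimal sketch of static modality-conflict detection: flag a permit
# rule and a deny rule that can apply to the same subject, action,
# and resource. The rule format is hypothetical.
from itertools import combinations
from typing import NamedTuple

class Rule(NamedTuple):
    effect: str      # "permit" or "deny"
    subject: str     # "*" matches any subject
    action: str
    resource: str

def overlaps(a: str, b: str) -> bool:
    """Two atomic fields overlap if equal or if either is a wildcard."""
    return a == b or "*" in (a, b)

def conflicts(rules: list[Rule]) -> list[tuple[Rule, Rule]]:
    return [
        (r1, r2)
        for r1, r2 in combinations(rules, 2)
        if r1.effect != r2.effect
        and overlaps(r1.subject, r2.subject)
        and overlaps(r1.action, r2.action)
        and overlaps(r1.resource, r2.resource)
    ]

policy = [
    Rule("permit", "alice", "read", "medical_record"),
    Rule("deny",   "*",     "read", "medical_record"),
    Rule("permit", "bob",   "write", "audit_log"),
]
for r1, r2 in conflicts(policy):
    print("conflict:", r1, "vs", r2)
```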

    A Rule of Persons, Not Machines: The Limits of Legal Automation

    From Data Flows to Privacy-Benefit Trade-offs: A User-Centric Semantic Model

    In today's highly connected cyber-physical world, people constantly disclose personal and sensitive data to different organizations and to other people through online and physical services, because sharing personal information can bring various benefits for themselves and others. However, data disclosure activities can lead to unexpected privacy issues, and there is a general lack of tools that improve users' awareness of the subtle privacy-benefit trade-offs and help them make more informed decisions about their data disclosure activities in wider contexts. To fill this gap, this paper presents a novel user-centric, data-flow-graph-based semantic model that shows how a given user's personal and sensitive data have been disclosed to different entities and what benefits the user gained through those disclosures. The model allows automatic analysis of the privacy-benefit trade-offs around a target user's data-sharing activities, so it can support the development of user-centric software tools that help people manage their data disclosure activities and strike a better balance between privacy and benefits in the cyber-physical world.
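
    A minimal sketch of the idea in Python: a per-user data-flow graph whose edges carry privacy-cost and benefit annotations, summarized into a net trade-off per recipient. The schema and numeric scores are invented placeholders, not the paper's semantic model.

```python
# Minimal sketch of a user-centric data-flow graph with
# privacy-benefit annotations. Sensitivity and benefit scores are
# hypothetical values in [0, 1].
from dataclasses import dataclass, field

@dataclass
class Disclosure:
    data_item: str       # what was shared
    recipient: str       # who received it
    sensitivity: float   # hypothetical privacy cost
    benefit: float       # hypothetical benefit to the user

@dataclass
class UserFlowGraph:
    user: str
    disclosures: list[Disclosure] = field(default_factory=list)

    def trade_off_by_recipient(self) -> dict[str, float]:
        """Net benefit (benefit minus privacy cost) per recipient."""
        totals: dict[str, float] = {}
        for d in self.disclosures:
            totals[d.recipient] = (totals.get(d.recipient, 0.0)
                                   + d.benefit - d.sensitivity)
        return totals

g = UserFlowGraph("alice", [
    Disclosure("home_address", "delivery_service", 0.6, 0.9),
    Disclosure("browsing_history", "ad_network", 0.8, 0.1),
])
for recipient, net in g.trade_off_by_recipient().items():
    print(f"{recipient}: net benefit {net:+.1f}")
```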

    Disagreeable Privacy Policies: Mismatches between Meaning and Users’ Understanding

    Privacy policies are verbose, difficult to understand, take too long to read, and may be the least-read items on most websites even as users express growing concerns about information collection practices. For all their faults, though, privacy policies remain the single most important source of information for users to attempt to learn how companies collect, use, and share data. Likewise, these policies form the basis for the self-regulatory notice and choice framework that is designed and promoted as a replacement for regulation. The underlying value and legitimacy of notice and choice depend, however, on the ability of users to understand privacy policies. This paper investigates the differences in interpretation among expert, knowledgeable, and typical users and explores whether those groups can understand the practices described in privacy policies at a level sufficient to support rational decision-making. The paper seeks to fill an important gap in the understanding of privacy policies through primary research on user interpretation and to inform the development of technologies combining natural language processing, machine learning, and crowdsourcing for policy interpretation and summarization. For this research, we recruited a group of law and public policy graduate students at Fordham University, Carnegie Mellon University, and the University of Pittsburgh (“knowledgeable users”) and presented these law and policy researchers with a set of privacy policies from companies in the e-commerce and news & entertainment industries. We asked them nine basic questions about the policies’ statements regarding data collection, data use, and retention. We then presented the same set of policies to a group of privacy experts and to a group of non-expert users. The findings show areas of common understanding across all groups for certain data collection and deletion practices, but also demonstrate very important discrepancies in the interpretation of privacy policy language, particularly with respect to data sharing. The discordant interpretations arose both within groups and between the experts and the two other groups. The presence of these significant discrepancies has critical implications. First, the common understandings of some attributes of described data practices mean that semi-automated extraction of meaning from website privacy policies may be able to assist typical users and improve the effectiveness of notice by conveying the true meaning to users. However, the disagreements among experts and disagreement between experts and the other groups reflect that ambiguous wording in typical privacy policies undermines the ability of privacy policies to effectively convey notice of data practices to the general public. The results of this research will, consequently, have significant policy implications for the construction of the notice and choice framework and for the US reliance on this approach. The gap in interpretation indicates that privacy policies may be misleading the general public and that those policies could be considered legally unfair and deceptive. And, where websites are not effectively conveying privacy policies to consumers in a way that a “reasonable person” could, in fact, understand the policies, “notice and choice” fails as a framework. Such a failure has broad international implications since websites extend their reach beyond the United States.
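
    The study's group comparison can be illustrated with a small agreement computation. The sketch below, in Python, reports each group's modal answer and within-group agreement for one hypothetical question; the answers are fabricated placeholders, not the study's data.

```python
# Minimal sketch of quantifying interpretation (dis)agreement on a
# privacy-policy question across respondent groups. All answers here
# are invented placeholders.
from collections import Counter

# Hypothetical answers to one question ("Does the policy allow
# sharing data with third parties?") per respondent group.
answers = {
    "experts":       ["yes", "yes", "unclear", "yes"],
    "knowledgeable": ["yes", "no", "unclear", "yes", "no"],
    "typical":       ["no", "no", "yes", "no"],
}

def modal_agreement(group: list[str]) -> tuple[str, float]:
    """Most common answer and the fraction of the group giving it."""
    answer, count = Counter(group).most_common(1)[0]
    return answer, count / len(group)

for name, group in answers.items():
    answer, rate = modal_agreement(group)
    print(f"{name}: modal answer {answer!r}, agreement {rate:.0%}")

# Cross-group disagreement: do the groups' modal answers differ?
modes = {modal_agreement(g)[0] for g in answers.values()}
print("groups disagree" if len(modes) > 1 else "groups agree")
```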