20 research outputs found

    Interleaving Semantic Web Reasoning and Service Discovery to Enforce Context-Sensitive Security and Privacy Policies

    Enforcing rich policies in open environments will increasingly require the ability to dynamically identify external sources of information necessary to enforce different policies (e.g., finding an appropriate source of location information to enforce a location-sensitive access control policy). In this paper, we introduce a semantic web framework and a meta-control model for dynamically interleaving policy reasoning with external service discovery and access. Within this framework, external sources of information are wrapped as web services with rich semantic profiles, allowing for the dynamic discovery and comparison of relevant sources of information. Each entity (e.g., user, sensor, application, or organization) relies on one or more Policy Enforcing Agents responsible for enforcing relevant privacy and security policies in response to incoming requests. These agents implement meta-control strategies to dynamically interleave semantic web reasoning with service discovery and access. The paper also presents preliminary empirical results. This research has been conducted in the context of myCampus, a pervasive computing environment aimed at enhancing everyday campus life at Carnegie Mellon University. The framework presented can be extended to a range of other applications requiring the enforcement of context-sensitive policies (e.g., virtual enterprises, coalition forces, homeland security).
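
    The meta-control strategy described in this abstract can be pictured as a loop: reason over the policy until a fact is missing, discover an external source for that fact, query it, and resume reasoning. The sketch below is an illustration of that loop under invented names and a toy location rule; it is not the paper's actual framework or API.

    ```python
    # Hypothetical sketch of the meta-control loop: policy reasoning runs
    # until it blocks on a missing fact, at which point the Policy Enforcing
    # Agent discovers and queries an external service, then resumes.
    # All names below are illustrative, not taken from the paper.

    def enforce_policy(request, known_facts, discover_service):
        """Interleave policy reasoning with external service discovery."""
        while True:
            decision, missing = reason(request, known_facts)
            if not missing:            # policy fully evaluated
                return decision
            # Dynamically locate a source for the missing fact (e.g. location)
            service = discover_service(missing)
            if service is None:
                return "deny"          # fail closed when no source is found
            known_facts[missing] = service(request)

    def reason(request, facts):
        """Toy reasoner: a single location-sensitive access rule."""
        if "location" not in facts:
            return None, "location"
        return ("permit" if facts["location"] == "campus" else "deny"), None
    ```

    The loop fails closed: if no service can supply the missing fact, the request is denied rather than granted under incomplete information.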

    Reconciling Mobile App Privacy and Usability on Smartphones: Could User Privacy Profiles Help? (CMU-CS-13-128, CMU-ISR-13-114)

    As they compete for developers, mobile app ecosystems have been exposing a growing number of APIs through their software development kits. Many of these APIs involve accessing sensitive functionality and/or user data and require approval by users. Android, for instance, allows developers to select from over 130 possible permissions. Expecting users to review and possibly adjust settings related to these permissions has proven unrealistic. In this paper, we report on the results of a study analyzing people’s privacy preferences when it comes to granting permissions to different mobile apps. Our results suggest that, while people’s mobile app privacy preferences are diverse, a relatively small number of profiles can be identified that offer the promise of significantly simplifying the decisions mobile users have to make. Specifically, our results are based on the analysis of the settings of 4.8 million smartphone users of a mobile security and privacy platform. The platform relies on a rooted version of Android where users are allowed to choose between “granting”, “denying” or “requesting to be dynamically prompted” when it comes to granting 12 different Android permissions to the mobile apps they have downloaded.
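
    The profile-finding idea amounts to clustering users by their per-permission settings. A minimal sketch, assuming a grant/prompt/deny numeric encoding and a toy k-means over a handful of users; the paper's actual dataset and method are not reproduced here.

    ```python
    # Illustrative only: encode each user's per-permission settings as a
    # vector (grant=1, prompt=0, deny=-1) and run a tiny k-means to recover
    # a small number of privacy profiles.
    import random

    def kmeans(vectors, k, iters=20, seed=0):
        rng = random.Random(seed)
        centers = rng.sample(vectors, k)
        for _ in range(iters):
            # assign each user to the nearest profile center
            clusters = [[] for _ in range(k)]
            for v in vectors:
                i = min(range(k),
                        key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
                clusters[i].append(v)
            # recompute each center as the mean of its members
            for i, members in enumerate(clusters):
                if members:
                    centers[i] = [sum(col) / len(members) for col in zip(*members)]
        return centers

    SETTING = {"grant": 1.0, "prompt": 0.0, "deny": -1.0}
    users = [["grant", "grant", "deny"], ["grant", "grant", "prompt"],
             ["deny", "deny", "deny"], ["deny", "prompt", "deny"]]
    profiles = kmeans([[SETTING[s] for s in u] for u in users], k=2)
    ```

    On this toy data the two recovered centers separate the permissive users from the protective ones, which is the sense in which a few profiles can stand in for many individual settings.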

    Understanding People’s Place Naming Preferences in Location Sharing (CMU-CyLab-09-010)

    Many existing location sharing applications provide coordinate-based location estimates and display them on a map. However, people use a rich variety of terms to convey their location to others, such as “home,” “Starbucks,” or even “the bus stop near my apartment.” Our long-term goal is to create a system that can automatically generate useful place names based on real-time context. Towards this end, we present the results of a week-long study with 30 participants to understand people’s preferences for place naming. We propose a hierarchical classification of place naming methods. We further conclude that people’s place naming preferences are complex and dynamic, but fairly predictable using machine learning techniques. Two factors influence the way people name a place: their routines and their willingness to share location information. These findings have important implications for location sharing applications and other location-based services.
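
    The two factors the study identifies, routine and willingness to share, suggest a very small predictor. The sketch below is a hedged illustration of that shape only: the thresholds, scores, and category labels are invented for the example and are not taken from the study.

    ```python
    # Hypothetical predictor over the study's two factors. Inputs are
    # assumed normalized scores in [0, 1]; cutoffs are invented.

    def predict_naming_method(routine_score, willingness):
        """Pick a place-naming method from routine and sharing willingness."""
        if willingness < 0.3:
            return "vague"            # e.g. "somewhere downtown"
        if routine_score > 0.7:
            return "semantic"         # e.g. "home", "my office"
        return "business_name"        # e.g. "Starbucks"
    ```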

    What do they know about me? Contents and Concerns of Online Behavioral Profiles (CMU-CyLab-14-011)

    Data aggregators collect large amounts of information about individual users from multiple sources, and create detailed online behavioral profiles of individuals. Behavioral profiles benefit users by improving products and services. However, they have also raised privacy concerns. To increase transparency, some companies are allowing users to access their behavioral profiles. In this work, we investigated behavioral profiles of users by utilizing these access mechanisms. Using in-person interviews (n=8), we analyzed the data shown in the profiles and compared it with the data that companies have about users. We elicited surprises and concerns from users about the data in their profiles, and estimated the accuracy of profiles. We conducted an online survey (n=100) to confirm observed surprises and concerns. Our results show a large gap between data shown in profiles and data possessed by companies. We also find that a large number of profiles contain inaccuracies, with levels as high as 80%. Participants expressed several concerns, including the collection of sensitive data such as credit and health information, the extent of data collection, and how their data may be used.

    User-Controllable Learning of Location Privacy Policies with Gaussian Mixture Models

    With smartphones becoming increasingly commonplace, there has been a subsequent surge in applications that continuously track the location of users. However, serious privacy concerns arise as people start to widely adopt these applications. Users will need to maintain policies to determine under which circumstances to share their location. Specifying these policies, however, is a cumbersome task, suggesting that machine learning might be helpful. In this paper, we present a user-controllable method for learning location sharing policies. We use a classifier based on multivariate Gaussian mixtures that is suitably modified so as to restrict the evolution of the underlying policy to favor incremental and therefore human-understandable changes as new data arrives. We evaluate the model on real location-sharing policies collected from a live location-sharing social network, and we show that our method can learn policies in a user-controllable setting that are just as accurate as policies that do not evolve incrementally. Additionally, we highlight the strength of the generative modeling approach we take, by showing how our model easily extends to the semi-supervised setting.
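
    The incremental-evolution constraint described above can be illustrated with a damped parameter update: rather than jumping to the freshly fitted Gaussian component means, move each mean only a fraction of the way toward its new estimate, so the policy changes in small, human-reviewable steps. This is an assumed simplification for illustration, not the paper's actual algorithm.

    ```python
    # Sketch of incremental policy evolution (assumed details): blend old
    # and newly fitted mixture-component means with a small step size.

    def incremental_update(old_means, new_means, step=0.2):
        """Move each component mean a fraction `step` toward its new estimate."""
        return [[o + step * (n - o) for o, n in zip(om, nm)]
                for om, nm in zip(old_means, new_means)]
    ```

    With `step=0.2`, a component mean that would jump from (0, 0) to (1, 1) after refitting instead moves to (0.2, 0.2), keeping successive versions of the policy close enough for a user to audit.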

    Privacy as Part of the App Decision-Making Process (CMU-CyLab-13-003)

    Smartphones have unprecedented access to sensitive personal information. While users report having privacy concerns, they may not actively consider privacy while downloading apps from smartphone application marketplaces. Currently, Android users have only the Android permissions display, which appears after they have selected an app to download, to help them understand how applications access their information. We investigate how permissions and privacy could play a more active role in app-selection decisions. We designed a short "Privacy Facts" display, which we tested in a 20-participant lab study and a 366-participant online experiment. We found that by bringing privacy information to users while they are making the decision, and by presenting it in a clearer fashion, we could assist users in choosing applications that request fewer permissions.

    A Step Towards Usable Privacy Policy: Automatic Alignment of Privacy Statements

    With the rapid development of web-based services, concerns about user privacy have heightened. The privacy policies of online websites, which serve as a legal agreement between service providers and users, are not easy for people to understand and therefore offer an opportunity for natural language processing. In this paper, we consider a corpus of these policies, and tackle the problem of aligning or grouping segments of policies based on the privacy issues they address. A dataset of pairwise judgments from humans is used to evaluate two methods, one based on clustering and another based on a hidden Markov model. Our analysis suggests a five-point gap between system and median-human levels of agreement with a consensus annotation, of which half can be closed with bag of words representations and half requires more sophistication.
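
    The bag-of-words half of that gap can be sketched concretely: represent each policy segment as word counts and group two segments when their cosine similarity clears a threshold. This is a minimal illustration of the representation, not the paper's clustering or HMM method, and the threshold is an invented example value.

    ```python
    # Minimal bag-of-words alignment sketch: cosine similarity over word
    # counts, with an illustrative grouping threshold.
    from collections import Counter
    from math import sqrt

    def cosine(a, b):
        """Cosine similarity between two text segments' word-count vectors."""
        ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(ca[w] * cb[w] for w in ca)
        norm = (sqrt(sum(v * v for v in ca.values()))
                * sqrt(sum(v * v for v in cb.values())))
        return dot / norm if norm else 0.0

    def aligned(seg_a, seg_b, threshold=0.3):
        """Group two policy segments when they look lexically similar enough."""
        return cosine(seg_a, seg_b) >= threshold
    ```

    Two sharing-related statements with overlapping vocabulary align under this score, while a cookie statement and a location statement do not, which is roughly the behavior a purely lexical representation can and cannot deliver.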

    Modeling Users’ Mobile App Privacy Preferences: Restoring Usability in a Sea of Permission Settings

    In this paper, we investigate the feasibility of identifying a small set of privacy profiles as a way of helping users manage their mobile app privacy preferences. Our analysis does not limit itself to looking at permissions people feel comfortable granting to an app. Instead it relies on static code analysis to determine the purpose for which an app requests each of its permissions, distinguishing for instance between apps relying on particular permissions to deliver their core functionality and apps requesting these permissions to share information with advertising networks or social networks. Using privacy preferences that reflect people’s comfort with the purpose for which different apps request their permissions, we use clustering techniques to identify privacy profiles. A major contribution of this work is to show that, while people’s mobile app privacy preferences are diverse, it is possible to identify a small number of privacy profiles that collectively do a good job at capturing these diverse preferences.

    When Are Users Comfortable Sharing Locations with Advertisers? (CMU-CyLab-10-017)

    As smartphones and other mobile computing devices have increased in ubiquity, advertisers have begun to realize a more effective way of targeting users and a promising area for revenue growth: location-based advertising. This trend brings to bear new questions about whether or not users will adopt products involving this potentially invasive form of advertising and what sorts of protections should be given to users. Our real-world user study of 27 participants echoes earlier findings that users have significant privacy concerns regarding sharing their locations with advertisers. However, we examine these concerns in more detail and find that they are complex (e.g., relating not only to the quantity of ads, but also to the locations at which users receive them). With advanced privacy settings, users stated they would feel more comfortable and would share more information than with a simple opt-in/opt-out mechanism.

    Expecting the Unexpected: Understanding Mismatched Privacy Expectations Online

    Online privacy policies are the primary mechanism for informing users about the data practices of online services. In practice, users ignore privacy policies because policies are long and complex to read. Since users do not read privacy policies, their expectations regarding the data practices of online services may not match a service's actual data practices. Mismatches may result in users exposing themselves to unanticipated privacy risks, such as unknowingly sharing personal information with online services. One approach for mitigating privacy risks is to provide simplified privacy notices, in addition to privacy policies, that highlight unexpected data practices. However, identifying mismatches between user expectations and services' practices is challenging. We propose and validate a practical approach for studying Web users' privacy expectations and identifying mismatches with practices stated in privacy policies. We conducted a user study with 240 participants and 16 websites, and identified mismatches in collection, sharing, and deletion data practices. We discuss the implications of our results for the design of usable privacy notices, for service providers, and for public policy.