
    Empowering users to control their privacy in context-aware system through interactive consent

    Context-aware systems adapt their behaviour based on the context a user is in. Since context is potentially privacy-sensitive information, users should be empowered to control how much of their context they are willing to share, under what conditions, and for what purpose. We propose an interactive consent mechanism that allows this. It is interactive in the sense that users are asked for consent when a request for their context information is received. Our interactive consent mechanism complements a more traditional pre-configuration approach. We describe the architecture and implementation of our interactive consent mechanism, together with a use case.
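
    As a hedged illustration of the request-time flow described above, the sketch below layers an interactive prompt over a table of pre-configured rules. All names (ContextRequest, handle_request, the rule table) are assumptions for illustration, not the paper's implementation.

    ```python
    # Hypothetical sketch: interactive consent on top of pre-configured rules.
    from dataclasses import dataclass
    from enum import Enum

    class Decision(Enum):
        ALLOW = "allow"
        DENY = "deny"

    @dataclass
    class ContextRequest:
        requester: str   # who wants the context data
        data_type: str   # e.g. "location", "activity"
        purpose: str     # declared purpose of the request

    def handle_request(req: ContextRequest, rules: dict) -> bool:
        """Apply a pre-configured rule if one exists; otherwise ask the user."""
        rule = rules.get((req.requester, req.data_type))
        if rule is not None:
            return rule is Decision.ALLOW
        # Interactive consent: prompt the user when the request arrives.
        answer = input(f"{req.requester} requests your {req.data_type} "
                       f"for '{req.purpose}'. Share? [y/n] ")
        return answer.strip().lower() == "y"
    ```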

    Ubiquitous systems and the family: Thoughts about the networked home

    Developments in ubiquitous and pervasive computing herald a future in which computation is embedded into our daily lives. Such a vision raises important questions about how people, especially families, will be able to engage with and trust such systems whilst maintaining privacy and individual boundaries. To begin to address such issues, we have recently conducted a wide-reaching study eliciting trust, privacy and identity concerns about pervasive computing. Over three hundred UK citizens participated in 38 focus groups. The groups were shown Videotaped Activity Scenarios [11] depicting pervasive or ubiquitous computing applications in a number of contexts, including shopping. The data raises a number of important issues from a family perspective in terms of access, control, responsibility, benefit and complexity. The findings also highlight the conflict between increased functionality and the subtle social interactions that sustain family bonds. We present a Pre-Concept Evaluation Tool (PRECET) for use in the design and implementation of ubicomp systems.

    A realisation of ethical concerns with smartphone personal health monitoring apps

    The pervasiveness of smartphones has facilitated a new way in which owners of devices can monitor their health using applications (apps) installed on their smartphones. Smartphone personal health monitoring (SPHM) collects and stores health-related data of the user either locally or in a third-party storage mechanism. SPHM apps are also capable of giving feedback to the user in response to conditions provided to the app, thereby empowering the user to actively make decisions to adjust their lifestyle. Regardless of the benefits that this new innovative technology offers its users, there are ethical concerns for the user of SPHM apps, and these concerns are in some way connected to the features of SPHM apps. From a literature survey, this paper attempts to identify ethical issues with personal health monitoring apps on smartphones, viewed in light of the general ethics of ubiquitous computing. The paper argues that there are ethical concerns with the use of SPHM apps regardless of the benefits the technology offers, because the ubiquity of SPHM apps leaves them open to known and emerging ethical concerns. The paper then proposes the need for further empirical research to validate this claim.

    Keeping ubiquitous computing to yourself: a practical model for user control of privacy

    As with all the major advances in information and communication technology, ubiquitous computing (ubicomp) introduces new risks to individual privacy. Our analysis of privacy protection in ubicomp has identified four layers through which users must navigate: the regulatory regime they are currently in, the type of ubicomp service required, the type of data being disclosed, and their personal privacy policy. We illustrate and compare the protection afforded by regulation and by some major models for user control of privacy. We identify the shortcomings of each and propose a model which allows user control of privacy levels in a ubicomp environment. Our model balances the user's privacy preferences against the applicable privacy regulations and incorporates five types of user-controlled 'noise' to protect location privacy by introducing ambiguities. We also incorporate an economics-based approach to assist users in balancing the trade-offs between giving up privacy and receiving ubicomp services. We conclude with a scenario and heuristic evaluation which suggests that regulation can have both positive and negative influences on privacy interfaces in ubicomp and that social translucence is an important heuristic for ubicomp privacy interface functionality.
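
    The abstract does not spell out the five noise types, so the sketch below shows just one plausible form of user-controlled ambiguity: snapping a precise position to a coarser grid cell, with the cell size acting as the user's privacy knob. Everything here is illustrative, not the paper's model.

    ```python
    # Illustrative "noise" mechanism: disclose only a coarse grid cell.
    import math

    def cloak_location(lat: float, lon: float, cell_deg: float) -> tuple:
        """Return the centre of the grid cell containing (lat, lon).

        cell_deg: cell edge length in degrees; larger cells mean more
        ambiguity, hence more privacy at the cost of service utility.
        """
        cell_lat = math.floor(lat / cell_deg) * cell_deg + cell_deg / 2
        cell_lon = math.floor(lon / cell_deg) * cell_deg + cell_deg / 2
        return (cell_lat, cell_lon)

    # A strict preference might pick a 0.1-degree (~11 km) cell:
    print(cloak_location(51.4545, -2.5879, 0.1))  # -> (51.45, -2.55)
    ```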

    Data degradation to enhance privacy for the Ambient Intelligence

    Increasing research in ubiquitous computing techniques towards the development of an Ambient Intelligence raises issues regarding privacy. To gain the data needed to enable applications in this Ambient Intelligence to offer smart services to users, sensors will monitor users' behaviour to fill personal context histories. Those context histories will be stored on database/information systems which we consider honest: they can be trusted now, but might be subject to attacks in the future. Making this assumption implies that protecting context histories by means of access control might not be enough. To reduce the impact of possible attacks, we propose to use limited retention techniques. In our approach, we present applications with a degraded set of data carrying an attached retention delay, which matches both application requirements and users' privacy wishes. Data degradation can be twofold: the accuracy of context data can be lowered such that the less privacy-sensitive parts are retained, and context data can be transformed such that only particular abilities for applications remain available. Retention periods can be specified to trigger irreversible removal of the context data from the system.
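
    A minimal sketch of the twofold degradation idea, assuming a simple granularity ladder (room, building, city) and one fixed delay per step; the actual model attaches retention delays negotiated between application requirements and user wishes, which this toy code does not capture.

    ```python
    # Sketch: progressive, irreversible degradation of one context item.
    import time

    # Progressively coarser representations, ending in removal (assumed ladder).
    GRANULARITIES = ["room", "building", "city", "removed"]

    class DegradableItem:
        def __init__(self, values: dict, step_seconds: float):
            # e.g. {"room": "B.17", "building": "B", "city": "Enschede"}
            self.values = values
            self.created = time.time()
            self.step = step_seconds  # delay before each degradation step

        def current(self):
            """Return the most accurate value still retained, or None."""
            steps = int((time.time() - self.created) / self.step)
            level = min(steps, len(GRANULARITIES) - 1)
            # Irreversibly drop everything more accurate than this level.
            for g in GRANULARITIES[:level]:
                self.values.pop(g, None)
            return self.values.get(GRANULARITIES[level])  # None once removed
    ```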

    Expressing Privacy Preferences in terms of Invasiveness

    Dynamic context-aware systems need highly flexible privacy protection mechanisms. We describe an extension to an existing RBAC-based mechanism that utilises a dynamic measure of invasiveness to determine whether contextual information should be released.
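
    A small sketch of the core check, with invented invasiveness scores per context attribute and per-role thresholds; the paper's measure is dynamic, whereas these numbers are static placeholders chosen purely for illustration.

    ```python
    # Sketch: RBAC release check extended with an invasiveness comparison.

    # Example invasiveness score per context attribute (assumed values).
    INVASIVENESS = {"city": 0.2, "building": 0.5, "room": 0.8, "health": 0.9}

    # Maximum invasiveness each role may receive (assumed values).
    ROLE_THRESHOLD = {"colleague": 0.5, "family": 0.8, "emergency": 1.0}

    def may_release(role: str, attribute: str) -> bool:
        """Release only if the attribute's invasiveness fits the role's budget."""
        return INVASIVENESS.get(attribute, 1.0) <= ROLE_THRESHOLD.get(role, 0.0)

    assert may_release("family", "room")         # 0.8 <= 0.8
    assert not may_release("colleague", "room")  # 0.8 >  0.5
    ```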

    A privacy awareness system for ubiquitous computing environments

    Protecting personal privacy is going to be a prime concern for the deployment of ubiquitous computing systems in the real world. With daunting Orwellian visions looming, it is easy to conclude that tamper-proof technical protection mechanisms such as strong anonymization and encryption are the only solutions to such privacy threats. However, we argue that such perfect protection for personal information will hardly be achievable, and propose instead to build systems that help others respect our personal privacy, enable us to be aware of our own privacy, and to rely on social and legal norms to protect us from the few wrongdoers. We introduce a privacy awareness system targeted at ubiquitous computing environments that allows data collectors to both announce and implement data usage policies, as well as providing data subjects with technical means to keep track of their personal information as it is stored, used, and possibly removed from the system. Even though such a system cannot guarantee our privacy, we believe that it can create a sense of accountability in a world of invisible services that we will be comfortable living in and interacting with.
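
    The announce-and-track idea might look roughly like the sketch below, where collectors publish machine-readable usage policies and a data subject keeps an audit trail of who holds their data; all classes and fields are assumed for illustration and do not mirror the system's actual protocol.

    ```python
    # Sketch: announced usage policies plus a subject-side audit trail.
    from dataclasses import dataclass, field

    @dataclass
    class UsagePolicy:
        collector: str
        data_types: list      # what is collected, e.g. ["location"]
        purpose: str          # declared use
        retention_days: int   # promised deletion deadline

    @dataclass
    class PrivacyLog:
        """Data subject's record of collectors holding their data."""
        entries: list = field(default_factory=list)

        def record(self, policy: UsagePolicy):
            self.entries.append(policy)

        def held_by(self):
            return sorted({p.collector for p in self.entries})

    log = PrivacyLog()
    log.record(UsagePolicy("cafe-sensor-7", ["location"], "seating analytics", 30))
    print(log.held_by())  # ['cafe-sensor-7']
    ```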

    Exploring personalized life cycle policies

    Ambient Intelligence imposes many challenges in protecting people's privacy. Storing privacy-sensitive data permanently will inevitably result in privacy violations. Limited retention techniques might prove useful in order to limit the risks of unwanted and irreversible disclosure of privacy-sensitive data. To overcome the rigidness of simple limited retention policies, life-cycle policies more precisely describe when and how data could first be degraded and finally be destroyed. This allows users themselves to determine an adequate compromise between privacy and data retention. However, implementing and enforcing these policies is a difficult problem, as traditional databases are not designed or optimized for deleting data. In this report, we recall the formerly introduced life-cycle policy model and the already developed techniques for handling a single collective policy for all data in a relational database management system. We identify the problems raised by loosening this single-policy constraint and propose preliminary techniques for concurrently handling multiple policies in one data store. The main technical consequence for the storage structure is that, when multiple policies are allowed, the degradation order of tuples no longer always matches the insert order. Apart from the technical aspects, we show that personalizing the policies introduces some inference breaches which have to be further investigated. To make such an investigation possible, we introduce a metric for privacy, which makes it possible to compare the amount of privacy provided with the amount of privacy required by the policy.
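
    The observation that degradation order can diverge from insert order suggests ordering pending steps by due time rather than by arrival, for instance with a priority queue, as in the sketch below; the per-tuple policies and scheduling structure here are assumptions, not the report's storage design.

    ```python
    # Sketch: schedule degradation steps of tuples with different policies.
    import heapq
    import itertools

    counter = itertools.count()  # tie-breaker for equal due times

    def schedule(queue, insert_time, policy_delays, tuple_id):
        """Push one degradation event per step of the tuple's policy."""
        for step, delay in enumerate(policy_delays):
            heapq.heappush(queue, (insert_time + delay, next(counter),
                                   tuple_id, step))

    queue = []
    schedule(queue, insert_time=0, policy_delays=[10, 50], tuple_id="A")  # slow
    schedule(queue, insert_time=5, policy_delays=[2, 4], tuple_id="B")    # fast

    # Although A was inserted first, B degrades first under its policy:
    while queue:
        due, _, tid, step = heapq.heappop(queue)
        print(f"t={due}: degrade tuple {tid}, step {step}")
    # t=7: B step 0; t=9: B step 1; t=10: A step 0; t=50: A step 1
    ```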