
    Implanting Life-Cycle Privacy Policies in a Context Database

    Ambient intelligence (AmI) environments continuously monitor surrounding individuals' context (e.g., location, activity, etc.) to make existing applications smarter, i.e., to make decisions without requiring user interaction. Such AmI smartness is tightly coupled to the quantity and quality of the available (past and present) context. However, context is often linked to an individual (e.g., the location of a given person) and as such falls under privacy directives. The goal of this paper is to enable the difficult wedding of privacy (automatically fulfilling users' privacy wishes) and smartness in the AmI. Interestingly, privacy requirements in the AmI differ from those of traditional environments, where systems usually manage durable data (e.g., medical or banking information), collected and updated trustfully either by the donor herself, her doctor, or an employee of her bank. In these traditional settings, proper information disclosure to third parties therefore constitutes the major privacy concern.

    Balancing smartness and privacy for the Ambient Intelligence

    Ambient Intelligence (AmI) will introduce large privacy risks. Stored context histories are vulnerable to unauthorized disclosure, so unlimited storage of privacy-sensitive context data is not desirable from the privacy viewpoint. However, high quality and quantity of data enable smartness for the AmI, while less and coarser data benefit privacy. This raises a very important problem for the AmI: how to balance the smartness and privacy requirements in an ambient world. In this article, we propose to give donors control over the life cycle of their context data, so that users themselves can balance their needs and wishes in terms of smartness and privacy.

    Data Degradation: Making Private Data Less Sensitive Over Time

    Trail disclosure is the leakage of privacy-sensitive data resulting from negligence, attack, or abusive scrutiny or usage of personal digital trails. To prevent trail disclosure, data degradation is proposed as an alternative to the limited retention principle. Data degradation is based on the assumption that long-lasting purposes can often be satisfied with a less accurate, and therefore less sensitive, version of the data. Data is progressively degraded so that it still serves the application's purposes while its accuracy, and thus its privacy sensitivity, decreases.
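
    As an illustration only (not the paper's actual mechanism), the sketch below degrades a location reading through progressively coarser accuracy levels as it ages; the level names, the value format, and the 30-day step length are assumptions made up for this example.

        from dataclasses import dataclass

        # Hypothetical generalization hierarchy for a location value:
        # each step is less accurate and therefore less privacy-sensitive.
        DEGRADATION_STEPS = [
            ("exact",  lambda v: v),                             # full address and coordinates
            ("street", lambda v: v.rsplit(",", 2)[0]),           # keep the street part only
            ("city",   lambda v: v.rsplit(",", 1)[-1].strip()),  # keep the city only
            ("gone",   lambda v: None),                          # complete disappearance
        ]

        @dataclass
        class Reading:
            value: str        # e.g. "12 Main St, 48.80N 2.13E, Versailles"
            age_days: int     # time elapsed since collection

        def degrade(reading: Reading, step_length_days: int = 30):
            """Return the degradation level and value matching the reading's age.

            Every step_length_days, the value moves one step down the hierarchy
            until it disappears entirely (illustrative schedule, not the paper's).
            """
            step = min(reading.age_days // step_length_days, len(DEGRADATION_STEPS) - 1)
            name, generalize = DEGRADATION_STEPS[step]
            return name, generalize(reading.value)

        # A 70-day-old reading has already been generalized to city level.
        print(degrade(Reading("12 Main St, 48.80N 2.13E, Versailles", age_days=70)))
        # -> ('city', 'Versailles')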

    SWYSWYK: A Privacy-by-Design Paradigm for Personal Information Management Systems

    Pushed by recent legislation and smart disclosure initiatives, Personal Information Management Systems (PIMS) are emerging and hold the promise of giving individuals back control over their data. However, this shift leaves the privacy and security issues in the user's hands, a role that few people can properly endorse. Indeed, existing sharing models are difficult to administer, and securing their implementation in the user's computing environment is an unresolved challenge. This paper advocates the definition of a Privacy-by-Design sharing paradigm, called SWYSWYK (Share What You See with Who You Know), dedicated to the PIMS context. This paradigm allows each user to physically visualize the net effects of sharing rules on her PIMS and automatically provides tangible guarantees about the enforcement of the defined sharing policies. Finally, we demonstrate the practicality of the approach through a performance evaluation conducted on a real PIMS platform.
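
    To make the "Share What You See with Who You Know" intuition concrete, here is a minimal sketch in which sharing rules grant documents the owner has seen to contacts she knows; the object model, rule format, and predicates are toy assumptions for the example, not SWYSWYK's actual design.

        # Toy PIMS state (illustrative): documents the owner has "seen"
        # and contacts she "knows", plus declarative sharing rules.
        pims = {
            "documents": {"d1": {"tag": "photo",   "seen": True},
                          "d2": {"tag": "payslip", "seen": True},
                          "d3": {"tag": "photo",   "seen": False}},
            "contacts":  {"alice": {"group": "family"},
                          "bob":   {"group": "colleagues"}},
        }

        # A rule pairs a subject predicate (who I know) with an object predicate (what I see).
        rules = [
            {"who":  lambda c: c["group"] == "family",
             "what": lambda d: d["seen"] and d["tag"] == "photo"},
        ]

        def net_effect(pims, rules):
            """Materialize which contact may access which document, so the owner
            can visualize the net effect of her rules before they are enforced."""
            grants = []
            for cname, contact in pims["contacts"].items():
                for dname, doc in pims["documents"].items():
                    if any(r["who"](contact) and r["what"](doc) for r in rules):
                        grants.append((cname, dname))
            return grants

        print(net_effect(pims, rules))   # [('alice', 'd1')]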

    A Manifest-Based Framework for Organizing the Management of Personal Data at the Edge of the Network

    Smart disclosure initiatives and new regulations such as the GDPR allow individuals to get back control over their data by gathering their entire digital life in a Personal Data Management System (PDMS). Multiple PDMS architectures exist, from centralized web hosting solutions to self-hosting of data at home. These solutions differ strongly in their ability to preserve data privacy and to perform collective computations crossing the data of multiple individuals (e.g., epidemiological or social studies), but none of them satisfies both objectives. The emergence of Trusted Execution Environments (TEE) changes the game. We propose a solution called Trusted PDMS, combining the TEE and PDMS properties to manage the data of each individual, and a manifest-based framework to securely execute collective computations on top of them. We demonstrate the practicality of the solution through a real case study being conducted over 10,000 patients in the healthcare field.
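
    To give an idea of what declaring a collective computation could look like, the sketch below shows an illustrative manifest and a participant-side acceptance check; the field names, allowed aggregates, and threshold are assumptions for this example, not the framework's actual schema.

        from dataclasses import dataclass, field

        @dataclass
        class Manifest:
            """Illustrative declaration of a collective computation over many Trusted PDMSs."""
            purpose: str            # why the data is requested (e.g., an epidemiological study)
            requested_data: list    # attributes each PDMS is asked to expose
            aggregate: str          # the only operation allowed on the raw data
            min_participants: int   # refuse to run below this cohort size
            recipients: list = field(default_factory=list)  # who receives the result

        def accept(manifest: Manifest, enrolled: int) -> bool:
            """Participant-side check: contribute only if the computation is
            aggregate-only and the cohort is large enough to dilute any individual."""
            return (manifest.aggregate in {"count", "sum", "avg"}
                    and enrolled >= manifest.min_participants)

        study = Manifest(purpose="epidemiological study",
                         requested_data=["age_band", "diagnosis_code"],
                         aggregate="count",
                         min_participants=1000,
                         recipients=["research_institute"])

        print(accept(study, enrolled=10_000))   # True: safe to contribute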

    Gestion sécurisée de données personnelles

    In this article, we present some of the SMIS team's works that are most closely related to the new Secure Personal Web model. We first introduce, in Section 2, a family of architectures radically different from that of today's Web, in which the individual exercises control over her personal data from secure personal components located at the edges of the network, with strong guarantees that her directives cannot be circumvented. We then present, in Section 3, two types of research problems underlying these architectures: on the one hand, the management of data embedded in secure hardware, and on the other hand, secure distributed computations involving large populations of personal servers. We illustrate this approach in Section 4 with an example of a field experiment, the « Dossier Médico-Social Partagé » (DMSP), which exploits these concepts. Finally, we conclude in Section 5 with a discussion of the adoptability of this approach.

    Data Leakage Mitigation of User-Defined Functions on Secure Personal Data Management Systems

    Personal Data Management Systems (PDMSs) are arriving at a rapid pace, providing individuals with appropriate tools to collect, manage, and share their personal data. At the same time, the emergence of Trusted Execution Environments (TEEs) opens new perspectives for solving the critical and conflicting challenge of securing users' data while enabling a rich ecosystem of data-driven applications. In this paper, we propose a PDMS architecture leveraging TEEs as a basis for security. Unlike existing solutions, our architecture allows for extensive data processing through the integration of arbitrary user-defined functions, albeit untrusted by the data owner. In this context, we focus on aggregate computations over large sets of database objects and provide a first study to mitigate the very large potential data leakage. We introduce the necessary security building blocks and show that an upper bound on data leakage can be guaranteed to the PDMS user. We then propose practical evaluation strategies ensuring that the potential data leakage remains minimal with a reasonable performance overhead. Finally, we validate our proposal with an Intel SGX-based PDMS implementation on real data sets.
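
    The sketch below conveys the flavor of bounding what an untrusted aggregate UDF can leak: the UDF only ever sees batches of objects and may only release one partial result per batch, so the number of values it can exfiltrate is capped. The batch size, budget, and function shapes are assumptions for the example, not the paper's actual building blocks.

        def bounded_aggregate(objects, untrusted_udf, combine, batch_size=100, leak_budget=10):
            """Evaluate an untrusted aggregation UDF while capping its output channel.

            The UDF never sees the whole collection at once: it receives batches and
            returns one partial value per batch; the number of released partial values
            (the potential leakage) is bounded by leak_budget.
            """
            partials = []
            for i in range(0, len(objects), batch_size):
                if len(partials) >= leak_budget:
                    raise RuntimeError("leakage budget exhausted: no further partial results disclosed")
                partials.append(untrusted_udf(objects[i:i + batch_size]))
            return combine(partials)   # trusted combination of the partial results

        # Example: an average over 1,000 readings releases at most 10 partial (sum, count) pairs.
        readings = list(range(1000))
        udf = lambda batch: (sum(batch), len(batch))                        # untrusted partial aggregate
        combine = lambda ps: sum(s for s, _ in ps) / sum(n for _, n in ps)  # trusted final step
        print(bounded_aggregate(readings, udf, combine))                    # 499.5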

    SGBD embarqué dans une puce : retour d'expérience

    The smart card is today the most widespread secure portable device. Four years ago, we laid the foundations of a study on embedding database techniques in a smart card. This study led to the definition of design principles for what we then called PicoDBMS, a complete database management system (DBMS) embedded in a smart card. Since then, thanks to hardware progress and the joint efforts of our team and our industrial partner, the principles defined initially have given rise to a complete prototype running on an experimental smart card platform. This article reconsiders the formulation of the initial problem in light of hardware and application evolutions. It then introduces a benchmark dedicated to databases embedded in chips and presents a detailed performance analysis of our prototype. Finally, it outlines research perspectives in the field of data management in secure chips.

    The Life-Cycle Policy model

    Our daily activities leave digital trails in an increasing number of databases (commercial web sites, Internet service providers, search engines, location tracking systems, etc.). Personal digital trails are commonly exposed to accidental disclosures resulting from negligence or piracy, and to ill-intentioned scrutiny and abusive usage fostered by fuzzy privacy policies. No one is sheltered, because a single event (e.g., applying for a job or a credit) can suddenly make our history a precious asset. By definition, access control fails to prevent trail disclosures, which motivates the integration of the Limited Data Retention principle into legislation protecting data privacy. Under this principle, data is withdrawn from a database after a predefined time period. However, this principle is difficult to apply in practice, leading to useless sensitive information being retained in databases for years. In this paper, we propose a simple and practical data degradation model in which sensitive data undergoes a progressive and irreversible degradation, from an accurate state at collection time, through intermediate but still informative degraded states, up to complete disappearance when the data becomes useless. The benefits of data degradation are twofold: (i) by reducing the amount of accurate data, the privacy offence resulting from a trail disclosure is drastically reduced, and (ii) degrading the data in line with the application purposes offers a new compromise between privacy preservation and application reach. We introduce a data degradation model, analyze its impact on core database techniques like storage, indexing, and transaction management, and propose degradation-aware techniques.
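
    Read concretely, a life-cycle policy can be seen as a sequence of (degraded state, retention period) pairs attached to an attribute; the sketch below applies such a policy to stored tuples, generalizing each value to the state matching its age and dropping it once the last period has elapsed. The attribute, states, and durations are invented for the example and are not the paper's actual policy language.

        from datetime import datetime, timedelta

        # Illustrative life-cycle policy for a "visited place" attribute:
        # (state name, generalization function, retention period in that state).
        LCP = [
            ("exact address", lambda v: v["address"], timedelta(days=7)),
            ("city",          lambda v: v["city"],    timedelta(days=90)),
            ("region",        lambda v: v["region"],  timedelta(days=365)),
            # once the last period has elapsed the value disappears entirely
        ]

        def apply_lcp(tuples, now=None):
            """Return the degraded view the database should retain.

            Each tuple is {"collected_at": datetime, "value": {...}}; it is kept in
            the state matching its age, or dropped when every period has expired.
            """
            now = now or datetime.now()
            retained = []
            for t in tuples:
                age, deadline = now - t["collected_at"], timedelta(0)
                for state, generalize, period in LCP:
                    deadline += period
                    if age <= deadline:
                        retained.append({"state": state, "value": generalize(t["value"])})
                        break
                # no break: the tuple outlived every retention period and is dropped
            return retained

        store = [{"collected_at": datetime.now() - timedelta(days=30),
                  "value": {"address": "12 Main St", "city": "Versailles", "region": "IdF"}}]
        print(apply_lcp(store))   # [{'state': 'city', 'value': 'Versailles'}]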