
    Exploring personalized life cycle policies

    Ambient Intelligence imposes many challenges in protecting people's privacy. Storing privacy-sensitive data permanently will inevitably result in privacy violations. Limited retention techniques might prove useful in limiting the risks of unwanted and irreversible disclosure of privacy-sensitive data. To overcome the rigidity of simple limited retention policies, life-cycle policies describe more precisely when and how data may first be degraded and finally destroyed, allowing users themselves to strike an adequate compromise between privacy and data retention. However, implementing and enforcing these policies is a difficult problem, as traditional databases are not designed or optimized for deleting data. In this report, we recall the previously introduced life-cycle policy model and the techniques already developed for handling a single collective policy for all data in a relational database management system. We identify the problems raised by loosening this single-policy constraint and propose preliminary techniques for concurrently handling multiple policies in one data store. The main technical consequence for the storage structure is that, when multiple policies are allowed, the degradation order of tuples no longer always equals the insert order. Apart from the technical aspects, we show that personalizing the policies introduces inference breaches which have to be investigated further. To make such an investigation possible, we introduce a privacy metric which makes it possible to compare the amount of privacy provided with the amount of privacy required by the policy.
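
    A minimal sketch of how such a life-cycle policy might look in practice, assuming a per-attribute schedule of degradation steps; the schedule, attribute names, and coarsening functions below are illustrative, not the report's actual policy model:

        import time

        # Illustrative degradation schedule: after each deadline (seconds since
        # insertion) the attribute is rewritten by a coarsening function; a step
        # whose transform is None destroys the value entirely.
        DEGRADATION_STEPS = {
            "location": [
                (3600,   lambda v: v.rsplit(",", 1)[0]),  # after 1 h: drop street part
                (86400,  lambda v: "city only"),          # after 1 day: city granularity
                (604800, None),                           # after 1 week: destroy
            ],
        }

        def degrade(record, now=None):
            """Return a copy of record with attributes degraded by age."""
            age = (now or time.time()) - record["inserted_at"]
            out = dict(record)
            for attr, steps in DEGRADATION_STEPS.items():
                for deadline, transform in steps:
                    if age < deadline or attr not in out:
                        continue
                    if transform is None:
                        del out[attr]                     # final step: destroy
                    else:
                        out[attr] = transform(out[attr])
            return out

    A background task would periodically rewrite stored tuples with degrade(); with a single collective policy the rewrite order matches the insert order, which is exactly the property the report notes is lost once multiple policies coexist.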

    Energy efficient privacy preserved data gathering in wireless sensor networks having multiple sinks

    Wireless sensor networks (WSNs) generally have a many-to-one structure in which event information flows from sensors to a unique sink. In recent WSN applications, many-to-many structures have evolved due to the need to convey collected event information to multiple sinks at the same time. This study proposes an anonymity method based on k-anonymity for preventing record disclosure of collected event information in WSNs. The proposed method takes the anonymity requirements of multiple sinks into consideration by providing a different level of privacy for each destination sink. Attributes that may identify an event owner are generalized or encrypted in order to meet the different anonymity requirements of the sinks. Privacy-guaranteed event information can then be multicast to all sinks instead of being sent to each sink one by one. Since minimizing energy consumption is an important design criterion for WSNs, multicasting the same event information to multiple sinks reduces energy consumption.
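
    A rough illustration of the per-sink idea, assuming each sink declares which event attributes must be generalized or encrypted; the policy table, attribute names, and generalization rule are invented for the example:

        # Hypothetical per-sink anonymity requirements: attribute -> action.
        SINK_POLICIES = {
            "sink_a": {"age": "generalize", "owner_id": "encrypt"},
            "sink_b": {"age": "generalize", "owner_id": "generalize"},
        }

        def generalize(attr, value):
            if attr == "age":              # coarsen to a 10-year interval
                lo = (value // 10) * 10
                return f"{lo}-{lo + 9}"
            return "*"                     # fallback: full suppression

        def anonymize_event(event, policy, encrypt):
            out = {}
            for attr, value in event.items():
                action = policy.get(attr)
                if action == "generalize":
                    out[attr] = generalize(attr, value)
                elif action == "encrypt":
                    out[attr] = encrypt(value)
                else:
                    out[attr] = value
            return out

        masked = anonymize_event({"age": 27, "owner_id": "n17", "temp": 21.5},
                                 SINK_POLICIES["sink_a"],
                                 encrypt=lambda v: f"enc({v})")  # stand-in cipher

    One anonymized message per distinct requirement can then be multicast to all sinks sharing that requirement, rather than unicasting a separately anonymized copy to each sink.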

    Anonymizing cybersecurity data in critical infrastructures: the CIPSEC approach

    Cybersecurity logs are continuously generated by network devices to describe security incidents. With modern computing technology, such logs can be exploited to counter threats in real time or before they gain a foothold. To improve these capabilities, logs are usually shared with external entities. However, since cybersecurity logs might contain sensitive data, serious privacy concerns arise, even more so when critical infrastructures (CIs), handling strategic data, are involved. We propose a tool that protects privacy by anonymizing the sensitive data included in cybersecurity logs. We implement anonymization mechanisms that are grouped through the definition of a privacy policy. We adapt this approach to the context of the EU project CIPSEC, which builds a unified security framework to orchestrate security products and thus offer better protection to a group of CIs. Since this framework collects and processes security-related data from multiple devices of the CIs, our work is devoted to protecting privacy by integrating our anonymization approach.
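
    A toy version of the idea of grouping anonymization mechanisms under a privacy policy, here masking source IPs and pseudonymizing usernames in log records; the field names and mechanisms are examples, not the CIPSEC tool's actual configuration:

        import hashlib
        import re

        def mask_ip(ip):
            """Keep the /24 prefix, zero the host part."""
            return re.sub(r"(\d+\.\d+\.\d+)\.\d+", r"\1.0", ip)

        def pseudonymize(value, salt=b"site-secret"):
            """Salted hash so the same user maps to the same pseudonym."""
            return hashlib.sha256(salt + value.encode()).hexdigest()[:12]

        # The privacy policy groups mechanisms by sensitive field.
        POLICY = {"src_ip": mask_ip, "user": pseudonymize}

        def anonymize_log(record):
            return {k: POLICY.get(k, lambda v: v)(v) for k, v in record.items()}

        print(anonymize_log({"src_ip": "10.1.2.37", "user": "alice", "event": "login"}))
        # {'src_ip': '10.1.2.0', 'user': '...', 'event': 'login'}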

    Synthetic sequence generator for recommender systems - memory biased random walk on sequence multilayer network

    Personalized recommender systems rely on each user's personal usage data in the system in order to assist in decision making. However, privacy policies protecting users' rights prevent these highly personal data from being made publicly available to a wider research audience. In this work, we propose a memory biased random walk model on a multilayer sequence network as a generator of synthetic sequential data for recommender systems. We demonstrate the applicability of the synthetic data in training recommender system models for cases in which privacy policies restrict clickstream publishing.
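
    One concrete reading of a memory biased random walk as a sequence generator: the next item is drawn either from the empirical transition counts of the current item or, with some probability, revisited from the walker's own recent memory. This sketch assumes that reading and a single-layer transition network; the paper's actual model operates on a multilayer sequence network:

        import random
        from collections import defaultdict

        def build_transitions(sequences):
            """Empirical item-to-item transition counts from observed sessions."""
            trans = defaultdict(lambda: defaultdict(int))
            for seq in sequences:
                for a, b in zip(seq, seq[1:]):
                    trans[a][b] += 1
            return trans

        def synthetic_walk(trans, start, length, memory_bias=0.3, memory_size=5):
            walk, memory = [start], [start]
            for _ in range(length - 1):
                if memory and random.random() < memory_bias:
                    nxt = random.choice(memory)        # biased return to memory
                else:
                    nbrs = trans.get(walk[-1])
                    if not nbrs:
                        break                          # dead end in the network
                    items, weights = zip(*nbrs.items())
                    nxt = random.choices(items, weights=weights)[0]
                walk.append(nxt)
                memory = (memory + [nxt])[-memory_size:]
            return walk

    The memory term reproduces the re-consumption behaviour typical of real clickstreams, which purely Markovian walks miss.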

    Towards trajectory anonymization: a generalization-based approach

    Trajectory datasets are becoming popular due to the massive usage of GPS and location-based services. In this paper, we address privacy issues regarding the identification of individuals in static trajectory datasets. We first adapt the notion of k-anonymity to trajectories and propose a novel generalization-based approach for the anonymization of trajectories. We further show that releasing anonymized trajectories may still leak private information. We therefore propose a randomization-based reconstruction algorithm for releasing anonymized trajectory data and also show how the underlying techniques can be adapted to other anonymity standards. Experimental results on real and synthetic trajectory datasets show the effectiveness of the proposed techniques.
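
    A simplified picture of generalization-based trajectory k-anonymity: cluster trajectories into groups of at least k and replace each point with a region covering the group at that time step. The grouping into equal-length trajectories and the bounding-box representation here are deliberately naive stand-ins for the paper's approach:

        def bounding_box(points):
            xs, ys = zip(*points)
            return (min(xs), min(ys)), (max(xs), max(ys))

        def generalize_group(trajectories):
            """Replace each time step of a group of equal-length trajectories
            with the minimum bounding box of the group's points at that step."""
            return [bounding_box(step_points) for step_points in zip(*trajectories)]

        group = [
            [(0, 0), (1, 1), (2, 2)],
            [(0, 1), (1, 2), (3, 2)],
        ]  # a group of k=2 trajectories
        print(generalize_group(group))
        # [((0, 0), (0, 1)), ((1, 1), (1, 2)), ((2, 2), (3, 2))]

    Every trajectory in a group then maps to the same generalized sequence of regions, so an adversary matching a victim's known locations finds at least k candidates.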

    Individual Privacy vs Population Privacy: Learning to Attack Anonymization

    Over the last decade, great strides have been made in developing techniques to compute functions privately. In particular, Differential Privacy gives strong promises about the conclusions that can be drawn about an individual. In contrast, various syntactic methods for providing privacy (criteria such as k-anonymity and l-diversity) have been criticized for still allowing an individual's private information to be inferred. In this report, we consider the ability of an attacker to use data meeting privacy definitions to build an accurate classifier. We demonstrate that even under Differential Privacy, such classifiers can be used to accurately infer "private" attributes in realistic data. We compare this to similar approaches for inference-based attacks on other forms of anonymized data. We place these attacks on the same scale and observe that the accuracy of inferring private attributes from Differentially Private data and from l-diverse data can be quite similar.
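
    For syntactic anonymization, the core of such an inference attack can be phrased very simply: learn, from the released table itself, the distribution of the sensitive attribute within each quasi-identifier group, and predict the majority value for any individual matching that group. A minimal sketch of that baseline, with invented column values (the Differential Privacy case in the paper instead trains a classifier on noisy outputs):

        from collections import Counter, defaultdict

        def fit_attack(released_rows, quasi_ids, sensitive):
            """Learn P(sensitive | quasi-identifier group) from an anonymized release."""
            groups = defaultdict(Counter)
            for row in released_rows:
                key = tuple(row[q] for q in quasi_ids)
                groups[key][row[sensitive]] += 1
            return groups

        def infer(groups, victim, quasi_ids):
            key = tuple(victim[q] for q in quasi_ids)
            counts = groups.get(key)
            return counts.most_common(1)[0][0] if counts else None

        release = [
            {"zip": "476**", "age": "2*", "disease": "flu"},
            {"zip": "476**", "age": "2*", "disease": "flu"},
            {"zip": "476**", "age": "2*", "disease": "cancer"},
        ]
        model = fit_attack(release, ["zip", "age"], "disease")
        print(infer(model, {"zip": "476**", "age": "2*"}, ["zip", "age"]))  # flu

    Even though the release satisfies 2-diversity here, the attacker's guess is correct for two of the three members of the group, which is exactly the kind of inference accuracy the paper measures.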

    PRIVAS - automatic anonymization of databases

    Currently, given the pace of technological evolution, data and information are increasingly valuable in the most diverse areas and for the most varied purposes. Although the information and knowledge discovered by exploring and using data can be very valuable in many applications, people have become increasingly concerned about the other side: the privacy threats that these processes bring. The Privas system, described in this paper, helps the data publisher pre-process a database before publishing it. To this end, a DSL is used to describe the database schema, identify the sensitive data, and state the desired privacy level. The Privas processor then interprets the DSL program to automatically transform the repository schema, as sketched below. The automation of the anonymization process is the main contribution and novelty of this work.
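
    To make the pipeline concrete, here is a hypothetical miniature of such a DSL and its interpreter: the schema description tags each column with a sensitivity class and a technique, and the processor rewrites the data accordingly. The syntax and technique names are invented; PRIVAS's actual DSL will differ:

        # Hypothetical policy in a PRIVAS-like DSL, one rule per line:
        #   <column> : <sensitivity> : <technique>
        DSL_PROGRAM = """
        name    : identifier       : suppress
        zipcode : quasi-identifier : generalize(3)
        salary  : sensitive        : keep
        """

        def parse(program):
            rules = {}
            for line in program.strip().splitlines():
                column, _, technique = (p.strip() for p in line.split(":"))
                rules[column] = technique
            return rules

        def apply_rules(rows, rules):
            out = []
            for row in rows:
                new = {}
                for col, val in row.items():
                    t = rules.get(col, "keep")
                    if t == "suppress":
                        new[col] = "*"
                    elif t.startswith("generalize"):
                        keep = int(t[t.index("(") + 1:-1])   # keep N leading chars
                        new[col] = str(val)[:keep] + "*" * (len(str(val)) - keep)
                    else:
                        new[col] = val
                out.append(new)
            return out

        rows = [{"name": "Ana", "zipcode": "47677", "salary": 42000}]
        print(apply_rules(rows, parse(DSL_PROGRAM)))
        # [{'name': '*', 'zipcode': '476**', 'salary': 42000}]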

    Privacy-Preserving Publishing of Knowledge Graphs

    Online social networks (OSNs) attract a huge number of users who share their data every day. These data can be shared with third parties for various purposes, such as data analytics and machine learning. Unfortunately, adversaries can exploit shared data to infer users' sensitive information. Various anonymization solutions have been presented to anonymize shared data such that it is harder for adversaries to infer users' personal information. Whereas OSNs contain both users' attributes and relationships, previous work considers anonymizing either attributes, represented in relational data, or relationships, represented in directed graphs. To cope with this issue, in this thesis we consider the research challenge of anonymizing knowledge graphs (KGs), due to their flexibility in representing both the attribute values and the relationships of users. The anonymization of KGs is not trivial, since adversaries can exploit both the attributes and the relationships of their victims. In the era of big data, these solutions are significant as they allow data providers to share attribute values and relationships together.

    Over the last three years, our research efforts have resulted in the definition of different anonymization solutions for KGs in several relevant scenarios: anonymization of static KGs, sequential anonymization of KGs, and personalized anonymization of KGs. Since KGs are directed graphs, we started our research by investigating anonymization solutions for directed graphs. As anonymization algorithms proposed in the literature (i.e., the Paired k-degree) cannot always anonymize graphs, we first presented the Cluster-Based Directed Graph Anonymization Algorithm (CDGA) and proved that CDGA can always generate anonymized directed graphs. We analyzed an attack scenario in which an adversary exploits the attribute values and relationships of his/her victims to re-identify them in anonymized KGs. To protect users in this scenario, we presented the k-Attribute Degree (k-ad) protection model, which ensures that users cannot be re-identified with confidence higher than 1/k, and we proposed the Cluster-Based Knowledge Graph Anonymization Algorithm (CKGA) to anonymize KGs for this scenario.

    CKGA was designed for a scenario in which KGs are anonymized statically. Unfortunately, an adversary can still re-identify his/her victims if he/she has access to many versions of the anonymized KG. To cope with this issue, we further presented the k^w-Time-Varying Attribute Degree, which gives users the same protection as k-ad even if the adversary gains access to w continuous anonymized KGs, and we proposed the Cluster-Based Time-Varying Knowledge Graph Anonymization Algorithm to anonymize KGs while allowing data providers to insert, re-insert, remove, and update nodes and edges of their KGs. However, these models do not allow users to specify their privacy preferences, which is crucial for users requiring strong privacy protection, such as influencers. To this end, we proposed the Personalized k-Attribute Degree to allow users to specify their own value of k. The effectiveness of the proposed algorithms has been tested through experiments on real-life datasets.
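
    A highly simplified sketch of the clustering idea behind these cluster-based algorithms: group users with similar (attribute value, degree) signatures into clusters of size at least k, then publish each user with the cluster's generalized signature, so no user is re-identified with confidence above 1/k. The greedy grouping and the generalization format below are deliberately naive illustrations, not CDGA/CKGA themselves:

        def k_ad_clusters(users, k):
            """Greedily cluster users, sorted by (attribute, degree), into
            groups of size >= k (assumes len(users) >= k); each group is
            published with a single generalized signature."""
            ordered = sorted(users, key=lambda u: (u["attr"], u["degree"]))
            clusters, current = [], []
            for u in ordered:
                current.append(u)
                if len(current) == k:
                    clusters.append(current)
                    current = []
            if current:                  # fold leftovers into the last cluster
                clusters[-1].extend(current)
            return clusters

        def generalize_cluster(cluster):
            degrees = [u["degree"] for u in cluster]
            attrs = sorted({u["attr"] for u in cluster})
            return {"attr": "|".join(attrs),
                    "degree": (min(degrees), max(degrees)),
                    "members": len(cluster)}

        users = [{"attr": a, "degree": d}
                 for a, d in [("NL", 3), ("NL", 4), ("DE", 3), ("DE", 5)]]
        for c in k_ad_clusters(users, 2):
            print(generalize_cluster(c))

    Sorting by signature keeps clusters homogeneous, which limits how much each attribute value and degree must be distorted to make the k members indistinguishable.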