
    Secure and Private Cloud Storage Systems with Random Linear Fountain Codes

    An information theoretic approach to security and privacy called Secure And Private Information Retrieval (SAPIR) is introduced and applied to distributed data storage systems. In this approach, random combinations of all contents are stored across the network; the coding scheme is based on Random Linear Fountain (RLF) codes. To retrieve a content, a group of servers collaborate to form a Reconstruction Group (RG). SAPIR achieves asymptotic perfect secrecy as long as at least one of the servers within an RG is not compromised. Further, a Private Information Retrieval (PIR) scheme based on random queries is proposed, which ensures that users can download their desired contents without the servers learning the indices of the requested contents. The proposed scheme is adaptive and can provide privacy against a significant number of colluding servers.
    Comment: 8 pages, 2 figures
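
    As a rough illustration of the storage idea, the sketch below (a minimal Python sketch, not the paper's exact construction; the block sizes, the choice of GF(2), and the function names are assumptions for illustration) stores random linear combinations of all content blocks in the spirit of RLF codes:

        import numpy as np

        rng = np.random.default_rng(0)

        def rlf_encode(blocks, n_coded):
            # Produce n_coded random XOR combinations of the input blocks
            # (XOR of byte arrays = addition over GF(2)).
            k = len(blocks)
            coded = []
            for _ in range(n_coded):
                coeffs = rng.integers(0, 2, size=k)   # random binary coefficient vector
                combo = np.zeros_like(blocks[0])
                for block, c in zip(blocks, coeffs):
                    if c:
                        combo ^= block
                coded.append((coeffs, combo))
            return coded

        # Example: 4 content blocks of 8 bytes each, encoded into 6 coded packets.
        blocks = [rng.integers(0, 255, size=8, dtype=np.uint8) for _ in range(4)]
        packets = rlf_encode(blocks, n_coded=6)

    Decoding amounts to solving the resulting linear system over GF(2) once a Reconstruction Group has gathered enough packets; a single server holding only such random combinations cannot recover any individual block on its own, which gives the intuition behind the asymptotic secrecy claim.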

    An Information Theoretic approach to Post Randomization Methods under Differential Privacy

    Post Randomization Methods (PRAM) are among the most popular disclosure limitation techniques for both categorical and continuous data. In the categorical case, given a stochastic matrix M and a specified variable, an individual belonging to category i is changed to category j with probability M_{i,j}. Every approach to choosing the randomization matrix M has to balance two desiderata: 1) preserving as much statistical information from the raw data as possible; 2) guaranteeing the privacy of individuals in the dataset. This trade-off has generally been shown to be very challenging to resolve. In this work, we use recent tools from the computer science literature and choose M as the solution of a constrained maximization problem: we maximize the Mutual Information between raw and transformed data, subject to the constraint that the transformation satisfies Differential Privacy. For the general categorical model, we show how this maximization problem reduces to a linear program and can therefore be solved with known optimization algorithms.
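
    A minimal sketch of the PRAM step and the Differential Privacy condition it must satisfy is given below; the 3-category matrix, the epsilon value, and the function names are illustrative assumptions, not the optimized M produced by the paper's linear program.

        import numpy as np

        eps = 1.0
        # Rows of M sum to 1: M[i, j] = P(report category j | true category i).
        M = np.array([[0.50, 0.25, 0.25],
                      [0.25, 0.50, 0.25],
                      [0.25, 0.25, 0.50]])

        def satisfies_dp(M, eps):
            # eps-DP for a randomization matrix: M[i, j] <= exp(eps) * M[i2, j]
            # for every pair of true categories i, i2 and every reported category j.
            ratios = M[:, None, :] / M[None, :, :]
            return bool(np.all(ratios <= np.exp(eps) + 1e-12))

        def pram(categories, M, rng=np.random.default_rng(0)):
            # Replace each true category i by a draw from row i of M.
            return np.array([rng.choice(len(M), p=M[i]) for i in categories])

        data = np.array([0, 1, 2, 1, 0])
        print(satisfies_dp(M, eps), pram(data, M))

    The paper's proposal then searches over matrices of this kind, maximizing the mutual information between the raw and transformed variable subject to the constraint checked above.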

    Exploratory Study of the Privacy Extension for System Theoretic Process Analysis (STPA-Priv) to elicit Privacy Risks in eHealth

    Context: System Theoretic Process Analysis for Privacy (STPA-Priv) is a novel privacy risk elicitation method that uses a top-down approach. It has not received much attention yet, but it may offer a convenient, structured approach and generate additional artifacts compared to other methods. Aim: The aim of this exploratory study is to find out what benefits the privacy risk elicitation method STPA-Priv offers and to explain how the method can be used. Method: We apply STPA-Priv to a real-world health scenario involving a smart glucose measurement device used by children. Different kinds of data from the smart device, including location data, are to be shared with parents, physicians, and urban planners. This makes it a sociotechnical system with suitably complex privacy risks to be found. Results: We find that STPA-Priv is a structured method for privacy analysis that uncovers complex privacy risks. The method is supported by a tool called XSTAMPP, which makes the analysis and its results more thorough. Additionally, we learn that an iterative application of the steps might be necessary to find more privacy risks once more information about the system becomes available. Conclusions: STPA-Priv helps to identify complex privacy risks that derive from sociotechnical interactions in a system. It also outputs privacy constraints that are to be enforced by the system to ensure privacy.
    Comment: author's post-print

    Differential Privacy, Property Testing, and Perturbations

    Controlling the dissemination of information about ourselves has become a minefield in the modern age. We release data about ourselves every day and don't always fully understand what information it contains. It is often the case that seemingly innocuous pieces of data can be combined to reveal more sensitive information about ourselves than we intended. Differential privacy has developed as a technique to prevent this type of privacy leakage. It borrows ideas from information theory to inject enough uncertainty into the data that sensitive information is provably absent from the privatised data. Current research in differential privacy walks the fine line between removing sensitive information and allowing non-sensitive information to be released. At its heart, this thesis is about the study of information. Many of the results can be formulated as asking a subset of the questions: does the data you have contain enough information to learn what you would like to learn? And how can I alter the data to ensure you can't discern sensitive information? We often approach the former question from both directions: information theoretic lower bounds on recovery and algorithmic upper bounds. We begin with an information theoretic lower bound for graphon estimation, which explores the fundamental limits of how much information about the underlying population is contained in a finite sample of data. We then move on to the connection between information theoretic results and privacy in the context of linear inverse problems, and find a discrepancy between how the inverse problems community and the privacy community view good recovery of information. Next, we explore black-box testing for privacy. We argue that the amount of information required to verify the privacy guarantee of an algorithm, without access to its internals, is lower bounded by the amount of information required to break the privacy guarantee. Finally, we explore a setting where imposing privacy is a help rather than a hindrance: online linear optimisation. We argue that private algorithms have the right kind of stability guarantee to ensure low regret for online linear optimisation.
    PhD, Mathematics, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/143940/1/amcm_1.pd
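
    As a generic illustration of the "inject enough uncertainty" idea mentioned above, the sketch below shows the standard Laplace mechanism; it is not a construction taken from the thesis, and the sensitivity and epsilon values are assumptions chosen for the example.

        import numpy as np

        def laplace_mechanism(true_value, sensitivity, eps, rng=np.random.default_rng(0)):
            # Release true_value plus Laplace noise with scale sensitivity / eps.
            # This satisfies eps-differential privacy when `sensitivity` bounds how
            # much the query answer can change between neighbouring datasets.
            return true_value + rng.laplace(loc=0.0, scale=sensitivity / eps)

        # Example: privately release a count query (sensitivity 1) with eps = 0.5.
        noisy_count = laplace_mechanism(true_value=42.0, sensitivity=1.0, eps=0.5)
        print(noisy_count)

    The stability property alluded to in the final chapter is of the same flavour: because the released answer changes little in distribution when one data point changes, a private algorithm's decisions are stable enough to yield low regret in online linear optimisation.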