
    A roadmap towards improving managed security services from a privacy perspective

    Published version of an article in the journal Ethics and Information Technology, also available from the publisher at http://dx.doi.org/10.1007/s10676-014-9348-3. This paper proposes a roadmap for how privacy leakages from outsourced managed security services using intrusion detection systems can be controlled. The paper first analyses the risk of leaking private or confidential information from signature-based intrusion detection systems. It then discusses how the situation can be improved by developing adequate privacy enforcement methods and privacy leakage metrics in order to control and reduce the leakage of private and confidential information over time. Such metrics should allow for quantifying how much information is leaking, where these information leakages are, and what these leakages mean. This includes adding enforcement mechanisms ensuring that operations on sensitive information are transparent and auditable. The data controller or external quality assurance organisations can then verify or certify that the security operation works in a privacy-friendly manner. The roadmap furthermore outlines how privacy-enhanced intrusion detection systems should be implemented, initially by providing privacy-enhanced alarm handling and then gradually extending support for privacy-enhancing operation to other areas such as digital forensics, exchange of threat information and big data analytics based attack detection.
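
    To make the enforcement idea concrete, the following is a minimal sketch of reversible anonymisation of sensitive alarm fields, so that only authorised key holders can recover the originals. The field names and the use of the `cryptography` package's Fernet scheme are assumptions for illustration, not the paper's implementation.

    ```python
    # Minimal sketch of reversible anonymisation of IDS alarm fields.
    # Assumes the `cryptography` package; field names are hypothetical.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # held by the data controller / auditor
    f = Fernet(key)

    def anonymise(alarm, sensitive_fields=("src_ip", "username")):
        """Replace sensitive fields with reversible ciphertext tokens."""
        out = dict(alarm)
        for field in sensitive_fields:
            if field in out:
                out[field] = f.encrypt(out[field].encode()).decode()
        return out

    def deanonymise(alarm, sensitive_fields=("src_ip", "username")):
        """Authorised key holders can recover the original values."""
        out = dict(alarm)
        for field in sensitive_fields:
            if field in out:
                out[field] = f.decrypt(out[field].encode()).decode()
        return out

    alarm = {"signature": "SQLi attempt", "src_ip": "10.0.0.7", "username": "alice"}
    masked = anonymise(alarm)
    assert deanonymise(masked) == alarm
    ```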

    Conclave: secure multi-party computation on big data (extended TR)

    Secure Multi-Party Computation (MPC) allows mutually distrusting parties to run joint computations without revealing private data. Current MPC algorithms scale poorly with data size, which makes MPC on "big data" prohibitively slow and inhibits its practical use. Many relational analytics queries can maintain MPC's end-to-end security guarantee without using cryptographic MPC techniques for all operations. Conclave is a query compiler that accelerates such queries by transforming them into a combination of data-parallel, local cleartext processing and small MPC steps. When parties trust others with specific subsets of the data, Conclave applies new hybrid MPC-cleartext protocols to run additional steps outside of MPC and improve scalability further. Our Conclave prototype generates code for cleartext processing in Python and Spark, and for secure MPC using the Sharemind and Obliv-C frameworks. Conclave scales to data sets between three and six orders of magnitude larger than state-of-the-art MPC frameworks support on their own. Thanks to its hybrid protocols, Conclave also substantially outperforms SMCQL, the most similar existing system. Comment: Extended technical report for the EuroSys 2019 paper.
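
    The following toy sketch (not Conclave's generated code) illustrates the hybrid idea: each party aggregates its own rows locally in cleartext, and only the small aggregates enter a secure step, here a hand-rolled additive-secret-sharing sum standing in for a real MPC framework such as Sharemind or Obliv-C.

    ```python
    # Illustrative sketch of the hybrid approach: do the heavy,
    # data-parallel aggregation locally in cleartext, and run only a
    # tiny secure step (an additive-secret-sharing sum) across parties.
    import random

    P = 2**61 - 1  # public modulus for additive shares

    def share(value, n_parties):
        """Split a value into n additive shares mod P."""
        shares = [random.randrange(P) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % P)
        return shares

    def secure_sum(local_totals):
        """Each party shares its total; combining shares reveals only the sum."""
        n = len(local_totals)
        all_shares = [share(t, n) for t in local_totals]
        # Each party adds up the shares it receives (one from every party).
        partial = [sum(all_shares[p][i] for p in range(n)) % P for i in range(n)]
        return sum(partial) % P

    # Cleartext, data-parallel local step: each party aggregates its own rows.
    party_rows = [[3, 5, 9], [10, 2], [7, 7, 7]]
    local_totals = [sum(rows) for rows in party_rows]

    # Small secure step over the aggregates only.
    print(secure_sum(local_totals))  # 50, without any party revealing its rows
    ```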

    Context-Aware Generative Adversarial Privacy

    Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals' private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP's performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model, and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics. Comment: Improved version of a paper accepted by Entropy Journal, Special Issue on Information Theory in Machine Learning and Data Science.
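
    For the binary data model, the minimax game can be explored numerically. The sketch below is an assumed toy formulation, not the paper's GAN training code: the privatizer is a bit-flipping mechanism under a distortion budget, the adversary plays its maximum a posteriori best response, and a grid search over the flip probability approximates the game value.

    ```python
    # Toy minimax game for a binary private variable X.
    # Privatizer: flip the bit with probability p, subject to p <= D.
    # Adversary: MAP guess of X from the released bit Y.
    import numpy as np

    q = 0.5        # prior P(X = 1); assumed
    D = 0.2        # distortion budget: P(Y != X) <= D; assumed

    def adversary_accuracy(p, q):
        """MAP adversary's success probability against flip probability p."""
        # Joint distribution over (X, Y) for the bit-flip privatizer.
        p_xy = np.array([[(1 - q) * (1 - p), (1 - q) * p],
                         [q * p,             q * (1 - p)]])
        # The MAP adversary picks the most likely X for each observed Y.
        return p_xy.max(axis=0).sum()

    # Privatizer: search the flip probability within the budget,
    # minimising the adversary's best-response accuracy (minimax).
    grid = np.linspace(0.0, D, 201)
    accs = [adversary_accuracy(p, q) for p in grid]
    p_star = grid[int(np.argmin(accs))]
    print(p_star, min(accs))  # p* = D; adversary accuracy 1 - D for D < 0.5
    ```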

    Metrics for Differential Privacy in Concurrent Systems

    Originally proposed for privacy protection in the context of statistical databases, differential privacy is now widely adopted in various models of computation. In this paper we investigate techniques for proving differential privacy in the context of concurrent systems. Our motivation stems from the work of Tschantz et al., who proposed a verification method based on proving the existence of a stratified family between states that can track the privacy leakage, ensuring that it does not exceed a given leakage budget. We improve this technique by investigating a state property which is more permissive and still implies differential privacy. We consider two pseudometrics on probabilistic automata: the first is essentially a reformulation of the notion proposed by Tschantz et al.; the second is a more liberal variant that relaxes this relation by integrating the notion of amortisation, which results in a more parsimonious use of the privacy budget. We show that metrical closeness of automata guarantees the preservation of differential privacy, which makes the two metrics suitable for verification. Moreover, we show that process combinators are non-expansive in this pseudometric framework. We apply the pseudometric framework to reason about the degree of differential privacy of protocols, using the Dining Cryptographers Protocol with biased coins as an example.
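
    The effect of amortisation can be illustrated on a single trace. In the assumed toy sketch below (not the paper's formalism), each step incurs a signed log-ratio cost between the two automata; the stricter accounting charges every step's absolute cost against the budget, while the amortised accounting lets under- and over-spending cancel along the trace.

    ```python
    import math

    # Per-step transition probabilities along one trace in two automata
    # (hypothetical numbers for illustration).
    trace_a = [0.50, 0.40, 0.55]
    trace_b = [0.45, 0.44, 0.50]

    steps = [math.log(pa / pb) for pa, pb in zip(trace_a, trace_b)]

    # Strict accounting: every step charges its absolute cost.
    strict_cost = sum(abs(s) for s in steps)

    # Amortised accounting: signed costs may cancel along the trace.
    amortised_cost = abs(sum(steps))

    eps = 0.15  # leakage budget; assumed
    print(f"strict    = {strict_cost:.3f}  within budget: {strict_cost <= eps}")
    print(f"amortised = {amortised_cost:.3f}  within budget: {amortised_cost <= eps}")
    # The strict cost exceeds eps while the amortised cost stays within it.
    ```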

    Privacy-enhanced network monitoring

    This PhD dissertation investigates two means required for building privacy-enhanced network monitoring systems: a policy-based privacy or confidentiality enforcement technology, and metrics measuring leakage of private or confidential information to verify and improve the enforcement policies. The privacy enforcement mechanism is based on fine-grained access control and reversible anonymisation of XML data to limit or control access to sensitive information from the monitoring systems. The metrics can be used to support a continuous improvement process by quantifying leakages of private or confidential information, locating where they occur, and proposing how they can be mitigated. The planned actions can be enforced by applying a reversible anonymisation policy or by removing the source of the information leakage. The metrics can subsequently verify that the planned privacy enforcement scheme works as intended, and any significant deviation from the expected information leakage can be used to trigger further improvement actions. The most significant results from the dissertation are: a privacy leakage metric based on the entropy standard deviation of given data (for example IDS alarms), which measures how much sensitive information is leaking and where these leakages occur; a proxy offering policy-based reversible anonymisation of information in XML-based web services, with multi-level security so that only authorised stakeholders can access sensitive information; and a methodology that combines the privacy metrics with the reversible anonymisation scheme to support a continuous improvement process with reduced leakage of private or confidential information over time. This can be used to improve management of private or confidential information where managed security services have been outsourced to semi-trusted parties, for example outsourced managed security services monitoring health institutions or critical infrastructures. The solution is based on relevant standards to ensure backwards compatibility with existing intrusion detection systems and alarm databases.
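
    A rough reading of the entropy-based metric can be sketched as follows. This is an assumed simplification of the dissertation's metric, with made-up alarm fields: compute the Shannon entropy of each alarm attribute across a set of alarms, treat high-entropy attributes as candidate leakage channels, and summarise the spread with the standard deviation of the per-attribute entropies.

    ```python
    import math
    from collections import Counter
    from statistics import pstdev

    def shannon_entropy(values):
        """Shannon entropy (bits) of the empirical distribution of values."""
        counts = Counter(values)
        n = len(values)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # Hypothetical IDS alarms; field names are made up for illustration.
    alarms = [
        {"signature": "SQLi", "src_ip": "10.0.0.7",  "username": "alice"},
        {"signature": "SQLi", "src_ip": "10.0.0.9",  "username": "bob"},
        {"signature": "XSS",  "src_ip": "10.0.0.7",  "username": "carol"},
        {"signature": "SQLi", "src_ip": "10.0.0.12", "username": "dave"},
    ]

    entropies = {field: shannon_entropy([a[field] for a in alarms])
                 for field in alarms[0]}
    for field, h in sorted(entropies.items(), key=lambda kv: -kv[1]):
        print(f"{field:10s} {h:.2f} bits")   # high entropy -> candidate leak
    print("entropy std dev:", round(pstdev(entropies.values()), 3))
    ```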