
    Information-Theoretic Secure Outsourced Computation in Distributed Systems

    Secure multi-party computation (secure MPC) has been established as the de facto paradigm for protecting privacy in distributed computation. One of the earliest secure MPC primitives is Shamir's secret sharing (SSS). SSS has many advantages over other popular secure MPC primitives such as garbled circuits (GC) -- it provides an information-theoretic security guarantee, requires no complex long-integer operations, and often leads to more efficient protocols. Nonetheless, SSS has received less attention in the signal processing community because it requires a larger number of honest participants, making it prone to collusion attacks. In this dissertation, I propose an agent-based computing framework using SSS to protect privacy in distributed signal processing. This dissertation makes three main contributions. First, the proposed computing framework is shown to be significantly more efficient than GC. Second, a novel game-theoretic framework is proposed to analyze different types of collusion attacks. Third, using the proposed game-theoretic framework, specific mechanism designs are developed to deter collusion attacks in a fully distributed manner. Specifically, for a collusion attack with known detectors, I analyze it as games between secret owners and show that the attack can be effectively deterred by an explicit retaliation mechanism. For a general attack without detectors, I expand the scope of the game to include the computing agents and provide deterrence through deceptive collusion requests. The correctness and privacy of the protocols are proved under a covert adversarial model. Our experimental results demonstrate the efficiency of SSS-based protocols and the validity of our mechanism design.
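
    As a rough illustration of the SSS primitive this abstract builds on (a generic sketch, not the dissertation's agent-based protocol), the Python snippet below hides a secret as the constant term of a random polynomial over a prime field and reconstructs it from any t shares by Lagrange interpolation; the field size, threshold, and party count are arbitrary choices for the example.

```python
# Minimal sketch of Shamir's secret sharing over a prime field.
# Illustrative only: parameters (PRIME, t, n) are example assumptions.
import random

PRIME = 2**61 - 1  # a Mersenne prime, assumed large enough for the secret


def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        # Evaluate the degree-(t-1) polynomial at x = 1..n.
        y = sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares


def reconstruct(shares):
    """Recover the secret (the polynomial at x = 0) by Lagrange interpolation."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    s = share(secret=123456789, t=3, n=5)
    assert reconstruct(s[:3]) == 123456789
```

    Because each share in isolation is uniformly distributed, fewer than t colluding parties learn nothing about the secret, which is the information-theoretic guarantee the abstract refers to.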

    Privacy-preserving social network analysis

    Data privacy in social networks is a growing concern that threatens to limit access to important information contained in these data structures. Analysis of the graph structure of social networks can provide valuable information for revenue generation and social science research, but unfortunately, ensuring this analysis does not violate individual privacy is difficult. Simply removing obvious identifiers from graphs or even releasing only aggregate results of analysis may not provide sufficient protection. Differential privacy is an alternative privacy model, popular in data mining over tabular data, that uses noise to obscure individuals' contributions to aggregate results and offers a strong mathematical guarantee that individuals' presence in the dataset is hidden. Analyses that were previously vulnerable to identification of individuals and extraction of private data may be safely released under differential-privacy guarantees. However, existing adaptations of differential privacy to social network analysis are often complex and have considerable impact on the utility of the results, making it less likely that they will see widespread adoption in the social network analysis community. In fact, social scientists still often use the weakest form of privacy protection, simple anonymization, in their social network analysis publications.

    We review the existing work in graph privatization, including the two existing standards for adapting differential privacy to network data. We then propose contributor-privacy and partition-privacy, novel standards for differential privacy over network data, and introduce simple, powerful private algorithms using these standards for common network analysis techniques that were infeasible to privatize under previous differential privacy standards. We also ensure that privatized social network analysis does not violate the level of rigor required in social science research by proposing a method of determining statistical significance for paired samples under differential privacy using the Wilcoxon Signed-Rank Test, which is appropriate for non-normally distributed data.

    Finally, we return to formally consider the case where differential privacy is not applied to data. Naive, deterministic approaches to privacy protection, including anonymization and aggregation of data, are often used in real-world practice. De-anonymization research demonstrates that some naive approaches to privacy are highly vulnerable to reidentification attacks, and none of these approaches offer the robust guarantee of differential privacy. However, we propose that these methods fall along a spectrum of protection: some are better than others. In cases where adding noise to data is especially problematic, or acceptance and adoption of differential privacy is especially slow, it is critical to have a formal understanding of the alternatives. We define De Facto Privacy, a metric for comparing the relative privacy protection provided by deterministic approaches.
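
    To make the noise-based differential-privacy idea the abstract mentions concrete, the sketch below shows the standard Laplace mechanism applied to a simple aggregate graph statistic (an edge count). This is a generic baseline, not the proposed contributor-privacy or partition-privacy algorithms; the statistic, sensitivity, and epsilon value are illustrative assumptions.

```python
# Minimal sketch of the Laplace mechanism for a graph aggregate.
# Illustrative only: edge-level sensitivity and epsilon are example assumptions.
import random


def private_edge_count(num_edges, epsilon, sensitivity=1.0):
    """Release an edge count with Laplace(sensitivity / epsilon) noise.

    Under edge-level differential privacy, adding or removing one edge
    changes the count by at most 1, so the sensitivity is 1.
    """
    scale = sensitivity / epsilon
    # A Laplace variate is the difference of two i.i.d. exponential variates.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return num_edges + noise


if __name__ == "__main__":
    print(private_edge_count(num_edges=5342, epsilon=0.5))
```

    Smaller epsilon means more noise and stronger protection; the utility cost of that noise is the trade-off the abstract identifies as a barrier to adoption.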