2 research outputs found

    KloakDB: A Platform for Analyzing Sensitive Data with K-anonymous Query Processing

    A private data federation enables data owners to pool their information for querying without disclosing their secret tuples to one another. Here, a client queries the union of the records of all data owners. The data owners work together to answer the query using privacy-preserving algorithms that prevent them from learning unauthorized information about the inputs of their peers. Only the client and a federation coordinator learn the query's output. KloakDB is a private data federation that uses trusted hardware to process SQL queries over the inputs of two or more parties. Currently, private data federations compute their queries fully obliviously, guaranteeing that no information is revealed about the sensitive inputs of a data owner to their peers through the query's instruction traces and memory access patterns. Oblivious querying almost always exacts a slowdown of multiple orders of magnitude in query runtimes compared to plaintext execution, making it impractical for many applications. KloakDB offers a semi-oblivious computing framework, k-anonymous query processing. We make the query's observable transcript k-anonymous because k-anonymity is a popular standard for data release in many domains, including medicine, educational research, and government data. KloakDB's queries run such that each data owner may deduce information about no fewer than k individuals in the data of their peers. In addition, stakeholders set k, creating a novel trade-off between privacy and performance. Our results show that KloakDB enjoys speedups of up to 117X using k-anonymous query processing over fully-oblivious evaluation.
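    Below is a minimal sketch of the bucketing idea behind k-anonymous query processing, assuming a simple group-and-merge strategy in Python; the function name, the grouping key, and the merge policy are illustrative assumptions, not KloakDB's actual algorithm. The point is that every bucket an observer can distinguish in the query's transcript covers at least k records.

        from collections import defaultdict

        def k_anonymous_buckets(rows, key, k):
            """Partition `rows` so every bucket holds >= k records (illustrative)."""
            groups = defaultdict(list)
            for row in rows:
                groups[row[key]].append(row)   # group by the sensitive attribute
            buckets, pending = [], []
            for _, group in sorted(groups.items()):
                pending.extend(group)          # accumulate until the bucket is big enough
                if len(pending) >= k:
                    buckets.append(pending)
                    pending = []
            if pending:                        # leftovers: merge into the last bucket
                if buckets:
                    buckets[-1].extend(pending)
                else:
                    buckets.append(pending)    # fewer than k rows in total
            return buckets

    With k set by the stakeholders, anything a peer observes about a bucket is attributable to no fewer than k individuals, which is the privacy/performance trade-off described above.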

    A Shuffling Framework for Local Differential Privacy

    LDP (local differential privacy) deployments are vulnerable to inference attacks, as an adversary can link the noisy responses to their identity and, subsequently, to auxiliary information using the order of the data. An alternative model, shuffle DP, prevents this by shuffling the noisy responses uniformly at random. However, this limits data learnability -- only symmetric functions (input order agnostic) can be learned. In this paper, we strike a balance and show that systematic shuffling of the noisy responses can thwart specific inference attacks while retaining some meaningful data learnability. To this end, we propose a novel privacy guarantee, d-sigma-privacy, that captures the privacy of the order of a data sequence. d-sigma-privacy allows tuning the granularity at which the ordinal information is maintained, which formalizes the degree of resistance to inference attacks, trading it off against data learnability. Additionally, we propose a novel shuffling mechanism that can achieve d-sigma-privacy, and we demonstrate the practicality of our mechanism via evaluation on real-world datasets.
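    Below is a minimal sketch of one way to realize systematic (rather than uniform) shuffling in Python, assuming a block-based reading of the granularity parameter; the function name and the block interpretation are illustrative assumptions, not the paper's d-sigma-private mechanism. Responses are permuted only within fixed-size blocks, hiding the order inside each block while preserving coarse order across blocks.

        import random

        def blockwise_shuffle(noisy_responses, block_size, rng=random):
            """Shuffle within consecutive blocks; order across blocks is preserved."""
            shuffled = []
            for i in range(0, len(noisy_responses), block_size):
                block = noisy_responses[i:i + block_size]
                rng.shuffle(block)             # uniform shuffle inside the block only
                shuffled.extend(block)
            return shuffled

    Setting block_size to the full sequence length recovers the uniform shuffle of shuffle DP (only symmetric functions learnable), while block_size = 1 leaves the order intact as in plain LDP; intermediate sizes trade resistance to order-based inference attacks against data learnability.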