    Differentially Private Fractional Frequency Moments Estimation with Polylogarithmic Space

    We prove that the $F_p$ sketch, a well-celebrated streaming algorithm for frequency moments estimation, is differentially private as-is when $p \in (0, 1]$. The $F_p$ sketch uses only polylogarithmic space, exponentially better than existing DP baselines and worse than the optimal non-private baseline by only a logarithmic factor. The evaluation shows that the $F_p$ sketch achieves reasonable accuracy under a differential privacy guarantee. The evaluation code is included in the supplementary material.
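    As a concrete picture of the object in question, here is a minimal Python sketch of the $F_p$ estimator via p-stable projections (Indyk's construction). The class name, the dense projection matrix, and the empirically calibrated median constant are expository choices rather than the paper's implementation; the paper's claim is that this estimator is already DP for $p \in (0, 1]$ when its internal randomness is hidden.

```python
import numpy as np

def p_stable(p, size, rng):
    """Sample standard symmetric p-stable variates (Chambers-Mallows-Stuck)."""
    theta = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(p * theta) / np.cos(theta) ** (1 / p)
            * (np.cos((1 - p) * theta) / w) ** ((1 - p) / p))

class FpSketch:
    """Linear p-stable sketch for estimating F_p, p in (0, 1]."""
    def __init__(self, p, width, domain, seed=0):
        rng = np.random.default_rng(seed)
        self.p = p
        # Dense projection matrix for clarity; a streaming implementation
        # derives each column from hashing and stores only `width` counters.
        self.proj = p_stable(p, (width, domain), rng)
        self.state = np.zeros(width)
        # Empirical median of |X|^p for standard p-stable X, used to rescale.
        self.scale = np.median(np.abs(p_stable(p, 200_000, rng)) ** p)

    def update(self, item, delta=1):
        self.state += delta * self.proj[:, item]

    def estimate(self):
        # Each coordinate is p-stable with scale F_p^{1/p}, so the median of
        # |state|^p, divided by the calibration constant, estimates F_p.
        return np.median(np.abs(self.state) ** self.p) / self.scale

sk = FpSketch(p=0.5, width=400, domain=10)
for item, freq in [(0, 5), (1, 3), (2, 2)]:
    sk.update(item, freq)
print(sk.estimate())  # close to 5**0.5 + 3**0.5 + 2**0.5 ~= 5.38
```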

    Frequency Estimation Under Multiparty Differential Privacy: One-shot and Streaming

    We study the fundamental problem of frequency estimation under both privacy and communication constraints, where the data is distributed among $k$ parties. We consider two application scenarios: (1) one-shot, where the data is static and the aggregator conducts a one-time computation; and (2) streaming, where each party receives a stream of items over time and the aggregator continuously monitors the frequencies. We adopt the model of multiparty differential privacy (MDP), which is more general than local differential privacy (LDP) and (centralized) differential privacy. Our protocols achieve optimality (up to logarithmic factors) permissible by the more stringent of the two constraints. In particular, when specialized to the $\varepsilon$-LDP model, our protocol achieves an error of $\sqrt{k}/(e^{\Theta(\varepsilon)}-1)$ using $O(k\max\{\varepsilon, \frac{1}{\varepsilon}\})$ bits of communication and $O(k \log u)$ bits of public randomness, where $u$ is the size of the domain.
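    To make the LDP endpoint of the result concrete, below is a minimal one-bit frequency oracle under $\varepsilon$-LDP that uses shared random signs as its public randomness. This is a classic textbook protocol, not the paper's: it spends $O(ku)$ bits of public randomness rather than $O(k \log u)$, and all function names are illustrative.

```python
import numpy as np

def one_bit_report(x, signs, eps, rng):
    """Each party releases a single eps-LDP bit about its item x."""
    true_bit = signs[x]  # +1/-1 sign of x under this party's public hash
    keep = rng.random() < np.e ** eps / (np.e ** eps + 1)
    return true_bit if keep else -true_bit

def estimate_frequency(v, reports, all_signs, eps):
    """Debiased estimate of the frequency of value v from all parties' bits."""
    c = (np.e ** eps + 1) / (np.e ** eps - 1)
    return c * sum(y * signs[v] for y, signs in zip(reports, all_signs))

rng = np.random.default_rng(1)
u, eps = 100, 1.0
data = [7] * 600 + [3] * 300 + [42] * 100        # k = 1000 parties
# Public randomness: an independent random sign vector over [u] per party.
all_signs = [rng.choice([-1, 1], size=u) for _ in data]
reports = [one_bit_report(x, s, eps, rng) for x, s in zip(data, all_signs)]
print(estimate_frequency(7, reports, all_signs, eps))  # ~600 up to noise
```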

    Differentially Private Vertical Federated Clustering

    In many applications, multiple parties have private data regarding the same set of users but on disjoint sets of attributes, and a server wants to leverage the data to train a model. To enable model learning while protecting the privacy of the data subjects, we need vertical federated learning (VFL) techniques, where the data parties share only information for training the model, instead of the private data. However, it is challenging to ensure that the shared information maintains privacy while learning accurate models. To the best of our knowledge, the algorithm proposed in this paper is the first practical solution for differentially private vertical federated k-means clustering, where the server can obtain a set of global centers with a provable differential privacy guarantee. Our algorithm assumes an untrusted central server that aggregates differentially private local centers and membership encodings from local data parties. It builds a weighted grid as the synopsis of the global dataset based on the received information. Final centers are generated by running any k-means algorithm on the weighted grid. Our approach for grid weight estimation uses a novel, lightweight, and differentially private set intersection cardinality estimation algorithm based on the Flajolet-Martin sketch. To improve the estimation accuracy in the setting with more than two data parties, we further propose a refined version of the weight estimation algorithm and a parameter tuning strategy that brings the final k-means utility close to that in the central private setting. We provide theoretical utility analysis and experimental evaluation results for the cluster centers computed by our algorithm, and show that our approach performs better both theoretically and empirically than the two baselines based on existing techniques.
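    To illustrate the Flajolet-Martin building block behind the grid weight estimation, here is a plain, non-private FM sketch with stochastic averaging; the intersection cardinality comes from inclusion-exclusion over mergeable union sketches. The paper's actual estimator adds differential privacy and a refined weighting scheme, both omitted here.

```python
import hashlib

PHI = 0.77351  # Flajolet-Martin bias-correction constant

def _hash64(x, seed):
    digest = hashlib.sha256(f"{seed}:{x}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

class FMSketch:
    """Plain (non-private) Flajolet-Martin sketch with m bitmaps."""
    def __init__(self, m=1024, seed=0):
        self.m, self.seed = m, seed
        self.bitmaps = [0] * m

    def add(self, x):
        v = _hash64(x, self.seed)
        bucket, rest = v % self.m, v // self.m
        rho = (rest & -rest).bit_length() - 1 if rest else 56  # trailing zeros
        self.bitmaps[bucket] |= 1 << rho

    def merge(self, other):
        """Union sketch; both sides must share m and seed."""
        out = FMSketch(self.m, self.seed)
        out.bitmaps = [a | b for a, b in zip(self.bitmaps, other.bitmaps)]
        return out

    def estimate(self):
        r_sum = 0
        for bm in self.bitmaps:  # R = index of the lowest unset bit
            r = 0
            while bm >> r & 1:
                r += 1
            r_sum += r
        return self.m / PHI * 2 ** (r_sum / self.m)

a, b = FMSketch(), FMSketch()
for x in range(0, 60_000):
    a.add(x)
for x in range(40_000, 100_000):
    b.add(x)
# Inclusion-exclusion: |A ∩ B| = |A| + |B| - |A ∪ B|, roughly 20000 here
# (the subtraction amplifies each sketch's relative error).
print(a.estimate() + b.estimate() - a.merge(b).estimate())
```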

    Counting Distinct Elements in the Turnstile Model with Differential Privacy under Continual Observation

    Privacy is a central challenge for systems that learn from sensitive data sets, especially when a system's outputs must be continuously updated to reflect changing data. We consider the achievable error for differentially private continual release of a basic statistic, the number of distinct items, in a stream where items may be both inserted and deleted (the turnstile model). With only insertions, existing algorithms have additive error just polylogarithmic in the length of the stream $T$. We uncover a much richer landscape in the turnstile model, even without considering memory restrictions. We show that every differentially private mechanism that handles insertions and deletions has worst-case additive error at least $T^{1/4}$, even under a relatively weak, event-level privacy definition. Then, we identify a parameter of the input stream, its maximum flippancy, that is low for natural data streams and for which we give tight parameterized error guarantees. Specifically, the maximum flippancy is the largest number of times that the contribution of a single item to the distinct elements count changes over the course of the stream. We present an item-level differentially private mechanism that, for all turnstile streams with maximum flippancy $w$, continually outputs the number of distinct elements with an $O(\sqrt{w} \cdot \mathrm{polylog}\, T)$ additive error, without requiring prior knowledge of $w$. We prove that this is the best achievable error bound that depends only on $w$, for a large range of values of $w$. When $w$ is small, the error of our mechanism is similar to the polylogarithmic-in-$T$ error of the insertion-only setting, bypassing the hardness of the turnstile model.
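    Maximum flippancy is easy to make concrete. The helper below, exact and non-private and purely to pin down the definition, tracks the distinct-elements count of a turnstile stream together with the maximum flippancy $w$ that parameterizes the paper's $O(\sqrt{w} \cdot \mathrm{polylog}\, T)$ bound.

```python
from collections import defaultdict

def turnstile_stats(stream):
    """Return the exact distinct-count sequence and the maximum flippancy.

    `stream` is a list of (item, +1/-1) updates. An item's flippancy is the
    number of times its contribution to the distinct count flips between
    0 and 1 over the course of the stream.
    """
    count = defaultdict(int)
    flips = defaultdict(int)
    distinct, outputs = 0, []
    for item, delta in stream:
        before = count[item] > 0
        count[item] += delta
        after = count[item] > 0
        if before != after:
            flips[item] += 1
            distinct += 1 if after else -1
        outputs.append(distinct)
    return outputs, max(flips.values(), default=0)

# Item "a" is inserted, deleted, and re-inserted: its flippancy is 3.
stream = [("a", +1), ("b", +1), ("a", -1), ("a", +1)]
print(turnstile_stats(stream))  # ([1, 2, 1, 2], 3)
```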

    Fighting Fake News in Encrypted Messaging with the Fuzzy Anonymous Complaint Tally System (FACTS)

    Recent years have seen a strong uptick in both the prevalence and real-world consequences of false information spread through online platforms. At the same time, encrypted messaging systems such as WhatsApp, Signal, and Telegram are rapidly gaining popularity as users seek increased privacy in their digital lives. The challenge we address is how to combat the viral spread of misinformation without compromising privacy. Our FACTS system tracks user complaints on messages obliviously, only revealing the message's contents and originator once sufficiently many complaints have been lodged. Our system is private, meaning it does not reveal anything about the senders or contents of messages which have received few or no complaints; secure, meaning there is no way for a malicious user to evade the system or gain an outsized impact over the complaint system; and scalable, as we demonstrate excellent practical efficiency for up to millions of complaints per day. Our main technical contribution is a new collaborative counting Bloom filter, a simple construction with a difficult probabilistic analysis, which may be of independent interest as a privacy-preserving randomized count sketch data structure. Compared to prior work on message flagging and tracing in end-to-end encrypted messaging, our novel contribution is the addition of a high threshold of multiple complaints that are needed before a message is audited or flagged. We present and carefully analyze the probabilistic performance of our data structure, provide a precise security definition and proof, and then measure the accuracy and scalability of our scheme via experimentation.
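    As a simplified picture of the count-then-threshold idea, here is a plain counting Bloom filter in which a message is flagged for audit once its minimum counter reaches the complaint threshold. This deliberately omits the obliviousness and the collaborative, privacy-preserving construction that are the paper's actual contribution; all names and parameters are illustrative.

```python
import hashlib

class CountingBloomFilter:
    """Plain counting Bloom filter for complaint tallies (non-oblivious)."""
    def __init__(self, size=1 << 16, k=4):
        self.counters = [0] * size
        self.size, self.k = size, k

    def _indices(self, msg):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{msg}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def complain(self, msg):
        for idx in self._indices(msg):
            self.counters[idx] += 1

    def tally(self, msg):
        # The minimum of the k counters can only overestimate the true
        # complaint count (count-min style), due to hash collisions.
        return min(self.counters[idx] for idx in self._indices(msg))

cbf, THRESHOLD = CountingBloomFilter(), 3
for _ in range(3):
    cbf.complain("message-id-123")
print(cbf.tally("message-id-123") >= THRESHOLD)  # True: audit is triggered
```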

    Differentially-Private Multi-Party Sketching for Large-Scale Statistics

    We consider a scenario where multiple organizations holding large amounts of sensitive data from their users wish to compute aggregate statistics on this data while protecting the privacy of individual users. To support large-scale analytics, we investigate how this privacy can be provided for sketching algorithms running in time sub-linear in the input size. We begin with the well-known LogLog sketch for computing the number of unique elements in a data stream. We show that this algorithm already achieves differential privacy (even without adding any noise) when computed using a private hash function by a trusted curator. Next, we show how to eliminate this requirement of a private hash function by injecting a small amount of noise, allowing us to instantiate an efficient LogLog protocol for the multi-party setting. To demonstrate the practicality of this approach, we run extensive experiments on multiple datasets, including the publicly available IP address dataset from the University of Michigan's scans of the internet IPv4 space, to determine the tradeoffs among efficiency, privacy, and accuracy of our implementation for varying numbers of parties and input sizes. Finally, we generalize our approach for the LogLog sketch and obtain a general framework for constructing multi-party differentially private protocols for several other sketching algorithms.
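    For reference, the underlying LogLog estimator of Durand and Flajolet fits in a few lines. This is the plain sketch with no noise injection; the `seed` argument stands in for the hash key that the paper's first result assumes is kept private by a trusted curator.

```python
import hashlib

def loglog_estimate(items, m=256, seed=0):
    """Plain LogLog cardinality estimate over m buckets."""
    buckets = [0] * m
    for x in items:
        h = int.from_bytes(
            hashlib.sha256(f"{seed}:{x}".encode()).digest()[:8], "big")
        j, w = h % m, h // m
        rho = (w & -w).bit_length() if w else 58  # rank of lowest set bit
        buckets[j] = max(buckets[j], rho)
    alpha = 0.39701  # asymptotic bias-correction constant
    return alpha * m * 2 ** (sum(buckets) / m)

print(loglog_estimate(range(100_000)))  # roughly 1e5
```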

    On the Privacy of Sublinear-Communication Jaccard Index Estimation via Min-hash Sketching

    The min-hash sketch is a well-known technique for low-communication approximation of the Jaccard index between two input sets. Moreover, there is a folklore belief that min-hash-sketch-based protocols protect the privacy of the inputs. In this paper, we investigate this folklore to quantify the privacy of the min-hash sketch. We begin our investigation by considering the privacy of min-hash in a centralized setting where the hash functions are chosen by the min-hash functionality and are unknown to the participants. We show that in this case the min-hash output satisfies the standard definition of differential privacy (DP) without any additional noise. This immediately yields a privacy-preserving, sublinear-communication, semi-honest 2PC protocol based on FHE where the hash function is evaluated homomorphically. To improve the efficiency of this protocol, we next consider an implementation in the random oracle model. Here, the protocol participants jointly sample public prefixes for domain separation of the random oracle, and locally evaluate the resulting hash functions on their input sets. Unfortunately, we show that in this public hash function setting, the min-hash output is no longer DP. We therefore consider the notion of distributional differential privacy (DDP) introduced by Bassily et al. (FOCS 2013). We show that if the honest party's set has sufficiently high min-entropy, then the output of the min-hash functionality achieves DDP, again without any added noise. This yields a more efficient semi-honest two-party protocol in the random oracle model, where parties first locally hash their input sets and then perform a 2PC for comparison. By proving that our protocols satisfy DP and DDP respectively, our results formally confirm and qualify the folklore belief that min-hash-based protocols protect the privacy of their inputs.
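    The primitive under analysis is compact enough to sketch directly: a plain, non-private min-hash signature whose per-coordinate collision rate estimates the Jaccard index. The hashing scheme and parameters below are illustrative; the paper's protocols wrap this primitive in FHE or in a local-hash-then-2PC comparison.

```python
import hashlib

def minhash(items, k=128, seed=0):
    """k-coordinate min-hash signature of a set."""
    return [
        min(int.from_bytes(
            hashlib.sha256(f"{seed}:{i}:{x}".encode()).digest()[:8], "big")
            for x in items)
        for i in range(k)
    ]

def jaccard_estimate(sig_a, sig_b):
    # Pr[the i-th minima agree] equals the Jaccard index of the two sets.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

A, B = set(range(0, 600)), set(range(300, 900))
print(jaccard_estimate(minhash(A), minhash(B)))  # true Jaccard = 300/900
```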
