Linear and Range Counting under Metric-based Local Differential Privacy
Local differential privacy (LDP) enables private data sharing and analytics
without the need for a trusted data collector. Error-optimal primitives
(e.g., for estimating means and item frequencies) under LDP have been well
studied.
For analytical tasks such as range queries, however, the best known error bound
is dependent on the domain size of private data, which is potentially
prohibitive. This deficiency is inherent, as LDP protects the same level of
indistinguishability between any pair of private data values for each data
owner.
In this paper, we utilize an extension of ε-LDP called Metric-LDP or E-LDP,
where a metric E defines heterogeneous privacy guarantees for different pairs
of private data values and thus provides a more flexible knob than ε does to
relax LDP and tune utility-privacy trade-offs. We show that, under such
privacy relaxations, for analytical workloads such as linear counting,
multi-dimensional range counting queries, and quantile queries, we can achieve
significant gains in utility. In particular, for range queries under E-LDP
where the metric E is the L¹-distance function scaled by ε, we design
mechanisms with errors independent of the domain sizes; instead, their errors
depend on the metric E, which specifies at what granularity the private data
is protected. We believe that the primitives we design for E-LDP will be
useful in developing mechanisms for other analytical tasks, and encourage the
adoption of LDP in practice.
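One standard way to obtain such heterogeneous guarantees (a minimal sketch, not the paper's actual mechanisms) is an exponential-mechanism-style channel whose output probabilities decay with the metric; the domain, ε, and L¹ metric below are illustrative assumptions:

```python
import math
import random

def metric_ldp_channel(x, domain, metric):
    """Exponential-mechanism-style channel: Pr[y | x] proportional to
    exp(-metric(x, y) / 2). If metric satisfies the triangle inequality,
    the channel satisfies E-LDP: Pr[y | x] / Pr[y | x'] <= exp(metric(x, x'))."""
    weights = [math.exp(-metric(x, y) / 2) for y in domain]
    return random.choices(domain, weights=weights)[0]

# L1 distance scaled by eps: nearby values are nearly indistinguishable,
# while far-apart values are protected only at the coarser level the metric sets.
eps = 1.0
domain = list(range(16))
metric = lambda a, b: eps * abs(a - b)
noisy = metric_ldp_channel(7, domain, metric)
```

Because the privacy loss between two inputs scales with their distance, error can depend on the metric rather than on the domain size, which is the behavior the abstract describes.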
Privacy-Preserving Collaborative Learning through Feature Extraction
We propose a framework in which multiple entities collaborate to build a
machine learning model while preserving privacy of their data. The approach
utilizes feature embeddings from shared/per-entity feature extractors
transforming data into a feature space for cooperation between entities. We
propose two specific methods and compare them with a baseline method. In Shared
Feature Extractor (SFE) Learning, the entities use a shared feature extractor
to compute feature embeddings of samples. In Locally Trained Feature Extractor
(LTFE) Learning, each entity uses a separate feature extractor and models are
trained using concatenated features from all entities. As a baseline, in
Cooperatively Trained Feature Extractor (CTFE) Learning, the entities train
models by sharing raw data. Secure multi-party algorithms are utilized to train
models without revealing data or features in plain text. We investigate the
trade-offs among SFE, LTFE, and CTFE in regard to performance, privacy leakage
(using an off-the-shelf membership inference attack), and computational cost.
LTFE provides the most privacy, followed by SFE, and then CTFE. Computational
cost is lowest for SFE and the relative speed of CTFE and LTFE depends on
network architecture. CTFE and LTFE provide the best accuracy. We use MNIST, a
synthetic dataset, and a credit card fraud detection dataset for evaluations.
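To make the difference between the SFE and LTFE data flows concrete, a toy sketch with invented linear extractors (all weights, entity names, and dimensions below are illustrative, not from the paper):

```python
# Hypothetical 2-entity vertical setting; weights and names are invented.
def extract(weights, x):
    """Linear feature extractor: one output per weight row."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

xa, xb = [1.0, 2.0], [3.0, 4.0]          # each entity's attribute slice

# SFE: both entities embed with one shared extractor; features are then joined.
shared_w = [[1.0, 0.0], [0.0, 1.0]]
sfe_features = extract(shared_w, xa) + extract(shared_w, xb)

# LTFE: each entity embeds with its own locally trained extractor; the model
# is trained on the concatenation of all entities' features.
local_w = {"A": [[2.0, 0.0]], "B": [[0.0, 3.0]]}
ltfe_features = extract(local_w["A"], xa) + extract(local_w["B"], xb)
```

In the paper's setting the extractors are neural networks and the combination happens under secure multi-party computation, so neither raw data nor plaintext features are revealed.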
Anonymization procedures for tabular data: an explanatory technical and legal synthesis
In the European Union, Data Controllers and Data Processors who work with personal data have to comply with the General Data Protection Regulation and other applicable laws. This affects the storing and processing of personal data. But some data processing in data mining or statistical analyses does not require any personal reference to the data, so the personal context can be removed. For these use cases, to comply with applicable laws, any existing personal information has to be removed by applying so-called anonymization. However, anonymization should maintain data utility. The concept of anonymization is therefore a double-edged sword with an intrinsic trade-off: privacy enforcement vs. utility preservation. The former might not be entirely guaranteed when anonymized data are published as Open Data. In theory and practice, there exist diverse approaches to conduct and score anonymization. This explanatory synthesis discusses technical perspectives on the anonymization of tabular data with a special emphasis on the European Union’s legal basis. The studied methods for conducting anonymization, and for scoring the anonymization procedure and the resulting anonymity, are explained in unifying terminology. The examined methods and scores cover both categorical and numerical data; the scores involve data utility, information preservation, and privacy models. In practice-relevant examples, methods and scores are experimentally tested on records from the UCI Machine Learning Repository’s “Census Income (Adult)” dataset.
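As a small illustration of one of the privacy models such syntheses examine, a sketch of a k-anonymity check for tabular records (the records below are invented toy rows loosely modeled on the "Census Income (Adult)" columns, not actual dataset entries):

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True iff every combination of quasi-identifier values occurs >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Toy records; age and zip have already been generalized into ranges/masks.
records = [
    {"age": "30-39", "zip": "021**", "income": "<=50K"},
    {"age": "30-39", "zip": "021**", "income": ">50K"},
    {"age": "40-49", "zip": "021**", "income": "<=50K"},
]
is_k_anonymous(records, ["age", "zip"], 2)  # False: the 40-49 group is a singleton
```

Generalizing further (e.g., coarsening age to "30-49") would merge the singleton group at the cost of utility, which is exactly the privacy-vs-utility trade-off the abstract describes.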
OpBoost: A Vertical Federated Tree Boosting Framework Based on Order-Preserving Desensitization
Vertical Federated Learning (FL) is a new paradigm that enables users with
non-overlapping attributes of the same data samples to jointly train a model
without directly sharing the raw data. Nevertheless, recent works show that
this is still not sufficient to prevent privacy leakage from the training
process or the trained model. This paper focuses on privacy-preserving tree
boosting algorithms under vertical FL. The existing solutions based on
cryptography involve heavy computation and communication overhead and are
vulnerable to inference attacks. Although the solution based on Local
Differential Privacy (LDP) addresses the above problems, it leads to low
accuracy of the trained model.
This paper explores how to improve the accuracy of the widely deployed tree
boosting algorithms satisfying differential privacy under vertical FL.
Specifically, we introduce a framework called OpBoost. Three order-preserving
desensitization algorithms satisfying a variant of LDP called distance-based
LDP (dLDP) are designed to desensitize the training data. In particular, we
optimize the dLDP definition and study efficient sampling distributions to
further improve the accuracy and efficiency of the proposed algorithms. The
proposed algorithms provide a trade-off between the privacy of pairs with large
distance and the utility of desensitized values. Comprehensive evaluations show
that OpBoost achieves better prediction accuracy of trained models than
existing LDP approaches under reasonable settings. Our code is open
source.
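A minimal sketch of a distance-based-LDP perturbation (illustrative only; OpBoost's order-preserving desensitization algorithms are more involved) is the two-sided geometric, i.e. discrete Laplace, mechanism. Its noise is unbiased, so desensitized values preserve the order of the inputs in expectation:

```python
import math
import random

def geometric(p):
    """Number of failures before the first success; support {0, 1, 2, ...}."""
    u = 1.0 - random.random()          # u in (0, 1], avoids log(0)
    return int(math.floor(math.log(u) / math.log(1.0 - p)))

def dldp_desensitize(x, eps):
    """Two-sided geometric noise: Pr[y | x] proportional to exp(-eps * |y - x|).
    This satisfies dLDP: outputs of inputs x, x' differ in probability by at
    most a factor exp(eps * |x - x'|), so pairs at small distance get strong
    protection while pairs at large distance trade privacy for utility."""
    p = 1.0 - math.exp(-eps)
    # Difference of two i.i.d. geometrics is two-sided geometric, mean zero.
    return x + geometric(p) - geometric(p)

# Larger eps means less noise; nearby raw values stay close after desensitization.
desensitized = [dldp_desensitize(v, eps=1.0) for v in (3, 9, 27)]
```

Because expected output equals the input, thresholds learned by a tree booster on desensitized values remain meaningful for the original feature order, which is the property order-preserving desensitization is after.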