Cloud-based Quadratic Optimization with Partially Homomorphic Encryption
The development of large-scale distributed control systems has led to the
outsourcing of costly computations to cloud-computing platforms, as well as to
concerns about privacy of the collected sensitive data. This paper develops a
cloud-based protocol for a quadratic optimization problem involving multiple
parties, each holding information it seeks to maintain private. The protocol is
based on the projected gradient ascent on the Lagrange dual problem and
exploits partially homomorphic encryption and secure multi-party computation
techniques. Using formal cryptographic definitions of indistinguishability, the
protocol is shown to achieve computational privacy, i.e., there is no
computationally efficient algorithm that any involved party can employ to
obtain private information beyond what can be inferred from the party's inputs
and outputs only. To reduce the communication complexity of the
proposed protocol, we introduce a variant that achieves this objective at the
expense of weaker privacy guarantees. We discuss in detail the computational
and communication complexity properties of both algorithms theoretically and
also through implementations. We conclude the paper with a discussion on
computational privacy and other notions of privacy, such as the non-unique
retrieval of the private information from the protocol outputs.
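The additive homomorphism that such protocols rely on can be illustrated with a toy Paillier cryptosystem. This is a minimal sketch with deliberately insecure key sizes, not the paper's implementation: an untrusted aggregator can sum two parties' encrypted gradient contributions, and scale a ciphertext by a public constant, without ever decrypting.

```python
import math
import random

class ToyPaillier:
    """Minimal Paillier cryptosystem; toy primes, for illustration only."""
    def __init__(self, p=499, q=547):
        self.n = p * q
        self.n2 = self.n * self.n
        self.g = self.n + 1                      # standard choice g = n + 1
        # lam = lcm(p-1, q-1)
        self.lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
        # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
        self.mu = pow((pow(self.g, self.lam, self.n2) - 1) // self.n, -1, self.n)

    def enc(self, m):
        while True:                              # draw randomizer coprime to n
            r = random.randrange(1, self.n)
            if math.gcd(r, self.n) == 1:
                return pow(self.g, m, self.n2) * pow(r, self.n, self.n2) % self.n2

    def dec(self, c):
        return (pow(c, self.lam, self.n2) - 1) // self.n * self.mu % self.n

pk = ToyPaillier()
# Two parties encrypt their local (integer-encoded) gradient contributions...
c1, c2 = pk.enc(3), pk.enc(4)
# ...and the cloud aggregates them homomorphically: Enc(a) * Enc(b) = Enc(a + b)
assert pk.dec(c1 * c2 % pk.n2) == 7
# Scaling by a public step size k: Enc(a)^k = Enc(k * a)
assert pk.dec(pow(c1, 5, pk.n2)) == 15
```

In a protocol like the one above, gradient values would first be encoded as fixed-point integers, since Paillier operates over residues modulo n.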
Sample Complexity Bounds on Differentially Private Learning via Communication Complexity
In this work we analyze the sample complexity of classification by
differentially private algorithms. Differential privacy is a strong and
well-studied notion of privacy introduced by Dwork et al. (2006) that ensures
that the output of an algorithm leaks little information about the data point
provided by any of the participating individuals. Sample complexity of private
PAC and agnostic learning was studied in a number of prior works starting with
(Kasiviswanathan et al., 2008) but a number of basic questions still remain
open, most notably whether learning with privacy requires more samples than
learning without privacy.
We show that the sample complexity of learning with (pure) differential
privacy can be arbitrarily higher than the sample complexity of learning
without the privacy constraint or the sample complexity of learning with
approximate differential privacy. Our second contribution and the main tool is
an equivalence between the sample complexity of (pure) differentially private
learning of a concept class C (or SCDP(C)) and the randomized one-way
communication complexity of the evaluation problem for concepts from C. Using
this equivalence we prove the following bounds:
1. SCDP(C) = Ω(LDim(C)), where LDim(C) is Littlestone's (1987)
dimension characterizing the number of mistakes in the online-mistake-bound
learning model. Known bounds on LDim(C) then imply that SCDP(C) can be much
higher than the VC-dimension of C.
2. For any t, there exists a class C such that LDim(C) = 2 but SCDP(C) ≥ t.
3. For any t, there exists a class C such that the sample complexity of
(pure) ε-differentially private PAC learning is Ω(t/ε) but
the sample complexity of the relaxed (ε,δ)-differentially private
PAC learning is O(log(1/δ)/ε). This resolves an open problem of
Beimel et al. (2013b).
Comment: Extended abstract appears in Conference on Learning Theory (COLT) 2014
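The pure differential-privacy guarantee discussed above can be made concrete with the classic randomized-response mechanism. This is a hedged illustration of the definition, not a construction from the paper: reporting the true bit with probability e^ε/(1+e^ε) shifts the output distribution by a factor of at most e^ε between neighboring inputs, which is exactly pure ε-differential privacy.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (1 + e^eps).

    Flipping the input bit changes each output's probability by a factor
    of at most e^eps, so the mechanism is eps-differentially private.
    """
    p_truth = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit

def debiased_mean(reports, epsilon):
    # E[report] = (2p - 1) * true_mean + (1 - p); invert this affine map
    # to get an unbiased estimate of the true mean from noisy reports.
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)
```

The sample-complexity question the paper studies is visible even here: the debiased estimate's variance grows as ε shrinks, so stronger privacy demands more samples for the same accuracy.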
Killing Two Birds with One Stone: Quantization Achieves Privacy in Distributed Learning
Communication efficiency and privacy protection are two critical issues in
distributed machine learning. Existing methods tackle these two issues
separately and may have a high implementation complexity that constrains their
application in a resource-limited environment. We propose a comprehensive
quantization-based solution that could simultaneously achieve communication
efficiency and privacy protection, providing new insights into the correlated
nature of communication and privacy. Specifically, we demonstrate the
effectiveness of our proposed solution in the distributed stochastic gradient
descent (SGD) framework by adding binomial noise to the uniformly quantized
gradients to reach the desired differential privacy level with only a minor
sacrifice in communication efficiency. We theoretically characterize the new
trade-offs between communication, privacy, and learning performance.
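The mechanism described above can be sketched as follows. The step size and binomial parameter here are hypothetical placeholders; in practice they must be calibrated to the target differential-privacy level, which this sketch does not do. The point is structural: binomial noise is integer-valued, so the perturbed message stays on the quantization lattice and remains cheap to encode.

```python
import random

def quantize(x, step=0.05):
    # Deterministic uniform quantization: map x to the nearest lattice index.
    return round(x / step)

def binomial_noise(m=64):
    # Binomial(m, 1/2) shifted by m/2: zero-mean, integer-valued noise,
    # so adding it never leaves the integer lattice of quantized values.
    return sum(random.getrandbits(1) for _ in range(m)) - m // 2

def privatize_gradient(grad, step=0.05, m=64):
    # Transmit small integers; the receiver rescales by `step`.
    return [quantize(g, step) + binomial_noise(m) for g in grad]

noisy = privatize_gradient([0.12, -0.3, 0.07])
assert all(isinstance(v, int) for v in noisy)
```

A single discrete randomization step thus serves both goals at once, which is the "two birds with one stone" of the title: the quantizer compresses, and the same discrete channel carries the privacy noise.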
Towards Communication-Efficient Quantum Oblivious Key Distribution
Oblivious Transfer, a fundamental problem in the field of secure multi-party
computation, is defined as follows: a database DB of N bits held by Bob is
queried by a user Alice who is interested in the bit DB_b in such a way that
(1) Alice learns DB_b and only DB_b and (2) Bob does not learn anything about
Alice's choice b. While solutions to this problem in the classical domain rely
largely on unproven computational complexity theoretic assumptions, it is also
known that perfect solutions that guarantee both database and user privacy are
impossible in the quantum domain. Jakobi et al. [Phys. Rev. A, 83(2), 022301,
Feb 2011] proposed a protocol for Oblivious Transfer using well known QKD
techniques to establish an Oblivious Key to solve this problem. Their solution
provided a good degree of database and user privacy (using physical principles
like impossibility of perfectly distinguishing non-orthogonal quantum states
and the impossibility of superluminal communication) while being loss-resistant
and implementable with commercial QKD devices (due to the use of SARG04).
However, their Quantum Oblivious Key Distribution (QOKD) protocol requires a
communication complexity of O(N log N). Since modern databases can be extremely
large, it is important to reduce this communication as much as possible. In
this paper, we first suggest a modification of their protocol wherein the
number of qubits that need to be exchanged is reduced to O(N). A subsequent
generalization reduces the quantum communication complexity even further in
such a way that only a few hundred qubits are needed to be transferred even for
very large databases.
Comment: 7 pages
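The classical post-processing that turns an oblivious key into 1-out-of-N oblivious transfer can be sketched as follows. This is a simplified illustration of the standard oblivious-key model, not the quantum protocol itself: Bob holds the entire key, Alice knows it at exactly one position j unknown to Bob, and a single announced shift lets Alice decrypt exactly the database bit she wants.

```python
import secrets

N = 8
db = [secrets.randbits(1) for _ in range(N)]    # Bob's private database
key = [secrets.randbits(1) for _ in range(N)]   # oblivious key: Bob knows all bits
j = 3                                           # Alice knows only key[j]; j is hidden from Bob
b = 5                                           # the index Alice wants to read

# Alice announces the shift s; since j is uniform from Bob's view,
# s reveals nothing about b.
s = (j - b) % N
# Bob one-time-pads every database bit with the shifted key and sends it.
cipher = [db[i] ^ key[(i + s) % N] for i in range(N)]
# Alice can strip the pad only at position b, where the pad bit is key[j].
recovered = cipher[b] ^ key[j]
assert recovered == db[b]
```

Because every key bit is used as a one-time pad, Alice learns only DB_b, and Bob, never seeing b or j, learns nothing about Alice's choice; the quantum part of the protocol is needed only to establish the oblivious key itself.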