Approximately Truthful Multi-Agent Optimization Using Cloud-Enforced Joint Differential Privacy
Multi-agent coordination problems often require agents to exchange state
information in order to reach some collective goal, such as agreement on a
final state value. In some cases, it is feasible that opportunistic agents may
deceptively report false state values for their own benefit, e.g., to claim a
larger portion of shared resources. Motivated by such cases, this paper
presents a multi-agent coordination framework which disincentivizes
opportunistic misreporting of state information. This paper focuses on
multi-agent coordination problems that can be stated as nonlinear programs,
with non-separable constraints coupling the agents. In this setting, an
opportunistic agent may be tempted to skew the problem's constraints in its
favor to reduce its local cost, and this is exactly the behavior we seek to
disincentivize. The framework presented uses a primal-dual approach wherein the
agents compute primal updates and a centralized cloud computer computes dual
updates. All computations performed by the cloud are carried out in a way that
enforces joint differential privacy, which adds noise in order to dilute any
agent's influence upon the value of its cost function in the problem. We show
that this dilution deters agents from intentionally misreporting their states
to the cloud, and present bounds on the possible cost reduction an agent can
attain through misreporting its state. This work extends our earlier work on
incorporating ordinary differential privacy into multi-agent optimization, and
we show that the earlier approach can be modified to disincentivize the
misreporting of states to the cloud. Numerical results are presented to
demonstrate convergence of the optimization algorithm under joint differential
privacy.
Comment: 17 pages, 3 figures
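A minimal sketch of the primal-dual scheme described in this abstract, for a toy resource-allocation problem: agents hold quadratic local costs coupled by one shared constraint, compute their own primal updates, and a "cloud" runs a dual ascent with Laplace noise added to dilute each agent's influence. The problem data, step size, and noise scale below are illustrative assumptions, not values from the paper.

```python
import math
import random

random.seed(0)

def laplace(scale):
    # Sample from a zero-mean Laplace distribution (inverse-CDF method).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# Hypothetical setup: agent i minimizes (x_i - a_i)^2 subject to sum(x) <= C.
a = [3.0, 4.0, 5.0]   # agents' private target states
C = 6.0               # shared resource budget (coupling constraint)
mu = 0.0              # dual variable held by the cloud
step, noise_scale = 0.1, 0.05

for _ in range(300):
    # Primal updates, computed locally by each agent:
    # x_i = argmin (x_i - a_i)^2 + mu * x_i  =>  x_i = a_i - mu / 2
    x = [ai - mu / 2.0 for ai in a]
    # Dual update, computed by the cloud with Laplace noise for joint
    # differential privacy, diluting any one agent's influence on mu.
    mu = max(0.0, mu + step * (sum(x) - C + laplace(noise_scale)))

# The noisy dual ascent still approximately enforces sum(x) <= C, so a
# misreported a_i only weakly shifts the multiplier each agent faces.
```

Because the noise enters only the cloud's dual step, each agent's primal update stays exact; the privacy noise shows up as a small residual violation of the coupling constraint.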
Distributed Private Online Learning for Social Big Data Computing over Data Center Networks
With the rapid growth of Internet technologies, cloud computing and social
networks have become ubiquitous. An increasing number of people participate in
social networks, generating massive amounts of online social data. To
exploit knowledge from these data and predict users' social behavior, data
mining must be carried out over social networks. Almost
all online websites use cloud services to effectively process the large scale
of social data, which are gathered from distributed data centers. Because
these data are large-scale, high-dimensional, and widely distributed, we
propose a distributed sparse online learning algorithm to handle them.
Additionally,
privacy protection is an important concern in social networks: the privacy of
individuals should not be compromised while their social data are mined. Thus
we also consider the privacy problem in this article. Our simulations show
that an appropriate level of data sparsity enhances the performance of our
algorithm, and that the privacy-preserving method does not significantly hurt
the performance of the proposed algorithm.
Comment: ICC201
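A hedged sketch of the kind of algorithm this abstract describes: several simulated "data centers" each compute a local gradient on private data, add Laplace noise before sharing it (privacy protection), and a coordinator averages the noisy gradients and applies a soft-thresholding step to keep the model sparse. The model, dimensions, and all parameters are illustrative assumptions, not the paper's.

```python
import math
import random

random.seed(1)

def laplace(scale):
    # Zero-mean Laplace noise for privacy-preserving gradient sharing.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def soft_threshold(w, t):
    # Sparsity-inducing shrinkage: pull every weight toward zero by t.
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in w]

# Hypothetical setup: a sparse linear model learned across three data centers.
d = 10
w_true = [1.0, -2.0] + [0.0] * (d - 2)   # only two informative features
w = [0.0] * d
eta, l1, noise_scale = 0.05, 0.01, 0.01

def sample():
    x = [random.gauss(0, 1) for _ in range(d)]
    return x, sum(wi * xi for wi, xi in zip(w_true, x))

def loss():
    sq = 0.0
    for _ in range(50):
        x, y = sample()
        sq += (sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2
    return sq / 50

initial_loss = loss()
for _ in range(300):
    # Each center computes a noisy local gradient on one private sample;
    # the coordinator averages them and applies a sparsifying update.
    grads = []
    for _center in range(3):
        x, y = sample()
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        grads.append([err * xi + laplace(noise_scale) for xi in x])
    g = [sum(gc[j] for gc in grads) / 3 for j in range(d)]
    w = soft_threshold([wi - eta * gj for wi, gj in zip(w, g)], eta * l1)
final_loss = loss()
```

The noise scale controls the privacy/accuracy trade-off: with a small scale, as above, the learned model still fits well, matching the abstract's claim that the privacy mechanism does not significantly hurt performance.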
Privacy-Preserving Distributed Optimization via Subspace Perturbation: A General Framework
As the modern world becomes increasingly digitized and interconnected,
distributed signal processing has proven to be effective in processing its
large volume of data. However, a main challenge limiting the broad use of
distributed signal processing techniques is the issue of privacy in handling
sensitive data. To address this privacy issue, we propose a novel yet general
subspace perturbation method for privacy-preserving distributed optimization,
which allows each node to obtain the desired solution while protecting its
private data. In particular, we show that the dual variables introduced in each
distributed optimizer will not converge in a certain subspace determined by the
graph topology. Additionally, the optimization variable is guaranteed to
converge to the desired solution, because its update is orthogonal to this
non-convergent subspace. We therefore propose to insert noise in the
non-convergent subspace
through the dual variable such that the private data are protected, and the
accuracy of the desired solution is completely unaffected. Moreover, the
proposed method is shown to be secure under two widely-used adversary models:
passive and eavesdropping. Furthermore, we consider several distributed
optimizers such as ADMM and PDMM to demonstrate the general applicability of
the proposed method. Finally, we test the performance through a set of
applications. Numerical tests indicate that the proposed method outperforms
existing methods in terms of estimation accuracy, privacy level,
communication cost, and convergence rate.
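As a hedged illustration of the subspace-perturbation idea, the sketch below runs dual ascent for consensus averaging on a three-node cycle. The dual variable is initialized with large random noise; its component in the null space of B^T (the non-convergent subspace, here spanned by the all-ones vector) persists forever and masks the private data, while the primal variable still converges exactly to the average. The graph, problem, and step size are illustrative choices, not the paper's experiments.

```python
import random

# Incidence matrix B (edges x nodes) of a 3-node cycle graph.
B = [[1, -1, 0],
     [0, 1, -1],
     [-1, 0, 1]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

Bt = [list(col) for col in zip(*B)]      # B transpose

a = [2.0, 6.0, 10.0]                      # private node data (average = 6)
# Subspace perturbation: initialize the dual variable with large noise.
lam = [random.gauss(0, 10) for _ in range(3)]
lam0_mean = sum(lam) / 3                  # null-space (noise) component
c = 0.3                                   # dual step size

for _ in range(100):
    # Local primal update: x_i = argmin (x_i - a_i)^2/2 + (B^T lam)_i x_i
    x = [ai - bx for ai, bx in zip(a, matvec(Bt, lam))]
    # Dual ascent on the consensus constraint B x = 0.
    lam = [l + c * g for l, g in zip(lam, matvec(B, x))]

# x converges to the exact average: the noisy null-space component of lam
# never enters x, because B^T annihilates it.
```

The dual update only ever adds vectors in the range of B, so the initial noise component in null(B^T) is untouched: an eavesdropper observing lam cannot subtract it away, yet the solution accuracy is completely unaffected.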
Cloud-based Quadratic Optimization with Partially Homomorphic Encryption
The development of large-scale distributed control systems has led to the
outsourcing of costly computations to cloud-computing platforms, as well as to
concerns about privacy of the collected sensitive data. This paper develops a
cloud-based protocol for a quadratic optimization problem involving multiple
parties, each holding information it seeks to maintain private. The protocol is
based on the projected gradient ascent on the Lagrange dual problem and
exploits partially homomorphic encryption and secure multi-party computation
techniques. Using formal cryptographic definitions of indistinguishability, the
protocol is shown to achieve computational privacy, i.e., there is no
computationally efficient algorithm that any involved party can employ to
obtain private information beyond what can be inferred from the party's inputs
and outputs only. In order to reduce the communication complexity of the
proposed protocol, we introduce a variant that achieves this objective at the
expense of weaker privacy guarantees. We discuss in detail the computational
and communication complexity properties of both algorithms theoretically and
also through implementations. We conclude the paper with a discussion on
computational privacy and other notions of privacy, such as the non-unique
retrieval of the private information from the protocol outputs.
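The partially homomorphic building block such a protocol relies on can be illustrated with a toy Paillier cryptosystem, which is additively homomorphic: multiplying ciphertexts adds the underlying plaintexts, and raising a ciphertext to a constant scales its plaintext, which is what an untrusted cloud needs to aggregate encrypted quantities. The tiny primes below are for illustration only and offer no real security; real deployments use primes of at least 1024 bits.

```python
import math
import random

def keygen(p=293, q=433):
    # Toy Paillier key generation with illustrative small primes.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)     # modular inverse; valid since gcd(lam, n) == 1
    return n, (lam, mu, n)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # With generator g = n + 1: Enc(m) = (1 + n)^m * r^n mod n^2
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    L = (pow(c, lam, n * n) - 1) // n
    return (L * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 41), encrypt(pub, 1)
# Additive homomorphism: multiplying ciphertexts adds plaintexts, so the
# cloud can sum contributions without ever seeing them.
c_sum = (c1 * c2) % (pub * pub)
# Scalar multiplication: Enc(m)^k decrypts to k * m, enough for the cloud
# to apply known step sizes to encrypted gradient terms.
c_scaled = pow(c1, 3, pub * pub)
```

These two operations (ciphertext product and ciphertext exponentiation by a public constant) are the only ones a projected-gradient step on encrypted data requires, which is why a *partially* homomorphic scheme suffices here.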