Utilitarian Distributed Constraint Optimization Problems
Privacy has been a major motivation for distributed problem optimization.
However, even though several methods have been proposed to evaluate it, none of
them is widely used. The Distributed Constraint Optimization Problem (DCOP) is
a fundamental model used to approach various families of distributed problems.
As privacy loss does not occur when a solution is accepted, but when it is
proposed, privacy requirements cannot be interpreted as a criterion of the
objective function of the DCOP. Here we approach the problem by letting both
the optimized costs found in DCOPs and the privacy requirements guide the
agents' exploration of the search space. We introduce the Utilitarian Distributed
Constraint Optimization Problem (UDCOP), where the costs and the privacy
requirements are used as parameters to a heuristic modifying the search
process. Common stochastic algorithms for decentralized constraint optimization
problems are evaluated here according to how well they preserve privacy.
Further, we propose some extensions where these solvers modify their search
process to take into account their privacy requirements, succeeding in
significantly reducing their privacy loss without significant degradation of
the solution quality.
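To make the idea concrete, here is a minimal, hypothetical sketch of a DSA-style local step in which costs and privacy requirements jointly steer the search: the agent scores each candidate value by its constraint-cost gain minus the privacy cost of proposing (and thereby revealing) it. The helper names (`cost_of`, `privacy_cost_of`) and the acceptance probability `p` are illustrative, not the paper's formulation.

```python
import random

def udcop_local_step(current_value, candidates, cost_of, privacy_cost_of,
                     revealed, p=0.7):
    """One DSA-style move: weigh cost improvement against privacy loss."""
    def score(v):
        gain = cost_of(current_value) - cost_of(v)           # constraint-cost improvement
        leak = 0.0 if v in revealed else privacy_cost_of(v)  # proposing a new value leaks
        return gain - leak
    best = max(candidates, key=score)
    if score(best) > 0 and random.random() < p:              # stochastic acceptance
        revealed.add(best)                                   # value is now disclosed
        return best
    return current_value
```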
Distributed Constraint Problems for Utilitarian Agents with Privacy Concerns, Recast as POMDPs
Privacy has traditionally been a major motivation for distributed problem
solving. Distributed Constraint Satisfaction Problem (DisCSP) as well as
Distributed Constraint Optimization Problem (DCOP) are fundamental models used
to solve various families of distributed problems. Even though several
approaches have been proposed to quantify and preserve privacy in such
problems, none of them is exempt from limitations. Here we approach the problem
by assuming that computation is performed among utilitarian agents. We
introduce a utilitarian approach where the utility of each state is estimated
as the difference between the reward for reaching an agreement on assignments
of shared variables and the cost of privacy loss. We investigate extensions to
solvers where agents integrate the utility function to guide their search and
decide which action to perform, thereby defining their policy. We show that
these extended solvers succeed in significantly reducing privacy loss without
significant degradation of the solution quality.
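A minimal rendering of the utility described above, with notation ours rather than the paper's:

```latex
% Utility of a state s: reward for reaching an agreement on assignments
% of shared variables, minus the cost of the privacy lost to get there.
U(s) = R_{\mathrm{agree}}(s) - C_{\mathrm{privacy}}(s)
```

An extended solver then prefers actions whose successor states have higher estimated utility.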
DisCSPs with Privacy Recast as Planning Problems for Utility-based Agents
Privacy has traditionally been a major motivation for decentralized problem
solving. However, even though several metrics have been proposed to quantify
it, none of them is easily integrated with common solvers. Constraint
programming is a fundamental paradigm used to approach various families of
problems. We introduce Utilitarian Distributed Constraint Satisfaction Problems
(UDisCSP), where the utility of each state is estimated as the difference
between the expected rewards for agreements on assignments for shared
variables, and the expected cost of privacy loss. Therefore, a traditional
DisCSP with privacy requirements is viewed as a planning problem. The actions
available to agents are: communication and local inference. Common
decentralized solvers are evaluated here from the point of view of their
interpretation as greedy planners. Further, we investigate some simple
extensions where these solvers start taking into account the utility function.
In these extensions we assume that the planning problem further restricts
the set of communication actions to only the communication primitives present
in the corresponding solver protocols. The solvers obtained for the new type of
problems propose the action (communication/inference) to be performed in each
situation, thereby defining the policy.
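As a rough sketch of this greedy-planner reading (action names and estimators are placeholders, not any solver's actual protocol), each agent simply picks the available action with the highest estimated utility:

```python
def greedy_policy(actions, expected_reward, expected_privacy_cost):
    """Greedy planner: choose the action maximizing estimated utility,
    i.e. expected agreement reward minus expected privacy-loss cost."""
    return max(actions, key=lambda a: expected_reward(a) - expected_privacy_cost(a))

# Usage (illustrative): the action set is restricted to the solver's own
# communication primitives plus local inference, as described above.
# action = greedy_policy(["send_ok", "send_nogood", "infer_locally"], er, epc)
```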
Local Differential Privacy in Decentralized Optimization
Privacy concerns with sensitive data are receiving increasing attention. In
this paper, we study local differential privacy (LDP) in interactive
decentralized optimization. By constructing random local aggregators, we
propose a framework to amplify LDP by a constant. We take Alternating Direction
Method of Multipliers (ADMM), and decentralized gradient descent as two
concrete examples, where experiments support our theory. In an asymptotic view,
we address the following question: Under LDP, is it possible to design a
distributed private minimizer for arbitrary closed convex constraints with
utility loss not explicitly dependent on dimensionality? As a related
result, we also show that with merely linear secret sharing,
information-theoretic privacy is achievable against a bounded number of colluding agents.
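For intuition, here is a bare-bones numpy sketch of locally differentially private decentralized gradient descent, in which each agent perturbs the iterate it shares with neighbors. The mixing matrix, step size, and Laplace noise scale are illustrative, and the paper's random-local-aggregator amplification is not reproduced.

```python
import numpy as np

def ldp_dgd_step(x, grads, W, step, noise_scale, rng):
    """x, grads: (n_agents, dim); W: (n_agents, n_agents) mixing matrix.
    Each agent releases only a noise-perturbed copy of its iterate."""
    shared = x + rng.laplace(scale=noise_scale, size=x.shape)  # LDP perturbation
    return W @ shared - step * grads                           # gossip average + local step
```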
AsymDPOP: Complete Inference for Asymmetric Distributed Constraint Optimization Problems
Asymmetric distributed constraint optimization problems (ADCOPs) are an
emerging model for coordinating agents with personal preferences. However, the
existing inference-based complete algorithms, which rely on local
eliminations, cannot be applied to ADCOPs, as parent agents would be required to transfer
their private functions to their children. Rather than disclosing private
functions explicitly to facilitate local eliminations, we solve the problem by
enforcing delayed eliminations and propose AsymDPOP, the first inference-based
complete algorithm for ADCOPs. To solve the severe scalability problems
incurred by delayed eliminations, we propose to reduce the memory consumption
by propagating a set of smaller utility tables instead of a joint utility
table, and to reduce the computational effort by sequential optimizations
instead of joint optimizations. The empirical evaluation indicates that
AsymDPOP significantly outperforms the state of the art, as well as
vanilla DPOP with the PEAV formulation.
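For reference, the basic inference primitive such algorithms build on can be sketched in a few lines of numpy: utility tables are joined by addition, and a variable is eliminated by optimizing it out. AsymDPOP's delayed eliminations and table splitting are not reproduced here; this only illustrates the primitive.

```python
import numpy as np

def join_and_eliminate(table_a, table_b, own_axis=0):
    """Join two utility tables defined over the same, aligned variable
    dimensions, then optimize the agent's own variable out (max-utility)."""
    joint = table_a + table_b        # join: utilities add across constraints
    return joint.max(axis=own_axis)  # elimination: best choice of own variable
```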
Differentially Private Distributed Constrained Optimization
Many resource allocation problems can be formulated as an optimization
problem whose constraints contain sensitive information about participating
users. This paper concerns solving this kind of optimization problem in a
distributed manner while protecting the privacy of user information. Without
privacy considerations, existing distributed algorithms typically rely on a
central entity computing and broadcasting certain public coordination signals
to participating users. However, the coordination signals often depend on user
information, so that an adversary who has access to the coordination signals
can potentially decode information on individual users and put user privacy at
risk. We present a distributed optimization algorithm that preserves
differential privacy, which is a strong notion that guarantees user privacy
regardless of any auxiliary information an adversary may have. The algorithm
achieves privacy by perturbing the public signals with additive noise, whose
magnitude is determined by the sensitivity of the projection operation onto
user-specified constraints. By viewing the differentially private algorithm as
an implementation of stochastic gradient descent, we are able to derive a bound
for the suboptimality of the algorithm. We illustrate the implementation of our
algorithm via a case study of electric vehicle charging. Specifically, we
derive the sensitivity and present numerical simulations for the algorithm.
Through numerical simulations, we are able to investigate various aspects of
the algorithm when being used in practice, including the choice of step size,
number of iterations, and the trade-off between privacy level and
suboptimality.
Comment: Submitted to the IEEE Transactions on Automatic Control.
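A hedged sketch of the mechanism as described: the coordinator takes a gradient step, projects onto the user-specified constraint set, and perturbs the broadcast signal with Laplace noise scaled to the projection's sensitivity. The names `project`, `sensitivity`, and `epsilon` are placeholders, not the paper's notation.

```python
import numpy as np

def private_broadcast(signal, gradient, step, project, sensitivity, epsilon, rng):
    """One differentially private coordination update (illustrative only)."""
    updated = project(signal - step * gradient)                      # projected gradient step
    noise = rng.laplace(scale=sensitivity / epsilon, size=updated.shape)
    return updated + noise                                           # perturbed public signal
```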
REAP: An Efficient Incentive Mechanism for Reconciling Aggregation Accuracy and Individual Privacy in Crowdsensing
Incentive mechanisms play a critical role in privacy-aware crowdsensing. Most
previous studies on co-design of incentive mechanism and privacy preservation
assume a trustworthy fusion center (FC). Very recent work has taken steps to
relax the assumption on trustworthy FC and allows participatory users (PUs) to
add well calibrated noise to their raw sensing data before reporting them,
whereas the focus is on the equilibrium behavior of data subjects with binary
data. Making a paradigm shift, this paper aims to quantify the privacy
compensation for continuous data sensing while allowing the FC to directly control
PUs. There are two conflicting objectives in such a scenario: the FC desires better
quality data in order to achieve higher aggregation accuracy whereas PUs prefer
adding larger noise for higher privacy-preserving levels (PPLs). To achieve a
good balance therein, we design an efficient incentive mechanism to REconcile
FC's Aggregation accuracy and individual PU's data Privacy (REAP).
Specifically, we adopt the celebrated notion of differential privacy to measure
PUs' PPLs and quantify their impacts on FC's aggregation accuracy. Then,
appealing to Contract Theory, we design an incentive mechanism to maximize FC's
aggregation accuracy under a given budget. The proposed incentive mechanism
offers different contracts to PUs with different privacy preferences, by which
FC can directly control PUs. It can further overcome the information asymmetry,
i.e., the FC typically does not know each PU's precise privacy preference. We
derive closed-form solutions for the optimal contracts in both complete
information and incomplete information scenarios. Further, the results are
generalized to the continuous case where PUs' privacy preferences take values
in a continuous domain. Extensive simulations are provided to validate the
feasibility and advantages of our proposed incentive mechanism.
Comment: 11 pages, 6 figures.
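The sensing side of this setting admits a very small sketch: each PU perturbs its continuous reading with noise calibrated to its chosen PPL, and the FC averages the noisy reports, so larger PPLs directly degrade aggregation accuracy. Laplace noise is our stand-in here; the contract-theoretic payments themselves are not sketched.

```python
import numpy as np

def noisy_reports(readings, ppl, rng):
    """Each PU adds noise whose scale grows with its privacy level."""
    return readings + rng.laplace(scale=ppl, size=readings.shape)

def fc_aggregate(reports):
    """FC's aggregate estimate: the mean of the noisy reports."""
    return reports.mean()
```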
Optimal Noise-Adding Mechanism in Additive Differential Privacy
We derive the optimal $(0, \delta)$-differentially private query-output
independent noise-adding mechanism for a single real-valued query function under
a general cost-minimization framework. Under a mild technical condition, we
show that the optimal noise probability distribution is a uniform distribution
with a probability mass at the origin. We explicitly derive the optimal noise
distribution for general cost functions, including the $\ell^1$ (for noise
magnitude) and $\ell^2$ (for noise power) cost functions, and show that the
probability concentration on the origin occurs when $\delta \geq \frac{1}{2}$.
Our result demonstrates an improvement over the existing Gaussian mechanisms by
a factor of two and three for $(0, \delta)$-differential privacy in the high
privacy regime in the context of minimizing the noise magnitude and noise
power, and the gain is more pronounced in the low privacy regime. Our result is
consistent with the existing result for $(0, \delta)$-differential privacy in
the discrete setting, and identifies a probability concentration phenomenon in
the continuous setting.
Comment: 10 pages, 5 figures. Accepted by the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS 2019).
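Sampling from the optimal distribution described above (uniform with a probability mass at the origin) is straightforward; the parameters `p` (mass at zero) and `a` (uniform half-width) stand in for the paper's derived values.

```python
import numpy as np

def sample_noise(p, a, size, rng):
    """With probability p output 0, otherwise draw uniformly from [-a, a]."""
    at_origin = rng.random(size) < p
    return np.where(at_origin, 0.0, rng.uniform(-a, a, size))
```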
Federated Learning via Over-the-Air Computation
The stringent requirements for low-latency and privacy of the emerging
high-stake applications with intelligent devices such as drones and smart
vehicles make cloud computing inapplicable in these scenarios. Instead,
edge machine learning becomes increasingly attractive for performing training
and inference directly at network edges without sending data to a centralized
data center. This stimulates a nascent field, termed federated learning, for
training a machine learning model on computation, storage, energy and bandwidth
limited mobile devices in a distributed manner. To preserve data privacy and
address the issues of unbalanced and non-IID data points across different
devices, the federated averaging algorithm has been proposed for global model
aggregation by computing the weighted average of the locally updated models at the
selected devices. However, the limited communication bandwidth becomes the main
bottleneck for aggregating the locally computed updates. We thus propose a
novel over-the-air computation based approach for fast global model aggregation
by exploiting the superposition property of a wireless multiple-access channel.
This is achieved by joint device selection and beamforming design, which is
modeled as a sparse and low-rank optimization problem to support efficient
algorithm design. To this end, we provide a
difference-of-convex-functions (DC) representation for the sparse and low-rank
function to enhance sparsity and accurately detect the fixed-rank constraint in
the procedure of device selection. A DC algorithm is further developed to solve
the resulting DC program with global convergence guarantees. The algorithmic
advantages and admirable performance of the proposed methodologies are
demonstrated through extensive numerical results.
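The aggregation step being accelerated is the familiar federated-averaging update; a minimal numpy rendering is below. Over-the-air computation would obtain this weighted sum through channel superposition rather than by explicit summation.

```python
import numpy as np

def federated_average(local_models, num_samples):
    """local_models: (n_devices, dim) updated parameters; num_samples:
    (n_devices,) local data sizes used as aggregation weights."""
    weights = num_samples / num_samples.sum()
    return weights @ local_models  # data-size-weighted average of device updates
```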
Distributed generation of privacy preserving data with user customization
Distributed devices such as mobile phones can produce and store large amounts
of data that can enhance machine learning models; however, this data may
contain private information specific to the data owner that prevents the
release of the data. We wish to reduce the correlation between user-specific
private information and data while maintaining the useful information. Rather
than learning a large model to achieve privatization from end to end, we
introduce a decoupling of the creation of a latent representation and the
privatization of data that allows user-specific privatization to occur in a
distributed setting with limited computation and minimal disturbance on the
utility of the data. We leverage a Variational Autoencoder (VAE) to create a
compact latent representation of the data; however, the VAE remains fixed for
all devices and all possible private labels. We then train a small generative
filter to perturb the latent representation based on individual preferences
regarding the private and utility information. The small filter is trained by
utilizing a GAN-type robust optimization that can take place on a distributed
device. We conduct experiments on three popular datasets: MNIST, UCI-Adult, and
CelebA, and give a thorough evaluation including visualizing the geometry of
the latent embeddings and estimating the empirical mutual information to show
the effectiveness of our approach.
Comment: Accepted at the ICLR 2019 SafeML workshop.
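A hedged PyTorch sketch of the pipeline described above: a fixed, pre-trained VAE encoder produces the latent code, and a small generative filter perturbs that code on-device. Layer sizes, the noise input, and the `frozen_vae.encode` call are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LatentFilter(nn.Module):
    """Small generative filter that outputs a perturbation of the latent code."""
    def __init__(self, latent_dim, noise_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + noise_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, z, noise):
        return z + self.net(torch.cat([z, noise], dim=-1))  # perturbed latent

# Usage (illustrative): z = frozen_vae.encode(x)
# z_priv = LatentFilter(z.shape[-1])(z, torch.randn(z.shape[0], 8))
# The filter is trained with a GAN-type robust objective on-device.
```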