Privacy-Preserving Push-Pull Method for Decentralized Optimization via State Decomposition
Distributed optimization is showing great potential in multiple fields,
e.g., machine learning, control, and resource allocation. Existing
decentralized optimization algorithms require sharing explicit state
information among the agents, which raises the risk of private information
leakage. To protect privacy, a common approach is to combine traditional
decentralized optimization algorithms with information-security mechanisms
such as differential privacy or homomorphic encryption. However, this
either sacrifices optimization accuracy or incurs a heavy computational
burden. To overcome these shortcomings, we develop a novel privacy-preserving
decentralized optimization algorithm, called PPSD, that combines gradient
tracking with a state decomposition mechanism. Specifically, each agent
decomposes its state associated with the gradient into two substates. One
substate is used for interaction with neighboring agents, and the other
substate containing private information acts only on the first substate and
thus is entirely agnostic to other agents. For strongly convex and smooth
objective functions, PPSD attains an R-linear convergence rate. Moreover, the
algorithm can preserve the agents' private information from being leaked to
honest-but-curious neighbors. Simulations further confirm these results.
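The state-decomposition idea can be sketched on the simpler problem of average consensus (a hypothetical toy, not the paper's exact PPSD updates; the function name, graph, and weights below are illustrative assumptions): each agent splits its state into a shared substate and a private substate that never leaves the agent, yet the network still converges to the true average.

```python
import numpy as np

def state_decomposition_consensus(x0, adj, steps=2000, w=0.1, c=0.2, seed=None):
    """Average consensus with state decomposition: each agent splits its
    state into a shared substate `alpha` (the only value ever transmitted)
    and a private substate `beta` that interacts solely with its own alpha.
    The split alpha_i + beta_i = 2*x0_i preserves the network average."""
    rng = np.random.default_rng(seed)
    A = np.asarray(adj, float)
    deg = A.sum(axis=1)
    x0 = np.asarray(x0, float)
    alpha = rng.uniform(-5.0, 5.0, len(x0))   # arbitrary private split
    beta = 2.0 * x0 - alpha
    for _ in range(steps):
        # symmetric neighbor averaging on alpha, plus alpha<->beta coupling
        alpha, beta = (alpha + w * (A @ alpha - deg * alpha) + c * (beta - alpha),
                       beta + c * (alpha - beta))
    return alpha

ring = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
est = state_decomposition_consensus([1.0, 2.0, 3.0, 6.0], ring, seed=0)
print(est)  # every entry converges to mean(x0) = 3.0
```

Because the symmetric weights keep the combined update doubly stochastic, the sum of all substates is conserved, so neighbors observing only `alpha` learn neither `beta` nor the initial value.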
Dynamics based Privacy Preservation in Decentralized Optimization
With decentralized optimization having increased applications in various
domains ranging from machine learning, control, sensor networks, to robotics,
its privacy is also receiving increased attention. Existing privacy-preserving
approaches for decentralized optimization achieve privacy preservation by
patching decentralized optimization with information-technology privacy
mechanisms such as differential privacy or homomorphic encryption, which either
sacrifices optimization accuracy or incurs heavy computation/communication
overhead. We propose an inherently privacy-preserving decentralized
optimization algorithm by exploiting the robustness of decentralized
optimization to uncertainties in optimization dynamics. More specifically, we
present a general decentralized optimization framework, based on which we show
that privacy can be enabled in decentralized optimization by adding randomness
in optimization parameters. We further show that the added randomness has no
influence on the accuracy of optimization, and prove that our inherently
privacy-preserving algorithm has R-linear convergence when the global
objective function is smooth and strongly convex. We also rigorously prove that
the proposed algorithm prevents a node's gradient from being inferred by
other nodes. Numerical simulation results confirm the theoretical predictions.
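A loose sketch of this idea, assuming quadratic local objectives and randomness injected into the mixing weights (the paper's actual algorithm and parameter randomization differ; all names below are illustrative): gradient tracking still converges to the exact optimum even though the mixing dynamics are randomized at every iteration.

```python
import numpy as np

def private_gradient_tracking(b, W_base, eta=0.05, iters=5000, seed=None):
    """Gradient tracking for f(x) = sum_i 0.5*(x - b_i)^2 where each
    iteration uses a randomly perturbed, but still doubly stochastic,
    mixing matrix W_k = (1-theta_k)*I + theta_k*W_base. The random theta_k
    obscures the exact dynamics without biasing the final solution."""
    rng = np.random.default_rng(seed)
    b = np.asarray(b, float)
    n = len(b)
    x = np.zeros(n)
    y = x - b                 # y_0 = local gradients at x_0
    g_old = y.copy()
    for _ in range(iters):
        theta = rng.uniform(0.5, 1.0)                 # random mixing level
        W = (1 - theta) * np.eye(n) + theta * W_base  # stays doubly stochastic
        x = W @ x - eta * y
        g_new = x - b
        y = W @ y + g_new - g_old                     # track average gradient
        g_old = g_new
    return x

# Metropolis weights on a 4-agent ring (doubly stochastic, symmetric)
W_base = np.array([[1/3, 1/3, 0, 1/3],
                   [1/3, 1/3, 1/3, 0],
                   [0, 1/3, 1/3, 1/3],
                   [1/3, 0, 1/3, 1/3]])
sol = private_gradient_tracking([1.0, 2.0, 3.0, 6.0], W_base, seed=0)
print(sol)  # each entry converges to 3.0, the minimizer of the global sum
```

Since every W_k remains doubly stochastic, the average of the tracking variable always equals the average gradient, so the added randomness leaves accuracy untouched.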
Tailoring Gradient Methods for Differentially-Private Distributed Optimization
Decentralized optimization is gaining increased traction due to its
widespread applications in large-scale machine learning and multi-agent
systems. The same mechanism that enables its success, i.e., information sharing
among participating agents, however, also leads to the disclosure of individual
agents' private information, which is unacceptable when sensitive data are
involved. As differential privacy is becoming a de facto standard for privacy
preservation, results integrating differential privacy with distributed
optimization have recently emerged. Although such differential-privacy based
approaches for distributed optimization are efficient in both computation and
communication, directly incorporating differential privacy design in existing
distributed optimization approaches significantly compromises optimization
accuracy. In this paper, we redesign and tailor gradient methods for
differentially-private distributed optimization, proposing two
differential-privacy oriented gradient methods that ensure both privacy and
optimality. We prove that the proposed distributed algorithms can ensure almost
sure convergence to an optimal solution under any persistent and
variance-bounded differential-privacy noise, which, to the best of our
knowledge, has not been reported before. The first algorithm is based on
static-consensus based gradient methods and only shares one variable in each
iteration. The second algorithm is based on dynamic-consensus
(gradient-tracking) based distributed optimization methods and, hence, it is
applicable to general directed interaction graph topologies. Numerical
comparisons with existing counterparts confirm the effectiveness of the
proposed approaches.
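A toy illustration in the spirit of the static-consensus variant, though not its exact update rule (the function name and weight schedules below are assumptions): persistent, constant-variance Laplace noise masks every transmitted state, while a slowly decaying consensus weight and a faster-decaying stepsize still let the iterates converge.

```python
import numpy as np

def dp_distributed_gradient(b, W, scale=0.2, iters=20000, seed=None):
    """Each agent i helps minimize sum_i 0.5*(x - b_i)^2. Every shared
    state carries persistent Laplace noise of constant variance; a
    square-summable consensus weight gamma_k and a stepsize lambda_k with
    lambda_k/gamma_k -> 0 let the iterates converge despite the noise."""
    rng = np.random.default_rng(seed)
    b = np.asarray(b, float)
    x = np.zeros(len(b))
    for k in range(iters):
        gamma = (k + 1.0) ** -0.7     # sum gamma = inf, sum gamma^2 < inf
        lam = 1.0 / (k + 1.0)         # gradient stepsize decays faster
        noisy = x + rng.laplace(0.0, scale, len(b))  # DP mask on transmissions
        x = x + gamma * (W @ noisy - x) - lam * (x - b)
    return x

W = np.array([[1/3, 1/3, 0, 1/3],
              [1/3, 1/3, 1/3, 0],
              [0, 1/3, 1/3, 1/3],
              [1/3, 0, 1/3, 1/3]])   # doubly stochastic ring weights
sol = dp_distributed_gradient([1.0, 2.0, 3.0, 6.0], W, seed=0)
print(sol)  # each entry lands near the optimum 3.0 despite persistent noise
```

The key design choice mirrored here is that the noise is never attenuated; only the weight it enters through decays, which is what permits a constant privacy budget per iteration without destroying optimality.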
Accuracy-aware privacy mechanisms for distributed computation
Distributed computing systems involve a network of devices or agents that use locally stored private information to solve a common problem. Distributed algorithms fundamentally require communication between devices, leaving the system vulnerable to "privacy attacks" perpetrated by adversarial agents. In this dissertation, we focus on designing privacy-preserving distributed algorithms for -- (a) solving distributed optimization problems, (b) computing equilibria of network aggregate games, and (c) solving a distributed system of linear equations. Specifically, we propose a privacy definition for distributed computation, "non-identifiability", which allows us to simultaneously guarantee privacy and the accuracy of the computed solution. This definition involves showing that the information observed by the adversary is compatible with several distributed computing problems; the associated ambiguity provides privacy.
Distributed Optimization: We propose the Function Sharing strategy that involves using correlated random functions to obfuscate private objective functions followed by using a standard distributed optimization algorithm. We characterize a tight graph connectivity condition for proving privacy via non-identifiability of local objective functions. We also prove correctness of our algorithm and show that we can achieve privacy and accuracy simultaneously.
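The Function Sharing strategy can be illustrated with scalar quadratics (a hypothetical toy; the function name and the quadratic form of the perturbations are assumptions): correlated random functions that sum to zero mask each local objective while leaving the global minimizer untouched.

```python
import numpy as np

def function_sharing_obfuscation(a, b, seed=None):
    """Agent i owns f_i(x) = 0.5*a_i*(x - b_i)^2 and adds a correlated
    random quadratic s_i(x) = 0.5*p_i*x^2 + q_i*x. The perturbations are
    drawn so that sum_i s_i = 0, hence the global objective -- and its
    minimizer -- is unchanged while each local objective is masked."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    p = rng.uniform(-0.2, 0.2, len(a)); p -= p.mean()   # sum_i p_i = 0
    q = rng.uniform(-1.0, 1.0, len(a)); q -= q.mean()   # sum_i q_i = 0
    # perturbed local quadratic: 0.5*(a_i+p_i)*x^2 - (a_i*b_i - q_i)*x + const
    a_tilde = a + p
    c_tilde = a * b - q
    # closed-form minimizers of the perturbed and the original global sums
    return c_tilde.sum() / a_tilde.sum(), (a * b).sum() / a.sum()

masked_opt, true_opt = function_sharing_obfuscation(
    [1.0, 2.0, 1.5], [1.0, 4.0, -2.0], seed=0)
print(masked_opt, true_opt)  # identical: obfuscation preserves the optimum
```

Any standard distributed solver run on the perturbed functions therefore recovers the original solution, which is the accuracy half of the privacy-accuracy guarantee.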
Network Aggregate Games: We design a distributed Nash equilibrium computation algorithm for network aggregate games. Our algorithm uses locally balanced correlated random perturbations to hide information shared with neighbors for aggregate estimation. This step is followed by descent along the negative gradient of the local cost function. We show that if the graph of non-adversarial agents is connected and non-bipartite, then our algorithm keeps private local cost information non-identifiable while asymptotically converging to the accurate Nash equilibrium.
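A simplified stand-in for the locally balanced perturbations (pairwise zero-sum offsets; the function name and edge-based construction are illustrative assumptions, not the dissertation's exact scheme) shows why the aggregate estimate stays exact while individual reports are hidden.

```python
import numpy as np

def locally_balanced_mask(values, edges, seed=None):
    """Pairwise zero-sum masking: each connected pair (i, j) agrees on a
    random offset r; agent i reports +r more and agent j reports -r less.
    The offsets cancel in the aggregate, so the network sum used for
    aggregate estimation is exact while every individual report is
    randomized."""
    rng = np.random.default_rng(seed)
    masked = np.asarray(values, float).copy()
    for i, j in edges:
        r = rng.uniform(-10.0, 10.0)
        masked[i] += r
        masked[j] -= r
    return masked

vals = [1.0, 4.0, -2.0]
masked = locally_balanced_mask(vals, edges=[(0, 1), (1, 2), (0, 2)], seed=0)
print(masked.sum())  # equals sum(vals) = 3.0: the aggregate is unaffected
```

The connectivity and non-bipartiteness conditions in the abstract are what guarantee, in the full algorithm, that an adversary observing masked reports cannot disentangle the offsets and identify any one agent's cost.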
Average Consensus and System of Linear Equations: Finally, we design a finite-time algorithm for solving the average consensus problem over directed graphs with information-theoretic privacy. We use this algorithm to solve a distributed system of linear equations in finite time while protecting the privacy of local equations. We characterize the computation, communication, memory, and iteration costs of our algorithm, as well as the graph conditions for guaranteeing information-theoretic privacy of local data.
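The underlying directed-graph consensus primitive can be sketched with ratio (push-sum) consensus (a standard asymptotic scheme; the dissertation's algorithm additionally terminates in finite time and adds privacy masking, both omitted here):

```python
import numpy as np

def push_sum_average(x0, P, iters=200):
    """Ratio consensus over a directed graph: P is column stochastic, so
    the value vector x_k and the weight vector w_k are rescaled in the
    same way along every directed path, and the ratio x_k / w_k converges
    to the average of x0 at every node."""
    x = np.asarray(x0, float).copy()
    w = np.ones(len(x))
    for _ in range(iters):
        x = P @ x
        w = P @ w
    return x / w

# directed 3-cycle: each node keeps half its mass and forwards half
P = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
est = push_sum_average([3.0, 0.0, 6.0], P)
print(est)  # every entry converges to the average 3.0
```

Column stochasticity is the property that makes this work on directed graphs where doubly stochastic weights may not exist.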
Socially Responsible Machine Learning: On the Preservation of Individual Privacy and Fairness
Machine learning (ML) techniques have seen significant advances over the last decade and are playing an increasingly critical role in people's lives. While their potential societal benefits are enormous, they can also inflict great harm if not developed or used with care. In this thesis, we focus on two critical ethical issues in ML systems, the violation of privacy and fairness, and explore mitigating approaches in various scenarios.
On the privacy front, when ML systems are developed with private data from individuals, it is critical to prevent privacy violations. Differential privacy (DP), a widely used notion of privacy, ensures that no one can, by observing the computational outcome, infer a particular individual's data with high confidence. However, DP is typically achieved by randomizing algorithms (e.g., adding noise), which inevitably leads to a trade-off between individual privacy and outcome accuracy. This trade-off can be difficult to balance, especially in settings where the same or correlated data is repeatedly
used/exposed during the computation. In the first part of the thesis, we illustrate two key ideas that can be used to balance an algorithm's privacy-accuracy tradeoff: (1) the reuse of intermediate computational results to reduce information leakage; and (2) improving algorithmic robustness to accommodate more randomness. We introduce a number of randomized, privacy-preserving algorithms that leverage these ideas in various contexts such as distributed optimization and sequential computation. It is shown that our algorithms can significantly improve the privacy-accuracy tradeoff over existing solutions.
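The trade-off being balanced can be made concrete with the standard Laplace mechanism for a private mean (a textbook example, not one of the thesis's algorithms; the function name is an assumption):

```python
import numpy as np

def laplace_mean(data, epsilon, lo, hi, seed=None):
    """Laplace mechanism for an epsilon-DP mean: with every record clipped
    to [lo, hi], the mean query has sensitivity (hi - lo)/n, and Laplace
    noise of scale sensitivity/epsilon yields epsilon-differential privacy.
    Smaller epsilon (stronger privacy) forces larger noise -- the
    privacy-accuracy trade-off the thesis aims to improve."""
    rng = np.random.default_rng(seed)
    data = np.clip(np.asarray(data, float), lo, hi)
    sensitivity = (hi - lo) / len(data)
    return data.mean() + rng.laplace(0.0, sensitivity / epsilon)

data = [0.2] * 50 + [0.8] * 50          # true mean 0.5
dp_est = laplace_mean(data, epsilon=1.0, lo=0.0, hi=1.0, seed=0)
print(dp_est)  # close to 0.5; at epsilon = 0.01 the noise would swamp it
```

Reusing intermediate results and tolerating more randomness, the two ideas above, both aim to spend less of the privacy budget per exposure of the same data, so less noise is needed for the same epsilon.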
On the fairness front, ML systems trained with real-world data can inherit biases and exhibit discrimination against already-disadvantaged or marginalized social groups. Recent works have proposed many fairness notions to measure and remedy such biases. However, their effectiveness is mostly studied in a static framework without accounting for the interactions between individuals and ML systems.
Since individuals inevitably react to the algorithmic decisions they are subjected to, understanding the downstream impacts of ML decisions is critical to ensure that these decisions are socially responsible. In the second part of the thesis, we present our research on evaluating the long-term impacts of (fair) ML decisions. Specifically, we establish a number of theoretically rigorous frameworks to model the interactions and feedback between ML systems and individuals, and conduct equilibrium analysis to evaluate the impact they each have on the other. We will illustrate how ML decisions and individual behavior evolve in such a system, and how imposing common fairness criteria intended to promote fairness may nevertheless lead to undesirable pernicious effects. Aided with such understanding, mitigation approaches are also discussed.
PhD, Electrical and Computer Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/169960/1/xueru_1.pd