3 research outputs found

    Differentially Private Convex Optimization with Feasibility Guarantees

    This paper develops a novel differentially private framework for solving convex optimization problems with sensitive optimization data and complex physical or operational constraints. Unlike standard noise-additive algorithms, which act primarily on the problem data, objective, or solution and disregard the problem constraints, this framework requires the optimization variables to be a function of the noise and exploits a chance-constrained problem reformulation with formal feasibility guarantees. The noise is calibrated to provide differential privacy for identity and linear queries on the optimization solution. For many applications, including resource allocation problems, the proposed framework provides a trade-off between the expected optimality loss and the variance of the optimization results.
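    To make the high-level idea concrete, below is a minimal, hypothetical sketch of output perturbation combined with constraint tightening so that the noisy release stays feasible with high probability. It is not the paper's algorithm: the sensitivity value, the Gaussian-mechanism calibration, and the toy budget constraint are illustrative assumptions, and the chance-constrained reformulation is reduced to a single tightened linear constraint.

```python
# Illustrative sketch only: output perturbation with a tightened budget constraint
# so that the released noisy solution stays feasible with high probability.
# The sensitivity, privacy parameters, and toy resource-allocation problem are
# assumptions for demonstration, not the paper's formulation.
import numpy as np
import cvxpy as cp
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5
utility = rng.uniform(1.0, 2.0, n)     # sensitive per-unit utilities (toy data)
budget = 10.0

# Gaussian-mechanism calibration (assumed l2-sensitivity of the solution map).
eps, delta, sensitivity = 1.0, 1e-5, 1.0
sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps

# Tighten the budget so that sum(x + w) <= budget holds with probability 1 - beta,
# since sum(w) ~ N(0, n * sigma^2) for i.i.d. Gaussian noise w.
beta = 0.05
margin = sigma * np.sqrt(n) * norm.ppf(1.0 - beta)

x = cp.Variable(n, nonneg=True)
problem = cp.Problem(cp.Maximize(utility @ x), [cp.sum(x) <= budget - margin])
problem.solve()

w = rng.normal(0.0, sigma, n)          # calibrated Gaussian noise
x_private = x.value + w                # differentially private release
print("tightened optimum:", x.value.round(3))
print("private release:  ", x_private.round(3))
```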

    Privacy-Preserving Distributed Zeroth-Order Optimization

    We develop a privacy-preserving distributed algorithm to minimize a regularized empirical risk function when first-order information is not available and data is distributed over a multi-agent network. We employ a zeroth-order method to minimize the associated augmented Lagrangian function in the primal domain using the alternating direction method of multipliers (ADMM). We show that the proposed algorithm, named distributed zeroth-order ADMM (D-ZOA), has intrinsic privacy-preserving properties. Unlike existing privacy-preserving ADMM-based methods, in which the primal or dual variables are perturbed with noise, the inherent randomness due to the use of a zeroth-order method endows D-ZOA with intrinsic differential privacy. By analyzing the perturbation of the primal variable, we show that the privacy leakage of the proposed D-ZOA algorithm is bounded. In addition, we employ the moments accountant method to show that the total privacy leakage grows sublinearly with the number of ADMM iterations. D-ZOA outperforms existing differentially private approaches in terms of accuracy while yielding the same privacy guarantee. We prove that D-ZOA converges to the optimal solution at a rate of O(1/M), where M is the number of ADMM iterations. The convergence analysis also reveals a practically important trade-off between privacy and accuracy. Simulation results verify the desirable privacy-preserving properties of D-ZOA, its superiority over a state-of-the-art algorithm, and its network-wide convergence to the optimal solution.
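    The primitive behind D-ZOA's intrinsic randomness is a zeroth-order (function-value-only) gradient estimator. Below is a minimal, hypothetical sketch of a two-point estimator applied to a ridge-regularized least-squares problem; the paper embeds such an estimator inside the ADMM primal update, which is not reproduced here, and the step size, smoothing radius, and number of random directions are illustrative choices.

```python
# Illustrative sketch only: a two-point zeroth-order gradient estimator, the kind of
# function-value-only primitive whose inherent randomness D-ZOA's analysis relies on.
# The toy objective and all hyperparameters below are assumptions for demonstration.
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_dirs=20, rng=None):
    """Estimate grad f(x) using only function evaluations along random directions."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape[0])                 # random Gaussian direction
        grad += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return grad / num_dirs

# Toy regularized empirical risk: 0.5*||A x - b||^2 + 0.5*||x||^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + 0.5 * np.sum(x ** 2)

x = np.zeros(5)
for _ in range(300):
    x = x - 0.005 * zo_gradient(f, x, rng=rng)              # zeroth-order descent step
print("final objective:", round(f(x), 4))
```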

    Local Differential Privacy in Decentralized Optimization

    Privacy concerns with sensitive data are receiving increasing attention. In this paper, we study local differential privacy (LDP) in interactive decentralized optimization. By constructing random local aggregators, we propose a framework that amplifies LDP by a constant factor. We take the Alternating Direction Method of Multipliers (ADMM) and decentralized gradient descent as two concrete examples, where experiments support our theory. From an asymptotic viewpoint, we address the following question: under LDP, is it possible to design a distributed private minimizer for arbitrary closed convex constraints whose utility loss does not depend explicitly on the dimensionality? As an auxiliary result, we also show that, with merely linear secret sharing, information-theoretic privacy is achievable against a bounded number of colluding agents.
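    As a rough illustration of local perturbation in decentralized optimization, the sketch below has each agent add Laplace noise to the iterate it shares before an averaging step, which is the basic LDP ingredient; the random-local-aggregator construction that yields the constant amplification factor is not reproduced, and the network, objectives, and noise scale are assumptions.

```python
# Illustrative sketch only: decentralized gradient descent in which each agent
# perturbs the iterate it shares before mixing, the basic local-DP ingredient.
# The mixing matrix, quadratic objectives, step size, and Laplace scale are assumed
# for demonstration; the paper's random-local-aggregator amplification is not shown.
import numpy as np

rng = np.random.default_rng(2)
n_agents, d = 8, 3
W = np.full((n_agents, n_agents), 1.0 / n_agents)      # doubly stochastic mixing matrix
targets = rng.standard_normal((n_agents, d))           # each agent's private data

def local_grad(i, xi):
    return xi - targets[i]                              # grad of 0.5*||xi - target_i||^2

x = np.zeros((n_agents, d))
step, laplace_scale = 0.1, 0.5                          # scale controls the LDP level
for _ in range(200):
    shared = x + rng.laplace(0.0, laplace_scale, size=x.shape)   # locally perturbed messages
    grads = np.array([local_grad(i, x[i]) for i in range(n_agents)])
    x = W @ shared - step * grads                       # mix noisy messages, then descend

print("consensus estimate:  ", x.mean(axis=0).round(3))
print("true average target: ", targets.mean(axis=0).round(3))
```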