
    Distributed Nonconvex Multiagent Optimization Over Time-Varying Networks

    We study nonconvex distributed optimization in multiagent networks where the communication between nodes is modeled as a time-varying sequence of arbitrary digraphs. We introduce a novel broadcast-based distributed algorithmic framework for the (constrained) minimization of the sum of a smooth (possibly nonconvex and nonseparable) function, i.e., the agents' sum-utility, plus a convex (possibly nonsmooth and nonseparable) regularizer. The latter is usually employed to enforce some structure in the solution, typically sparsity. The proposed method hinges on Successive Convex Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental to locally estimating the gradients of the agents' cost functions; and ii) a novel broadcast protocol to disseminate information and distribute the computation among the agents. Asymptotic convergence to stationary solutions is established. A key feature of the proposed algorithm is that it requires neither double-stochasticity of the consensus matrices (only column stochasticity) nor knowledge of the graph sequence for its implementation. To the best of our knowledge, the proposed framework is the first broadcast-based distributed algorithm for convex and nonconvex constrained optimization over arbitrary, time-varying digraphs. Numerical results show that our algorithm outperforms current schemes on both convex and nonconvex problems. Comment: Copyright 2001 SS&C. Published in the Proceedings of the 50th annual Asilomar Conference on Signals, Systems, and Computers, Nov. 6-9, 2016, CA, US.
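    The abstract does not give the update equations, so the following is only a minimal, hedged sketch of the kind of consensus scheme that works with column-stochastic (rather than doubly stochastic) weights over a digraph: a subgradient-push iteration on a toy convex problem. It is not the authors' SCA-based framework (no gradient tracking, constraints, or nonsmooth regularizer), and all problem data and parameters are made up for illustration.

```python
# Illustrative sketch only: subgradient-push over a fixed directed ring with
# column-stochastic weights, minimizing sum_i 0.5*||x - b_i||^2 (optimum = mean).
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 3                                # nodes, variable dimension
b = rng.normal(size=(n, d))                # node i's private data
target = b.mean(axis=0)                    # minimizer of the sum

# Directed ring (node i sends to i+1) plus self-loops; weights are column-stochastic.
out_neighbors = {i: [i, (i + 1) % n] for i in range(n)}
A = np.zeros((n, n))
for j, outs in out_neighbors.items():
    for i in outs:
        A[i, j] = 1.0 / len(outs)

w = np.zeros((n, d))                       # push-sum numerators
y = np.ones(n)                             # push-sum weights
for t in range(1, 2001):
    x = A @ w                              # mix numerators along out-links
    y = A @ y                              # mix push-sum weights
    z = x / y[:, None]                     # de-biased local estimates
    step = 1.0 / t                         # diminishing step size
    grad = z - b                           # gradient of 0.5*||z - b_i||^2
    w = x - step * grad

print("consensus error:", np.abs(z - target).max())
```

    The push-sum weights y de-bias the column-stochastic mixing, which is what removes the need for doubly stochastic consensus matrices in this toy setting.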

    Privacy-Preserving Distributed Optimization via Subspace Perturbation: A General Framework

    As the modern world becomes increasingly digitized and interconnected, distributed signal processing has proven effective in handling its large volumes of data. However, a main challenge limiting the broad use of distributed signal processing techniques is the issue of privacy when handling sensitive data. To address this issue, we propose a novel yet general subspace perturbation method for privacy-preserving distributed optimization, which allows each node to obtain the desired solution while protecting its private data. In particular, we show that the dual variables introduced by each distributed optimizer do not converge in a certain subspace determined by the graph topology. Additionally, the optimization variable is guaranteed to converge to the desired solution, because it is orthogonal to this non-convergent subspace. We therefore propose to insert noise in the non-convergent subspace through the dual variables, so that the private data are protected while the accuracy of the desired solution is completely unaffected. Moreover, the proposed method is shown to be secure under two widely used adversary models: the passive and the eavesdropping adversary. Furthermore, we consider several distributed optimizers, such as ADMM and PDMM, to demonstrate the general applicability of the proposed method. Finally, we test the performance on a set of applications. Numerical results indicate that the proposed method outperforms existing methods in terms of estimation accuracy, privacy level, communication cost, and convergence rate.
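    As a hedged numerical illustration of the central claim (noise placed in a subspace that the primal iterates never see leaves the solution untouched), the sketch below runs plain dual ascent for distributed averaging and perturbs the dual initialization inside the left null space of the edge-incidence matrix. It is a toy stand-in, not the paper's ADMM/PDMM-based protocol or its security analysis; the graph, data, and step size are assumptions.

```python
# Dual ascent for distributed averaging: min sum_i 0.5*(x_i - a_i)^2  s.t.  B x = 0.
# Noise confined to null(B^T) never enters x = a - B^T @ lam, so the primal
# solution is unaffected while the dual variables stay perturbed.
import numpy as np

rng = np.random.default_rng(1)
n = 5
a = rng.normal(size=n)                                      # private data; target is its mean
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]    # connected graph with cycles
B = np.zeros((len(edges), n))                               # edge-node incidence matrix
for e, (i, j) in enumerate(edges):
    B[e, i], B[e, j] = 1.0, -1.0

U, s, Vt = np.linalg.svd(B.T)
null_basis = Vt[np.sum(s > 1e-10):]                         # rows spanning null(B^T)
noise = null_basis.T @ rng.normal(size=null_basis.shape[0]) * 10.0

def dual_ascent(lam, steps=5000, beta=0.1):
    for _ in range(steps):
        x = a - B.T @ lam                                   # primal minimizer for fixed duals
        lam = lam + beta * (B @ x)                          # ascend on the dual function
    return x, lam

x_clean, _ = dual_ascent(np.zeros(len(edges)))
x_noisy, _ = dual_ascent(noise)                             # perturbed dual initialization
print("average      :", a.mean())
print("clean primal :", x_clean)
print("noisy primal :", x_noisy)                            # identical up to numerical error
```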

    Multiplicative Noise Removal Using Variable Splitting and Constrained Optimization

    Multiplicative noise (also known as speckle noise) models are central to the study of coherent imaging systems, such as synthetic aperture radar and sonar, and ultrasound and laser imaging. These models introduce two additional layers of difficulty with respect to the standard Gaussian additive noise scenario: (1) the noise is multiplied by (rather than added to) the original image; (2) the noise is not Gaussian, with Rayleigh and Gamma densities being commonly used. These two features of multiplicative noise models preclude the direct application of most state-of-the-art algorithms, which are designed for solving unconstrained optimization problems whose objective has two terms: a quadratic data term (log-likelihood), reflecting the additive and Gaussian nature of the noise, plus a convex (possibly nonsmooth) regularizer (e.g., a total variation or wavelet-based regularizer/prior). In this paper, we address these difficulties by: (1) converting the multiplicative model into an additive one by taking logarithms, as proposed by some other authors; (2) using variable splitting to obtain an equivalent constrained problem; and (3) dealing with this optimization problem using the augmented Lagrangian framework. A set of experiments shows that the proposed method, which we name MIDAL (multiplicative image denoising by augmented Lagrangian), yields state-of-the-art results in terms of both speed and denoising performance. Comment: 11 pages, 7 figures, 2 tables. To appear in the IEEE Transactions on Image Processing.
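    A minimal sketch of the three steps described above (log transform, variable splitting, augmented Lagrangian/ADMM loop), assuming Gamma speckle and using an off-the-shelf TV denoiser from scikit-image as a stand-in for the regularizer's proximal step. It is not the authors' MIDAL implementation, and all parameter values are illustrative.

```python
# Sketch in the spirit of MIDAL: log-domain Gamma data term + TV regularizer,
# handled with variable splitting and a scaled-dual augmented Lagrangian (ADMM) loop.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(2)
x_true = np.clip(img_as_float(data.camera()), 0.05, 1.0)    # clean image in (0, 1]
L = 4                                                       # number of "looks"
speckle = rng.gamma(shape=L, scale=1.0 / L, size=x_true.shape)
g = x_true * speckle                                        # multiplicative observation

w = np.log(g)                                               # (1) additive model in log domain
z = w.copy(); u = w.copy(); d = np.zeros_like(w)            # (2) split u = z, scaled dual d
rho, tau = 2.0, 0.15                                        # penalty and TV weights (illustrative)

for _ in range(30):                                         # (3) augmented Lagrangian iterations
    # z-update: per-pixel Newton on  L*(z + g*exp(-z)) + (rho/2)*(z - u + d)^2
    c = u - d
    for _ in range(5):
        grad = L * (1.0 - g * np.exp(-z)) + rho * (z - c)
        hess = L * g * np.exp(-z) + rho
        z -= grad / hess
    # u-update: TV-regularized proximal step (off-the-shelf denoiser as a stand-in)
    u = denoise_tv_chambolle(z + d, weight=tau / rho)
    d += z - u                                              # dual update

x_hat = np.exp(u)                                           # back to the intensity domain
print("RMSE:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
```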

    Input-Output Performance of Linear-Quadratic Saddle-Point Algorithms With Application to Distributed Resource Allocation Problems

    Saddle-point or primal-dual methods have recently attracted renewed interest as a systematic technique for designing distributed algorithms that solve convex optimization problems. When implemented online for streaming data or as dynamic feedback controllers, these algorithms become subject to disturbances and noise; convergence rates then provide incomplete performance information, and quantifying input-output performance becomes more important. We analyze the input-output performance of the continuous-time saddle-point method applied to linearly constrained quadratic programs, providing explicit expressions for the saddle-point $\mathcal{H}_2$ norm under a relevant input-output configuration. We then derive analogous results for regularized and augmented versions of the saddle-point algorithm. We observe some rather peculiar effects: a modest amount of regularization significantly improves the transient performance, while augmentation does not necessarily offer improvement. We then propose a distributed dual version of the algorithm, which overcomes some of the performance limitations imposed by augmentation. Finally, we apply our results to a resource allocation problem to compare the input-output performance of various centralized and distributed saddle-point implementations, and show that distributed algorithms may perform as well as their centralized counterparts.
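    The explicit $\mathcal{H}_2$ expressions are in the paper; the sketch below only illustrates how such a norm can be computed numerically for an assumed configuration: saddle-point dynamics of an equality-constrained QP with disturbances entering the primal update and the primal error taken as output. The problem data are random and the setup is an assumption, not the paper's exact input-output configuration.

```python
# H2 norm of continuous-time saddle-point dynamics via the controllability Gramian.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n, m = 6, 2                                   # primal and dual dimensions
R = rng.normal(size=(n, n))
Q = R @ R.T + n * np.eye(n)                   # positive definite cost Hessian
A = rng.normal(size=(m, n))                   # full-row-rank constraint matrix

# Saddle-point dynamics shifted to the equilibrium:
#   d/dt [x; lam] = M [x; lam] + B w,   performance output y = C [x; lam]
M = np.block([[-Q, -A.T],
              [A, np.zeros((m, m))]])
B = np.vstack([np.eye(n), np.zeros((m, n))])  # disturbance enters the primal update
C = np.hstack([np.eye(n), np.zeros((n, m))])  # we measure the primal error

assert np.all(np.linalg.eigvals(M).real < 0)  # Q > 0 and full-row-rank A => Hurwitz

# Controllability Gramian P solves  M P + P M^T + B B^T = 0;
# the squared H2 norm is trace(C P C^T).
P = solve_continuous_lyapunov(M, -B @ B.T)
print("saddle-point H2 norm:", np.sqrt(np.trace(C @ P @ C.T)))
```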

    Sample Approximation-Based Deflation Approaches for Chance SINR Constrained Joint Power and Admission Control

    Consider the joint power and admission control (JPAC) problem for a multi-user single-input single-output (SISO) interference channel. Most existing works on JPAC assume perfect instantaneous channel state information (CSI). In this paper, we consider the JPAC problem with imperfect CSI; that is, we assume that only the channel distribution information (CDI) is available. We formulate the JPAC problem as a chance (probabilistic) constrained program, where each link's SINR outage probability is enforced to be less than or equal to a specified tolerance. To circumvent the computational difficulty of the chance SINR constraints, we propose to use the sample (scenario) approximation scheme to convert them into finitely many simple linear constraints. Furthermore, we reformulate the sample approximation of the chance SINR constrained JPAC problem as a composite group sparse minimization problem and then approximate it by a second-order cone program (SOCP). The solution of the SOCP approximation can be used to check the simultaneous supportability of all links in the network and to guide an iterative link removal procedure (the deflation approach). We exploit the special structure of the SOCP approximation and custom-design an efficient algorithm for solving it. Finally, we illustrate the effectiveness and efficiency of the proposed sample approximation-based deflation approaches by simulations. Comment: The paper has been accepted for publication in IEEE Transactions on Wireless Communication.
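    A hedged toy sketch of the sample (scenario) approximation step only: draw channel realizations from the CDI, replace each chance SINR constraint by one linear constraint per sample, and solve the resulting power-minimization LP to check simultaneous supportability. The infeasible branch is where the paper's deflation (link-removal) and SOCP group-sparse machinery would take over; the fading model and all parameter values are assumptions.

```python
# Sample approximation of chance SINR constraints for a toy K-link SISO network.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
K, N = 4, 100                      # links, channel samples drawn from the CDI
gamma, sigma, p_max = 0.5, 0.01, 1.0

rows, rhs = [], []
for _ in range(N):
    # Rayleigh fading: exponential power gains; cross links are weaker on average.
    H = rng.exponential(scale=0.05, size=(K, K))
    H[np.diag_indices(K)] = rng.exponential(scale=1.0, size=K)
    for k in range(K):
        # Sampled SINR constraint:  p_k*H[k,k] >= gamma*(sigma + sum_{j!=k} p_j*H[k,j])
        row = gamma * H[k]
        row[k] = -H[k, k]
        rows.append(row)
        rhs.append(-gamma * sigma)

res = linprog(c=np.ones(K), A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=[(0.0, p_max)] * K, method="highs")
if res.success:
    print("all links supportable; transmit powers:", np.round(res.x, 4))
else:
    print("sampled constraints infeasible: trigger the link-removal (deflation) step")
```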