33,896 research outputs found

    A new optimization algorithm for network component analysis based on convex programming

    Get PDF
    Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2009, p. 509-512, Paper no. 2203. Network component analysis (NCA) has been established as a promising tool for reconstructing gene regulatory networks from microarray data. NCA resolves the blind source separation problem when the mixing matrix has a known sparse structure, even when the source signals are correlated. The original NCA algorithm relies on alternating least squares (ALS) and suffers from local convergence as well as slow convergence. In this paper, we develop new and more robust NCA algorithms by incorporating additional signal constraints. In particular, we introduce the biologically sound constraint that all nonzero entries in the connectivity network are positive. Our new approach formulates a convex optimization problem which can be solved efficiently and effectively by fast convex programming algorithms. We verify the effectiveness and robustness of our new approach using simulations and gene regulatory network reconstruction from experimental yeast cell cycle microarray data. ©2009 IEEE.
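    The positivity idea above can be made concrete with a small numerical sketch. The code below is not the authors' convex formulation; it is a minimal alternating scheme (hypothetical name nca_decompose) in which each subproblem is convex: the regulator signals are recovered by least squares, and the connectivity matrix is updated by a projected gradient step that preserves the known zero pattern and clips the remaining entries to stay nonnegative.

        # Hypothetical sketch of an NCA-style factorization X ≈ A S with a known
        # sparsity pattern on A and nonnegative nonzero entries, solved by
        # alternating convex subproblems (illustration only, not the paper's method).
        import numpy as np

        def nca_decompose(X, pattern, n_iter=200, step=1e-2, seed=0):
            """X: (genes x samples) data; pattern: boolean (genes x regulators) mask of allowed entries."""
            rng = np.random.default_rng(seed)
            m, k = pattern.shape
            A = rng.random((m, k)) * pattern          # connectivity; zeros fixed by the pattern
            S = rng.standard_normal((k, X.shape[1]))  # regulator activity signals
            for _ in range(n_iter):
                # S-step: ordinary least squares given A (convex in S)
                S, *_ = np.linalg.lstsq(A, X, rcond=None)
                # A-step: projected gradient on ||X - A S||_F^2, keeping the known
                # zero pattern and clipping to enforce nonnegative entries
                grad = (A @ S - X) @ S.T
                A = np.clip(A - step * grad, 0.0, None) * pattern
            return A, S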

    A randomized primal distributed algorithm for partitioned and big-data non-convex optimization

    Full text link
    In this paper we consider a distributed optimization scenario in which the aggregate objective function to minimize is partitioned, big-data and possibly non-convex. Specifically, we focus on a set-up in which the dimension of the decision variable depends on the network size as well as the number of local functions, but each local function handled by a node depends only on a (small) portion of the entire optimization variable. This problem set-up has been shown to appear in many interesting network application scenarios. As the main contribution of this paper, we develop a simple primal distributed algorithm to solve the optimization problem, based on a randomized descent approach, which works under asynchronous gossip communication. We prove that the proposed asynchronous algorithm is a proper, ad-hoc version of a coordinate descent method and thus converges to a stationary point. To show the effectiveness of the proposed algorithm, we also present numerical simulations on a non-convex quadratic program, which confirm the theoretical results.
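    A toy rendering of the coordinate-descent interpretation follows; grad_blocks is a hypothetical user-supplied callable returning the gradient of the aggregate cost with respect to node i's block, and the paper's asynchronous gossip communication is abstracted into uniform random node activation, so this is only an illustration of the randomized block update, not the authors' algorithm.

        # Toy sketch of a randomized primal block-coordinate scheme: each "node"
        # owns one block of the decision variable and, when it wakes up, takes a
        # descent step along its own block of a partitioned (possibly non-convex) cost.
        import numpy as np

        def random_block_descent(grad_blocks, x0, n_rounds=1000, step=1e-2, seed=0):
            """grad_blocks(i, x) -> gradient of the aggregate cost w.r.t. block i (hypothetical callable)."""
            rng = np.random.default_rng(seed)
            x = [np.array(block, dtype=float) for block in x0]   # one block per node
            for _ in range(n_rounds):
                i = rng.integers(len(x))                 # node activated at random
                x[i] = x[i] - step * grad_blocks(i, x)   # local descent on its own block
            return x

    In the partitioned set-up of the paper, each such block gradient would depend only on the blocks of neighbouring nodes, which is what keeps the update local and cheap.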

    Robust distributed linear programming

    Full text link
    This paper presents a robust, distributed algorithm to solve general linear programs. The algorithm design builds on the characterization of the solutions of the linear program as saddle points of a modified Lagrangian function. We show that the resulting continuous-time saddle-point algorithm is provably correct but, in general, not distributed because of a global parameter associated with the nonsmooth exact penalty function employed to encode the inequality constraints of the linear program. This motivates the design of a discontinuous saddle-point dynamics that, while enjoying the same convergence guarantees, is fully distributed and scalable with the dimension of the solution vector. We also characterize the robustness against disturbances and link failures of the proposed dynamics. Specifically, we show that it is integral-input-to-state stable but not input-to-state stable. The latter fact is a consequence of a more general result, that we also establish, which states that no algorithmic solution for linear programming is input-to-state stable when uncertainty in the problem data affects the dynamics as a disturbance. Our results allow us to establish the resilience of the proposed distributed dynamics to disturbances of finite variation and recurrently disconnected communication among the agents. Simulations in an optimal control application illustrate the results.
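    As a concrete, heavily simplified illustration of driving an LP with saddle-point dynamics, the sketch below runs a centralized Euler-discretized primal-dual iteration with an augmented-Lagrangian term added for damping. It is not the paper's nonsmooth exact-penalty or discontinuous dynamics and inherits none of its distributed or robustness guarantees; the name primal_dual_lp, the step sizes, and the toy LP are all illustrative assumptions.

        # Generic primal-dual sketch for  min c^T x  s.t.  A x <= b:
        # descend in the primal, ascend (with an implicit projection) in the dual.
        import numpy as np

        def primal_dual_lp(c, A, b, rho=1.0, dt=1e-2, steps=20000):
            x = np.zeros(A.shape[1])
            z = np.zeros(A.shape[0])                         # multipliers for A x <= b
            for _ in range(steps):
                w = np.maximum(0.0, z + rho * (A @ x - b))   # shifted multiplier estimate
                x = x - dt * (c + A.T @ w)                   # primal descent step
                z = z + (dt / rho) * (w - z)                 # dual ascent step, stays >= 0
            return x, z

        # Toy LP: min x1 + x2  s.t.  x1 + x2 >= 1,  x >= 0 (written as A x <= b)
        c = np.array([1.0, 1.0])
        A = np.array([[-1.0, -1.0], [-1.0, 0.0], [0.0, -1.0]])
        b = np.array([-1.0, 0.0, 0.0])
        x, _ = primal_dual_lp(c, A, b)
        print(x)   # expected to settle near the optimal face x1 + x2 = 1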

    A Finite-Time Cutting Plane Algorithm for Distributed Mixed Integer Linear Programming

    Get PDF
    Many problems of interest for cyber-physical network systems can be formulated as Mixed Integer Linear Programs in which the constraints are distributed among the agents. In this paper we propose a distributed algorithm to solve this class of optimization problems in a peer-to-peer network with no coordinator and with limited computation and communication capabilities. In the proposed algorithm, at each communication round, agents locally solve a small LP, generate suitable cutting planes, namely intersection cuts and cost-based cuts, and communicate a fixed number of active constraints, i.e., a candidate optimal basis. We prove that, if the cost is integer, the algorithm converges to the lexicographically minimal optimal solution in a finite number of communication rounds. Finally, through numerical computations, we analyze the algorithm's convergence as a function of the network size.
    Comment: 6 pages, 3 figures
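    The skeleton below sketches what one communication round might look like from a single agent's viewpoint under the description above: merge the constraints received from neighbours with the local ones, solve the small LP relaxation, and extract the active constraints to broadcast as a candidate basis. The cut generation (intersection and cost-based cuts) and the integer handling are omitted; agent_round is a hypothetical name, and SciPy's general-purpose linprog stands in for whatever local LP solver the agents actually use.

        # One agent's round (sketch): solve the merged small LP and return the
        # tight constraints as the candidate basis to communicate.  Assumes the
        # merged LP is feasible and bounded.
        import numpy as np
        from scipy.optimize import linprog

        def agent_round(c, local_A, local_b, received_A, received_b, tol=1e-8):
            A = np.vstack([local_A, received_A])
            b = np.concatenate([local_b, received_b])
            res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * len(c))
            active = np.flatnonzero(b - A @ res.x < tol)   # constraints tight at the LP optimum
            return res.x, A[active], b[active]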