
    A Primal-Dual Algorithmic Framework for Constrained Convex Minimization

    We present a primal-dual algorithmic framework for obtaining approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choice of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, the augmented Lagrangian method, and the alternating direction method of multipliers as special cases, and provides optimal convergence rates on both the primal objective residual and the primal feasibility gap of the iterates for all of them. Comment: This paper consists of 54 pages with 7 tables and 12 figures.
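
    As a concrete illustration, here is a minimal sketch of one special case the framework is said to subsume, the alternating direction method of multipliers, applied to the lasso problem min_x 0.5*||Ax - b||^2 + lam*||x||_1 in consensus form. The problem instance, penalty parameter rho, and iteration count are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def lasso_admm(A, b, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    split as f(x) + g(z) subject to x - z = 0 (scaled dual form)."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    AtA_rhoI = A.T @ A + rho * np.eye(n)  # cached for the x-update
    Atb = A.T @ b
    for _ in range(n_iter):
        # x-update: ridge-like quadratic subproblem
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))
        # z-update: soft-thresholding, the prox of (lam/rho)*||.||_1
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update: accumulate the primal residual x - z
        u = u + x - z
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(lasso_admm(A, b)[:8])  # first 5 entries should be near 1
```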

    Bethe Projections for Non-Local Inference

    Many inference problems in structured prediction are naturally solved by augmenting a tractable dependency structure with complex, non-local auxiliary objectives. These include the mean field family of variational inference algorithms, soft- or hard-constrained inference using Lagrangian relaxation or linear programming, collective graphical models, and forms of semi-supervised learning such as posterior regularization. We present a method to discriminatively learn broad families of inference objectives, capturing powerful non-local statistics of the latent variables, while maintaining tractable and provably fast inference using non-Euclidean projected gradient descent with a distance-generating function given by the Bethe entropy. We demonstrate the performance and flexibility of our method by (1) extracting structured citations from research papers by learning soft global constraints, (2) achieving state-of-the-art results on a widely used handwriting recognition task using a novel learned non-convex inference procedure, and (3) providing a fast and highly scalable algorithm for the challenging problem of inference in a collective graphical model applied to bird migration. Comment: minor bug fix to appendix; appeared in UAI 2015.
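
    The paper's inference engine is non-Euclidean projected gradient descent with the Bethe entropy as the distance-generating function over local marginal polytopes. A full Bethe projection is beyond a short sketch; the simplest analogue is entropic mirror descent on a probability simplex, where the projection reduces to a multiplicative update with renormalization. The objective and step size below are illustrative assumptions.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, eta=0.1, n_iter=100):
    """Mirror descent on the probability simplex with the entropy
    mirror map: x <- x * exp(-eta * grad(x)), then renormalize.
    This is the simplex analogue of the non-Euclidean projected
    gradient step; the paper uses the Bethe entropy instead."""
    x = x0.copy()
    for _ in range(n_iter):
        g = grad(x)
        x = x * np.exp(-eta * (g - g.max()))  # shift by max for stability
        x /= x.sum()
    return x

# Illustrative objective: 0.5*||x - target||^2 over the simplex.
target = np.array([0.7, 0.2, 0.1])
grad = lambda x: x - target
x0 = np.full(3, 1.0 / 3.0)
print(entropic_mirror_descent(grad, x0))  # converges toward target
```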

    Reciprocity-driven Sparse Network Formation

    A resource exchange network is considered, where exchanges among nodes are based on reciprocity: peers receive from the network an amount of resources commensurate with their contribution. We assume the network is fully connected and impose sparsity constraints on peer interactions. Finding the sparsest exchanges that achieve a desired level of reciprocity is in general NP-hard. To capture near-optimal allocations, we introduce variants of the Eisenberg-Gale convex program with sparsity penalties. We derive decentralized algorithms whereby peers approximately compute the sparsest allocations by reweighted l1 minimization. The algorithms implement new proportional-response dynamics with nonlinear pricing. The trade-off between sparsity and reciprocity and the properties of graphs induced by sparse exchanges are examined. Comment: 19 pages.
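
    Reweighted l1 minimization solves a sequence of weighted l1 problems whose weights are inversely proportional to the current magnitudes, pushing small entries toward zero. Below is a minimal sketch under illustrative assumptions: a generic least-squares data term solved by weighted iterative soft-thresholding, rather than the paper's Eisenberg-Gale program with proportional-response dynamics.

```python
import numpy as np

def reweighted_l1(A, b, lam=0.1, eps=1e-3, outer=5, inner=200):
    """Iteratively reweighted l1: repeatedly solve
    min_x 0.5*||Ax - b||^2 + lam * sum_i w_i |x_i|
    with w_i = 1 / (|x_i| + eps), via weighted ISTA."""
    n = A.shape[1]
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the smooth part
    for _ in range(outer):
        w = 1.0 / (np.abs(x) + eps)         # reweighting step
        for _ in range(inner):
            grad = A.T @ (A @ x - b)
            v = x - step * grad
            thresh = step * lam * w          # per-coordinate threshold
            x = np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80); x_true[::16] = 2.0   # 5-sparse signal
b = A @ x_true
print(np.nonzero(np.round(reweighted_l1(A, b), 2))[0])
```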

    Distributed Stochastic Optimization over Time-Varying Noisy Network

    This paper is concerned with a distributed stochastic multi-agent optimization problem over a class of time-varying networks with slowly decreasing communication noise effects. We consider the problem in a composite optimization setting, which is more general than prior work on noisy network optimization; notably, existing methods for noisy network optimization are Euclidean-projection based. We present two related classes of non-Euclidean methods and investigate their convergence behavior. One is a distributed stochastic composite mirror descent type method (DSCMD-N), which provides a more general algorithmic framework than previous work in this literature. As a counterpart, we also consider a distributed stochastic composite dual averaging type method (DSCDA-N) for noisy network optimization. We obtain the main error bounds for DSCMD-N and DSCDA-N, and analyze in detail the trade-off among stepsizes, noise decay rates, and the convergence rates of the algorithms. To the best of our knowledge, this is the first work to analyze and derive convergence rates of optimization algorithms in noisy network optimization. We show that the proposed methods attain an optimal rate of O(1/\sqrt{T}) for nonsmooth convex optimization under an appropriate communication noise condition. Moreover, convergence rates of different orders are derived comprehensively, in both the expectation and high-probability senses. Comment: 27 pages.
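
    A minimal sketch of the flavor of such a method under illustrative assumptions: each agent holds a point on the simplex, averages its neighbors' noise-corrupted iterates through a doubly stochastic mixing matrix whose noise decays over time, then takes an entropic mirror descent step on a noisy local gradient. The ring network, quadratic objectives, and step/noise schedules are stand-ins, not the paper's exact DSCMD-N.

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, dim, T = 4, 3, 500
# Doubly stochastic mixing matrix for a ring of 4 agents.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
targets = rng.dirichlet(np.ones(dim), size=n_agents)  # local optima
X = np.full((n_agents, dim), 1.0 / dim)               # iterates on simplex

for t in range(1, T + 1):
    eta = 0.5 / np.sqrt(t)   # O(1/sqrt(t)) stepsize
    sigma = 1.0 / t          # slowly decaying communication noise
    # Consensus step over noisy links, then renormalize onto the simplex.
    noisy = X + sigma * rng.standard_normal(X.shape)
    Y = W @ noisy
    Y = np.maximum(Y, 1e-12); Y /= Y.sum(axis=1, keepdims=True)
    # Noisy local gradient of 0.5*||x - target_i||^2.
    G = Y - targets + 0.1 * rng.standard_normal(X.shape)
    # Entropic mirror descent step (multiplicative update).
    X = Y * np.exp(-eta * G)
    X /= X.sum(axis=1, keepdims=True)

print(X.round(3))                        # agents roughly reach consensus
print(targets.mean(axis=0).round(3))     # near the average of the targets
```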

    Matrix recovery using Split Bregman

    In this paper we address the problem of recovering a matrix with inherent low-rank structure from its lower-dimensional projections. This problem is frequently encountered in a wide range of areas, including pattern recognition, wireless sensor networks, control systems, recommender systems, and image/video reconstruction. Both in theory and in practice, the most effective way to solve the low-rank matrix recovery problem is via nuclear norm minimization. In this paper, we propose a Split Bregman algorithm for nuclear norm minimization. The Bregman technique improves the convergence speed of our algorithm and gives a higher success rate; the reconstruction accuracy is also much better, even when only a small number of linear measurements is available. Our claim is supported by empirical results obtained using our algorithm and comparisons to other existing methods for matrix recovery. The algorithms are compared in terms of normalized mean squared error (NMSE), execution time, and success rate for varying ranks and sampling ratios.
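
    For the matrix-completion instance of this problem (observe a subset of entries, find the minimum-nuclear-norm completion), the prox of the nuclear norm is singular value soft-thresholding. Here is a minimal Split Bregman style sketch under illustrative assumptions; the splitting, parameter mu, and test instance are ours, not the paper's exact algorithm or tuning.

```python
import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding: the prox of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete_split_bregman(M, mask, mu=1.0, n_iter=200):
    """Split Bregman sketch for min_X ||X||_* s.t. X[mask] == M[mask],
    using the splitting X = Y and a Bregman (scaled dual) variable B."""
    X = np.zeros_like(M)
    Y = np.zeros_like(M)
    B = np.zeros_like(M)
    for _ in range(n_iter):
        # X-update: project Y - B onto the data-consistency set.
        X = Y - B
        X[mask] = M[mask]
        # Y-update: nuclear norm prox via singular value thresholding.
        Y = svt(X + B, 1.0 / mu)
        # Bregman / dual update on the splitting residual.
        B = B + X - Y
    return Y

rng = np.random.default_rng(3)
L = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank 2
mask = rng.random(L.shape) < 0.5                                 # 50% observed
M = np.where(mask, L, 0.0)
X_hat = complete_split_bregman(M, mask)
print(np.linalg.norm(X_hat - L) / np.linalg.norm(L))  # relative error
```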