
    Convergence of Tomlin's HOTS algorithm

    The HOTS algorithm uses the hyperlink structure of the web to compute a vector of scores with which one can rank web pages. The HOTS vector is the vector of the exponentials of the dual variables of an optimal flow problem (the "temperature" of each page). The flow represents an optimal distribution of web surfers on the web graph in the sense of entropy maximization. In this paper, we prove the convergence of Tomlin's HOTS algorithm. We first study a simplified version of the algorithm, which is a fixed point scaling algorithm designed to solve the matrix balancing problem for nonnegative irreducible matrices. The proof of convergence is general (nonlinear Perron-Frobenius theory) and applies to a family of deformations of HOTS. Then, we address the effective HOTS algorithm, designed by Tomlin for the ranking of web pages. The model is a network entropy maximization problem generalizing matrix balancing. We show that, under mild assumptions, the HOTS algorithm converges with a linear convergence rate. The proof relies on a uniqueness property of the fixed point and on the existence of a Lyapunov function. We also show that the coordinate descent algorithm can be used to find the ideal and effective HOTS vectors and we compare HOTS and coordinate descent on fragments of the web graph. Our numerical experiments suggest that the convergence rate of the HOTS algorithm may deteriorate when the size of the input increases. We thus give a normalized version of HOTS with an experimentally better convergence rate. Comment: 21 pages
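    The simplified fixed point scaling step for matrix balancing can be sketched as follows. For a nonnegative irreducible matrix A, balancing seeks x > 0 such that diag(x) A diag(1/x) has equal row and column sums, which gives the coordinate update x_i ← sqrt((Aᵀx)_i / (A(1/x))_i). This is an illustrative NumPy sketch of that update (coordinate-by-coordinate sweeps), not Tomlin's full HOTS algorithm:

    ```python
    import numpy as np

    def balance(A, sweeps=500):
        """Fixed-point scaling for matrix balancing (illustrative sketch).

        Finds x > 0 so that B = diag(x) @ A @ diag(1/x) has equal row
        and column sums, via x_i <- sqrt((A^T x)_i / (A (1/x))_i),
        applied coordinate by coordinate.
        """
        n = A.shape[0]
        x = np.ones(n)
        for _ in range(sweeps):
            for i in range(n):
                x[i] = np.sqrt((A[:, i] @ x) / (A[i, :] @ (1.0 / x)))
        return x

    # small nonnegative irreducible example
    A = np.array([[0., 2., 1.],
                  [1., 0., 3.],
                  [4., 1., 0.]])
    x = balance(A)
    B = np.diag(x) @ A @ np.diag(1.0 / x)
    # row sums and column sums of B now agree coordinate-wise
    print(np.allclose(B.sum(axis=1), B.sum(axis=0)))
    ```

    The balancing condition x_i² · (A(1/x))_i = (Aᵀx)_i is exactly the fixed point the update enforces, so agreement of row and column sums serves as a direct convergence check.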

    CoCoA: A General Framework for Communication-Efficient Distributed Optimization

    The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning. We present a general-purpose framework for distributed computing environments, CoCoA, that has an efficient communication scheme and is applicable to a wide variety of problems in machine learning and signal processing. We extend the framework to cover general non-strongly-convex regularizers, including L1-regularized problems like lasso, sparse logistic regression, and elastic net regularization, and show how earlier work can be derived as a special case. We provide convergence guarantees for the class of convex regularized loss minimization objectives, leveraging a novel approach in handling non-strongly-convex regularizers and non-smooth loss functions. The resulting framework has markedly improved performance over state-of-the-art methods, as we illustrate with an extensive set of experiments on real distributed datasets.
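    The communication pattern can be illustrated with a minimal single-process simulation: workers own disjoint coordinate blocks, each solves a local subproblem against a shared state vector, and only the aggregated updates are "communicated" once per round. This is a sketch of the general pattern on a ridge-regression objective with conservative averaging of updates, not the framework's actual implementation (the block partition, local solver, and aggregation rule here are all illustrative choices):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, K = 200, 20, 4                  # samples, features, simulated workers
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
    lam = 0.1                              # ridge penalty
    blocks = np.array_split(np.arange(d), K)  # disjoint feature blocks

    # objective: f(w) = 0.5 ||A w - b||^2 + 0.5 lam ||w||^2
    w = np.zeros(d)
    for rnd in range(50):                  # communication rounds
        v = A @ w                          # shared state, broadcast once per round
        deltas = []
        for blk in blocks:                 # each "worker" solves a local subproblem
            dw = np.zeros(d)
            v_loc = v.copy()
            for _ in range(5):             # cheap local iterations
                for j in blk:
                    g = A[:, j] @ (v_loc - b) + lam * (w[j] + dw[j])
                    h = A[:, j] @ A[:, j] + lam
                    step = -g / h          # exact coordinate minimization
                    dw[j] += step
                    v_loc += step * A[:, j]
            deltas.append(dw)
        # conservative aggregation: averaging candidate updates; by convexity
        # the objective never increases under this rule
        w += sum(deltas) / K
    ```

    Averaging is the safe aggregation choice: each candidate point w + dw_k does not increase the convex objective, so neither does their average; adding updates outright converges faster but requires care about interactions between blocks.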

    Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions

    Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work has used convex optimization techniques to obtain very practical algorithms for minimizing functions that are sums of "simple" functions. In this paper, we use random coordinate descent methods to obtain algorithms with faster linear convergence rates and cheaper iteration costs. Compared to alternating projection methods, our algorithms do not rely on full-dimensional vector operations and they converge in significantly fewer iterations.
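    The submodular setting requires projection oracles for each simple function, which is beyond a short sketch, but the core primitive, random coordinate descent with exact coordinate minimization, can be illustrated on a strongly convex quadratic (an illustrative example, not the paper's algorithm):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 30
    M = rng.standard_normal((n, n))
    Q = M.T @ M + np.eye(n)          # symmetric positive definite
    b = rng.standard_normal(n)

    # minimize f(x) = 0.5 x^T Q x - b^T x by random coordinate descent:
    # pick a coordinate i uniformly, minimize f exactly along e_i
    x = np.zeros(n)
    for _ in range(50_000):
        i = rng.integers(n)
        g = Q[i] @ x - b[i]          # partial derivative at coordinate i
        x[i] -= g / Q[i, i]          # exact 1-D minimizer along e_i
    x_star = np.linalg.solve(Q, b)   # closed-form optimum for comparison
    print(np.linalg.norm(x - x_star))
    ```

    Each iteration touches one row of Q, so the per-iteration cost is O(n) rather than O(n²); this cheap-iteration/linear-rate trade-off is the same one the paper exploits in the submodular setting.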

    Reflection methods for user-friendly submodular optimization

    Recently, it has become evident that submodularity naturally captures widely occurring concepts in machine learning, signal processing and computer vision. Consequently, there is a need for efficient optimization procedures for submodular functions, especially for minimization problems. While general submodular minimization is challenging, we propose a new method that exploits existing decomposability of submodular functions. In contrast to previous approaches, our method is neither approximate, nor impractical, nor does it need any cumbersome parameter tuning. Moreover, it is easy to implement and parallelize. A key component of our method is a formulation of the discrete submodular minimization problem as a continuous best approximation problem that is solved through a sequence of reflections, and its solution can be easily thresholded to obtain an optimal discrete solution. This method solves both the continuous and discrete formulations of the problem, and therefore has applications in learning, inference, and reconstruction. In our experiments, we illustrate the benefits of our method on two image segmentation tasks. Comment: Neural Information Processing Systems (NIPS), United States (2013).
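    The reflection machinery can be illustrated on a toy convex feasibility problem: averaged alternating reflections (the Douglas-Rachford iteration) between a hyperplane and the nonnegative orthant. This is a sketch of the reflection sequence only, with illustrative sets standing in for the paper's submodular polyhedra:

    ```python
    import numpy as np

    def P_H(x):
        """Projection onto the hyperplane {x : sum(x) = 1}."""
        return x - (x.sum() - 1.0) / x.size

    def P_C(x):
        """Projection onto the nonnegative orthant."""
        return np.maximum(x, 0.0)

    def reflect(P, x):
        """Reflection R = 2P - I associated with a projection P."""
        return 2.0 * P(x) - x

    x = np.array([2.0, -1.0, 0.5, -0.3])
    for _ in range(2000):
        # averaged alternating reflections: x <- (x + R_C R_H x) / 2
        x = 0.5 * (x + reflect(P_C, reflect(P_H, x)))
    sol = P_H(x)   # the "shadow" iterate converges to a point in H ∩ C
    print(sol)     # sums to 1 with nonnegative entries
    ```

    Each step needs only the two individual projections, never a projection onto the intersection; this is what makes the reflection approach attractive when the pieces are simple but their intersection is not.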

    Efficient Rank Reduction of Correlation Matrices

    Geometric optimisation algorithms are developed that efficiently find the nearest low-rank correlation matrix. We show, in numerical tests, that our methods compare favourably to the existing methods in the literature. The connection with the Lagrange multiplier method is established, along with an identification of whether a local minimum is a global minimum. An additional benefit of the geometric approach is that any weighted norm can be applied. The problem of finding the nearest low-rank correlation matrix occurs as part of the calibration of multi-factor interest rate market models to correlation. Comment: First version: 20 pages, 4 figures. Second version [changed content]: 21 pages, 6 figures.
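    A common baseline for this problem, distinct from the geometric optimisation methods of the paper, is eigenvalue truncation followed by diagonal rescaling: keep the top-k eigenpairs, then normalise the factor rows so the diagonal is exactly one. A minimal sketch:

    ```python
    import numpy as np

    def low_rank_corr(C, k):
        """Rank-k correlation-matrix approximation by the simple
        truncate-and-rescale heuristic (a baseline, not the paper's
        geometric optimisation): keep the top-k eigenpairs of C, then
        rescale rows of the factor so the diagonal is exactly 1."""
        w, V = np.linalg.eigh(C)
        idx = np.argsort(w)[::-1][:k]                       # top-k eigenpairs
        B = V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))    # n x k factor
        B /= np.linalg.norm(B, axis=1, keepdims=True)       # unit rows -> unit diagonal
        return B @ B.T

    # a small correlation matrix
    C = np.array([[1.0, 0.9, 0.6, 0.3],
                  [0.9, 1.0, 0.7, 0.4],
                  [0.6, 0.7, 1.0, 0.5],
                  [0.3, 0.4, 0.5, 1.0]])
    Ck = low_rank_corr(C, 2)
    print(np.linalg.matrix_rank(Ck))       # 2
    print(np.allclose(np.diag(Ck), 1.0))   # True
    ```

    The result has rank k and a unit diagonal by construction, but it is generally not the *nearest* rank-k correlation matrix in a chosen norm; that gap is what the optimisation methods above close.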