32 research outputs found

    Probabilistic Spectral Sparsification In Sublinear Time

    Full text link
    In this paper, we introduce a variant of spectral sparsification, called probabilistic $(\varepsilon,\delta)$-spectral sparsification. Roughly speaking, it preserves the value of any cut $(S,S^{c})$ up to a $1\pm\varepsilon$ multiplicative error and a $\delta\left|S\right|$ additive error. We show how to produce a probabilistic $(\varepsilon,\delta)$-spectral sparsifier with $O(n\log n/\varepsilon^{2})$ edges in $\tilde{O}(n/\varepsilon^{2}\delta)$ time for unweighted undirected graphs. This yields the fastest known sublinear-time algorithms for several cut problems on unweighted undirected graphs, such as:
    - An $\tilde{O}(n/\mathrm{OPT}+n^{3/2+t})$ time $O(\sqrt{\log n/t})$-approximation algorithm for the sparsest cut problem and the balanced separator problem.
    - An $n^{1+o(1)}/\varepsilon^{4}$ time algorithm for approximating the minimum s-t cut with an $\varepsilon n$ additive error.
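    To make the guarantee concrete, write $w_G(S,S^{c})$ for the weight of the cut $(S,S^{c})$ in the input graph $G$ and $w_H(S,S^{c})$ for its weight in the sparsifier $H$ (notation ours, not the paper's). The probabilistic $(\varepsilon,\delta)$-guarantee sketched above can then be read as
    $$(1-\varepsilon)\,w_G(S,S^{c}) - \delta\left|S\right| \;\le\; w_H(S,S^{c}) \;\le\; (1+\varepsilon)\,w_G(S,S^{c}) + \delta\left|S\right|,$$
    with "probabilistic" indicating that the bound holds with high probability (e.g., for any fixed cut) rather than deterministically for all cuts simultaneously.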

    From Proximal Point Method to Nesterov's Acceleration

    Full text link
    The proximal point method (PPM) is a fundamental method in optimization that is often used as a building block for fast optimization algorithms. In this work, building on recent work by Defazio (2019), we provide a complete understanding of Nesterov's accelerated gradient method (AGM) by establishing quantitative and analytical connections between PPM and AGM. The main observation of this paper is that AGM is in fact equal to a simple approximation of PPM, which yields an elementary derivation of AGM's mysterious updates as well as its step sizes. This connection also leads to a conceptually simple analysis of AGM based on the standard analysis of PPM. The view extends naturally to the strongly convex case and also motivates other accelerated methods for practically relevant settings.
    Comment: 14 pages; Section 4 updated; Remark 5 added; comments would be appreciated
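    As a minimal illustration of the kind of approximation the abstract alludes to (our sketch, with simplified notation; the paper's exact construction of auxiliary points and step sizes differs), the PPM update with step size $\lambda$ is
    $$x_{k+1} = \arg\min_{x} \Big\{ f(x) + \tfrac{1}{2\lambda}\left\|x - x_k\right\|^{2} \Big\},$$
    and replacing $f$ by its linearization $f(y_k) + \langle \nabla f(y_k),\, x - y_k \rangle$ at an auxiliary point $y_k$ makes the minimizer explicit: $x_{k+1} = x_k - \lambda \nabla f(y_k)$. The connection described above amounts to showing that AGM is exactly such an approximate PPM for a particular choice of $y_k$ and $\lambda$.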

    Pooling or sampling: Collective dynamics for electrical flow estimation

    Get PDF
    The computation of electrical flows is a crucial primitive for many recently proposed optimization algorithms on weighted networks. While typically implemented as a centralized subroutine, the ability to perform this task in a fully decentralized way is implicit in a number of biological systems. Thus, a natural question is whether this task can provably be accomplished in an efficient way by a network of agents executing a simple protocol. We provide a positive answer, proposing two distributed approaches to electrical flow computation on a weighted network: a deterministic process mimicking Jacobi's iterative method for solving linear systems, and a randomized token diffusion process, based on revisiting a classical random walk process on a graph with an absorbing node. We show that both processes converge to a solution of Kirchhoff's node potential equations, derive bounds on their convergence rates in terms of the weights of the network, and analyze their time and message complexity.
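    A minimal sketch of the deterministic, Jacobi-style process described above, assuming unit current injected at a source and extracted at a grounded (absorbing) sink; the function name, synchronous update schedule, and stopping rule are our illustrative assumptions, not the paper's protocol:

```python
# Jacobi iteration for Kirchhoff's node potential equations L v = b on a
# weighted graph: v_u <- (b_u + sum_j w_uj * v_j) / deg(u), with the sink
# grounded at potential 0 so the singular Laplacian system is pinned down.
from collections import defaultdict

def electrical_potentials(edges, source, sink, iters=100000, tol=1e-9):
    """edges: iterable of (u, v, weight); unit current from source to sink."""
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    degree = {u: sum(w for _, w in nbrs) for u, nbrs in adj.items()}
    b = {u: 0.0 for u in adj}
    b[source], b[sink] = 1.0, -1.0
    v = {u: 0.0 for u in adj}
    for _ in range(iters):
        new_v = {}
        for u in adj:
            if u == sink:
                new_v[u] = 0.0  # absorbing node stays grounded
            else:
                # Jacobi step: weighted average of neighbors plus injected current
                new_v[u] = (b[u] + sum(w * v[j] for j, w in adj[u])) / degree[u]
        done = max(abs(new_v[u] - v[u]) for u in adj) < tol
        v = new_v
        if done:
            break
    return v  # by Ohm's law, the flow on edge (u, j) is w * (v[u] - v[j])
```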

    Алгоритм нахождения k максимальных потоков

    Get PDF
    In this paper, an original algorithm is proposed for an applied problem of graph theory: finding k maximum flows between two given vertices of a graph. The described approach combines, within a single optimization loop, the Ford-Fulkerson algorithm (in its Edmonds-Karp or Dinitz variant) with an algorithm that builds a truncated tree of states breadth-first.
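    As context for the building block named above, here is a compact, standard Edmonds-Karp sketch (BFS-based Ford-Fulkerson); the truncated breadth-first state tree that enumerates the k flows is specific to the paper and is not reproduced here:

```python
# Standard Edmonds-Karp max flow (shortest augmenting paths via BFS),
# shown only as the Ford-Fulkerson building block referenced above.
from collections import defaultdict, deque

def edmonds_karp(cap, s, t):
    """cap: dict of dicts, cap[u][v] = edge capacity. Returns max-flow value."""
    res = defaultdict(lambda: defaultdict(int))  # residual capacities
    for u in cap:
        for v, c in cap[u].items():
            res[u][v] += c
    flow = 0
    while True:
        parent = {s: None}  # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow is maximum
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][w] for u, w in path)  # bottleneck capacity
        for u, w in path:
            res[u][w] -= aug
            res[w][u] += aug  # open the reverse residual edge
        flow += aug
```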

    Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent

    Get PDF
    First-order methods play a central role in large-scale machine learning. Even though many variations exist, each suited to a particular problem, almost all such methods fundamentally rely on two types of algorithmic steps: gradient descent, which yields primal progress, and mirror descent, which yields dual progress. We observe that the performances of gradient and mirror descent are complementary, so that faster algorithms can be designed by LINEARLY COUPLING the two. We show how to reconstruct Nesterov's accelerated gradient methods using linear coupling, which gives a cleaner interpretation than Nesterov's original proofs. We also discuss the power of linear coupling by extending it to many other settings to which Nesterov's methods do not apply.
    Comment: A new section added; polished writing
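    A minimal Euclidean instance of the coupling described above, assuming an $L$-smooth objective; the step-size schedule is a standard illustrative choice for this scheme, not necessarily the paper's exact parameters:

```python
# Linear coupling in the Euclidean case: each iteration linearly combines
# a gradient-descent iterate (primal progress) with a mirror-descent
# iterate (dual progress), recovering an accelerated method.
import numpy as np

def linear_coupling(grad_f, x0, L, T):
    """grad_f: gradient oracle of an L-smooth f; returns the final y iterate."""
    y = np.array(x0, dtype=float)
    z = y.copy()
    for k in range(1, T + 1):
        alpha = (k + 1) / (2.0 * L)    # mirror step size, grows with k
        tau = 1.0 / (alpha * L)        # coupling weight, 2/(k+1) in (0, 1]
        x = tau * z + (1.0 - tau) * y  # linearly couple the two iterates
        g = grad_f(x)
        y = x - g / L                  # gradient step: primal progress
        z = z - alpha * g              # Euclidean mirror step: dual progress
    return y

# Example: minimize f(x) = 0.5 * ||x||^2 (so L = 1)
# x_star = linear_coupling(lambda x: x, np.ones(5), L=1.0, T=100)
```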