Probabilistic Spectral Sparsification In Sublinear Time
In this paper, we introduce a variant of spectral sparsification, called
probabilistic spectral sparsification. Roughly speaking, it preserves the cut
value of any cut up to a small multiplicative error and a small additive
error. We show how to produce a probabilistic spectral sparsifier with a
sparse set of edges in sublinear time for unweighted undirected graphs. This
gives the fastest known sublinear-time algorithms for several cut problems on
unweighted undirected graphs, such as:
- A sublinear-time approximation algorithm for the sparsest cut problem and
the balanced separator problem.
- A sublinear-time approximate minimum s-t cut algorithm with a small
additive error.
From Proximal Point Method to Nesterov's Acceleration
The proximal point method (PPM) is a fundamental method in optimization that
is often used as a building block for fast optimization algorithms. In this
work, building on a recent work by Defazio (2019), we provide a complete
understanding of Nesterov's accelerated gradient method (AGM) by establishing
quantitative and analytical connections between PPM and AGM. The main
observation in this paper is that AGM is in fact equal to a simple
approximation of PPM, which results in an elementary derivation of the
mysterious updates of AGM as well as its step sizes. This connection also leads
to a conceptually simple analysis of AGM based on the standard analysis of PPM.
This view naturally extends to the strongly convex case and also motivates
other accelerated methods for practically relevant settings.
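To make the connection tangible, here is a small numerical sketch of our own (not the paper's derivation): an exact proximal point step and Nesterov's AGM, run side by side on a smooth, strongly convex quadratic. The matrix, vector, and step counts are illustrative choices.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T A x - b^T x, a smooth, strongly convex quadratic.
A = np.array([[3.0, 0.5], [0.5, 1.0]])       # symmetric positive definite
b = np.array([1.0, -2.0])
L = np.linalg.eigvalsh(A).max()              # smoothness constant
mu = np.linalg.eigvalsh(A).min()             # strong convexity constant

def grad(x):
    return A @ x - b

def ppm_step(x, lam):
    # Exact proximal point step: argmin_y f(y) + ||y - x||^2 / (2 lam).
    # For a quadratic this amounts to solving (A + I/lam) y = b + x/lam.
    return np.linalg.solve(A + np.eye(len(x)) / lam, b + x / lam)

def agm(x0, iters):
    # Nesterov's AGM for L-smooth, mu-strongly convex functions:
    # a gradient step taken at an extrapolated (momentum) point.
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x_prev = x = x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)          # extrapolation step
        x_prev, x = x, y - grad(y) / L       # gradient step at y
    return x

x_star = np.linalg.solve(A, b)               # exact minimizer
x_ppm = np.array([5.0, 5.0])
for _ in range(50):
    x_ppm = ppm_step(x_ppm, lam=1.0)
x_agm = agm(np.array([5.0, 5.0]), 50)
print(np.linalg.norm(x_ppm - x_star), np.linalg.norm(x_agm - x_star))
```

Both sequences converge to the same minimizer; the point of the paper is that AGM's updates can be read as an inexpensive approximation of the (generally expensive) exact proximal step.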
Pooling or sampling: Collective dynamics for electrical flow estimation
The computation of electrical flows is a crucial primitive for many recently proposed optimization algorithms on weighted networks. While typically implemented as a centralized subroutine, the ability to perform this task in a fully decentralized way is implicit in a number of biological systems. Thus, a natural question is whether this task can provably be accomplished in an efficient way by a network of agents executing a simple protocol. We provide a positive answer, proposing two distributed approaches to electrical flow computation on a weighted network: a deterministic process mimicking Jacobi's iterative method for solving linear systems, and a randomized token diffusion process, based on revisiting a classical random walk process on a graph with an absorbing node. We show that both processes converge to a solution of Kirchhoff's node potential equations, derive bounds on their convergence rates in terms of the weights of the network, and analyze their time and message complexity.
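As a concrete illustration of the first, Jacobi-style process, here is a toy sketch (not the paper's protocol): each non-grounded node repeatedly resets its potential to the weighted average of its neighbours' potentials, corrected for the current injected at that node; one absorbing node is pinned to potential zero. The graph and current values are made up for the example.

```python
# Weighted undirected graph as an adjacency map: node -> {neighbour: weight}.
graph = {
    0: {1: 2.0, 2: 1.0},
    1: {0: 2.0, 2: 1.0},
    2: {0: 1.0, 1: 1.0, 3: 3.0},
    3: {2: 3.0},
}
injected = {0: 1.0, 1: 0.0, 2: 0.0, 3: -1.0}  # one unit of current, node 0 -> node 3
ground = 3                                     # absorbing node, potential pinned at 0

potential = {v: 0.0 for v in graph}
for _ in range(500):                           # synchronous Jacobi sweeps
    new = {ground: 0.0}
    for v, nbrs in graph.items():
        if v == ground:
            continue
        # Kirchhoff at v: sum_u w_uv * (x_v - x_u) = injected[v], solved for x_v.
        new[v] = (injected[v]
                  + sum(w * potential[u] for u, w in nbrs.items())) / sum(nbrs.values())
    potential = new

# Edge flows are w_uv * (x_u - x_v); at convergence the flow absorbed at the
# ground node equals the unit of current injected at the source.
flow_into_ground = graph[ground][2] * (potential[2] - potential[ground])
print(potential, flow_into_ground)
```

Each update uses only information from a node's immediate neighbours, which is what makes the process fully decentralized.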
An Algorithm for Finding k Maximum Flows
This paper proposes an original algorithm for an applied problem in graph theory: finding k maximum flows between two given vertices of a graph. The described approach combines, within a single optimization cycle, the Ford-Fulkerson algorithm (in its Edmonds-Karp or Dinitz variant) with an algorithm for the breadth-first construction of a truncated tree of states.
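Since the approach builds on Ford-Fulkerson style augmentation, a minimal Edmonds-Karp sketch may be useful context: the standard BFS-based variant only, not the paper's k-flow enumeration. The example network is made up.

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """capacity: dict-of-dicts {u: {v: cap}}; returns the maximum flow value."""
    # Build a residual graph that also contains every reverse edge.
    residual = {u: {} for u in capacity}
    for u, nbrs in capacity.items():
        for v, c in nbrs.items():
            residual[u][v] = residual[u].get(v, 0) + c
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                      # no augmenting path left
        # Bottleneck capacity along the path, then augment.
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

caps = {
    "s": {"a": 10, "b": 5},
    "a": {"b": 15, "t": 10},
    "b": {"t": 10},
    "t": {},
}
print(edmonds_karp(caps, "s", "t"))  # → 15
```

Using BFS guarantees that each augmenting path is shortest, which bounds the number of augmentations by O(VE).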
Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
First-order methods play a central role in large-scale machine learning. Even
though many variations exist, each suited to a particular problem, almost all
such methods fundamentally rely on two types of algorithmic steps: gradient
descent, which yields primal progress, and mirror descent, which yields dual
progress.
We observe that the performances of gradient and mirror descent are
complementary, so that faster algorithms can be designed by LINEARLY COUPLING
the two. We show how to reconstruct Nesterov's accelerated gradient methods
using linear coupling, which gives a cleaner interpretation than Nesterov's
original proofs. We also discuss the power of linear coupling by extending it
to many other settings to which Nesterov's methods do not apply.
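A minimal numerical sketch of the coupling idea, assuming the Euclidean mirror map and textbook step sizes (an illustration, not the paper's general setting): a gradient-descent step makes primal progress, a mirror-descent step accumulates dual progress, and the next query point is a linear combination of the two iterates.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T A x - b^T x, a smooth convex quadratic.
A = np.array([[4.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
L = np.linalg.eigvalsh(A).max()          # smoothness constant

def grad(y):
    return A @ y - b

x = z = np.zeros(2)                      # x: gradient iterate, z: mirror iterate
for k in range(200):
    alpha = (k + 2) / (2 * L)            # growing mirror step size
    tau = 2 / (k + 2)                    # coupling weight, tau = 1/(alpha * L)
    y = tau * z + (1 - tau) * x          # linearly coupled query point
    g = grad(y)
    x = y - g / L                        # gradient-descent (primal) step
    z = z - alpha * g                    # mirror-descent (dual) step, Euclidean case

x_star = np.linalg.solve(A, b)
print(np.linalg.norm(x - x_star))
```

With the Euclidean mirror map this recursion reproduces an accelerated O(1/k^2) method; swapping in a non-Euclidean mirror map changes only the dual step.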