76 research outputs found
Throughput Optimal On-Line Algorithms for Advanced Resource Reservation in Ultra High-Speed Networks
Advanced channel reservation is emerging as an important feature of ultra
high-speed networks requiring the transfer of large files. Applications include
scientific data transfers and database backup. In this paper, we present two
new, on-line algorithms for advanced reservation, called BatchAll and BatchLim,
that are guaranteed to achieve optimal throughput performance, based on
multi-commodity flow arguments. Both algorithms are shown to have
polynomial-time complexity and provable bounds on the maximum delay for
(1+epsilon)-bandwidth-augmented networks. The BatchLim algorithm returns the
completion time of a connection immediately as a request is placed, but at the
expense of a slightly looser competitive ratio than that of BatchAll. We also
present a simple approach that limits the number of parallel paths used by the
algorithms while provably bounding the maximum reduction factor in the
transmission throughput. We show that, although the number of different paths
can be exponentially large, the actual number of paths needed to approximate
the flow is quite small and proportional to the number of edges in the network.
Simulations for a number of topologies show that, in practice, 3 to 5 parallel
paths are sufficient to achieve close to optimal performance. The performance
of the competitive algorithms is also compared to a greedy benchmark, both
through analysis and simulation.

Comment: 9 pages, 8 figures
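The claim that few paths suffice rests on a standard fact: any s-t flow decomposes into at most |E| path flows, because extracting one path saturates at least one edge. A minimal sketch of that decomposition, assuming the given flow is acyclic (the function name and input format are illustrative, not from the paper):

```python
def decompose_flow(flow, s, t):
    """Decompose an acyclic s-t flow (dict: (u, v) -> amount) into path
    flows. Each extraction zeroes out at least one edge, so the number of
    paths is bounded by the number of edges."""
    flow = {e: f for e, f in flow.items() if f > 0}
    paths = []
    while any(u == s for (u, v) in flow):
        # Walk from s to t along positive-flow edges; flow conservation
        # guarantees the walk can always continue until it reaches t.
        path, node = [s], s
        while node != t:
            nxt = next(v for (u, v) in flow if u == node)
            path.append(nxt)
            node = nxt
        bottleneck = min(flow[e] for e in zip(path, path[1:]))
        for e in zip(path, path[1:]):
            flow[e] -= bottleneck
            if flow[e] == 0:
                del flow[e]
        paths.append((path, bottleneck))
    return paths
```

The abstract's stronger point is that far fewer paths than |E| already approximate the flow well in practice.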
Near-Optimal Distributed Maximum Flow
We present a near-optimal distributed algorithm for (1 + o(1))-approximation of single-commodity maximum flow in undirected weighted networks that runs in (D + sqrt(n)) n^{o(1)} communication rounds in the CONGEST model. Here, n and D denote the number of nodes and the network diameter, respectively. This is the first improvement over the trivial bound of O(n^2), and it nearly matches the Omega(D + sqrt(n)) round complexity lower bound. The development of the algorithm contains two results of independent interest: (i) a (D + sqrt(n)) n^{o(1)}-round distributed construction of a spanning tree of average stretch n^{o(1)}; (ii) a (D + sqrt(n)) n^{o(1)}-round distributed construction of an n^{o(1)}-congestion approximator consisting of the cuts induced by virtual trees. The distributed representation of the cut approximator allows for evaluation in (D + sqrt(n)) n^{o(1)} rounds. All our algorithms make use of randomization and succeed with high probability.
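The cut structure behind such tree-based congestion approximators is cheap to evaluate: removing one tree edge induces a cut, and the net demand crossing that cut is simply a subtree sum, so all cuts of one tree can be evaluated in a single traversal. A sequential sketch of that evaluation (the paper's contribution is doing this distributedly; names and input format here are illustrative):

```python
def tree_cut_loads(tree_adj, root, demand):
    """For each tree edge (parent, child), the net demand separated by the
    cut induced by removing that edge equals the demand sum of the child's
    subtree. One DFS evaluates every cut of the tree."""
    loads = {}

    def dfs(u, parent):
        total = demand.get(u, 0.0)
        for v in tree_adj[u]:
            if v != parent:
                sub = dfs(v, u)
                loads[(u, v)] = sub  # net demand on the child side of the cut
                total += sub
        return total

    assert abs(dfs(root, None)) < 1e-9  # a routable demand must sum to zero
    return loads
```

Dividing each load by the capacity of its cut gives the congestion estimate that a congestion approximator reports.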
Nearly linear-time packing and covering LP solvers
Packing and covering linear programs (PC-LPs) form an important class of linear programs (LPs) across computer science, operations research, and optimization. Luby and Nisan (in: STOC, ACM Press, New York, 1993) constructed an iterative algorithm for approximately solving PC-LPs in nearly linear time, where the time complexity scales nearly linearly in N, the number of nonzero entries of the matrix, and polynomially in 1/epsilon, where epsilon is the (multiplicative) approximation error. Unfortunately, existing nearly linear-time algorithms (Plotkin et al. in Math Oper Res 20(2):257–301, 1995; Bartal et al., in: Proceedings 38th annual symposium on foundations of computer science, IEEE Computer Society, 1997; Young, in: 42nd annual IEEE symposium on foundations of computer science (FOCS'01), IEEE Computer Society, 2001; Koufogiannakis and Young in Algorithmica 70:494–506, 2013; Young in Nearly linear-time approximation schemes for mixed packing/covering and facility-location linear programs, 2014. arXiv:1407.3015; Allen-Zhu and Orecchia, in: SODA, 2015) for solving PC-LPs require time at least proportional to epsilon^{-2}. In this paper, we break this longstanding barrier by designing a packing solver that runs in time O~(N epsilon^{-1}) and a covering LP solver that runs in time O~(N epsilon^{-1.5}). Our packing solver can be extended to run in time O~(N epsilon^{-1}) for a class of well-behaved covering programs. In a follow-up work, Wang et al. (in: ICALP, 2016) showed that all covering LPs can be converted into well-behaved ones by a reduction that blows up the problem size only logarithmically.

Accepted manuscript
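The epsilon^{-2} barrier stems from the small step sizes of classic multiplicative-weights methods. A toy width-dependent MWU packing solver in that classic style (the approach the paper improves on, not the paper's accelerated method; the stopping rule and step size are deliberate simplifications, and every column of A is assumed to have a nonzero entry):

```python
import math

def packing_mwu(A, eps=0.1):
    """Toy MWU solver for: max sum(x) s.t. A x <= 1, x >= 0, with
    A[i][j] in [0, 1]. Exponential row penalties steer increments toward
    loosely loaded constraints; final scaling restores exact feasibility."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    load = [0.0] * m          # load[i] = (A x)_i
    step = eps                # crude step size; the source of the eps cost
    while max(load) < 1.0:
        w = [math.exp(load[i] / eps) for i in range(m)]
        # pick the cheapest column under the current exponential penalties
        j = min(range(n), key=lambda j: sum(w[i] * A[i][j] for i in range(m)))
        x[j] += step
        for i in range(m):
            load[i] += step * A[i][j]
    scale = max(load)         # overshoot factor; divide it out
    return [v / scale for v in x]
```

The accelerated solvers in the paper reach the same approximation with a far better dependence on 1/epsilon than step-size arguments like this one allow.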
Learning to compare nodes in branch and bound with graph neural networks
In computer science, solving NP-hard problems in a reasonable time is of great importance, such as in supply chain optimization, scheduling, routing, multiple biological sequence alignment, inference in probabilistic graphical models, and even some problems in cryptography. In practice, we model many of them as a mixed integer linear optimization problem, which we solve using the branch and bound framework. An algorithm of this style divides a search space to explore it recursively (branch) and obtains optimality bounds by solving linear relaxations in such sub-spaces (bound). To specify an algorithm, one must set several parameters, such as how to explore search spaces, how to divide a search space once it has been explored, or how to tighten these linear relaxations.
These policies can significantly influence solving performance.
This work focuses on a novel method for deriving a search policy, that is, a rule for selecting the next sub-space to explore given a current partitioning, using deep machine learning. First, we collect data summarizing, over a collection of given problems, which sub-spaces contain the optimum and which do not. By representing these sub-spaces as bipartite graphs encoding their characteristics, we train a graph neural network to determine, by supervised learning, the probability that a sub-space contains the optimal solution. This choice of model is particularly useful because it can adapt to problems of different sizes without modification. We show that our approach beats that of our competitors, consisting of simpler machine learning models trained on solver statistics, as well as the default policy of SCIP, a state-of-the-art open-source solver, on three NP-hard benchmarks: generalized independent set, fixed-charge multicommodity network flow, and maximum satisfiability problems.
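The pluggable node-selection policy can be made concrete on a toy exact branch-and-bound solver, where the `score` callable stands in for the learned GNN: by default it ranks open nodes by their relaxation bound (best-bound search), and a learned model would simply replace it. This 0/1 knapsack example is illustrative only and far simpler than the MILP setting of the thesis:

```python
import heapq

def knapsack_bnb(values, weights, cap, score=None):
    """Exact branch and bound for 0/1 knapsack. `score` is the node-selection
    policy (higher = explored first); correctness holds for any policy since
    pruning relies only on the relaxation bound."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(level, value, room):
        # optimistic completion: fill remaining room greedily, fractionally
        for i in order[level:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    score = score or bound  # default policy; a learned scorer plugs in here
    best = 0
    heap = [(-score(0, 0, cap), 0, 0, cap)]  # (-priority, level, value, room)
    while heap:
        _, level, value, room = heapq.heappop(heap)
        if bound(level, value, room) <= best:
            continue  # prune: the relaxation cannot beat the incumbent
        if level == n:
            best = value
            continue
        i = order[level]
        children = [(value, room)]  # branch: skip item i
        if weights[i] <= room:
            children.append((value + values[i], room - weights[i]))  # take it
        for v, r in children:
            heapq.heappush(heap, (-score(level + 1, v, r), level + 1, v, r))
    return best
```

A good policy does not change the answer, only how many nodes are expanded before the optimum is proven, which is exactly the quantity the learned comparator aims to reduce.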
Breaking 3-Factor Approximation for Correlation Clustering in Polylogarithmic Rounds
In this paper, we study parallel algorithms for the correlation clustering
problem, where every pair of distinct entities is labeled as similar or
dissimilar. The goal is to partition the entities into clusters so as to
minimize the number of disagreements with the labels. Currently, all efficient
parallel algorithms have an approximation ratio of at least 3, a significant
gap from the ratio achieved by polynomial-time sequential algorithms [CLN22].
We propose the first polylogarithmic-depth parallel algorithm that achieves
an approximation ratio better than 3. Specifically, our algorithm computes a
(2.4 + epsilon)-approximate solution. It can additionally be translated into a
sequential algorithm and into a sublinear-memory MPC algorithm running in
polylogarithmically many rounds.
Our approach is inspired by Awerbuch, Khandekar, and Rao's [AKR12]
length-constrained multi-commodity flow algorithm, where we develop an
efficient parallel algorithm to solve a truncated correlation clustering linear
program of Charikar, Guruswami, and Wirth [CGW05]. We then show that the
solution of the truncated linear program can be rounded with a loss factor of
at most 2.4 using the framework of [CMSY15]. This rounding framework can in
turn be implemented using parallel pivot-based approaches.
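The pivot-based rounding mentioned above descends from the classic KwikCluster algorithm of Ailon, Charikar, and Newman, a 3-approximation in expectation whose sequential form is a few lines: repeatedly pick a random pivot and cluster it with all remaining entities labeled similar to it. A sketch (the input format is illustrative):

```python
import random

def kwik_cluster(nodes, similar, seed=0):
    """Pivot-based correlation clustering (KwikCluster): pick a random
    pivot, group it with every remaining node labeled similar to it,
    remove the group, and repeat. 3-approximate in expectation."""
    rng = random.Random(seed)
    remaining = set(nodes)
    clusters = []
    while remaining:
        pivot = rng.choice(sorted(remaining))
        cluster = {pivot} | {v for v in remaining
                             if (pivot, v) in similar or (v, pivot) in similar}
        clusters.append(cluster)
        remaining -= cluster
    return clusters
```

Parallel variants process many compatible pivots per round; the paper's contribution is to beat the factor 3 that this pivot rule alone cannot.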
Faster Parallel Algorithm for Approximate Shortest Path
We present the first m polylog(n) work, polylog(n) time algorithm in the PRAM
model that computes (1 + epsilon)-approximate single-source shortest paths on
weighted, undirected graphs. This improves upon the breakthrough result of
Cohen [JACM'00], which achieves m^{1 + epsilon_0} work and polylog(n) time for
any constant epsilon_0 > 0. While most previous approaches, including
Cohen's, leveraged the power of hopsets, our algorithm builds upon the recent
developments in continuous optimization, studying the shortest path problem
through the lens of the closely related minimum transshipment problem. To
obtain our algorithm, we demonstrate a series of near-linear work,
polylogarithmic-time reductions between the problems of approximate shortest
path, approximate transshipment, and ℓ1-embeddings, and establish a recursive
algorithm that cycles through the three problems and reduces the graph size on
each cycle. As a consequence, we also obtain faster parallel algorithms for
approximate transshipment and ℓ1-embeddings with polylogarithmic distortion.
The minimum transshipment algorithm in particular improves upon the previous
best m^{1 + o(1)} work sequential algorithm of Sherman [SODA'17].
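Of the three problems in the cycle, ℓ1-embedding is the easiest to see concretely: a weighted tree embeds into ℓ1 isometrically, with one coordinate per edge, and general graphs then pay the polylogarithmic distortion the abstract mentions. A sketch of the exact tree case (function name and input format are illustrative, not from the paper):

```python
def l1_embed_tree(tree, root):
    """Embed a weighted tree {(u, v): w} isometrically into l1: one
    coordinate per edge, set to the edge weight for every node on the far
    side of that edge. The l1 distance between two embedded nodes then
    equals their tree distance (path coordinates outside the u-v path
    cancel; those on it sum to the path length)."""
    edges = sorted({tuple(sorted(e)) for e in tree})
    idx = {e: k for k, e in enumerate(edges)}
    adj = {}
    for (u, v), w in tree.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    coords = {root: [0.0] * len(edges)}
    stack = [root]
    while stack:
        u = stack.pop()
        for v, w in adj[u]:
            if v not in coords:
                vec = coords[u][:]
                vec[idx[tuple(sorted((u, v)))]] = w  # edge on root-to-v path
                coords[v] = vec
                stack.append(v)
    return coords
```

Randomized tree embeddings extend this idea to arbitrary graphs, which is where the polylogarithmic distortion enters.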
To improve readability, the paper is almost entirely self-contained, save for
several staple theorems in algorithms and combinatorics.

Comment: 53 pages, STOC 202