571 research outputs found
Fully Dynamic Connectivity in Amortized Expected Time
Dynamic connectivity is one of the most fundamental problems in dynamic graph
algorithms. We present a randomized Las Vegas dynamic connectivity data
structure whose amortized expected update time and worst-case query time come
very close to the cell-probe lower bounds of Patrascu and Demaine (2006) and
Patrascu and Thorup (2011).
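As background, the update/query interface described above can be illustrated with a much simpler incremental-only structure. The sketch below is a standard union-find (disjoint-set) data structure, not the randomized Las Vegas structure from the abstract: it supports edge insertions and connectivity queries in near-constant amortized time, but it cannot handle edge deletions, which is precisely what makes the fully dynamic problem hard.

```python
class UnionFind:
    """Incremental-only connectivity: union by size + path halving.
    A simplified baseline; unlike the fully dynamic structure in the
    abstract, it does not support edge deletions."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):  # insert edge (a, b)
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

    def connected(self, a, b):  # connectivity query
        return self.find(a) == self.find(b)


uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
print(uf.connected(0, 2))  # True
print(uf.connected(0, 4))  # False
```

Deleting the edge (1, 2) here would require recomputing components from scratch; avoiding that recomputation is the whole point of fully dynamic connectivity structures.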
Lower Bounds on Sparse Spanners, Emulators, and Diameter-Reducing Shortcuts
We prove better lower bounds on additive spanners and emulators, which are lossy compression schemes for undirected graphs, as well as lower bounds on shortcut sets, which reduce the diameter of directed graphs. We show that any O(n)-size shortcut set cannot bring the diameter below Omega(n^{1/6}), and that any O(m)-size shortcut set cannot bring it below Omega(n^{1/11}). These improve Hesse's [Hesse, 2003] lower bound of Omega(n^{1/17}). By combining these constructions with Abboud and Bodwin's [Abboud and Bodwin, 2017] edge-splitting technique, we get additive stretch lower bounds of +Omega(n^{1/13}) for O(n)-size spanners and +Omega(n^{1/18}) for O(n)-size emulators. These improve Abboud and Bodwin's +Omega(n^{1/22}) lower bounds.
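To make the notion of a shortcut set concrete, the sketch below is a standard illustrative example, not the lower-bound construction from the abstract: on a directed n-vertex path, adding roughly n "jump" edges of span ⌈√n⌉ reduces the diameter from n − 1 to about 2√n. The lower bounds above say that on worst-case graphs, O(n)-size shortcut sets cannot do nearly this well.

```python
from collections import deque

def dist(adj, s, t):
    """BFS shortest-path distance from s to t in a directed graph."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return d[u]
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return None  # unreachable

n = 10_000
step = int(n ** 0.5)  # jump length ~ sqrt(n)

# Directed path 0 -> 1 -> ... -> n-1: diameter n - 1.
path = {i: [i + 1] for i in range(n - 1)}
path[n - 1] = []

# O(n)-size shortcut set: add jump edges i -> i + step.
shortcut = {i: list(path[i]) for i in range(n)}
for i in range(n - step):
    shortcut[i].append(i + step)

print(dist(path, 0, n - 1))      # 9999
print(dist(shortcut, 0, n - 1))  # 198, i.e. about 2*sqrt(n)
```

Any 0-to-(n−1) path with j jumps and k unit steps satisfies j·step + k = n − 1, so the BFS minimum j + k is achieved by taking as many jumps as possible.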
Cactus Representations in Polylogarithmic Max-flow via Maximal Isolating Mincuts
A cactus representation of a graph, introduced by Dinitz et al. in 1976, is
a compact edge sparsifier that exactly captures all global minimum cuts of
the graph. It is a central combinatorial object that has been a key
ingredient in almost all algorithms for connectivity augmentation problems
and for maintaining minimum cuts under edge insertions (e.g. [NGM97],
[CKL+22], [Hen97]). This sparsifier was generalized to the Steiner cactus for
a terminal vertex set, which can be seen as a vertex sparsifier that captures
all partitions of the terminals corresponding to Steiner minimum cuts, and to
the hypercactus, an analogous concept in hypergraphs. These generalizations
further extend the applications of the cactus to the Steiner and hypergraph
settings.
In a long line of work on fast constructions of the cactus and its
generalizations, a near-linear-time construction of the cactus was shown by
[Karger and Panigrahi 2009]. Unfortunately, their technique, based on tree
packing, inherently does not generalize. The state-of-the-art algorithms for
the Steiner cactus [DV94] and the hypercactus [CX17] are still slower than
linear time.
We show how to construct both the Steiner cactus and the hypercactus using
polylogarithmically many calls to max-flow, which gives the first
almost-linear-time algorithms for both problems. The constructions
immediately imply almost-linear-time connectivity augmentation algorithms in
the Steiner and hypergraph settings, and also speed up the incremental
algorithm for maintaining minimum cuts in hypergraphs.
The key technique behind our result is a novel variant of the influential
isolating mincuts technique [LP20, AKL+21], which we call maximal isolating
mincuts. This technique makes the isolating mincuts "more balanced", which,
we believe, will likely be useful in future applications.
Comment: To appear in SODA 202
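For context, the classical baseline that such max-flow-based constructions improve upon can be sketched as follows. This is not the paper's algorithm and builds no cactus: it computes only the global minimum cut value of a small undirected graph, using n − 1 s-t max-flow calls (here a simple Edmonds-Karp) rather than polylogarithmically many.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on an adjacency-matrix capacity graph.
    Uses an antisymmetric flow matrix, so symmetric capacities model
    an undirected graph correctly."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        # Find the bottleneck capacity along the path, then augment.
        bottleneck, v = float('inf'), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

def global_min_cut(cap):
    """Global min cut value via n - 1 s-t max-flow calls: fix s = 0;
    some other vertex t must lie across the minimum cut."""
    n = len(cap)
    return min(max_flow(cap, 0, t) for t in range(1, n))

# Example: two triangles joined by a single bridge edge (2, 3);
# the global minimum cut is the bridge, of value 1.
def make_barbell():
    cap = [[0] * 6 for _ in range(6)]
    for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
        cap[u][v] = cap[v][u] = 1
    return cap

print(global_min_cut(make_barbell()))  # 1
```

Reducing the number of max-flow calls from linear to polylogarithmic, while also recovering the full cut structure rather than one value, is what the isolating-mincuts line of work enables.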
Breaking 3-Factor Approximation for Correlation Clustering in Polylogarithmic Rounds
In this paper, we study parallel algorithms for the correlation clustering
problem, where every pair of distinct entities is labeled as similar or
dissimilar. The goal is to partition the entities into clusters so as to
minimize the number of disagreements with the labels. Currently, all
efficient parallel algorithms have an approximation ratio of at least 3,
leaving a significant gap to the ratio achieved by polynomial-time sequential
algorithms [CLN22].
We propose the first polylogarithmic-depth parallel algorithm that achieves
an approximation ratio better than 3. Additionally, it can be translated into
a sequential algorithm and into a polylogarithmic-round sublinear-memory MPC
algorithm.
Our approach is inspired by Awerbuch, Khandekar, and Rao's [AKR12]
length-constrained multi-commodity flow algorithm: we develop an efficient
parallel algorithm to solve a truncated version of the correlation clustering
linear program of Charikar, Guruswami, and Wirth [CGW05]. We then show that
the solution of the truncated linear program can be rounded with a loss
factor of at most 2.4 using the framework of [CMSY15]. Such a rounding
framework can then be implemented using parallel pivot-based approaches.
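The "pivot-based approaches" mentioned above build on the classical sequential Pivot algorithm, which achieves exactly the 3-approximation (in expectation) that this paper breaks. A minimal sketch of that baseline, with an assumed set-of-pairs representation for the "similar" labels:

```python
import random

def pivot_clustering(n, similar, seed=0):
    """Classical sequential Pivot algorithm for correlation clustering:
    repeatedly pick a random unclustered vertex as pivot and cluster it
    with all its unclustered 'similar' neighbours. With a uniformly
    random pivot order this is a 3-approximation in expectation."""
    rng = random.Random(seed)
    unclustered = set(range(n))
    clusters = []
    while unclustered:
        pivot = rng.choice(sorted(unclustered))
        cluster = {pivot} | {v for v in unclustered
                             if v != pivot and (pivot, v) in similar}
        clusters.append(sorted(cluster))
        unclustered -= cluster
    return clusters

# 'similar' pairs stored symmetrically: a clique {0,1,2} and a pair {3,4}.
edges = {(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0), (3, 4), (4, 3)}
print(sorted(pivot_clustering(5, edges, seed=1)))  # [[0, 1, 2], [3, 4]]
```

On this instance every pivot choice recovers the two ground-truth clusters with zero disagreements; the hard cases for Pivot are graphs where similar neighbourhoods of a pivot are not cliques.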
Silver Metallization of Polyimide Surfaces Using Environmentally Friendly Reducing Agents
Two environmentally friendly reducing agents, ascorbic acid and glucose, were employed to fabricate Ag-thin-film-coated polyimide (PI) films. Ascorbic acid is an acidic reducing agent, whereas glucose is an alkaline reducing agent. Both of these reducing agents are capable of reducing Ag+ ions doped in poly(amic acid) (PAA) formed by hydrolysis of a PI surface. Only glucose can produce a continuous and dense Ag thin film on a PAA surface. Granular and discontinuous Ag thin films were obtained when ascorbic acid was employed as a reducing agent. This difference in reactivity is attributed to the pH values of these reducing solutions.
Vertex Sparsifiers for Hyperedge Connectivity
Recently, Chalermsook et al. [SODA'21 (arXiv:2007.07862)] introduced a notion
of vertex sparsifiers for edge connectivity, which has found applications in
parameterized algorithms for network design and has also led to exciting
dynamic algorithms for edge st-connectivity [Jin and Sun,
FOCS'21 (arXiv:2004.07650)]. We study a natural extension called vertex
sparsifiers for hyperedge connectivity and construct a sparsifier whose size
matches the state of the art for normal graphs. More specifically, we show
that, given a hypergraph with a set of terminal vertices and a connectivity
parameter, there exists a hypergraph that preserves all minimum cuts (up to
the value of the parameter) between all subsets of terminals, using a number
of hyperedges that matches the best bound on the number of edges for normal
graphs by [Liu'20 (arXiv:2011.15101)]. Moreover, the sparsifier can be
constructed in almost-linear time in terms of the rank and the total size of
the hypergraph, or faster if we slightly relax the size bound.
Comment: Submitted to ESA 202
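For concreteness, the quantity such a sparsifier must preserve is the hypergraph cut function: a hyperedge is cut by a bipartition iff it has vertices on both sides. A minimal sketch (the hypergraph `H` and the helper name are illustrative, not from the paper):

```python
def hyperedge_cut(hyperedges, side):
    """Number of hyperedges crossing the bipartition defined by `side`
    (a set of vertices): a hyperedge counts iff it has at least one
    vertex inside `side` and at least one outside."""
    return sum(1 for e in hyperedges
               if any(v in side for v in e) and any(v not in side for v in e))

# Toy hypergraph on vertices 0..5, given as a list of hyperedges.
H = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
print(hyperedge_cut(H, {0, 1, 2}))  # 2: the hyperedges {2,3} and {0,5}
```

A vertex sparsifier for hyperedge connectivity must report the same minimum-cut values as this function (up to the connectivity parameter) for every bipartition of the terminal set, while containing far fewer hyperedges than the original hypergraph.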
Search for direct pair production of the top squark in all-hadronic final states in proton-proton collisions at √s = 8 TeV with the ATLAS detector
The results of a search for direct pair production of the scalar partner to the top quark, using an integrated luminosity of 20.1 fb^-1 of proton–proton collision data at √s = 8 TeV recorded with the ATLAS detector at the LHC, are reported. The top squark is assumed to decay via t˜ → t χ˜_1^0 or t˜ → b χ˜_1^± → b W^(*) χ˜_1^0, where χ˜_1^0 (χ˜_1^±) denotes the lightest neutralino (chargino) in supersymmetric models. The search targets a fully hadronic final state in events with four or more jets and large missing transverse momentum. No significant excess over the Standard Model background prediction is observed, and exclusion limits are reported in terms of the top squark and neutralino masses and as a function of the branching fraction of t˜ → t χ˜_1^0. For a branching fraction of 100%, top squark masses in the range 270–645 GeV are excluded for χ˜_1^0 masses below 30 GeV. For a branching fraction of 50% to either t˜ → t χ˜_1^0 or t˜ → b χ˜_1^±, and assuming the χ˜_1^± mass to be twice the χ˜_1^0 mass, top squark masses in the range 250–550 GeV are excluded for χ˜_1^0 masses below 60 GeV.
Search for pair-produced long-lived neutral particles decaying to jets in the ATLAS hadronic calorimeter in pp collisions at √s = 8 TeV
The ATLAS detector at the Large Hadron Collider at CERN is used to search for the decay of a scalar boson to a pair of long-lived particles, neutral under the Standard Model gauge group, in 20.3 fb^-1 of data collected in proton–proton collisions at √s = 8 TeV. This search is sensitive to long-lived particles that decay to Standard Model particles producing jets at the outer edge of the ATLAS electromagnetic calorimeter or inside the hadronic calorimeter. No significant excess of events is observed. Limits are reported on the product of the scalar boson production cross section times the branching ratio into long-lived neutral particles as a function of the proper lifetime of the particles. Limits are reported for boson masses from 100 GeV to 900 GeV, and a long-lived neutral particle mass from 10 GeV to 150 GeV.