Near-optimal small-depth lower bounds for small distance connectivity
We show that any depth-$d$ circuit for determining whether an $n$-node graph
has an $s$-to-$t$ path of length at most $k$ must have size
$n^{\Omega(k^{1/d}/d)}$. The previous best circuit size lower bounds for this
problem were $n^{k^{\exp(-O(d))}}$ (due to Beame, Impagliazzo, and Pitassi
[BIP98]) and $n^{\Omega((\log k)/d)}$ (following from a recent formula size
lower bound of Rossman [Ros14]). Our lower bound is quite close to optimal,
since a simple construction gives depth-$d$ circuits of size $n^{O(k^{2/d})}$
for this problem (and strengthening our bound even to $n^{k^{\Omega(1/d)}}$
would require proving that undirected connectivity is not in $NC^1$).
Our proof is by reduction to a new lower bound on the size of small-depth
circuits computing a skewed variant of the "Sipser functions" that have played
an important role in classical circuit lower bounds [Sip83, Yao85, H{\aa}s86].
A key ingredient in our proof of the required lower bound for these Sipser-like
functions is the use of \emph{random projections}, an extension of random
restrictions which were recently employed in [RST15]. Random projections allow
us to obtain sharper quantitative bounds while employing simpler arguments,
both conceptually and technically, than in the previous works [Ajt89, BPU92,
BIP98, Ros14].
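For contrast with the circuit lower bound, the underlying problem is algorithmically easy: a depth-limited BFS decides whether an s-to-t path of length at most k exists. A minimal Python sketch (the function name and graph encoding are illustrative, not from the paper):

```python
from collections import deque

def has_short_path(adj, s, t, k):
    """Return True iff the undirected graph (adjacency-list dict)
    contains an s-to-t path using at most k edges."""
    if s == t:
        return True
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue  # expanding u would exceed the length budget
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if v == t:
                    return True
                queue.append(v)
    return False

# A 5-node path graph 0-1-2-3-4: the only 0-to-4 path uses 4 edges.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(has_short_path(path, 0, 4, 4))  # True
print(has_short_path(path, 0, 4, 3))  # False
```

The hardness result above concerns computing this predicate with bounded-depth circuits, not with sequential algorithms.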
On connectivity-dependent resource requirements for digital quantum simulation of d-level particles
A primary objective of quantum computation is to efficiently simulate quantum
physics. Scientifically and technologically important quantum Hamiltonians
include those with spin-s, vibrational, photonic, and other bosonic degrees
of freedom, i.e. problems composed of or approximated by d-level particles
(qudits). Recently, several methods for encoding these systems into a set of
qubits have been introduced, where each encoding's efficiency was studied in
terms of qubit and gate counts. Here, we build on previous results by including
effects of hardware connectivity. To study the number of SWAP gates required to
Trotterize commonly used quantum operators, we use both analytical arguments
and automatic tools that optimize the schedule in multiple stages. We study the
unary (or one-hot), Gray, standard binary, and block unary encodings, with
three connectivities: linear array, ladder array, and square grid. Among other
trends, we find that while the ladder array leads to substantial efficiencies
over the linear array, the advantage of the square over the ladder array is
less pronounced. These results are applicable in hardware co-design and in
choosing efficient qudit encodings for a given set of near-term quantum
hardware. Additionally, this work may be relevant to the scheduling of other
quantum algorithms for which matrix exponentiation is a subroutine.
Comment: Accepted to QCE20 (IEEE Quantum Week). Corrected erroneous circuits in Figure
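To make the encodings concrete, here is a small Python sketch (not from the paper) of how level l of a d-level particle maps to qubit bitstrings under three of the studied encodings. The Gray code's defining property, that consecutive levels differ in exactly one bit, is what can reduce gate counts for operators coupling adjacent levels; block unary, the fourth encoding, interpolates between unary and binary and is omitted here:

```python
def standard_binary(level, d):
    """Standard binary: ceil(log2 d) qubits, the level written in base 2."""
    nq = max(1, (d - 1).bit_length())
    return format(level, f"0{nq}b")

def gray(level, d):
    """Gray code: consecutive levels differ in exactly one bit."""
    nq = max(1, (d - 1).bit_length())
    return format(level ^ (level >> 1), f"0{nq}b")

def unary(level, d):
    """Unary (one-hot): d qubits, a single 1 marking the occupied level."""
    return "".join("1" if i == level else "0" for i in range(d))

# Compare the three encodings for a d = 4 particle.
for lvl in range(4):
    print(lvl, standard_binary(lvl, 4), gray(lvl, 4), unary(lvl, 4))
```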
Streaming Complexity of Spanning Tree Computation
The semi-streaming model is a variant of the streaming model frequently used for the computation of graph problems. It allows the edges of an n-node input graph to be read sequentially in p passes using Õ(n) space. If the list of edges includes deletions, then the model is called the turnstile model; otherwise it is called the insertion-only model. In both models, some graph problems, such as spanning trees, k-connectivity, densest subgraph, degeneracy, cut-sparsifier, and (Δ+1)-coloring, can be exactly solved or (1+ε)-approximated in a single pass; while other graph problems, such as triangle detection and unweighted all-pairs shortest paths, are known to require Ω̃(n) passes to compute. For many fundamental graph problems, the tractability in these models is open. In this paper, we study the tractability of computing some standard spanning trees, including BFS, DFS, and maximum-leaf spanning trees. Our results, in both the insertion-only and the turnstile models, are as follows.
Maximum-Leaf Spanning Trees: This problem is known to be APX-complete with inapproximability constant ρ ∈ [245/244, 2). By constructing an ε-MLST sparsifier, we show that for every constant ε > 0, MLST can be approximated in a single pass to within a factor of 1+ε w.h.p. (albeit in super-polynomial time for ε ≤ ρ−1 assuming P ≠ NP) and can be approximated in polynomial time in a single pass to within a factor of ρ_n+ε w.h.p., where ρ_n is the supremum constant that MLST cannot be approximated to within using polynomial time and Õ(n) space. In the insertion-only model, these algorithms can be deterministic.
BFS Trees: It is known that BFS trees require ω(1) passes to compute, but the naïve approach needs O(n) passes. We devise a new randomized algorithm that reduces the pass complexity to O(√n), and it offers a smooth tradeoff between pass complexity and space usage. This gives a polynomial separation between single-source and all-pairs shortest paths for unweighted graphs.
DFS Trees: It is unknown whether DFS trees require more than one pass. The current best algorithm by Khan and Mehta [STACS 2019] takes Õ(h) passes, where h is the height of computed DFS trees. Note that h can be as large as Ω(m/n) for n-node m-edge graphs. Our contribution is twofold. First, we provide a simple alternative proof of this result, via a new connection to sparse certificates for k-node-connectivity. Second, we present a randomized algorithm that reduces the pass complexity to O(√n), and it also offers a smooth tradeoff between pass complexity and space usage.
ISSN:1868-896
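As context for why a plain spanning tree is single-pass tractable in the insertion-only model, here is the folklore union-find construction in Python. It is a sketch of the generic idea only, not the paper's MLST/BFS/DFS machinery, and it does not handle turnstile deletions:

```python
class DSU:
    """Union-find over n vertices; O(n) words of state, within the Õ(n) budget."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def spanning_forest_one_pass(n, edge_stream):
    """Single pass over the edge stream: keep an edge iff it merges two
    components; the kept edges form a spanning forest of the input."""
    dsu, forest = DSU(n), []
    for u, v in edge_stream:
        if dsu.union(u, v):
            forest.append((u, v))
    return forest
```

Any order of edge arrivals yields a valid spanning forest, which is what makes the insertion-only case easy; deletions are what force the sketching techniques of the turnstile model.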
Massively Parallel Algorithms for Distance Approximation and Spanners
Over the past decade, there has been increasing interest in
distributed/parallel algorithms for processing large-scale graphs. By now, we
have quite fast algorithms -- usually sublogarithmic-time and often
-time, or even faster -- for a number of fundamental graph
problems in the massively parallel computation (MPC) model. This model is a
widely-adopted theoretical abstraction of MapReduce style settings, where a
number of machines communicate in an all-to-all manner to process large-scale
data. Contributing to this line of work on MPC graph algorithms, we present
round MPC algorithms for computing
-spanners in the strongly sublinear regime of local memory. To
the best of our knowledge, these are the first sublogarithmic-time MPC
algorithms for spanner construction. As primary applications of our spanners,
we get two important implications, as follows:
-For the MPC setting, we get an -round algorithm for
approximation of all pairs shortest paths (APSP) in the
near-linear regime of local memory. To the best of our knowledge, this is the
first sublogarithmic-time MPC algorithm for distance approximations.
-Our result above also extends to the Congested Clique model of distributed
computing, with the same round complexity and approximation guarantee. This
gives the first sublogarithmic algorithm for approximating APSP in weighted
graphs in the Congested Clique model.
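For readers unfamiliar with spanners: an α-spanner is a sparse subgraph that preserves all pairwise distances up to a factor α. The classic sequential greedy construction below (due to Althöfer et al., shown only to illustrate the object; it is not the MPC algorithm of this paper) keeps an edge exactly when the subgraph built so far cannot already route between its endpoints within 2k−1 hops, which yields a (2k−1)-spanner:

```python
from collections import deque

def hop_distance(adj, s, t, cap):
    """BFS distance from s to t in the current subgraph, capped at `cap`."""
    if s == t:
        return 0
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if dist[u] >= cap:
            continue  # nodes at the cap need not be expanded
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                if v == t:
                    return dist[v]
                q.append(v)
    return float("inf")

def greedy_spanner(edges, k):
    """Greedy (2k-1)-spanner of an unweighted graph: keep an edge only if
    its endpoints are not already within 2k-1 hops in the kept subgraph."""
    adj, kept = {}, []
    for u, v in edges:
        if hop_distance(adj, u, v, 2 * k - 1) > 2 * k - 1:
            kept.append((u, v))
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    return kept
```

Since distances in the kept subgraph only shrink as edges are added, every dropped edge is spanned by a path of at most 2k−1 kept edges, giving the stretch guarantee.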
Communication Over a Wireless Network With Random Connections
A network of nodes in which pairs communicate over a shared wireless medium is analyzed. We consider the maximum total aggregate traffic flow possible as given by the number of users multiplied by their data rate. The model in this paper differs substantially from the many existing approaches in that the channel connections in this network are entirely random: rather than being governed by geometry and a decay-versus-distance law, the strengths of the connections between nodes are drawn independently from a common distribution. Such a model is appropriate for environments where the first-order effect that governs the signal strength at a receiving node is a random event (such as the existence of an obstacle), rather than the distance from the transmitter. It is shown that the aggregate traffic flow as a function of the number of nodes n is a strong function of the channel distribution. In particular, for certain distributions the aggregate traffic flow is at least n/(log n)^d for some d > 0, which is significantly larger than the O(√n) results obtained for many geometric models. The results provide guidelines for the connectivity that is needed for large aggregate traffic. The relation between the proposed model and existing distance-based models is shown in some cases.
Distributed Edge Connectivity in Sublinear Time
We present the first sublinear-time algorithm for a distributed
message-passing network to compute its edge connectivity λ exactly in
the CONGEST model, as long as there are no parallel edges. Our algorithm takes
Õ(n^{1-1/353} D^{1/353} + n^{1-1/706}) time to compute λ and a
cut of cardinality λ with high probability, where n and D are the
number of nodes and the diameter of the network, respectively, and Õ
hides polylogarithmic factors. This running time is sublinear in n
whenever D is. Previous sublinear-time
distributed algorithms can solve this problem either (i) exactly only when
the edge connectivity λ is small [Thurimella PODC'95; Pritchard, Thurimella, ACM
Trans. Algorithms'11; Nanongkai, Su, DISC'14] or (ii) approximately [Ghaffari,
Kuhn, DISC'13; Nanongkai, Su, DISC'14].
To achieve this we develop and combine several new techniques. First, we
design the first distributed algorithm that can compute a k-edge connectivity
certificate for any k in sublinear time.
Second, we show that by combining the recent distributed expander decomposition
technique of [Chang, Pettie, Zhang, SODA'19] with techniques from the
sequential deterministic edge connectivity algorithm of [Kawarabayashi, Thorup,
STOC'15], we can decompose the network into a sublinear number of clusters with
small average diameter and without any mincut separating a cluster (except the
`trivial' ones). Finally, by extending the tree packing technique from [Karger
STOC'96], we can find the minimum cut in time proportional to the number of
components. As a byproduct of this technique, we obtain an algorithm for
computing an exact minimum cut of weighted graphs.
Comment: Accepted at the 51st ACM Symposium on Theory of Computing (STOC 2019)
Dynamic and Multi-functional Labeling Schemes
We investigate labeling schemes supporting adjacency, ancestry, sibling, and
connectivity queries in forests. In the course of more than 20 years, the
existence of labeling schemes supporting each of these
functions was proven, with the most recent being ancestry [Fraigniaud and
Korman, STOC '10]. Several multi-functional labeling schemes also enjoy lower
or upper bounds; notably, an upper bound for adjacency+siblings and a lower
bound for each of the functions siblings, ancestry, and connectivity [Alstrup
et al., SODA '03]. We improve the constants hidden in the O-notation. In
particular, we show a lower bound for connectivity+ancestry and
connectivity+siblings, as well as an upper bound for
connectivity+adjacency+siblings, by altering existing methods.
In the context of dynamic labeling schemes, it is known that ancestry requires
labels of Ω(n) bits [Cohen et al., PODS '02]. In contrast, we show upper and
lower bounds on the label size for adjacency, siblings, and connectivity, both
individually and to support all three functions together. There exist efficient
adjacency labeling schemes for planar, bounded-treewidth, bounded-arboricity,
and interval graphs. In a dynamic setting, we show a lower bound for each of
those families.
Comment: 17 pages, 5 figures
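To illustrate what a labeling scheme is: an adjacency query must be answered from the two labels alone, with no access to the forest itself. A naive 2⌈log n⌉-bit scheme for forests (a textbook warm-up, far from the optimized constants discussed above):

```python
def make_labels(parent):
    """parent maps each node id to its parent id (None for roots).
    Each label packs (own id, parent id); roots repeat their own id."""
    return {v: (v, p if p is not None else v) for v, p in parent.items()}

def adjacent(lu, lv):
    """Decide adjacency in the forest from the two labels alone:
    the nodes are adjacent iff one is the other's parent."""
    return lu[0] != lv[0] and (lu[1] == lv[0] or lv[1] == lu[0])

# A rooted path 0 - 1 - 2 plus an isolated root 3.
labels = make_labels({0: None, 1: 0, 2: 1, 3: None})
```

The research question behind the bounds above is how far below 2 log n the label size can be pushed, per function and for combinations of functions.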
Statistical mechanics of the vertex-cover problem
We review recent progress in the study of the vertex-cover problem (VC). VC
belongs to the class of NP-complete graph theoretical problems, which plays a
central role in theoretical computer science. On ensembles of random graphs, VC
exhibits a coverable-uncoverable phase transition. Very close to this
transition, depending on the solution algorithm, easy-hard transitions in the
typical running time of the algorithms occur.
We explain a statistical mechanics approach, which works by mapping VC to a
hard-core lattice gas, and then applying techniques like the replica trick or
the cavity approach. Using these methods, the phase diagram of VC could be
obtained exactly for average connectivities c < e (with e the base of the
natural logarithm), where VC is replica symmetric. Recently, this result could
be confirmed using traditional mathematical techniques. For c > e, the
solution of VC exhibits full replica symmetry breaking.
The statistical mechanics approach can also be used to study analytically the
typical running time of simple complete and incomplete algorithms for VC.
Finally, we describe recent results for VC when studied on other ensembles of
finite- and infinite-dimensional graphs.
Comment: review article, 26 pages, 9 figures, to appear in J. Phys. A: Math. Gen.
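As a reminder of the combinatorial problem behind the physics: a vertex cover must touch every edge, and exact minimization can be organized around the classic branching observation that any edge forces at least one of its endpoints into the cover. A short exact solver in Python (exponential time, as expected for an NP-complete problem; illustrative only, not one of the algorithms analyzed in the review):

```python
def min_vertex_cover_size(edges):
    """Exact minimum vertex-cover size by branching: for any uncovered
    edge (u, v), every cover must contain u or v, so try both."""
    for u, v in edges:
        # Found an uncovered edge: branch on which endpoint joins the cover.
        without_u = [(a, b) for a, b in edges if u not in (a, b)]
        without_v = [(a, b) for a, b in edges if v not in (a, b)]
        return 1 + min(min_vertex_cover_size(without_u),
                       min_vertex_cover_size(without_v))
    return 0  # no edges left: the empty set covers everything

print(min_vertex_cover_size([(0, 1), (0, 2), (1, 2)]))  # triangle needs 2
```

On sparse random graphs, the running time of such complete branching algorithms is exactly where the easy-hard transitions discussed in the review appear.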