Fast optimization algorithms and the cosmological constant
Denef and Douglas have observed that in certain landscape models the problem
of finding small values of the cosmological constant is a large instance of an
NP-hard problem. The number of elementary operations (quantum gates) needed to
solve this problem by brute force search exceeds the estimated computational
capacity of the observable universe. Here we describe a way out of this
puzzling circumstance: despite being NP-hard, the problem of finding a small
cosmological constant can be attacked by more sophisticated algorithms whose
performance vastly exceeds brute force search. In fact, in some parameter
regimes the average-case complexity is polynomial. We demonstrate this by
explicitly finding a small cosmological constant in a randomly generated high-dimensional ADK landscape.
Comment: 19 pages, 5 figures
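The landscape search described above is, at its core, a huge subset-sum-style optimization. As a hedged illustration (this is not the authors' algorithm; `closest_vacuum` and the uniform `charges` are invented for this sketch), even a textbook meet-in-the-middle search already beats brute force, doing O(2^(d/2)) work instead of 2^d:

```python
import bisect
import random

def subset_sums(charges):
    """All 2^k subset sums of a list of k flux charges."""
    sums = [0.0]
    for q in charges:
        sums += [s + q for s in sums]
    return sums

def closest_vacuum(charges, target):
    """Meet-in-the-middle: split the charges in half, enumerate the subset
    sums of each half, and combine the halves via binary search --
    O(2^(d/2) * d) work instead of the O(2^d) of brute-force enumeration."""
    mid = len(charges) // 2
    left = subset_sums(charges[:mid])
    right = sorted(subset_sums(charges[mid:]))
    best = float('inf')
    for s in left:
        i = bisect.bisect_left(right, target - s)
        for j in (i - 1, i):
            if 0 <= j < len(right):
                best = min(best, abs(s + right[j] - target))
    return best

# Toy "landscape": d random charges; hunt for a vacuum energy near zero
# by cancelling half the total charge (all names here are illustrative).
random.seed(0)
charges = [random.uniform(0.5, 1.5) for _ in range(24)]
residual = closest_vacuum(charges, target=sum(charges) / 2)
```

The point is only that "NP-hard" does not force brute force; the algorithms in the paper exploit the structure of the landscape much further.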
Improved bounds for testing Dyck languages
In this paper we consider the problem of deciding membership in Dyck
languages, a fundamental family of context-free languages, comprised of
well-balanced strings of parentheses. In this problem we are given a string of length n over an alphabet of k types of parentheses, and must decide if it is
well-balanced. We consider this problem in the property testing setting, where
one would like to make the decision while querying as few characters of the
input as possible.
Property testing of strings for Dyck language membership for k = 1, with a number of queries independent of the input size n, was provided in [Alon, Krivelevich, Newman and Szegedy, SICOMP 2001]. Property testing of strings for Dyck language membership for k >= 2 was first investigated in [Parnas, Ron and Rubinfeld, RSA 2003]. They showed an upper bound and a lower bound for distinguishing strings belonging to the language from strings that are far (in terms of the Hamming distance) from the language, which are respectively (up to polylogarithmic factors) the 2/3 power and the 1/11 power of the input size n.
Here we improve the power of n in both bounds. For the upper bound, we introduce a recursion technique that, together with a refinement of the methods in the original work, provides a test for any power of n larger than 2/5. For the lower bound, we introduce a new problem called Truestring Equivalence, which is easily reducible to the 2-type Dyck language property testing problem. For this new problem, we show a lower bound of n to the power of 1/5.
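For contrast with the sublinear-query regime the abstract studies, exact Dyck membership is a linear-time stack scan, and even a trivial sampling test can certify gross imbalance. A minimal sketch (the sampling tester below is a naive illustration, not the paper's construction):

```python
import random

PAIRS = {')': '(', ']': '['}

def is_dyck(s):
    """Exact membership test for the 2-type Dyck language (linear time,
    reads every character -- the opposite of the property-testing regime)."""
    stack = []
    for ch in s:
        if ch in '([':
            stack.append(ch)
        elif not stack or stack.pop() != PAIRS[ch]:
            return False
    return not stack

def sample_balance_test(query, n, q=500, seed=1):
    """Naive sublinear check: every string in a Dyck language has exactly
    n/2 opening brackets, so a large empirical imbalance across q random
    queries witnesses being far from the language. This catches only one
    necessary condition; real testers are far more sophisticated."""
    rng = random.Random(seed)
    opens = sum(query(rng.randrange(n)) in '([' for _ in range(q))
    return abs(opens / q - 0.5) < 0.1
```

The hard part, and the subject of the bounds above, is distinguishing far-from-balanced strings that pass such crude local statistics.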
Improving Table Compression with Combinatorial Optimization
We study the problem of compressing massive tables within the
partition-training paradigm introduced by Buchsbaum et al. [SODA'00], in which
a table is partitioned by an off-line training procedure into disjoint
intervals of columns, each of which is compressed separately by a standard,
on-line compressor like gzip. We provide a new theory that unifies previous
experimental observations on partitioning and heuristic observations on column
permutation, all of which are used to improve compression rates. Based on the
theory, we devise the first on-line training algorithms for table compression,
which can be applied to individual files, not just continuously operating
sources; and also a new, off-line training algorithm, based on a link to the
asymmetric traveling salesman problem, which improves on prior work by
rearranging columns prior to partitioning. We demonstrate these results
experimentally. On various test files, the on-line algorithms provide 35-55%
improvement over gzip with negligible slowdown; the off-line reordering
provides up to 20% further improvement over partitioning alone. We also show
that a variation of the table compression problem is MAX-SNP hard.
Comment: 22 pages, 2 figures, 5 tables, 23 references. Extended abstract appears in Proc. 13th ACM-SIAM SODA, pp. 213-222, 2002
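The ATSP connection can be made concrete with a small sketch. Assuming a "cost of following column a with column b" measured as the marginal compressed size under zlib (a stand-in for gzip; the nearest-neighbour tour below is a generic ATSP heuristic, not the paper's algorithm):

```python
import zlib

def follow_cost(a, b):
    """Asymmetric cost of placing column b right after column a: the extra
    compressed bytes b contributes once the compressor has already seen a."""
    return len(zlib.compress(a + b)) - len(zlib.compress(a))

def greedy_order(columns):
    """Nearest-neighbour ATSP heuristic over columns: start from column 0
    and repeatedly append the cheapest-to-follow remaining column."""
    remaining = list(range(1, len(columns)))
    order = [0]
    while remaining:
        last = columns[order[-1]]
        nxt = min(remaining, key=lambda j: follow_cost(last, columns[j]))
        remaining.remove(nxt)
        order.append(nxt)
    return order

# Columns with similar content compress better when placed adjacently.
cols = [b'aaaaaaaa' * 50, b'abcdefgh' * 50, b'aaaabbbb' * 50]
order = greedy_order(cols)
packed = zlib.compress(b''.join(cols[i] for i in order))
```

Reordering happens before partitioning, so a good tour lets the subsequent interval partition group compatible columns together.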
On optimally partitioning a text to improve its compression
In this paper we investigate the problem of partitioning an input string T in
such a way that compressing individually its parts via a base-compressor C gets
a compressed output that is shorter than applying C over the entire T at once.
This problem was introduced in the context of table compression, and then
further elaborated and extended to strings and trees. Unfortunately, the
literature offers poor solutions: namely, we know either a cubic-time algorithm
for computing the optimal partition based on dynamic programming, or few
heuristics that do not guarantee any bounds on the efficacy of their computed
partition, or algorithms that are efficient but work in some specific scenarios
(such as the Burrows-Wheeler Transform) and achieve compression performance that might be worse than that of the optimal partitioning by a multiplicative factor. Therefore, computing the optimal solution efficiently is still open. In this paper we provide the first algorithm which is guaranteed to compute in O(n \log_{1+\eps}n) time a partition of T whose compressed output is guaranteed to be no more than (1+\eps)-worse than the optimal one, where \eps may be any positive constant.
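The cubic-time dynamic program mentioned above is simple to state. In the sketch below, zlib stands in for the base compressor C (boundary effects of real compressors are ignored): opt[j] holds the best total compressed size over all partitions of T[:j], there are O(n^2) splits to try, and each costs a compression call, hence roughly cubic time overall.

```python
import zlib

def optimal_partition(T, C=zlib.compress):
    """Dynamic program for the optimal partition of T under base
    compressor C: opt[j] = min over i < j of opt[i] + |C(T[i:j])|."""
    n = len(T)
    opt = [0] + [float('inf')] * n
    cut = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = opt[i] + len(C(T[i:j]))
            if c < opt[j]:
                opt[j], cut[j] = c, i
    parts, j = [], n          # recover the partition boundaries
    while j > 0:
        parts.append((cut[j], j))
        j = cut[j]
    return opt[n], parts[::-1]

size, parts = optimal_partition(b'aaaaaaaabbbbbbbb' * 4)
```

The contribution of the paper is to replace this exhaustive scan with an approximation computable in O(n \log_{1+\eps}n) time.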
Substituting Quantum Entanglement for Communication
We show that quantum entanglement can be used as a substitute for
communication when the goal is to compute a function whose input data is
distributed among remote parties. Specifically, we show that, for a particular
function among three parties (each of which possesses part of the function's
input), a prior quantum entanglement enables one of them to learn the value of
the function with only two bits of communication occurring among the parties,
whereas, without quantum entanglement, three bits of communication are
necessary. This result contrasts with the well-known fact that quantum entanglement cannot be used to simulate communication among remote parties.
Comment: 4 pages REVTeX, no figures. Minor correction
Dynamics of quantum adiabatic evolution algorithm for Number Partitioning
We have developed a general technique to study the dynamics of the quantum
adiabatic evolution algorithm applied to random combinatorial optimization
problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with a slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation on the power of quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.
Comment: 32 pages, 5 figures, 3 Appendices; List of additions compared to v.3:
(i) numerical solution of the stationary Schroedinger equation for the
adiabatic eigenstates and eigenvalues; (ii) connection between the scaling
law of the minimum gap with the problem size and the shape of the
coarse-grained distribution of the adiabatic eigenvalues at the
avoided-crossing point
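The "conditional energy distribution" at the heart of the analysis is easy to reproduce on a toy instance. In the sketch below the names and the squared-discrepancy cost E = (sum_i a_i s_i)^2 are illustrative choices: enumerate every configuration reachable from an ancestor by simultaneously flipping exactly m spins and record its energy.

```python
import itertools
import random

def energy(numbers, spins):
    """Number Partitioning cost of a +/-1 spin configuration:
    the squared signed discrepancy (sum_i a_i * s_i)^2."""
    return sum(a * s for a, s in zip(numbers, spins)) ** 2

def conditional_energies(numbers, spins, m):
    """Energies of all C(n, m) descendants obtained from the ancestor
    `spins` by simultaneously flipping exactly m spins -- the conditional
    distribution whose ancestor-dependence the paper analyzes."""
    out = []
    for idx in itertools.combinations(range(len(spins)), m):
        child = list(spins)
        for i in idx:
            child[i] = -child[i]
        out.append(energy(numbers, child))
    return sorted(out)

random.seed(2)
numbers = [random.random() for _ in range(10)]
ancestor = [random.choice((-1, 1)) for _ in range(10)]
dist = conditional_energies(numbers, ancestor, m=3)
```

On random instances, histograms of such distributions for ancestors of equal energy illustrate the paper's claim that the distribution depends on the ancestor only through an energy-related parameter.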
On the Distributed Complexity of Large-Scale Graph Computations
Motivated by the increasing need to understand the distributed algorithmic
foundations of large-scale graph computations, we study some fundamental graph
problems in a message-passing model for distributed computing where k machines jointly perform computations on graphs with n nodes (typically, n >> k). The input graph is assumed to be initially randomly partitioned among the k machines, a common implementation in many real-world systems.
Communication is point-to-point, and the goal is to minimize the number of
communication {\em rounds} of the computation.
Our main contribution is the {\em General Lower Bound Theorem}, a theorem
that can be used to show non-trivial lower bounds on the round complexity of
distributed large-scale data computations. The General Lower Bound Theorem is
established via an information-theoretic approach that relates the round
complexity to the minimal amount of information required by machines to solve
the problem. Our approach is generic and this theorem can be used in a
"cookbook" fashion to show distributed lower bounds in the context of several
problems, including non-graph problems. We present two applications by showing
(almost) tight lower bounds for the round complexity of two fundamental graph
problems, namely {\em PageRank computation} and {\em triangle enumeration}. Our
approach, as demonstrated in the case of PageRank, can yield tight lower bounds
for problems (including, and especially, under a stochastic partition of the
input) where communication complexity techniques are not obvious.
Our approach, as demonstrated in the case of triangle enumeration, can yield
stronger round lower bounds as well as message-round tradeoffs compared to
approaches that use communication complexity techniques.
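A toy simulation makes the model concrete. Assuming the random vertex partition described above (helper names are invented, and counting cross-machine edges is only a crude proxy for the communication the lower bounds actually charge), the sketch below enumerates triangles and tallies how much of the input straddles machine boundaries:

```python
import itertools
import random

def kmachine_triangles(edges, n, k, seed=0):
    """Toy k-machine model: each vertex is assigned to a uniformly random
    machine (the 'random vertex partition'), and each machine is made
    responsible for the triangles closed at its own vertices. Returns the
    triangle set and the number of edges whose endpoints live on different
    machines -- data that must cross machine boundaries at least once."""
    rng = random.Random(seed)
    home = {v: rng.randrange(k) for v in range(n)}
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    triangles = set()
    for v in range(n):  # machine home[v] handles vertex v
        for u, w in itertools.combinations(sorted(adj[v]), 2):
            if w in adj[u]:
                triangles.add(tuple(sorted((u, v, w))))
    cross = sum(home[u] != home[v] for u, v in edges)
    return triangles, cross
```

The General Lower Bound Theorem formalizes what such simulations only hint at: how many rounds are unavoidable given the information each machine must learn.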