The Speedup Theorem in a Primitive Recursive Framework
Blum's speedup theorem is a major theorem in computational complexity, showing the existence of computable functions for which no optimal program can exist: for any speedup function r there exists a function f_r such that for any program computing f_r we can find an alternative program computing it with the desired speedup r. The main corollary is that algorithmic problems do not, in general, have an inherent complexity. Traditional proofs of the speedup theorem make essential use of Kleene's fixed point theorem to close a suitable diagonal argument. As a consequence, very little is known about its validity in subrecursive settings, where there is no universal machine and no fixed points. In this article we discuss an alternative, formal proof of the speedup theorem that allows us to avoid invoking the fixed point theorem and sheds more light on the actual complexity of the function f_r.
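For readers unfamiliar with the theorem, here is its standard textbook statement (Blum, 1967), written against an acceptable numbering of programs with an associated Blum complexity measure; this is the classical form, not necessarily the paper's own notation:

```latex
% Standard statement of Blum's speedup theorem (textbook form).
% $(\varphi_i)$ is an acceptable numbering of the partial computable
% functions and $(\Phi_i)$ an associated Blum complexity measure.
For every total computable $r \colon \mathbb{N}^2 \to \mathbb{N}$ there
exists a total computable $f_r \colon \mathbb{N} \to \{0,1\}$ such that
for every program $i$ with $\varphi_i = f_r$ there is a program $j$ with
$\varphi_j = f_r$ and
\[
  r\bigl(x, \Phi_j(x)\bigr) \;\le\; \Phi_i(x)
  \qquad \text{for almost every } x .
\]
```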
A Formal Framework for Speedup Learning from Problems and Solutions
Speedup learning seeks to improve the computational efficiency of problem
solving with experience. In this paper, we develop a formal framework for
learning efficient problem solving from random problems and their solutions. We
apply this framework to two different representations of learned knowledge,
namely control rules and macro-operators, and prove theorems that identify
sufficient conditions for learning in each representation. Our proofs are
constructive in that they are accompanied with learning algorithms. Our
framework captures both empirical and explanation-based speedup learning in a
unified fashion. We illustrate our framework with implementations in two
domains: symbolic integration and Eight Puzzle. This work integrates many
strands of experimental and theoretical work in machine learning, including
empirical learning of control rules, macro-operator learning, Explanation-Based
Learning (EBL), and Probably Approximately Correct (PAC) Learning.
Comment: See http://www.jair.org/ for any accompanying files.
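The paper's learning algorithms are specified formally; purely as a loose illustration of the macro-operator idea, here is a minimal frequency-based extractor in Python (the function name, corpus, and thresholds are invented for this sketch, not taken from the paper):

```python
from collections import Counter

def learn_macros(solutions, length=3, min_count=2):
    """Extract recurring action subsequences from example solutions.

    `solutions` is a list of solved problem traces, each a sequence of
    primitive operator names (e.g. 'up', 'left' for the Eight Puzzle).
    Subsequences of the given length that recur at least `min_count`
    times across the corpus are returned as macro-operators.
    """
    counts = Counter()
    for trace in solutions:
        for i in range(len(trace) - length + 1):
            counts[tuple(trace[i:i + length])] += 1
    return [list(macro) for macro, n in counts.items() if n >= min_count]

# Toy corpus of solution traces; recurring 3-move patterns become macros.
solutions = [
    ["up", "left", "down", "right", "up", "left", "down"],
    ["left", "up", "left", "down", "right", "right"],
]
print(learn_macros(solutions))
# [['up', 'left', 'down'], ['left', 'down', 'right']]
```

A real speedup learner would additionally check that a candidate macro actually reduces search effort, which is where sufficient conditions of the kind the paper proves come in.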
Multibody Multipole Methods
A three-body potential function can account for interactions among triples of
particles which are uncaptured by pairwise interaction functions such as
Coulombic or Lennard-Jones potentials. Likewise, a multibody potential of order
n can account for interactions among n-tuples of particles uncaptured by
interaction functions of lower orders. To date, the computation of multibody
potential functions for a large number of particles has not been possible due
to its O(N^n) scaling cost. In this paper we describe a fast tree-code for
efficiently approximating multibody potentials that can be factorized as
products of functions of pairwise distances. For the first time, we show how to
derive a Barnes-Hut type algorithm for handling interactions among more than
two particles. Our algorithm uses two approximation schemes: 1) a deterministic
series expansion-based method; 2) a Monte Carlo approximation based on
the central limit theorem. Our approach guarantees a user-specified bound on
the absolute or relative error in the computed potential with an asymptotic
probability guarantee. We provide speedup results on a three-body dispersion
potential, the Axilrod-Teller potential.
Comment: To appear in Journal of Computational Physics.
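The paper's tree code is far more involved; as a rough sketch of the second approximation scheme only, a Monte Carlo estimate of a triple sum with a CLT-style confidence interval might look like the following (the toy potential is a factorized product of inverse cubed distances, not the actual Axilrod-Teller form, and all names are ours):

```python
import math, random

def triple_energy(p1, p2, p3):
    """Toy factorized three-body term: a product of functions of the
    pairwise distances (the factorized shape the tree code exploits;
    not the full Axilrod-Teller formula)."""
    def inv_r3(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** -1.5
    return inv_r3(p1, p2) * inv_r3(p2, p3) * inv_r3(p1, p3)

def mc_three_body(points, samples=20000, z=1.96):
    """Monte Carlo estimate of the sum over all unordered triples,
    with a CLT-based confidence half-width (the probabilistic error
    guarantee in the abstract is of this flavor)."""
    n = len(points)
    total_triples = math.comb(n, 3)
    vals = []
    for _ in range(samples):
        i, j, k = random.sample(range(n), 3)  # uniform random triple
        vals.append(triple_energy(points[i], points[j], points[k]))
    mean = sum(vals) / samples
    var = sum((v - mean) ** 2 for v in vals) / (samples - 1)
    half_width = z * math.sqrt(var / samples) * total_triples
    return mean * total_triples, half_width

random.seed(0)
pts = [(random.random(), random.random(), random.random()) for _ in range(200)]
est, err = mc_three_body(pts)
print(f"estimate = {est:.3e} +/- {err:.3e}")
```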
Shared-Memory Parallel Maximal Clique Enumeration
We present shared-memory parallel methods for Maximal Clique Enumeration
(MCE) from a graph. MCE is a fundamental and well-studied graph analytics task,
and is a widely used primitive for identifying dense structures in a graph. Due
to its computationally intensive nature, parallel methods are imperative for
dealing with large graphs. However, surprisingly, there do not yet exist
scalable and parallel methods for MCE on a shared-memory parallel machine. In
this work, we present efficient shared-memory parallel algorithms for MCE, with
the following properties: (1) the parallel algorithms are provably
work-efficient relative to a state-of-the-art sequential algorithm; (2) the
algorithms have a provably small parallel depth, showing that they can scale to
a large number of processors, and (3) our implementations on a multicore
machine show good speedup and scaling behavior with an increasing number of
cores, and are substantially faster than prior shared-memory parallel
algorithms for MCE.
Comment: 10 pages, 3 figures, proceedings of the 25th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC), 2018.
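As a generic point of reference only (this is not the authors' method), the sketch below shows pivoting Bron-Kerbosch enumeration with a coarse split of the top level of the recursion tree across processes; names and the toy graph are invented for the example:

```python
from concurrent.futures import ProcessPoolExecutor

def bk_pivot(R, P, X, adj, out):
    """Bron-Kerbosch with pivoting: report maximal cliques extending
    clique R, with candidate set P and exclusion set X."""
    if not P and not X:
        out.append(sorted(R))
        return
    pivot = max(P | X, key=lambda u: len(P & adj[u]))  # prune via pivot
    for v in list(P - adj[pivot]):
        bk_pivot(R | {v}, P & adj[v], X & adj[v], adj, out)
        P = P - {v}
        X = X | {v}

def top_level(args):
    v, P, X, adj = args
    out = []
    bk_pivot({v}, P, X, adj, out)
    return out

def parallel_mce(adj, workers=4):
    """Split the root of the recursion tree across processes: each
    vertex v owns the cliques whose lowest-ordered vertex is v."""
    order = sorted(adj)  # a degeneracy order would tighten the bounds
    tasks, done = [], set()
    for v in order:
        tasks.append((v, adj[v] - done, adj[v] & done, adj))
        done.add(v)
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return [c for part in ex.map(top_level, tasks) for c in part]

if __name__ == "__main__":
    # Small graph with maximal cliques {0,1,2}, {1,2,3}, {2,3,4}.
    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
    adj = {v: set() for v in range(5)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    print(parallel_mce(adj))
```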
Quantum query complexity of entropy estimation
Estimation of Shannon and Rényi entropies of unknown discrete distributions
is a fundamental problem in statistical property testing and an active research
topic in both theoretical computer science and information theory. Tight bounds
on the number of samples to estimate these entropies have been established in
the classical setting, while little is known about their quantum counterparts.
In this paper, we give the first quantum algorithms for estimating
α-Rényi entropies (Shannon entropy being the 1-Rényi entropy). In
particular, we demonstrate a quadratic quantum speedup for Shannon entropy
estimation and a generic quantum speedup for α-Rényi entropy
estimation for all α ≥ 0, including a tight bound for the
collision entropy (2-Rényi entropy). We also provide quantum upper bounds for
extreme cases such as the Hartley entropy (i.e., the logarithm of the support
size of a distribution, corresponding to α = 0) and the min-entropy case
(i.e., α = +∞), as well as the Kullback-Leibler divergence between
two distributions. Moreover, we complement our results with quantum lower
bounds on α-Rényi entropy estimation for all α ≥ 0.
Comment: 43 pages, 1 figure.
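For reference, the quantities named above are defined as follows (standard definitions; the paper may use different normalizations):

```latex
% Standard definitions, for a distribution p = (p_1, ..., p_n).
\[
  H_\alpha(p) = \frac{1}{1-\alpha} \log \sum_{i=1}^{n} p_i^{\alpha},
  \qquad \alpha \ge 0,\ \alpha \ne 1,
\]
% with the limiting cases
\[
  H_1(p) = -\sum_i p_i \log p_i \ (\text{Shannon}), \quad
  H_0(p) = \log \lvert \{\, i : p_i > 0 \,\} \rvert \ (\text{Hartley}), \quad
  H_\infty(p) = -\log \max_i p_i \ (\text{min-entropy}).
\]
```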
Automatically Leveraging MapReduce Frameworks for Data-Intensive Applications
MapReduce is a popular programming paradigm for developing large-scale,
data-intensive computation. Many frameworks that implement this paradigm have
recently been developed. To leverage these frameworks, however, developers must
become familiar with their APIs and rewrite existing code. Casper is a new tool
that automatically translates sequential Java programs into the MapReduce
paradigm. Casper identifies potential code fragments to rewrite and translates
them in two steps: (1) Casper uses program synthesis to search for a program
summary (i.e., a functional specification) of each code fragment. The summary
is expressed using a high-level intermediate language resembling the MapReduce
paradigm and verified to be semantically equivalent to the original using a
theorem prover. (2) Casper generates executable code from the summary, using
either the Hadoop, Spark, or Flink API. We evaluated Casper by automatically
converting real-world, sequential Java benchmarks to MapReduce. The resulting
benchmarks perform up to 48.2x faster compared to the original.
Comment: 12 pages, additional 4 pages of references and appendix.
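To make the rewrite concrete: the kind of loop Casper targets and the map/reduce summary it synthesizes are, in spirit, related as in this Python sketch (Casper itself operates on Java source and emits Hadoop, Spark, or Flink code; the example and names here are ours):

```python
from functools import reduce

# Sequential fragment of the kind Casper targets (shown in Python
# for illustration; Casper itself rewrites Java source).
def total_length_sequential(words):
    total = 0
    for w in words:
        total += len(w)
    return total

# The "program summary" is essentially this map/reduce decomposition,
# which a backend can then emit as Hadoop, Spark, or Flink code. The
# decomposition is only sound because the reducer (+) is associative
# and commutative -- the kind of property the synthesized summary is
# verified against with a theorem prover.
def total_length_mapreduce(words):
    return reduce(lambda a, b: a + b, map(len, words), 0)

words = ["speedup", "learning", "mapreduce"]
assert total_length_sequential(words) == total_length_mapreduce(words)
print(total_length_mapreduce(words))  # 24
```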