263,494 research outputs found
A Computable Economist’s Perspective on Computational Complexity
A computable economist's view of the world of computational complexity theory is described. This means the model of computation underpinning theories of computational complexity plays a central role. The emergence of computational complexity theories from diverse traditions is emphasised. The unifications that emerged in the modern era were codified by means of the notions of efficiency of computations, non-deterministic computations, completeness, reducibility and verifiability - the latter three concepts having their origins in what may be called 'Post's Program of Research for Higher Recursion Theory'. Approximations, computations and constructions are also emphasised. The recent real model of computation as a basis for studying computational complexity in the domain of the reals is also presented and discussed, albeit critically. A brief sceptical section on algorithmic complexity theory is included in an appendix.
On the Complexity of Random Quantum Computations and the Jones Polynomial
There is a natural relationship between Jones polynomials and quantum
computation. We use this relationship to show that the complexity of evaluating
relative-error approximations of Jones polynomials can be used to bound the
classical complexity of approximately simulating random quantum computations.
We prove that random quantum computations cannot be classically simulated up to
a constant total variation distance, under the assumption that (1) the
Polynomial Hierarchy does not collapse and (2) the average-case complexity of
relative-error approximations of the Jones polynomial matches the worst-case
complexity over a constant fraction of random links. Our results provide a
straightforward relationship between the approximation of Jones polynomials and
the complexity of random quantum computations.
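For reference, the two error notions this abstract relies on are standard; the symbols below are our own choices, not the paper's notation. A relative-error approximation $\tilde q$ of a quantity $q$ and the total variation distance between two output distributions $P$ and $Q$ are given by
\[
|\tilde q - q| \le \varepsilon\,|q|
\qquad\text{and}\qquad
d_{\mathrm{TV}}(P,Q) = \tfrac12 \sum_x \bigl|P(x) - Q(x)\bigr|.
\]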
Inverse, forward and other dynamic computations computationally optimized with sparse matrix factorizations
We propose an algorithm to compute the dynamics of articulated rigid-bodies
with different sensor distributions. Prior to the on-line computations, the
proposed algorithm performs an off-line optimisation step to reduce the
computational complexity of the underlying solution. This optimisation step
consists in formulating the dynamic computations as a system of linear
equations. The computational complexity of computing the associated solution is
reduced by performing a permuted LU-factorisation with off-line optimised
permutations. We apply our algorithm to solve classical dynamic problems:
inverse and forward dynamics. The computational complexity of the proposed
solution is compared to `gold standard' algorithms: the recursive Newton-Euler and
articulated-body algorithms. It is shown that our algorithm reduces the number
of floating point operations with respect to previous approaches. We also
evaluate the numerical complexity of our algorithm by performing tests on
dynamic computations for which no gold standard is available.
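To make the core idea concrete, here is a minimal sketch of solving a system of linear equations via a sparse permuted LU factorisation in SciPy. The matrix contents are placeholders and SciPy's COLAMD ordering merely stands in for the off-line optimised permutations described above; this is not the paper's actual formulation of the dynamics.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stand-in for the system of linear equations that collects the
# dynamic quantities; the paper's actual matrix structure is not
# reproduced here.
n = 6
rng = np.random.default_rng(0)
A = sp.random(n, n, density=0.4, random_state=rng, format="csc") + sp.eye(n, format="csc")
b = rng.standard_normal(n)

# splu computes a permuted LU factorisation; the fill-reducing COLAMD
# column permutation plays the role of the off-line optimised
# permutations. Factorising once is the (reusable) off-line step...
lu = spla.splu(A, permc_spec="COLAMD")

# ...and each on-line query then reduces to two cheap triangular solves.
x = lu.solve(b)
print(x)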
Statistical Pruning for Near-Maximum Likelihood Decoding
In many communications problems, maximum-likelihood (ML) decoding reduces to finding the closest (skewed) lattice point in N dimensions to a given point x ∈ ℂ^N. In its full generality, this problem is known to be NP-complete. Recently, the expected complexity of the sphere decoder, a particular algorithm that solves the ML problem exactly, has been computed. An asymptotic analysis of this complexity has also been done, where it is shown that the required computations grow exponentially in N for any fixed SNR. At the same time, numerical computations of the expected complexity show that there are certain ranges of rates, SNRs and dimensions N for which the expected computation (counted as the number of scalar multiplications) involves no more than N^3 computations. However, when the dimension of the problem grows too large, the required computations become prohibitively large, as expected from the asymptotic exponential complexity. In this paper, we propose an algorithm that, for large N, offers substantial computational savings over the sphere decoder, while maintaining performance arbitrarily close to ML. We statistically prune the search space to a subset that, with high probability, contains the optimal solution, thereby reducing the complexity of the search. Bounds on the error performance of the new method are proposed. The complexity of the new algorithm is analyzed through an upper bound. The asymptotic behavior of the upper bound for large N is also analyzed, showing that the upper bound is also exponential but much lower than that of the sphere decoder. Simulation results show that the algorithm is much more efficient than the original sphere decoder for smaller dimensions as well, and does not sacrifice much in terms of performance.
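The pruning idea can be illustrated with a small sketch: a depth-first sphere decoder in which the usual single search radius is replaced by per-level radii, so unpromising branches are dropped early. The increasing radius schedule and the {-1, +1} alphabet below are illustrative guesses, not the paper's statistically optimised choices; the whole routine is a toy for real-valued lattices.

import numpy as np

def sphere_decode(H, y, alphabet=(-1.0, 1.0), radii2=None):
    """Depth-first sphere decoder with per-level pruning radii."""
    N = H.shape[1]
    Q, R = np.linalg.qr(H)          # H = QR, R upper triangular
    z = Q.T @ y
    if radii2 is None:
        # Increasing per-level squared radii; a plain sphere decoder
        # would use one radius for every level. Placeholder schedule.
        radii2 = (np.linalg.norm(z) ** 2 + 1.0) * np.arange(1, N + 1) / N
    best_cost, best_s = np.inf, None
    s = np.zeros(N)

    def search(i, dist):
        nonlocal best_cost, best_s
        if i < 0:                    # all coordinates fixed: full point
            if dist < best_cost:
                best_cost, best_s = dist, s.copy()
            return
        level = N - i                # coordinates fixed once s[i] is set
        for a in alphabet:
            s[i] = a
            resid = z[i] - R[i, i:] @ s[i:]
            d = dist + resid ** 2
            # Prune branches whose partial distance already exceeds this
            # level's radius or the best full solution found so far.
            if d <= radii2[level - 1] and d < best_cost:
                search(i - 1, d)

    search(N - 1, 0.0)
    return best_s, best_cost

# Tiny usage example on a random channel.
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4))
s_true = rng.choice([-1.0, 1.0], size=4)
y = H @ s_true + 0.05 * rng.standard_normal(4)
print(sphere_decode(H, y))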
On the Distributed Complexity of Large-Scale Graph Computations
Motivated by the increasing need to understand the distributed algorithmic
foundations of large-scale graph computations, we study some fundamental graph
problems in a message-passing model for distributed computing where $k \geq 2$
machines jointly perform computations on graphs with $n$ nodes (typically, $n \gg k$). The input graph is assumed to be initially randomly partitioned among
the $k$ machines, a common implementation in many real-world systems.
Communication is point-to-point, and the goal is to minimize the number of
communication {\em rounds} of the computation.
Our main contribution is the {\em General Lower Bound Theorem}, a theorem
that can be used to show non-trivial lower bounds on the round complexity of
distributed large-scale data computations. The General Lower Bound Theorem is
established via an information-theoretic approach that relates the round
complexity to the minimal amount of information required by machines to solve
the problem. Our approach is generic and this theorem can be used in a
"cookbook" fashion to show distributed lower bounds in the context of several
problems, including non-graph problems. We present two applications by showing
(almost) tight lower bounds for the round complexity of two fundamental graph
problems, namely {\em PageRank computation} and {\em triangle enumeration}. Our
approach, as demonstrated in the case of PageRank, can yield tight lower bounds
for problems (including, and especially, under a stochastic partition of the
input) where communication complexity techniques are not obvious.
Our approach, as demonstrated in the case of triangle enumeration, can yield
stronger round lower bounds as well as message-round tradeoffs compared to
approaches that use communication complexity techniques.
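As a concrete picture of the model's input distribution, the sketch below implements the random vertex partition the abstract assumes: each vertex is assigned to one of the $k$ machines uniformly at random, and each machine learns the edges incident on its own vertices. The data layout is our own illustration, not code from the paper.

import random
from collections import defaultdict

def random_vertex_partition(edges, n, k, seed=0):
    """Assign each of the n vertices to one of k machines uniformly at
    random; a machine then knows exactly the edges incident on its own
    vertices. The dict-of-lists layout is our own illustration."""
    rng = random.Random(seed)
    home = [rng.randrange(k) for _ in range(n)]
    local = defaultdict(list)        # machine id -> edges it can see
    for u, v in edges:
        local[home[u]].append((u, v))
        if home[v] != home[u]:       # avoid storing an edge twice
            local[home[v]].append((u, v))
    return home, local

# Example: a 4-cycle on n = 4 vertices split across k = 2 machines.
print(random_vertex_partition([(0, 1), (1, 2), (2, 3), (3, 0)], n=4, k=2))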
Turing machines can be efficiently simulated by the General Purpose Analog Computer
The Church-Turing thesis states that any sufficiently powerful computational
model which captures the notion of algorithm is computationally equivalent to
the Turing machine. This equivalence usually holds both at a computability
level and at a computational complexity level modulo polynomial reductions.
However, the situation is less clear in what concerns models of computation
using real numbers, and no analog of the Church-Turing thesis exists for this
case. Recently it was shown that some models of computation with real numbers
were equivalent from a computability perspective. In particular it was shown
that Shannon's General Purpose Analog Computer (GPAC) is equivalent to
Computable Analysis. However, little is known about what happens at a
computational complexity level. In this paper we shed some light on the
connections between these two models at the computational complexity level, by
showing that, modulo polynomial reductions, computations of Turing machines can
be simulated by GPACs without using more (space) resources than
those used in the original Turing computation, as long as we restrict ourselves to
bounded computations. In other words, computations done by the GPAC are as
space-efficient as computations done in the context of Computable Analysis.
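For orientation, a standard characterisation of the GPAC (due to Graça and Costa; stated here for context, not quoted from this paper) is that a function is GPAC-generable exactly when it is a component of the solution of a polynomial initial-value problem
\[
y'(t) = p\bigl(y(t)\bigr), \qquad y(t_0) = y_0,
\]
where $y$ is vector-valued and each component of $p$ is a polynomial.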
- …