
    Parallel Complexity of Random Boolean Circuits

    Random instances of feedforward Boolean circuits are studied both analytically and numerically. Evaluating these circuits is known to be a P-complete problem and thus, in the worst case, is believed to be impossible to perform, even on a massively parallel computer, in time much less than the depth of the circuit. Nonetheless, it is found that for some ensembles of random circuits, saturation to a fixed truth value occurs rapidly, so that evaluation of the circuit can be accomplished in much less parallel time than the depth of the circuit. For other ensembles saturation does not occur and circuit evaluation is apparently hard. In particular, for some random circuits composed of connectives with five or more inputs, the number of true outputs at each level is a chaotic sequence. Finally, while the average-case complexity depends on the choice of ensemble, it is shown that for all ensembles it is possible to simultaneously construct a typical circuit together with its solution in polylogarithmic parallel time.
    Comment: 16 pages, 10 figures, matches published version
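    To make the level-by-level picture concrete, here is a minimal Python sketch that evaluates one random layered circuit and records the number of true gates at each level, the quantity whose saturation (or chaotic wandering) the abstract describes. The layered ensemble, the AND gates, and all names are illustrative assumptions rather than the paper's construction.

```python
import random

def random_layered_circuit(width, depth, fan_in, rng=random):
    """Illustrative layered ensemble: each gate at level t reads `fan_in` wires
    chosen uniformly at random from level t-1."""
    return [[[rng.randrange(width) for _ in range(fan_in)]
             for _ in range(width)]
            for _ in range(depth)]

def true_counts(circuit, inputs, gate=all):
    """Evaluate the circuit level by level (here every gate is AND via `all`;
    pass `any` for OR) and return the number of true gates at each level."""
    values, counts = list(inputs), []
    for level in circuit:
        values = [gate(values[i] for i in wires) for wires in level]
        counts.append(sum(values))
    return counts

# Example: a width-64, depth-100 circuit of 2-input AND gates on balanced
# random inputs; the per-level counts typically saturate quickly.
inputs = [random.random() < 0.5 for _ in range(64)]
print(true_counts(random_layered_circuit(64, 100, 2), inputs)[:10])
```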

    A Quasi-Random Approach to Matrix Spectral Analysis

    Inspired by the quantum computing algorithms for Linear Algebra problems [HHL, TaShma], we study how the simulation on a classical computer of this type of "Phase Estimation algorithm" performs when we apply it to solve the Eigen-Problem of Hermitian matrices. The result is a completely new, efficient and stable parallel algorithm to compute an approximate spectral decomposition of any Hermitian matrix. The algorithm can be implemented by Boolean circuits in $O(\log^2 n)$ parallel time with a total cost of $O(n^{\omega+1})$ Boolean operations. This Boolean complexity matches the best known rigorous $O(\log^2 n)$ parallel-time algorithms, but unlike those algorithms ours is (logarithmically) stable, so further improvements may lead to practical implementations. All previous efficient and rigorous approaches to the Eigen-Problem use randomization to avoid bad conditioning, as do we. Our algorithm makes further use of randomization in a completely new way, taking random powers of a unitary matrix to randomize the phases of its eigenvalues. Proving that a tiny Gaussian perturbation and a random polynomial power are sufficient to ensure almost pairwise independence of the phases $\pmod{2\pi}$ is the main technical contribution of this work. This randomization enables us, given a Hermitian matrix with well-separated eigenvalues, to sample a random eigenvalue and produce an approximate eigenvector in $O(\log^2 n)$ parallel time and $O(n^\omega)$ Boolean complexity. We conjecture that further improvements of our method can provide a stable solution to the full approximate spectral decomposition problem with complexity similar (up to a logarithmic factor) to that of sampling a single eigenvector.
    Comment: Replacing previous version: the parallel algorithm runs in total complexity $n^{\omega+1}$ and not $n^\omega$. However, the depth of the implementing circuit is $\log^2(n)$, hence comparable to the fastest known eigen-decomposition algorithms
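    A toy sketch of the phase-randomization idea described above, assuming a dense Hermitian input and serial NumPy/SciPy arithmetic: a tiny Gaussian perturbation is added, the matrix is exponentiated into a unitary, and that unitary is raised to a random power so the eigenphases are scrambled modulo $2\pi$. It is not the paper's $O(\log^2 n)$-depth Boolean circuit, and every name and parameter here is an assumption.

```python
import numpy as np
from scipy.linalg import expm

def randomized_unitary_power(H, max_power=1000, sigma=1e-6, seed=None):
    """Toy sketch of the phase-randomization step: add a tiny Gaussian (Hermitian)
    perturbation to H, exponentiate into a unitary U = exp(iH'), and raise U to a
    random power k, so each eigenphase becomes k * lambda (mod 2*pi)."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    E = rng.normal(scale=sigma, size=(n, n)) + 1j * rng.normal(scale=sigma, size=(n, n))
    Hp = H + (E + E.conj().T) / 2              # keep the perturbation Hermitian
    U = expm(1j * Hp)                          # unitary whose phases are the eigenvalues of Hp
    k = int(rng.integers(1, max_power + 1))    # random power
    return np.linalg.matrix_power(U, k), k

# Example with a random 8x8 Hermitian matrix (serial illustration only).
A = np.random.randn(8, 8) + 1j * np.random.randn(8, 8)
H = (A + A.conj().T) / 2
Uk, k = randomized_unitary_power(H)
phases = np.angle(np.linalg.eigvals(Uk))       # randomized eigenphases in (-pi, pi]
```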

    Complexity, parallel computation and statistical physics

    The intuition that a long history is required for the emergence of complexity in natural systems is formalized using the notion of depth. The depth of a system is defined in terms of the number of parallel computational steps needed to simulate it. Depth provides an objective, irreducible measure of history applicable to systems of the kind studied in statistical physics. It is argued that physical complexity cannot occur in the absence of substantial depth and that depth is a useful proxy for physical complexity. The ideas are illustrated for a variety of systems in statistical physics.
    Comment: 21 pages, 7 figures

    Pseudorandomness for Approximate Counting and Sampling

    We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption: we show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent. We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM. We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
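    The hashing technique invoked above is classical (in the spirit of Stockmeyer-style approximate counting); the sketch below, given an explicit witness set, estimates its size by testing whether a random affine hash over GF(2) maps some witness to zero. It illustrates only the hashing idea, not the paper's derandomized primitives, and the function names and parameters are hypothetical.

```python
import random

def hash_hits_zero(witnesses, n, m, rng=random):
    """One trial: draw a random affine map h(x) = Ax + b over GF(2) from n bits
    to m bits (a pairwise-independent family) and report whether some witness
    is mapped to the all-zero string."""
    A = [[rng.getrandbits(1) for _ in range(n)] for _ in range(m)]
    b = [rng.getrandbits(1) for _ in range(m)]
    for x in witnesses:
        bits = [(x >> i) & 1 for i in range(n)]
        if all((sum(a * v for a, v in zip(row, bits)) + bb) % 2 == 0
               for row, bb in zip(A, b)):
            return True
    return False

def approx_count(witnesses, n, trials=50):
    """Rough estimate of |witnesses|: about 2**m for the largest m at which a
    random hash to m bits still usually sends some witness to zero."""
    best = 0
    for m in range(1, n + 1):
        if sum(hash_hits_zero(witnesses, n, m) for _ in range(trials)) > trials // 2:
            best = m
        else:
            break
    return 2 ** best

# Example: ~1000 "witnesses" in {0,1}^20 should give an estimate near 2**10.
S = set(random.sample(range(2 ** 20), 1000))
print(approx_count(S, 20))
```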

    Internal Diffusion-Limited Aggregation: Parallel Algorithms and Complexity

    The computational complexity of internal diffusion-limited aggregation (DLA) is examined from both a theoretical and a practical point of view. We show that for two or more dimensions, the problem of predicting the cluster from a given set of paths is complete for the complexity class CC, the subset of P characterized by circuits composed of comparator gates. CC-completeness is believed to imply that, in the worst case, growing a cluster of size n requires polynomial time in n even on a parallel computer. A parallel relaxation algorithm is presented that uses the fact that clusters are nearly spherical to guess the cluster from a given set of paths, and then corrects defects in the guessed cluster through a non-local annihilation process. The parallel running time of the relaxation algorithm for two-dimensional internal DLA is studied by simulating it on a serial computer. The numerical results are compatible with a running time that is either polylogarithmic in n or a small power of n. Thus the computational resources needed to grow large clusters are significantly less on average than the worst-case analysis would suggest. For a parallel machine with k processors, we show that random clusters in d dimensions can be generated in O((n/k + log k) n^{2/d}) steps. This is a significant speedup over explicit sequential simulation, which takes O(n^{1+2/d}) time on average. Finally, we show that in one dimension internal DLA can be predicted in O(log n) parallel time, and so is in the complexity class NC.
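    For context, a minimal sketch of the explicit sequential simulation that the parallel algorithms above are measured against: each particle random-walks from the origin and occupies the first site outside the current cluster. Names and parameters are illustrative; the parallel relaxation algorithm itself is not reproduced here.

```python
import random

def internal_dla(num_particles, rng=random):
    """Explicit sequential internal DLA in two dimensions: each particle performs
    a random walk from the origin until it first reaches a site not yet in the
    cluster, then occupies that site."""
    cluster = {(0, 0)}
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(num_particles - 1):
        x, y = 0, 0
        while (x, y) in cluster:
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
        cluster.add((x, y))
    return cluster

# Example: grow a 500-site cluster; it should look roughly disc-shaped.
print(len(internal_dla(500)))   # 500
```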

    A Near-Optimal Depth-Hierarchy Theorem for Small-Depth Multilinear Circuits

    We study the size blow-up that is necessary to convert an algebraic circuit of product-depth $\Delta+1$ to one of product-depth $\Delta$ in the multilinear setting. We show that for every positive $\Delta = \Delta(n) = o(\log n/\log \log n)$, there is an explicit multilinear polynomial $P^{(\Delta)}$ on $n$ variables that can be computed by a multilinear formula of product-depth $\Delta+1$ and size $O(n)$, but not by any multilinear circuit of product-depth $\Delta$ and size less than $\exp(n^{\Omega(1/\Delta)})$. This result is tight up to the constant implicit in the double exponent for all $\Delta = o(\log n/\log \log n)$. This strengthens a result of Raz and Yehudayoff (Computational Complexity 2009), who prove a quasipolynomial separation for constant-depth multilinear circuits, and a result of Kayal, Nair and Saha (STACS 2016), who give an exponential separation in the case $\Delta = 1$. Our separating examples may be viewed as algebraic analogues of variants of the Graph Reachability problem studied by Chen, Oliveira, Servedio and Tan (STOC 2016), who used them to prove lower bounds for constant-depth Boolean circuits.

    The Computational Complexity of the Lorentz Lattice Gas

    The Lorentz lattice gas is studied from the perspective of computational complexity theory. It is shown that using massive parallelism, particle trajectories can be simulated in a time that scales logarithmically in the length of the trajectory. This result characterizes the "logical depth" of the Lorentz lattice gas and allows us to compare it to other models in statistical physics.
    Comment: 9 pages, LaTeX, to appear in J. Stat. Phys.
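    A small sketch of the kind of step-by-step trajectory the abstract refers to, assuming one common set of scattering rules (left/right mirrors that rotate the velocity by 90 degrees); the paper's contribution is that such a trajectory can be computed in parallel time logarithmic in its length, which this serial loop does not attempt.

```python
import random

def lorentz_trajectory(scatterers, num_steps, start=(0, 0), velocity=(1, 0)):
    """Step-by-step simulation of one particle on the square lattice. `scatterers`
    maps sites to 'L' or 'R'; a left (right) scatterer rotates the velocity 90
    degrees counter-clockwise (clockwise). These rules are one common variant of
    the Lorentz lattice gas, used here only for illustration."""
    (x, y), (vx, vy) = start, velocity
    path = [(x, y)]
    for _ in range(num_steps):
        kind = scatterers.get((x, y))
        if kind == 'L':
            vx, vy = -vy, vx          # rotate left
        elif kind == 'R':
            vx, vy = vy, -vx          # rotate right
        x, y = x + vx, y + vy
        path.append((x, y))
    return path

# Example: scatterers placed at random on a 50x50 patch of the lattice.
mirrors = {(i, j): random.choice('LR')
           for i in range(-25, 25) for j in range(-25, 25)
           if random.random() < 0.3}
print(lorentz_trajectory(mirrors, 20)[-1])
```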

    Generating Probability Distributions using Multivalued Stochastic Relay Circuits

    The problem of random number generation dates back to von Neumann's work in 1951. Since then, many algorithms have been developed for generating unbiased bits from complex correlated sources as well as for generating arbitrary distributions from unbiased bits. An equally interesting, but less studied, aspect is the structural component of random number generation as opposed to the algorithmic aspect. That is, given a network structure imposed by nature or physical devices, how can we build networks that generate arbitrary probability distributions in an optimal way? In this paper, we study the generation of arbitrary probability distributions in multivalued relay circuits, a generalization in which relays can take on any of N states and the logical 'and' and 'or' are replaced with 'min' and 'max', respectively. Previous work was done on two-state relays. We generalize these results, describing a duality property and networks that generate arbitrary rational probability distributions. We prove that these networks are robust to errors and design a universal probability generator which takes input bits and outputs arbitrary binary probability distributions.
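    A minimal sketch of the two-state case that the abstract generalizes: relays close independently with some probability, series composition behaves like 'and' (min) and parallel composition like 'or' (max), and the closure probabilities compose accordingly. The networks shown are hypothetical illustrations, not the paper's optimal constructions.

```python
import random

def relay(p, rng=random):
    """A single two-state stochastic relay: closed (1) with probability p, open (0)."""
    return 1 if rng.random() < p else 0

def series(*branches):
    """Series composition conducts only if every branch conducts: 'and', i.e. min."""
    return min(branches)

def parallel(*branches):
    """Parallel composition conducts if any branch conducts: 'or', i.e. max.
    In the multivalued generalization, relays take one of N states and these
    two operations remain min and max."""
    return max(branches)

# With two fair relays, series conducts with probability 1/4 and parallel with 3/4.
trials = 10000
est = sum(parallel(relay(0.5), relay(0.5)) for _ in range(trials)) / trials
print(est)   # ~0.75
```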

    The Computational Complexity of Generating Random Fractals

    In this paper we examine a number of models that generate random fractals. The models are studied using the tools of computational complexity theory from the perspective of parallel computation. Diffusion-limited aggregation and several widely used algorithms for equilibrating the Ising model are shown to be highly sequential; it is unlikely they can be simulated efficiently in parallel. This is in contrast to Mandelbrot percolation, which can be simulated in constant parallel time. Our research helps shed light on the intrinsic complexity of these models relative to each other and to different growth processes that have recently been studied using complexity theory. In addition, the results may serve as a guide to simulation physics.
    Comment: 28 pages, LATEX, 8 Postscript figures available from [email protected]
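    For contrast with the highly sequential growth models, a short sketch of Mandelbrot percolation under the usual recursive-subdivision rules (parameters and names assumed for illustration): because every cell's retention is an independent coin flip, the levels can in principle be generated simultaneously.

```python
import random

def mandelbrot_percolation(levels, p, b=2, rng=random):
    """Fractal (Mandelbrot) percolation: repeatedly split each retained square
    into b x b subsquares and keep each subsquare independently with probability
    p. Returns the retained cells at the finest level as (row, col) pairs."""
    cells = {(0, 0)}
    for _ in range(levels):
        cells = {(b * r + dr, b * c + dc)
                 for (r, c) in cells
                 for dr in range(b)
                 for dc in range(b)
                 if rng.random() < p}   # independent coin per candidate subsquare
    return cells

# Example: 6 levels of subdivision with retention probability 0.8.
print(len(mandelbrot_percolation(6, 0.8)))
```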