    Communication Complexity and Intrinsic Universality in Cellular Automata

    The notions of universality and completeness are central in the theories of computation and computational complexity. However, proving lower bounds and necessary conditions remains hard in most cases. In this article, we introduce necessary conditions for a cellular automaton to be "universal", according to a precise notion of simulation related both to the dynamics of cellular automata and to their computational power. This notion of simulation relies on simple operations of space-time rescaling and is intrinsic to the model of cellular automata. Intrinsic universality, the derived notion, is stronger than Turing universality, but more uniform and easier to define and study. Our approach builds upon the notion of communication complexity, which was primarily designed to study parallel programs and thus is, as we show in this article, particularly well suited to the study of cellular automata: it allowed us to show, by studying natural problems on the dynamics of cellular automata, that several classes of cellular automata, as well as many natural (elementary) examples, cannot be intrinsically universal.
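    As a point of reference (not taken from the article), the sketch below sets up the kind of object involved: a one-dimensional elementary cellular automaton is updated synchronously, and each configuration is split into a left and a right half, the two parts that a two-party communication-complexity argument would hand to the two players. The rule number, lattice size and number of steps are arbitrary choices for illustration.

    # Illustrative sketch: evolve an elementary cellular automaton (rule 110,
    # chosen arbitrarily) and split each configuration into the two halves a
    # communication-complexity argument would assign to the two parties.
    def step(config, rule=110):
        """One synchronous update of a 1-D binary CA with periodic boundaries."""
        n = len(config)
        return [(rule >> (4 * config[(i - 1) % n] + 2 * config[i] + config[(i + 1) % n])) & 1
                for i in range(n)]

    config = [0] * 8 + [1] + [0] * 7          # a single live cell on a ring of 16 cells
    for _ in range(4):                        # a few time steps
        left, right = config[:8], config[8:]  # the two parties' halves of the configuration
        config = step(config)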

    On the Value of Partial Information for Learning from Examples

    The PAC model of learning and its extension to real-valued function classes provides a well-accepted theoretical framework for representing the problem of learning a target function g(x) using a random sample {(x_i, g(x_i))}_{i=1}^{m}. Based on the uniform strong law of large numbers, the PAC model establishes the sample complexity, i.e., the sample size m which is sufficient for accurately estimating the target function to within high confidence. Often, in addition to a random sample, some form of prior knowledge is available about the target. It is intuitive that increasing the amount of information should have the same effect on the error as increasing the sample size. But quantitatively, how does the rate of error with respect to increasing information compare to the rate of error with increasing sample size? To answer this we consider a new approach based on a combination of the information-based complexity of Traub et al. and Vapnik–Chervonenkis (VC) theory. In contrast to VC theory, where function classes of finite pseudo-dimension are used only for statistical estimation, we let such classes play a dual role of functional estimation as well as approximation. This is captured in a newly introduced quantity, ρ_d(F), which represents a nonlinear width of a function class F. We then extend the notion of the nth minimal radius of information and define a quantity I_{n,d}(F) which measures the minimal approximation error of the worst-case target g ∈ F by the family of function classes having pseudo-dimension d, given partial information on g consisting of values taken by n linear operators. The error rates are calculated, which leads to a quantitative notion of the value of partial information for the paradigm of learning from examples.
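    For orientation (a standard bound from VC theory, not the paper's refined quantities ρ_d(F) or I_{n,d}(F)): for a function class of pseudo-dimension d and accuracy/confidence parameters ε and δ, a sample of size on the order of

    m \;=\; O\!\left( \frac{1}{\epsilon^{2}} \left( d \log\frac{1}{\epsilon} + \log\frac{1}{\delta} \right) \right)

    suffices for uniform estimation; the question raised above is how the error decreases when part of such a sample is replaced by the values of n linear operators applied to the target.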

    Uniform Diagonalization Theorem for Complexity Classes of Promise Problems including Randomized and Quantum Classes

    Diagonalization in the spirit of Cantor's diagonal arguments is a widely used tool in theoretical computer science to obtain structural results about computational problems and complexity classes by indirect proofs. The Uniform Diagonalization Theorem allows the construction of problems outside complexity classes while still being reducible to a specific decision problem. This paper provides a generalization of the Uniform Diagonalization Theorem by extending it to promise problems and the complexity classes they form, e.g. randomized and quantum complexity classes. The theorem requires from the underlying computing model not only the decidability of its acceptance and rejection behaviour but also of its promise-contradicting indifferent behaviour - a property that we will introduce as "total decidability" of promise problems. Implications of the Uniform Diagonalization Theorem are mainly of two kinds: 1. the existence of intermediate problems (e.g. between BQP and QMA) - also known as Ladner's Theorem - and 2. the undecidability of whether a problem of a complexity class is contained in a subclass (e.g. membership of a QMA problem in BQP). Like the original Uniform Diagonalization Theorem, the extension applies, besides BQP and QMA, to a large variety of complexity class pairs, including combinations of deterministic, randomized and quantum classes. Comment: 15 pages.
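    For context, the classical (non-promise) form of the first kind of implication is Ladner's theorem, stated here for the familiar classes rather than the promise classes treated in the paper:

    \mathrm{P} \neq \mathrm{NP} \;\Longrightarrow\; \exists\, L \in \mathrm{NP} \setminus \mathrm{P} \text{ such that } L \text{ is not NP-complete.}

    The paper's extension yields analogous intermediate-problem statements for promise-class pairs such as BQP and QMA.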

    Representation Learning for Clustering: A Statistical Framework

    We address the problem of communicating domain knowledge from a user to the designer of a clustering algorithm. We propose a protocol in which the user provides a clustering of a relatively small random sample of a data set. The algorithm designer then uses that sample to come up with a data representation under which k-means clustering results in a clustering (of the full data set) that is aligned with the user's clustering. We provide a formal statistical model for analyzing the sample complexity of learning a clustering representation with this paradigm. We then introduce a notion of capacity of a class of possible representations, in the spirit of the VC-dimension, showing that classes of representations that have finite such dimension can be successfully learned with sample-size error bounds, and end our discussion with an analysis of that dimension for classes of representations induced by linear embeddings. Comment: To be published in Proceedings of UAI 201
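    A minimal sketch of the protocol described above, under assumptions the paper does not fix: scikit-learn is available, and a linear discriminant projection fitted to the user's sample clustering serves as the learned linear embedding (one simple member of the class of linear embeddings mentioned in the abstract, not the paper's algorithm).

    # Sketch: the user clusters a small sample; a linear embedding is fitted to
    # that sample; k-means is then run on the full data set in the learned space.
    # LDA is used here only as one simple way to fit a linear embedding.
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def cluster_with_learned_representation(X_full, X_sample, sample_labels, k):
        embed = LinearDiscriminantAnalysis(n_components=min(k - 1, X_full.shape[1]))
        embed.fit(X_sample, sample_labels)       # learn the embedding from the user's sample clustering
        Z = embed.transform(X_full)              # represent the full data set in the learned space
        return KMeans(n_clusters=k, n_init=10).fit_predict(Z)  # cluster the full data set

    The statistical question studied in the paper is how large the clustered sample must be for the clustering returned by such a pipeline to be aligned with the user's intended clustering of the full data set.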

    Levelable Sets and the Algebraic Structure of Parameterizations

    Asking which sets are fixed-parameter tractable for a given parameterization constitutes much of the current research in parameterized complexity theory. This approach faces some of the core difficulties in complexity theory. By focusing instead on the parameterizations that make a given set fixed-parameter tractable, we circumvent these difficulties. We isolate parameterizations as independent measures of complexity and study their underlying algebraic structure. Thus we are able to compare parameterizations, which establishes a hierarchy of complexity that is much stronger than that present in typical parameterized algorithms races. Among other results, we find that no practically fixed-parameter tractable sets have optimal parameterizations.
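    For reference, the standard notion the abstract builds on: a parameterized problem (Q, κ) is fixed-parameter tractable when membership of an instance x in Q can be decided in time

    f(\kappa(x)) \cdot |x|^{O(1)} \quad \text{for some computable } f.

    The paper reverses the usual question: it fixes the set Q and studies the algebraic structure of the parameterizations κ that make Q fixed-parameter tractable.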

    Error Bounds for Piecewise Smooth and Switching Regression

    The paper deals with regression problems in which the nonsmooth target is assumed to switch between different operating modes. Specifically, piecewise smooth (PWS) regression considers target functions switching deterministically via a partition of the input space, while switching regression considers arbitrary switching laws. The paper derives generalization error bounds in these two settings by following the approach based on Rademacher complexities. For PWS regression, our derivation involves a chaining argument and a decomposition of the covering numbers of PWS classes in terms of those of their component functions and the capacity of the classifier partitioning the input space. This yields error bounds with a radical dependency on the number of modes. For switching regression, the decomposition can be performed directly at the level of the Rademacher complexities, which yields bounds with a linear dependency on the number of modes. By using chaining once more and a decomposition at the level of covering numbers, we show how to recover a radical dependency. Examples of applications are given, in particular for PWS and switching regression with linear and kernel-based component functions. Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
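    As background (the standard Rademacher-complexity generalization bound, not the paper's mode-dependent bounds): for a class F of functions taking values in [0, 1] and an i.i.d. sample z_1, ..., z_m, with probability at least 1 − δ every f ∈ F satisfies

    \mathbb{E}[f(z)] \;\le\; \frac{1}{m}\sum_{i=1}^{m} f(z_i) \;+\; 2\,\mathfrak{R}_m(F) \;+\; \sqrt{\frac{\ln(1/\delta)}{2m}},
    \qquad
    \mathfrak{R}_m(F) \;=\; \mathbb{E}_{z,\sigma}\!\left[ \sup_{f \in F} \frac{1}{m}\sum_{i=1}^{m} \sigma_i f(z_i) \right],

    with σ_i independent uniform random signs. The bounds above amount to controlling this complexity for piecewise smooth and switching classes, tracking how it grows with the number of modes.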