    Statistical Learning of Arbitrary Computable Classifiers

    Statistical learning theory chiefly studies restricted hypothesis classes, particularly those with finite Vapnik-Chervonenkis (VC) dimension. The fundamental quantity of interest is the sample complexity: the number of samples required to learn to a specified level of accuracy. Here we consider learning over the set of all computable labeling functions. Since the VC-dimension is infinite and a priori (uniform) bounds on the number of samples are impossible, we let the learning algorithm decide when it has seen sufficient samples to have learned. We first show that learning in this setting is indeed possible, and develop a learning algorithm. We then show, however, that bounding sample complexity independently of the distribution is impossible. Notably, this impossibility is entirely due to the requirement that the learning algorithm be computable, and not due to the statistical nature of the problem. Comment: Expanded the section on prior work and added a reference.
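
    To make the idea of letting the learner decide its own stopping time concrete, the sketch below runs an Occam-style consistency rule over a small enumerated hypothesis class (threshold classifiers). The hypothesis class, the stopping rule, and all parameter values are illustrative assumptions; this is only a toy in the spirit of the setting, not the paper's algorithm for arbitrary computable classifiers.

        import math
        import random

        # Toy stand-in for an enumerated hypothesis class: threshold classifiers on 0..99.
        # A real enumeration of all computable classifiers cannot be scanned like this;
        # the class, the stopping rule, and the parameters below are illustrative only.
        def hypothesis(k):
            return lambda x: 1 if x >= k else 0

        def consistent(h, data):
            return all(h(x) == y for x, y in data)

        def learner(draw, epsilon=0.1, delta=0.05):
            """Keep requesting samples; accept the k-th enumerated hypothesis only once
            it is consistent with n >= ln(2(k+1)^2 / delta) / epsilon samples.
            The number of samples used is decided by the data, not fixed in advance."""
            data = []
            while True:
                data.append(draw())                  # request one more labeled sample
                n = len(data)
                for k in range(n):                   # scan a growing prefix of the enumeration
                    needed = math.log(2 * (k + 1) ** 2 / delta) / epsilon
                    if n >= needed and consistent(hypothesis(k), data):
                        return hypothesis(k), n

        # Example: the unknown target is the threshold-at-42 classifier.
        target = hypothesis(42)

        def draw():
            x = random.randrange(100)
            return x, target(x)

        h, used = learner(draw)
        print("samples requested before stopping:", used)

    The stopping time here depends on the target and on the samples actually drawn, which is the kind of data-dependent sample complexity the abstract describes; the paper's negative result is that no computable learner over all computable classifiers can bound this quantity independently of the distribution.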

    Robust Stochastic Chemical Reaction Networks and Bounded Tau-Leaping

    The behavior of some stochastic chemical reaction networks is largely unaffected by slight inaccuracies in reaction rates. We formalize the robustness of state probabilities to reaction rate deviations, and describe a formal connection between robustness and efficiency of simulation. Without robustness guarantees, stochastic simulation seems to require computational time proportional to the total number of reaction events. Even if the concentration (molecular count per volume) stays bounded, the number of reaction events can be linear in the duration of simulated time and total molecular count. We show that the behavior of robust systems can be predicted with computational work that scales linearly with the duration of simulated time and concentration, and only polylogarithmically with the total molecular count. Thus our asymptotic analysis captures the dramatic speedup when molecular counts are large, and shows that for bounded concentrations the computation time is essentially invariant with molecular count. Finally, by noticing that even robust stochastic chemical reaction networks are capable of embedding complex computational problems, we argue that the linear dependence on simulated time and concentration is likely optimal.
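
    As background for the efficiency claim, the sketch below is a minimal exact stochastic simulation (Gillespie's direct method) of a hypothetical two-reaction network; each loop iteration simulates one reaction event, which is why the cost of exact simulation grows with the total number of reaction events. The network, rate constants, volume, and time horizon are illustrative assumptions, and this is not the paper's bounded tau-leaping algorithm.

        import random

        # Exact stochastic simulation (Gillespie's direct method) of a hypothetical
        # two-reaction network:  X -> 2X  and  2X -> X.
        # One reaction event is simulated per loop iteration, so running time is
        # proportional to the total number of reaction events.
        def ssa(x, k1, k2, volume, t_end):
            t, events = 0.0, 0
            while True:
                a1 = k1 * x                      # propensity of X -> 2X
                a2 = k2 * x * (x - 1) / volume   # propensity of 2X -> X
                a0 = a1 + a2
                if a0 == 0:
                    break
                t += random.expovariate(a0)      # exponential waiting time to the next event
                if t > t_end:
                    break
                if random.random() * a0 < a1:    # choose a reaction with probability a_i / a0
                    x += 1
                else:
                    x -= 1
                events += 1
            return x, events

        final_x, n_events = ssa(x=10_000, k1=1.0, k2=1.0, volume=10_000.0, t_end=5.0)
        print("final molecular count:", final_x, "| reaction events simulated:", n_events)

    Doubling either the simulated time or the molecular count (at fixed concentration) roughly doubles the number of iterations in this sketch, which is the linear-in-events cost that the robustness-based analysis in the abstract improves upon.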

    Stable Leader Election in Population Protocols Requires Linear Time

    A population protocol *stably elects a leader* if, for all $n$, starting from an initial configuration with $n$ agents each in an identical state, with probability 1 it reaches a configuration $\mathbf{y}$ that is correct (exactly one agent is in a special leader state $\ell$) and stable (every configuration reachable from $\mathbf{y}$ also has a single agent in state $\ell$). We show that any population protocol that stably elects a leader requires $\Omega(n)$ expected "parallel time" (that is, $\Omega(n^2)$ expected total pairwise interactions) to reach such a stable configuration. Our result also informs the understanding of the time complexity of chemical self-organization by showing an essential difficulty in generating exact quantities of molecular species quickly. Comment: accepted to Distributed Computing special issue of invited papers from DISC 2015; significantly revised proof structure and intuitive explanation.
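
    For comparison with the lower bound, the simulation below runs the textbook protocol in which every agent starts as a leader and, whenever two leaders interact, one becomes a follower; its parallel time (interactions divided by $n$) grows roughly linearly in $n$. The protocol and the experiment are illustrative, not the paper's construction, and serve only to show a matching upper-bound behavior empirically.

        import random

        # Textbook leader-election protocol under a uniformly random scheduler:
        # all n agents start as leaders; when two leaders interact, one survives.
        # Parallel time = (number of pairwise interactions) / n, as in the abstract.
        # This simple protocol is illustrative and is not the paper's argument.
        def leader_election_parallel_time(n):
            is_leader = [True] * n
            leaders, interactions = n, 0
            while leaders > 1:
                a, b = random.sample(range(n), 2)   # scheduler picks a random pair of agents
                interactions += 1
                if is_leader[a] and is_leader[b]:   # two leaders meet: one drops out
                    is_leader[b] = False
                    leaders -= 1
            return interactions / n

        for n in (100, 200, 400):
            trials = [leader_election_parallel_time(n) for _ in range(10)]
            print(f"n = {n:3d}: average parallel time ~ {sum(trials) / len(trials):.0f}")

    The measured averages grow roughly proportionally to $n$ (for this toy protocol the expected parallel time works out to $(n-1)^2/n$), consistent with the $\Omega(n)$ expected parallel time that the paper proves is unavoidable for stable leader election.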

    Approximate Self-Assembly of the Sierpinski Triangle

    The Tile Assembly Model is a Turing-universal model that Winfree introduced in order to study the nanoscale self-assembly of complex (typically aperiodic) DNA crystals. Winfree exhibited a self-assembly that tiles the first quadrant of the Cartesian plane with specially labeled tiles appearing at exactly the positions of points in the Sierpinski triangle. More recently, Lathrop, Lutz, and Summers proved that the Sierpinski triangle cannot self-assemble in the "strict" sense in which tiles are not allowed to appear at positions outside the target structure. Here we investigate the strict self-assembly of sets that approximate the Sierpinski triangle. We show that every set that does strictly self-assemble disagrees with the Sierpinski triangle on a set with fractal dimension at least that of the Sierpinski triangle (roughly 1.585), and that no subset of the Sierpinski triangle with fractal dimension greater than 1 strictly self-assembles. We show that our bounds are tight, even when restricted to supersets of the Sierpinski triangle, by presenting a strict self-assembly that adds communication fibers to the fractal structure without disturbing it. To verify this strict self-assembly we develop a generalization of the local determinism method of Soloveichik and Winfree.
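
    For intuition about the pattern itself, the sketch below generates the discrete Sierpinski triangle with the XOR rule behind Winfree's tile set: seed the first row and column with 1s and fill every interior cell with the XOR of its west and south neighbors. It reproduces only the bit pattern and the dimension figure of roughly 1.585; it does not model tile attachment order, glue strengths, or the fiber construction used for the strict self-assembly in the paper.

        import math

        # XOR rule behind Winfree's Sierpinski tile set: the seed row and column carry 1s,
        # and each interior position holds the XOR of its west and south neighbors.
        # The positions holding a 1 form the discrete Sierpinski triangle.
        def sierpinski(k):
            n = 2 ** k
            grid = [[0] * n for _ in range(n)]
            for i in range(n):
                grid[i][0] = grid[0][i] = 1                       # seed row and column
            for y in range(1, n):
                for x in range(1, n):
                    grid[y][x] = grid[y][x - 1] ^ grid[y - 1][x]  # XOR of west and south
            return grid

        for k in range(2, 8):
            n = 2 ** k
            ones = sum(map(sum, sierpinski(k)))
            # exactly 3**k of the n*n positions hold a 1, so log(ones)/log(n) equals
            # log2(3), roughly 1.585, at every scale
            print(f"n = {n:4d}   1-tiles = {ones:6d}   log(ones)/log(n) = {math.log(ones) / math.log(n):.3f}")

    Because the $2^k \times 2^k$ corner contains exactly $3^k$ positions holding a 1, the ratio $\log(3^k)/\log(2^k) = \log_2 3 \approx 1.585$ at every scale, matching the fractal dimension quoted in the abstract.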