    The Unsupervised Acquisition of a Lexicon from Continuous Speech

    We present an unsupervised learning algorithm that acquires a natural-language lexicon from raw speech. The algorithm is based on the optimal encoding of symbol sequences in an MDL framework, and uses a hierarchical representation of language that overcomes many of the problems that have stymied previous grammar-induction procedures. The forward mapping from symbol sequences to the speech stream is modeled using features based on articulatory gestures. We present results on the acquisition of lexicons and language models from raw speech, text, and phonetic transcripts, and demonstrate that our algorithm compares very favorably to other reported results with respect to segmentation performance and statistical efficiency.
    Comment: 27-page technical report.
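
    As a rough illustration of the underlying MDL idea (a toy two-part code over text, not the paper's algorithm; the function names and the crude 27-symbol spelling cost are our own assumptions), a candidate lexicon is scored by the bits needed to spell out its entries plus the bits needed to encode the corpus as a sequence of those entries:

    ```python
    # Toy MDL scoring for lexicon induction (illustrative only, not the
    # paper's algorithm): description length = cost of spelling out the
    # lexicon + cost of encoding the corpus as lexicon tokens.
    import math
    from collections import Counter

    def segment(corpus, lexicon):
        """Greedy longest-match segmentation; unmatched characters pass through."""
        out, i = [], 0
        words = sorted(lexicon, key=len, reverse=True)
        while i < len(corpus):
            match = next((w for w in words if corpus.startswith(w, i)), corpus[i])
            out.append(match)
            i += len(match)
        return out

    def description_length(corpus, lexicon):
        tokens = Counter(segment(corpus, lexicon))
        total = sum(tokens.values())
        corpus_bits = -sum(c * math.log2(c / total) for c in tokens.values())
        spelling_bits = sum(len(w) * math.log2(27) for w in tokens)  # ~27 symbols
        return spelling_bits + corpus_bits

    # A multi-word entry like "thedog" pays its spelling cost only if it
    # compresses the corpus enough; on a tiny corpus it usually does not:
    corpus = "thedogthecatthedog"
    print(description_length(corpus, {"the", "dog", "cat"}))
    print(description_length(corpus, {"thedog", "the", "cat"}))
    ```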

    An Inductive Inference Bibliography

    How to Handle Assumptions in Synthesis

    The increased interest in reactive synthesis over the last decade has led to many improved solutions, but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met, and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.
    Comment: In Proceedings SYNT 2014, arXiv:1407.493.
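
    For readers new to the problem, here is a standard illustration (our own, not taken from the paper) of why the naive implication encoding can fall short; the formula and signal names are assumptions for the example:

    ```latex
    % Naive encoding: fold assumption A and guarantee G into one objective.
    \[ \varphi \;=\; A \rightarrow G \]
    % \varphi can be satisfied by falsifying A instead of delivering G.
    % A contrived but minimal instance: if A mentions the system output g, say
    \[ A \;=\; \mathbf{G}\,\mathbf{F}\,\neg g
       \qquad \text{(the grant line is lowered infinitely often),} \]
    % then the strategy ``assert g forever'' violates A, so \varphi holds
    % vacuously for any guarantee G.
    ```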

    Bypass transition and spot nucleation in boundary layers

    The spatio-temporal aspects of the transition to turbulence are considered in the case of a boundary layer flow developing above a flat plate exposed to free-stream turbulence. Combining results on the receptivity to free-stream turbulence with the nonlinear concept of a transition threshold, a physically motivated model suggests a spatial distribution of spot nucleation events. To describe the evolution of turbulent spots, a probabilistic cellular automaton is introduced, with all parameters directly fitted from numerical simulations of the boundary layer. The nucleation rates are then combined with the cellular automaton model, yielding excellent quantitative agreement with the statistical characteristics for different free-stream turbulence levels. We thus show how the recent theoretical progress on transitional wall-bounded flows can be extended to the much wider class of spatially developing boundary-layer flows.
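
    A minimal probabilistic cellular automaton of this flavor is easy to sketch (the update rules and rates below are illustrative placeholders; the paper fits its parameters directly from boundary-layer simulations):

    ```python
    # Generic 1D probabilistic cellular automaton for spot spreading:
    # laminar cells next to a turbulent cell ignite with p_spread, turbulent
    # cells relaminarize with p_decay, and nucleation seeds new spots.
    import numpy as np

    rng = np.random.default_rng(0)

    def step(state, p_spread=0.3, p_decay=0.05, p_nucleate=1e-4):
        """state: boolean array, True = turbulent cell (periodic boundaries)."""
        nbr = np.roll(state, 1) | np.roll(state, -1)   # any turbulent neighbour?
        spread = ~state & nbr & (rng.random(state.size) < p_spread)
        decay = state & (rng.random(state.size) < p_decay)
        nucleate = ~state & (rng.random(state.size) < p_nucleate)
        return (state | spread | nucleate) & ~decay

    state = np.zeros(512, dtype=bool)
    for _ in range(2000):
        state = step(state)
    print("turbulent fraction:", state.mean())
    ```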

    Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience

    Physical symbol systems are needed for open-ended cognition. A good way to understand physical symbol systems is by comparison of thought to chemistry: both have systematicity, productivity, and compositionality. The state of the art in cognitive architectures for open-ended cognition is critically assessed. I conclude that a cognitive architecture that evolves symbol structures in the brain is a promising candidate to explain open-ended cognition. Part 2 of the paper presents such a cognitive architecture.
    Comment: Darwinian Neurodynamics. Submitted as a two-part paper to Living Machines 2013, Natural History Museum, London.

    Lazy Probabilistic Model Checking without Determinisation

    The bottleneck in the quantitative analysis of Markov chains and Markov decision processes against specifications given in LTL, or as some form of nondeterministic Büchi automata, is the inclusion of a determinisation step for the automaton under consideration. In this paper, we show that full determinisation can be avoided: subset and breakpoint constructions suffice. We have implemented our approach, in both explicit and symbolic versions, in a prototype tool. Our experiments show that our prototype can compete with mature tools like PRISM.
    Comment: 38 pages. Updated version with the following changes: general improvements to the presentation; extension of the approach to avoid full determinisation; proofs for this extension; additional case studies; and updates to the old case studies to reflect the extension.
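
    As a loose sketch of the first ingredient, here is a plain subset construction over a nondeterministic transition relation (the breakpoint construction, which additionally tracks visits to accepting states, is omitted; the encoding and names are our own):

    ```python
    # Subset construction: determinise a nondeterministic transition relation
    # by tracking the set of states the automaton could currently be in.
    def subset_construction(delta, init, alphabet):
        """delta maps (state, letter) -> set of successor states."""
        start = frozenset(init)
        states, todo, trans = {start}, [start], {}
        while todo:
            S = todo.pop()
            for a in alphabet:
                T = frozenset(q for s in S for q in delta.get((s, a), ()))
                trans[(S, a)] = T
                if T not in states:
                    states.add(T)
                    todo.append(T)
        return states, trans

    # Toy nondeterministic automaton over {a, b}:
    delta = {(0, 'a'): {0, 1}, (1, 'b'): {1}, (0, 'b'): {0}}
    states, trans = subset_construction(delta, {0}, 'ab')
    print(len(states), "subset states")   # exponential only in the worst case
    ```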

    Synthesizing Systems with Optimal Average-Case Behavior for Ratio Objectives

    We show how to automatically construct a system that satisfies a given logical specification and has optimal average behavior with respect to a specification with ratio costs. When synthesizing a system from a logical specification, it is often the case that several different systems satisfy the specification, and it is usually not easy for the user to state formally which system she prefers. Prior work proposed to rank the correct systems by adding a quantitative aspect to the specification. A desired preference relation can be expressed with (i) a quantitative language, which is a function assigning a value to every possible behavior of a system, and (ii) an environment model defining the desired optimization criterion of the system, e.g., worst-case or average-case optimal. In this paper, we show how to synthesize a system that is optimal for (i) a quantitative language given by an automaton with a ratio cost function, and (ii) an environment model given by a labeled Markov decision process. The objective of the system is to minimize the expected (ratio) costs. The solution is based on a reduction to Markov decision processes with ratio cost functions that do not require the costs in the denominator to be strictly positive. We find an optimal strategy for these using a fractional linear program.
    Comment: In Proceedings iWIGP 2011, arXiv:1102.374.
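
    The flavor of program involved can be sketched as a generic fractional LP over occupation measures (a textbook formulation; the paper's actual construction differs, notably in allowing denominator costs that are not strictly positive, which this sketch assumes away):

    ```latex
    % Occupation-measure LP for a long-run ratio objective in an MDP
    % (generic textbook form; assumes the expected denominator is positive).
    \begin{align*}
      \text{minimize}\quad
        & \frac{\sum_{s,a} x_{s,a}\, c(s,a)}{\sum_{s,a} x_{s,a}\, d(s,a)} \\
      \text{subject to}\quad
        & \sum_{a} x_{s,a} \;=\; \sum_{s',a'} P(s \mid s',a')\, x_{s',a'}
          \quad \forall s, \\
        & \sum_{s,a} x_{s,a} = 1, \qquad x_{s,a} \ge 0.
    \end{align*}
    % The Charnes-Cooper substitution y_{s,a} = t\,x_{s,a}, with t chosen so
    % that \sum_{s,a} y_{s,a}\, d(s,a) = 1, turns the ratio into an ordinary
    % linear objective \sum_{s,a} y_{s,a}\, c(s,a).
    ```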

    An Introduction to Quantum Computing for Non-Physicists

    Richard Feynman's observation that quantum mechanical effects could not be simulated efficiently on a computer led to speculation that computation in general could be done more efficiently if it used quantum effects. This speculation appeared justified when Peter Shor described a polynomial-time quantum algorithm for factoring integers. In quantum systems, the computational space increases exponentially with the size of the system, which enables exponential parallelism. This parallelism could lead to exponentially faster quantum algorithms than are possible classically. The catch is that accessing the results, which requires measurement, proves tricky and requires new, non-traditional programming techniques. The aim of this paper is to guide computer scientists and other non-physicists through the conceptual and notational barriers that separate quantum computing from conventional computing. We introduce basic principles of quantum mechanics to explain where the power of quantum computers comes from and why it is difficult to harness. We describe quantum cryptography, teleportation, and dense coding. Various approaches to harnessing the power of quantum parallelism are explained, including Shor's algorithm, Grover's algorithm, and Hogg's algorithms. We conclude with a discussion of quantum error correction.
    Comment: 45 pages. To appear in ACM Computing Surveys. LaTeX file. Exposition improved throughout thanks to reviewers' comments.
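
    A toy state-vector simulation makes two of the abstract's points concrete: an n-qubit state needs 2^n amplitudes, and reading out a result samples (and destroys) the superposition. Pure NumPy; the variable names are our own:

    ```python
    # One qubit: |0>, then a Hadamard gate -> equal superposition.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    state = H @ np.array([1.0, 0.0])               # H|0>
    print(np.abs(state) ** 2)                      # Born rule: [0.5, 0.5]

    # Exponential state space: 3 qubits in superposition -> 8 amplitudes.
    state3 = np.ones(1)
    for _ in range(3):
        state3 = np.kron(state3, H @ np.array([1.0, 0.0]))
    print(state3.size, np.abs(state3) ** 2)        # 8 equally likely outcomes

    # "Accessing the results": measurement samples one basis state; the
    # other amplitudes are lost.
    rng = np.random.default_rng(0)
    outcome = int(rng.choice(state3.size, p=np.abs(state3) ** 2))
    print(f"measured basis state |{outcome:03b}>")
    ```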