
    Learning Linear Temporal Properties

    We present two novel algorithms for learning formulas in Linear Temporal Logic (LTL) from examples. The first learning algorithm reduces the learning task to a series of satisfiability problems in propositional Boolean logic and produces a smallest LTL formula (in terms of the number of subformulas) that is consistent with the given data. Our second learning algorithm, on the other hand, combines the SAT-based learning algorithm with classical algorithms for learning decision trees. The result is a learning algorithm that scales to real-world scenarios with hundreds of examples, but can no longer guarantee to produce minimal consistent LTL formulas. We compare both learning algorithms and demonstrate their performance on a wide range of synthetic benchmarks. Additionally, we illustrate their usefulness on the task of understanding executions of a leader election protocol.
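
    The abstract describes a search for a size-minimal formula consistent with labelled traces. The sketch below illustrates that search with a brute-force enumerator over a small fragment of LTL on finite traces, as a stand-in for the paper's SAT encoding; the trace format, the operator set, and the size measure (syntax-tree nodes) are simplifying assumptions, not the authors' implementation.

```python
# Brute-force stand-in for the SAT-based search: enumerate LTL formulas in
# order of increasing size and return the first one consistent with the data.
# The real algorithm delegates this search to a SAT solver.
from functools import lru_cache
from itertools import product

ATOMS = ("p", "q")  # assumed atomic propositions

def evaluate(f, trace, i=0):
    """Finite-trace LTL semantics; formulas are nested tuples like ("G", ("p",))."""
    op = f[0]
    if op in ATOMS:
        return op in trace[i]
    if op == "!":
        return not evaluate(f[1], trace, i)
    if op == "&":
        return evaluate(f[1], trace, i) and evaluate(f[2], trace, i)
    if op == "|":
        return evaluate(f[1], trace, i) or evaluate(f[2], trace, i)
    if op == "X":
        return i + 1 < len(trace) and evaluate(f[1], trace, i + 1)
    if op == "F":
        return any(evaluate(f[1], trace, j) for j in range(i, len(trace)))
    if op == "G":
        return all(evaluate(f[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op!r}")

@lru_cache(maxsize=None)
def formulas_of_size(n):
    """All formulas with exactly n syntax-tree nodes (a proxy for the subformula count)."""
    if n == 1:
        return tuple((a,) for a in ATOMS)
    result = []
    for sub in formulas_of_size(n - 1):
        result += [("!", sub), ("X", sub), ("F", sub), ("G", sub)]
    for k in range(1, n - 1):
        for left, right in product(formulas_of_size(k), formulas_of_size(n - 1 - k)):
            result += [("&", left, right), ("|", left, right)]
    return tuple(result)

def learn_smallest(positive, negative, max_size=6):
    """Return a size-minimal formula satisfied by all positive and no negative traces."""
    for n in range(1, max_size + 1):
        for f in formulas_of_size(n):
            if all(evaluate(f, t) for t in positive) and not any(evaluate(f, t) for t in negative):
                return f
    return None

# Toy traces (lists of sets of atoms): "eventually p" separates them.
print(learn_smallest(positive=[[{"p"}], [set(), {"p"}]], negative=[[set(), set()]]))
```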

    On the Parameterized Complexity of Learning Monadic Second-Order Formulas

    Within the model-theoretic framework for supervised learning introduced by Grohe and Turán (TOCS 2004), we study the parameterized complexity of learning concepts definable in monadic second-order logic (MSO). We show that the problem of learning a consistent MSO-formula is fixed-parameter tractable on structures of bounded tree-width and on graphs of bounded clique-width in the 1-dimensional case, that is, if the instances are single vertices (and not tuples of vertices). This generalizes previous results on strings and on trees. Moreover, in the agnostic PAC-learning setting, we show that the result also holds in higher dimensions. Finally, via a reduction to the MSO-model-checking problem, we show that learning a consistent MSO-formula is para-NP-hard on general structures.

    The Method of Contrast and the Perception of Causality in Audition

    The method of contrast is used within philosophy of perception in order to demonstrate that a specific property could be part of our perception. The method is based on two passages. I argue that the method succeeds in its task only if the intuition of the difference, which constitutes the core of the first passage, has two specific traits. The second passage of the method consists in the evaluation of the available explanations of this difference. Among the three outlined options, I demonstrate that only in the third option – as we shall see, the case of the scenario that remains the same but is perceived in two different ways by the same perceiver – does the intuition present a difference that possesses the necessary characteristics, namely being immediately evident and extremely complex and multifaceted, which determine its tensive nature. The application of this third option within auditory perception generates two cases, a diachronic one and a synchronic one, which clearly show that we can auditorily perceive causality as a link between two sonorous episodes. The causal explanation is the only possible explanation among the many evaluated within the second passage of the method of contrast.

    Learning Concepts Described By Weight Aggregation Logic

    We consider weighted structures, which extend ordinary relational structures by assigning weights, i.e. elements from a particular group or ring, to the tuples present in the structure. We introduce an extension of first-order logic that allows one to aggregate weights of tuples, compare such aggregates, and use them to build more complex formulas. We provide locality properties of fragments of this logic, including Feferman-Vaught decompositions and a Gaifman normal form for a fragment called FOW1, as well as a localisation theorem for a larger fragment called FOWA1. This fragment can express concepts from various machine learning scenarios. Using the locality properties, we show that concepts definable in FOWA1 over a weighted background structure of at most polylogarithmic degree are agnostically PAC-learnable in polylogarithmic time after pseudo-linear time preprocessing.
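
    As a rough illustration of the kind of concept such a logic can express, the sketch below models a weighted structure as a graph whose edge relation carries weights and evaluates a threshold condition on aggregated weights. The relation, the weights, and the threshold are made up for the example; this is not the paper's formalism, only its flavour.

```python
# Illustrative sketch: a weighted structure (edge relation with weights) and a
# concept in the spirit of weight aggregation logic:
# "the sum of the weights of edges incident to x is at least t".
from typing import Dict, Tuple, Set

Vertex = str
Edge = Tuple[Vertex, Vertex]

class WeightedStructure:
    def __init__(self, vertices: Set[Vertex], edge_weights: Dict[Edge, float]):
        self.vertices = vertices
        self.edge_weights = edge_weights   # weight function on tuples of the edge relation

    def aggregate_incident(self, x: Vertex) -> float:
        """Aggregate (here: sum) the weights of all edge tuples containing x."""
        return sum(w for (u, v), w in self.edge_weights.items() if x in (u, v))

def concept(structure: WeightedStructure, x: Vertex, threshold: float = 2.5) -> bool:
    """An aggregation-style concept: 'sum of incident edge weights >= threshold'."""
    return structure.aggregate_incident(x) >= threshold

# Toy structure with real-valued weights: 'a' satisfies the concept, 'c' does not.
s = WeightedStructure({"a", "b", "c"}, {("a", "b"): 2.0, ("a", "c"): 1.0})
print(concept(s, "a"), concept(s, "c"))
```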

    Learning implicational models of universal grammar parameters

    The use of parameters in the description of natural language syntax has to balance between the need to discriminate among (sometimes subtly different) languages, which can be seen as a cross-linguistic version of Chomsky's descriptive adequacy (Chomsky, 1964), and the complexity of the acquisition task that a large number of parameters would imply, which is a problem for explanatory adequacy. Here we first present a novel approach in which machine learning is used to detect hidden dependencies in a table of parameters. The result is a dependency graph in which some of the parameters can be fully predicted from others. These findings can then be subjected to linguistic analysis, which may either refute them by providing typological counter-examples of languages not included in the original dataset, dismiss them on theoretical grounds, or uphold them as tentative empirical laws worthy of further study. Machine learning is also used to explore the full sets of parameters that are sufficient to distinguish one historically established language family from others. These results provide a new type of empirical evidence about the historical adequacy of parameter theories.
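
    One way to read the "hidden dependency" step is as asking, for each parameter, whether its value can be predicted from the remaining columns of the language-by-parameter table. The toy sketch below does this with a decision tree over made-up binary data; both the data and the choice of predictor are illustrative assumptions, not the paper's dataset or method.

```python
# Toy sketch: flag parameters that are predictable from the others in a
# language-by-parameter table (rows = languages, columns = binary parameters).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Made-up parameter values for six hypothetical languages.
table = np.array([
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
])

for target in range(table.shape[1]):
    X = np.delete(table, target, axis=1)   # all other parameters
    y = table[:, target]                   # the parameter we try to predict
    score = cross_val_score(DecisionTreeClassifier(), X, y, cv=3).mean()
    print(f"parameter {target}: cross-validated prediction accuracy {score:.2f}")
```

    Parameters that reach high accuracy here are candidate dependencies; in the paper's workflow, such candidates would then be handed to linguistic analysis for confirmation or refutation.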

    Calibrating Generative Models: The Probabilistic Chomsky-Schützenberger Hierarchy

    A probabilistic Chomsky–Schützenberger hierarchy of grammars is introduced and studied, with the aim of understanding the expressive power of generative models. We offer characterizations of the distributions definable at each level of the hierarchy, including probabilistic regular, context-free, (linear) indexed, context-sensitive, and unrestricted grammars, each corresponding to familiar probabilistic machine classes. Special attention is given to distributions on (unary notations for) positive integers. Unlike in the classical case, where the "semi-linear" languages all collapse into the regular languages, we show, using analytic tools adapted from the classical setting, that there is no collapse in the probabilistic hierarchy: more distributions become definable at each level. We also address related issues such as closure under probabilistic conditioning.
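
    As a concrete instance of a distribution on unary notations for positive integers at the lowest (regular) level of such a hierarchy, consider a probabilistic regular grammar with rules S -> a (probability 1-p) and S -> aS (probability p): the length of the derived string is geometrically distributed. The small sampler below illustrates this under those assumptions; the specific grammar and value of p are not taken from the paper.

```python
# Sampler for a probabilistic regular grammar over the unary alphabet {a}:
#   S -> a    with probability 1 - p
#   S -> a S  with probability p
# The length of the derived string encodes a positive integer, and the induced
# distribution is geometric: P(length = n) = (1 - p) * p**(n - 1).
import random
from collections import Counter

def sample_unary_length(p: float = 0.5) -> int:
    """Derive a string from S and return its length (the encoded integer)."""
    length = 0
    while True:
        length += 1                  # every rule application emits one terminal 'a'
        if random.random() >= p:     # rule S -> a: stop the derivation
            return length

samples = Counter(sample_unary_length(0.5) for _ in range(10_000))
for n in sorted(samples)[:5]:
    print(n, samples[n] / 10_000)    # empirically close to (1 - p) * p**(n - 1)
```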