
    Second-Order Functions and Theorems in ACL2

    SOFT ('Second-Order Functions and Theorems') is a tool to mimic second-order functions and theorems in the first-order logic of ACL2. Second-order functions are mimicked by first-order functions that reference explicitly designated uninterpreted functions that mimic function variables. First-order theorems over these second-order functions mimic second-order theorems universally quantified over function variables. Instances of second-order functions and theorems are systematically generated by replacing function variables with functions. SOFT can be used to carry out program refinement inside ACL2, by constructing a sequence of increasingly stronger second-order predicates over one or more target functions: the sequence starts with a predicate that specifies requirements for the target functions, and ends with a predicate that provides executable definitions for the target functions.
    Comment: In Proceedings ACL2 2015, arXiv:1509.0552
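
    The instantiation step described above (generating instances by replacing function variables with concrete functions) can be illustrated outside ACL2. The sketch below is plain Python, not ACL2 code, and the `make_fold_right` template and its instances are hypothetical examples of the idea, not part of the SOFT tool, which works inside ACL2's first-order logic.

```python
# Illustrative sketch (not ACL2): a "second-order" template parameterized by a
# function variable f, and an instantiation step that replaces f with a
# concrete function to produce an ordinary first-order definition.

def make_fold_right(f, base):
    """Instantiate the template fold[f, base] by fixing the function variable f."""
    def fold_right(xs):
        # fold[f, base](xs) = f(x0, f(x1, ... f(x_{n-1}, base)))
        acc = base
        for x in reversed(xs):
            acc = f(x, acc)
        return acc
    return fold_right

# Two instances obtained by "replacing the function variable with a function".
sum_list = make_fold_right(lambda x, acc: x + acc, 0)
copy_list = make_fold_right(lambda x, acc: [x] + acc, [])

assert sum_list([1, 2, 3]) == 6
assert copy_list([1, 2]) == [1, 2]
```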

    Classification and Verification of Online Handwritten Signatures with Time Causal Information Theory Quantifiers

    We present a new approach to online handwritten signature classification and verification based on descriptors stemming from Information Theory. The proposal uses the Shannon Entropy, the Statistical Complexity, and the Fisher Information, evaluated over the Bandt and Pompe symbolization of the horizontal and vertical coordinates of signatures. These six features are easy and fast to compute, and they are the input to a One-Class Support Vector Machine classifier. The results surpass those of state-of-the-art techniques that employ higher-dimensional feature spaces, which often require specialized software and hardware. We assess the consistency of our proposal with respect to the size of the training sample, and we also use it to classify the signatures into meaningful groups.
    Comment: Submitted to PLOS One
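
    A minimal sketch of the kind of pipeline the abstract describes: Bandt-Pompe ordinal-pattern symbolization of the pen coordinates, a normalized permutation (Shannon) entropy feature, and a One-Class SVM verifier. The embedding dimension, SVM parameters, and synthetic data below are assumptions, and the Statistical Complexity and Fisher Information features are omitted for brevity, so this is not the paper's full six-feature method.

```python
# Sketch: Bandt-Pompe ordinal patterns + permutation entropy + One-Class SVM.
from itertools import permutations
from math import log

import numpy as np
from sklearn.svm import OneClassSVM


def bandt_pompe_probs(series, d=4):
    """Probability of each ordinal pattern of length d in the series."""
    patterns = {p: 0 for p in permutations(range(d))}
    for i in range(len(series) - d + 1):
        pattern = tuple(int(r) for r in np.argsort(series[i:i + d]))
        patterns[pattern] += 1
    counts = np.array(list(patterns.values()), dtype=float)
    return counts / counts.sum()


def permutation_entropy(probs):
    """Shannon entropy of the ordinal-pattern distribution, normalized to [0, 1]."""
    nz = probs[probs > 0]
    return float(-(nz * np.log(nz)).sum() / log(len(probs)))


def signature_features(x, y, d=4):
    """Permutation entropy of the horizontal (x) and vertical (y) coordinates."""
    return [permutation_entropy(bandt_pompe_probs(x, d)),
            permutation_entropy(bandt_pompe_probs(y, d))]


# Hypothetical usage: random walks stand in for one writer's coordinate
# sequences; the verifier flags outliers (possible forgeries) as -1.
rng = np.random.default_rng(0)
genuine_signatures = [(rng.standard_normal(200).cumsum(),
                       rng.standard_normal(200).cumsum()) for _ in range(20)]
X_train = np.array([signature_features(x, y) for x, y in genuine_signatures])
verifier = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X_train)
print(verifier.predict(X_train[:5]))  # +1 = accepted as genuine, -1 = rejected
```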

    More individual differences in language attainment: How much do adult native speakers of English know about passives and quantifiers?

    This paper provides experimental evidence suggesting that there are considerable differences in native language attainment, and that these are at least partially attributable to individual speakers’ experience. Experiment 1 tested high academic attainment (hereafter, HAA) and low academic attainment (LAA) participants’ comprehension using a picture selection task. Test sentences comprised passives and two variants of the universal quantification construction; active constructions were used as a control condition. HAA participants performed at ceiling in all conditions, whereas LAA participants performed at ceiling only on actives. As predicted by usage-based accounts, the order of difficulty of the four sentence types mirrored their frequency. Experiment 2 tested whether the less educated participants’ difficulties with these constructions are attributable to insufficient experience. After a screening test, low-scoring participants were randomly assigned to two training groups: the passive training group was given a short training session on the passive construction, and the quantifier training group was trained on sentences with quantifiers. A series of post-training tests shows that performance on the trained construction improved dramatically and that the effect was long-lasting.

    Recursive Neural Networks Can Learn Logical Semantics

    Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations they learn can support tasks as demanding as logical deduction. We pursue this question by evaluating whether two such models, plain TreeRNNs and tree-structured neural tensor networks (TreeRNTNs), can correctly learn to identify logical relationships such as entailment and contradiction using these representations. In our first set of experiments, we generate artificial data from a logical grammar and use it to evaluate the models' ability to learn to handle basic relational reasoning, recursive structures, and quantification. We then evaluate the models on the more natural SICK challenge data. Both models perform competitively on the SICK data and generalize well in all three experiments on simulated data, suggesting that they can learn suitable representations for logical inference in natural language.
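
    As a rough illustration of the model class discussed above, the sketch below shows a plain TreeRNN composition step and a relation classifier over a premise/hypothesis pair in numpy. The dimensions, the untrained random weights, the three-way label set, and the toy parse trees are assumptions for illustration; the paper's actual models (including the TreeRNTN's tensor layer and the training procedure) are not reproduced here.

```python
# Minimal numpy sketch of a plain TreeRNN: recursive composition over a binary
# parse tree, then a classifier over a (premise, hypothesis) vector pair.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_RELATIONS = 16, 3          # e.g. entailment / contradiction / neutral

# Composition parameters: combine two child vectors into one parent vector.
W_comp = rng.standard_normal((DIM, 2 * DIM)) * 0.1
b_comp = np.zeros(DIM)

# Comparison/classification parameters over a pair of sentence vectors.
W_cls = rng.standard_normal((N_RELATIONS, 2 * DIM)) * 0.1
b_cls = np.zeros(N_RELATIONS)

def compose(left, right):
    """Plain TreeRNN composition: parent = tanh(W [left; right] + b)."""
    return np.tanh(W_comp @ np.concatenate([left, right]) + b_comp)

def encode(tree, embeddings):
    """Recursively encode a binary tree of word strings into one vector."""
    if isinstance(tree, str):
        return embeddings[tree]
    left, right = tree
    return compose(encode(left, embeddings), encode(right, embeddings))

def relation_scores(premise_vec, hypothesis_vec):
    """Unnormalized scores for each logical relation between two sentences."""
    return W_cls @ np.concatenate([premise_vec, hypothesis_vec]) + b_cls

# Hypothetical usage with random word embeddings and toy parse trees.
vocab = {w: rng.standard_normal(DIM) * 0.1 for w in ["all", "some", "dogs", "bark"]}
premise = ("all", ("dogs", "bark"))
hypothesis = ("some", ("dogs", "bark"))
scores = relation_scores(encode(premise, vocab), encode(hypothesis, vocab))
print(scores.argmax())  # index of the predicted relation (untrained, so arbitrary)
```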