451 research outputs found

    Polynomial Time in the Parametric Lambda Calculus

    Decidability for Non-Standard Conversions in Typed Lambda-Calculi

    This thesis studies the decidability of conversions in typed lambda-calculi, along with the algorithms that establish this decidability. Our study takes into consideration conversions going beyond the traditional beta, eta, or permutative conversions (also called commutative conversions). Two classes of algorithms compete to decide these conversions: rewriting-based algorithms, whose goal is to decompose and orient the conversion so as to obtain a convergent system, and which then boil down to rewriting terms until they reach an irreducible form; and "reduction-free" algorithms, where the conversion is decided recursively via a detour through a meta-language. Throughout this thesis, we strive to explain the latter by means of the former.
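
    The contrast between the two classes of algorithms can be made concrete on a toy case. Below is a minimal Haskell sketch, covering plain beta-conversion only, of the rewriting-based approach: normalise both terms by repeated beta-reduction and compare the resulting normal forms. The names (Term, normalise, betaConvertible) are purely illustrative, and none of the non-standard conversions treated in the thesis are handled here.

        -- Untyped terms with de Bruijn indices, so alpha-conversion is not needed.
        data Term = Var Int | Lam Term | App Term Term
          deriving (Eq, Show)

        -- Shift free indices >= c by d.
        shift :: Int -> Int -> Term -> Term
        shift d c (Var k)   | k >= c    = Var (k + d)
                            | otherwise = Var k
        shift d c (Lam t)   = Lam (shift d (c + 1) t)
        shift d c (App t u) = App (shift d c t) (shift d c u)

        -- Capture-avoiding substitution of s for index j.
        subst :: Int -> Term -> Term -> Term
        subst j s (Var k)   | k == j    = s
                            | otherwise = Var k
        subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
        subst j s (App t u) = App (subst j s t) (subst j s u)

        -- One leftmost-outermost beta step, if any.
        step :: Term -> Maybe Term
        step (App (Lam t) u) = Just (shift (-1) 0 (subst 0 (shift 1 0 u) t))
        step (App t u)       = case step t of
                                 Just t' -> Just (App t' u)
                                 Nothing -> App t <$> step u
        step (Lam t)         = Lam <$> step t
        step (Var _)         = Nothing

        -- Normal form by repeated rewriting (terminates on normalising terms,
        -- e.g. well-typed ones).
        normalise :: Term -> Term
        normalise t = maybe t normalise (step t)

        -- Beta-conversion decided by comparing irreducible forms.
        betaConvertible :: Term -> Term -> Bool
        betaConvertible t u = normalise t == normalise u

    A reduction-free algorithm would instead interpret both terms into a meta-language (as in normalisation by evaluation) and read back canonical forms, rather than rewriting syntax step by step.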

    A new program for combinatory reduction and abstraction

    Even though the lambda calculus (λ-calculus) and combinatory logic (CL) appear to be equivalent, they are not: we do not yet have a reduction in CL that corresponds to β-reduction in the λ-calculus. There are three proposals, but they all have a few problems, one of which is the lack of a complete characterization of the CL-terms corresponding to λ-terms in β-normal form. Finding such a characterization for any of the three proposals appears to require a great many examples, which are tedious and time-consuming to develop by hand. For this reason, a computer program to perform reductions and abstractions of CL-terms would be useful. This thesis describes an attempt to write such a program. The program we have does not yet work for all three proposals, but it does work for βη-strong reduction. Coding this program turned out to be much harder than anticipated. Dr. Robin Cockett developed a semantic translation that helped in coding the program, but his semantic translation needs to be extended to all three proposals to obtain the program originally desired, and that requires a good deal of further research.
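
    For orientation, the following minimal Haskell sketch (not the thesis's program) shows the textbook bracket-abstraction algorithm into S, K and I, together with weak reduction of CL-terms; the βη-strong reduction that the actual program handles is considerably more involved, and all identifiers here are illustrative.

        infixl 9 :@

        -- CL-terms: variables, the basic combinators, and application.
        data CL = V String | S | K | I | CL :@ CL
          deriving (Eq, Show)

        occurs :: String -> CL -> Bool
        occurs x (V y)    = x == y
        occurs x (a :@ b) = occurs x a || occurs x b
        occurs _ _        = False

        -- Bracket abstraction [x] t: build a term that, applied to any u,
        -- weakly reduces to t with u substituted for x.
        abstractVar :: String -> CL -> CL
        abstractVar x t | not (occurs x t) = K :@ t
        abstractVar x (V y) | x == y       = I
        abstractVar x (a :@ b)             = S :@ abstractVar x a :@ abstractVar x b
        abstractVar _ t                    = K :@ t   -- unreachable; kept for totality

        -- One step of weak reduction.
        reduce1 :: CL -> Maybe CL
        reduce1 (I :@ a)           = Just a
        reduce1 (K :@ a :@ _)      = Just a
        reduce1 (S :@ a :@ b :@ c) = Just (a :@ c :@ (b :@ c))
        reduce1 (a :@ b)           = case reduce1 a of
                                       Just a' -> Just (a' :@ b)
                                       Nothing -> (a :@) <$> reduce1 b
        reduce1 _                  = Nothing

        -- Weak normal form (may diverge on non-terminating terms).
        weakNF :: CL -> CL
        weakNF t = maybe t weakNF (reduce1 t)

    For example, abstractVar "x" (V "x" :@ V "y") yields S :@ I :@ (K :@ V "y"), and applying that result to any term u weakly reduces to u :@ V "y", as expected of abstraction.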

    Introduction to linear logic and ludics, part II

    This paper is the second part of an introduction to linear logic and ludics, both due to Girard. It is devoted to proof nets, in the limited yet central framework of multiplicative linear logic, and to ludics, which has recently been developed with the aim of further unveiling the fundamental interactive nature of computation and logic. We hope to offer a few computer-science insights into this new theory.

    Neuronal bases of structural coherence in contemporary dance observation

    The neuronal processes underlying dance observation have been the focus of an increasing number of brain imaging studies over the past decade. However, the existing literature has mainly dealt with the effects of motor and visual expertise, whereas the neural and cognitive mechanisms that underlie the interpretation of dance choreographies have remained unexplored. Hence, much attention has been given to the Action Observation Network (AON), whereas the role of other potentially relevant neuro-cognitive mechanisms, such as mentalizing (theory of mind) or language (narrative comprehension), in dance understanding is yet to be elucidated. We report the results of an fMRI study in which the structural coherence of short contemporary dance choreographies was manipulated parametrically using the same taped movement material. Our participants were all trained dancers. The whole-brain analysis argues that the interpretation of structurally coherent dance phrases involves a subpart (Superior Parietal) of the AON as well as mentalizing regions in the dorsomedial Prefrontal Cortex. An ROI analysis based on a similar study using linguistic materials (Pallier et al. 2011) suggests that structural processing in language and dance might share certain neural mechanisms.

    Neural Combinatory Constituency Parsing

    Tokyo Metropolitan University, doctoral thesis (Information Science).

    A Typed Lambda Calculus with Intersection Types

    Intersection types are well known to type theorists mainly for two reasons. Firstly, they type all and only the strongly normalizable lambda terms. Secondly, the intersection type operator is a meta-level operator, that is, it has no direct logical counterpart in the Curry–Howard isomorphism sense. In particular, its meta-level nature implies that it does not correspond to the intuitionistic conjunction. The intersection type system is naturally a type inference system (a system à la Curry), but the meta-level nature of the intersection operator does not make it easy to design an equivalent typed system (a system à la Church). There are many proposals in the literature to design such systems, but none of them gives an entirely satisfactory answer to the problem. In this paper, we review the main results in the literature both on the logical interpretation of intersection types and on the proposed typed lambda calculi. The core of this paper is a new proposal for a true intersection typed lambda calculus, without any meta-level notion. Namely, any typable term (in the intersection type inference) has a corresponding typed term (which is the same as the untyped term once the type decorations and typed term constructors are erased) with the same type, and vice versa. The main idea is to introduce a relevant parallel term constructor which corresponds to the intersection type constructor, in such a way that terms in parallel share the same resources, that is, the same context of free typed variables. Three rules allow us to generate all typed terms. The first two rules, Application and Lambda-abstraction, are performed on all the components of a parallel term in a synchronized way. Finally, via the third rule of Local Renaming, once a free typed variable is bound by lambda-abstraction, each of the terms in parallel can perform its own local renaming, with type refinement, of that particular resource.
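
    To fix intuitions, here is a small Haskell sketch with an assumed, heavily simplified syntax (it is not the paper's formal calculus): the shape of intersection types, of typed terms including a parallel constructor, and one illustrative parallel term, the identity typed at both a -> a and b -> b, whose two components abstract the same shared variable.

        -- Intersection types: type variables, arrows, and intersections.
        data Ty = TVar String | Arr Ty Ty | Inter Ty Ty
          deriving (Eq, Show)

        -- Typed terms with a parallel constructor mirroring Inter on types.
        data Tm
          = Var String Ty      -- a typed variable, i.e. a shared resource
          | Lam String Ty Tm   -- lambda-abstraction (applied to every component)
          | App Tm Tm          -- application (also synchronised across components)
          | Par Tm Tm          -- the parallel constructor
          deriving (Show)

        -- The identity in parallel at two types: erasing the type decorations
        -- and the parallel constructor gives back the plain untyped identity.
        polyId :: Tm
        polyId =
          let a = TVar "a"; b = TVar "b"
          in Par (Lam "x" a (Var "x" a)) (Lam "x" b (Var "x" b))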

    Porting a lexicalized-grammar parser to the biomedical domain

    This paper introduces a state-of-the-art, linguistically motivated statistical parser to the biomedical text mining community, and proposes a method of adapting it to the biomedical domain that requires only limited resources for data annotation. The parser was originally developed using the Penn Treebank and is therefore tuned to newspaper text. Our approach takes advantage of a lexicalized grammar formalism, Combinatory Categorial Grammar (CCG), to train the parser at a lower level of representation than full syntactic derivations. The CCG parser uses three levels of representation: a first level consisting of part-of-speech (POS) tags; a second level consisting of more fine-grained CCG lexical categories; and a third, hierarchical level consisting of CCG derivations. We find that simply retraining the POS tagger on biomedical data leads to a large improvement in parsing performance, and that using annotated data at the intermediate lexical-category level of representation improves parsing accuracy further. We describe the procedure involved in evaluating the parser, and obtain accuracies for biomedical data in the same range as those reported for newspaper text, and higher than those previously reported for the biomedical resource on which we evaluate. Our conclusion is that porting newspaper parsers to the biomedical domain, at least for parsers that use lexicalized grammars, may not be as difficult as first thought.
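
    As a rough illustration of the three levels, the Haskell sketch below (its names and types are assumptions, not the parser's actual data structures) pairs a word with a POS tag and a CCG lexical category, and shows the forward-application rule used to combine categories when building a derivation.

        -- CCG categories: a few atoms plus forward- and backward-looking functors.
        data Cat = S | NP | N
                 | Cat :/ Cat    -- X/Y looks for a Y to its right and yields X
                 | Cat :\ Cat    -- X\Y looks for a Y to its left and yields X
          deriving (Eq, Show)

        -- Level 1 (POS tag) and level 2 (lexical category) attached to a word.
        data Token = Token { word :: String, pos :: String, cat :: Cat }
          deriving (Show)

        -- e.g. a transitive verb: POS tag VBZ, lexical category (S\NP)/NP.
        verb :: Token
        verb = Token "binds" "VBZ" ((S :\ NP) :/ NP)

        -- Level 3 is built by combinatory rules; forward application is X/Y Y => X.
        forwardApply :: Cat -> Cat -> Maybe Cat
        forwardApply (x :/ y) y' | y == y' = Just x
        forwardApply _ _                   = Nothing

        -- Combining the verb's category with an object NP yields S\NP.
        example :: Maybe Cat
        example = forwardApply (cat verb) NP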

    Is the Optimal Implementation Inefficient? Elementarily Not

    Sharing graphs are a local and asynchronous implementation of lambda-calculus beta-reduction (or of linear logic proof-net cut-elimination) that avoids useless duplications. Empirical benchmarks suggest that they are one of the most efficient machineries when one wants to fully exploit the higher-order features of the lambda-calculus. However, we still lack theoretically solid grounds to dispel uncertainties about the adoption of sharing graphs. Aiming at analysing in detail the worst-case overhead cost of sharing operators, we restrict ourselves to the case of elementary and light linear logic, two subsystems of multiplicative exponential linear logic with bounded computational complexity. In these two cases the bookkeeping component is unnecessary, and sharing graphs simplify to the so-called "abstract algorithm". By a modular cost comparison over a syntactical simulation, we prove that the overhead of shared reductions is quadratically bounded by the cost of the naive implementation, i.e. proof-net reduction. This result generalises and strengthens a previous complexity result, and implies that the price of sharing is negligible compared to the benefits obtainable on reductions requiring a large amount of duplication.