Nominal Logic Programming
Nominal logic is an extension of first-order logic which provides a simple
foundation for formalizing and reasoning about abstract syntax modulo
consistent renaming of bound names (that is, alpha-equivalence). This article
investigates logic programming based on nominal logic. We describe some typical
nominal logic programs, and develop the model-theoretic, proof-theoretic, and
operational semantics of such programs. Besides being of interest for ensuring
the correct behavior of implementations, these results provide a rigorous
foundation for techniques for analysis and reasoning about nominal logic
programs, as we illustrate via examples.
Comment: 46 pages; 19 page appendix; 13 figures. Revised journal submission as of July 23, 200
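The central notion here, equality of terms up to consistent renaming of bound names, can be sketched with a small alpha-equivalence check for lambda terms. The tuple-based term encoding below is an illustrative assumption, not the paper's nominal encoding:

```python
# Minimal sketch: alpha-equivalence of lambda terms, the relation that
# nominal logic treats as primitive. Terms are hypothetical tuples
# tagged "var", "lam", or "app".

def alpha_eq(t1, t2, env1=None, env2=None, depth=0):
    """Compare terms up to renaming of bound names by mapping each
    binder to the depth at which it was introduced."""
    env1 = env1 or {}
    env2 = env2 or {}
    if t1[0] != t2[0]:
        return False
    if t1[0] == "var":
        # Bound variables must point at the same binder depth;
        # free variables must have the same name.
        return env1.get(t1[1], t1[1]) == env2.get(t2[1], t2[1])
    if t1[0] == "lam":
        e1 = {**env1, t1[1]: depth}
        e2 = {**env2, t2[1]: depth}
        return alpha_eq(t1[2], t2[2], e1, e2, depth + 1)
    if t1[0] == "app":
        return (alpha_eq(t1[1], t2[1], env1, env2, depth) and
                alpha_eq(t1[2], t2[2], env1, env2, depth))
    return False

# lam x. x  and  lam y. y  differ only in the bound name.
print(alpha_eq(("lam", "x", ("var", "x")), ("lam", "y", ("var", "y"))))  # True
# lam x. y  and  lam z. w  have different free variables.
print(alpha_eq(("lam", "x", ("var", "y")), ("lam", "z", ("var", "w"))))  # False
```

Shadowing is handled automatically because the inner binding overwrites the outer one in the environment dictionary.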
FICS 2010
Informal proceedings of the 7th workshop on Fixed Points in Computer Science (FICS 2010), held in Brno, 21-22 August 201
Higher-Order Subtyping with Type Intervals
Modern, statically typed programming languages provide various abstraction facilities at both the term- and type-level. Common abstraction mechanisms for types include parametric polymorphism -- a hallmark of functional languages -- and subtyping -- which is pervasive in object-oriented languages. Additionally, both kinds of languages may allow parametrized (or generic) datatype definitions in modules or classes. When several of these features are present in the same language, new and more expressive combinations arise, such as (1) bounded quantification, (2) bounded operator abstractions and (3) translucent type definitions. An example of such a language is Scala, which features all three of the aforementioned type-level constructs. This increases the expressivity of the language, but also the complexity of its type system. From a theoretical point of view, the various abstraction mechanisms have been studied through different extensions of Girard's higher-order polymorphic lambda-calculus F-omega. Higher-order subtyping and bounded polymorphism (1 and 2) have been formalized in F-omega-sub and its many variants; type definitions of various degrees of opacity (3) have been formalized through extensions of F-omega with singleton types. In this dissertation, I propose type intervals as a unifying concept for expressing (1--3) and other related constructs. In particular, I develop an extension of F-omega with interval kinds as a formal theory of higher-order subtyping with type intervals, and show how the familiar concepts of higher-order bounded quantification, bounded operator abstraction and singleton kinds can all be encoded in a semantics-preserving way using interval kinds. Going beyond the status quo, the theory is expressive enough to also cover less familiar constructs, such as lower-bounded operator abstractions and first-class, higher-order inequality constraints. 
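Construct (1), bounded quantification, can be glimpsed even in Python's `typing` module, which supports an upper bound on a type variable (interval kinds generalize this to lower and upper bounds, which Python cannot express). The class names below are purely illustrative:

```python
# Sketch of bounded quantification: the type variable T ranges only
# over subtypes of Ordered, i.e. "for all T <: Ordered".
# This illustrates construct (1) only; it is not Scala or F-omega-sub.
from typing import TypeVar

class Ordered:
    def less_than(self, other: "Ordered") -> bool:
        raise NotImplementedError

T = TypeVar("T", bound=Ordered)  # the upper bound on the quantifier

def minimum(x: T, y: T) -> T:
    # The bound licenses calling less_than on a value of type T.
    return x if x.less_than(y) else y

class Height(Ordered):
    def __init__(self, cm: int) -> None:
        self.cm = cm
    def less_than(self, other: "Ordered") -> bool:
        assert isinstance(other, Height)
        return self.cm < other.cm

print(minimum(Height(170), Height(180)).cm)  # 170
```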
I establish basic metatheoretic properties of the theory: I prove that subject reduction holds for well-kinded types w.r.t. full beta-reduction, that types and kinds are weakly normalizing, and that the theory is type safe w.r.t. its call-by-value operational reduction semantics. Key to this metatheoretic development is the use of hereditary substitution and the definition of an equivalent, canonical presentation of subtyping, which involves only normal types and kinds. The resulting metatheory is entirely syntactic, i.e. does not involve any model constructions, and has been fully mechanized in Agda. The extension of F-omega with interval kinds constitutes a stepping stone to the development of a higher-order version of the calculus of Dependent Object Types (DOT) -- the theoretical foundation of Scala's type system. In the last part of this dissertation, I briefly sketch a possible extension of the theory toward this goal and discuss some of the challenges involved in adapting the existing metatheory to that extension
A Dependently Typed Language with Nontermination
We propose a full-spectrum dependently typed programming language, Zombie, which supports general recursion natively. The Zombie implementation is an elaborating typechecker. We prove type safety for a large subset of the Zombie core language, including features such as computational irrelevance, CBV-reduction, and propositional equality with a heterogeneous, completely erased elimination form. Zombie does not automatically beta-reduce expressions, but instead uses congruence closure for proof and type inference. We give a specification of a subset of the surface language via a bidirectional type system, which works up-to-congruence, and an algorithm for elaborating expressions in this language to an explicitly typed core language. We prove that our elaboration algorithm is complete with respect to the source type system. Zombie also features an optional termination-checker, allowing nonterminating programs returning proofs as well as external proofs about programs
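Congruence closure, the mechanism the abstract mentions in place of automatic beta-reduction, can be sketched with a naive union-find fixpoint. This is a textbook version, not Zombie's algorithm, and the term encoding is an assumption:

```python
# Naive congruence closure: decide whether an equation follows from
# given equations by reflexivity, symmetry, transitivity, and
# congruence (f = g and a = b imply f a = g b).
# Terms: ("const", name) or ("app", fun, arg).

def all_subterms(t):
    yield t
    if t[0] == "app":
        yield from all_subterms(t[1])
        yield from all_subterms(t[2])

def congruent(equations, goal):
    universe = set()
    for l, r in equations + [goal]:
        universe |= set(all_subterms(l)) | set(all_subterms(r))
    parent = {t: t for t in universe}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path compression
            t = parent[t]
        return t

    def union(a, b):
        parent[find(a)] = find(b)

    for l, r in equations:
        union(l, r)

    # Fixpoint: merge applications whose components are already equal.
    apps = [t for t in universe if t[0] == "app"]
    changed = True
    while changed:
        changed = False
        for s in apps:
            for t in apps:
                if (find(s) != find(t)
                        and find(s[1]) == find(t[1])
                        and find(s[2]) == find(t[2])):
                    union(s, t)
                    changed = True
    return find(goal[0]) == find(goal[1])

a, b, f = ("const", "a"), ("const", "b"), ("const", "f")
fa, fb = ("app", f, a), ("app", f, b)
print(congruent([(a, b)], (fa, fb)))  # True: a = b implies f a = f b
print(congruent([], (fa, fb)))        # False: no equation relates a and b
```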
Normalisation by Evaluation in the Compilation of Typed Functional Programming Languages
This thesis presents a critical analysis of normalisation by evaluation as a technique for
speeding up compilation of typed functional programming languages. Our investigation
focuses on the SML.NET compiler and its typed intermediate language MIL. We
implement and measure the performance of normalisation by evaluation for MIL across
a range of benchmarks. Taking a different approach, we also implement and measure
the performance of a graph-based shrinking reductions algorithm for SML.NET.
MIL is based on Moggi’s computational metalanguage. As a stepping stone to
normalisation by evaluation, we investigate strong normalisation of the computational
metalanguage by introducing an extension of Girard-Tait reducibility. Inspired by previous
work on local state and parametric polymorphism, we define reducibility for
continuations and more generally reducibility for frame stacks. First we prove strong
normalisation for the computational metalanguage. Then we extend that proof to include
features of MIL such as sums and exceptions.
Taking an incremental approach, we construct a collection of increasingly sophisticated
normalisation by evaluation algorithms, culminating in a range of normalisation
algorithms for MIL. Congruence rules and alpha-rules are captured by a compositional
parameterised semantics. Defunctionalisation is used to eliminate eta-rules. Normalisation
by evaluation for the computational metalanguage is introduced using a monadic
semantics. Variants in which the monadic effects are made explicit, using either state
or control operators, are also considered.
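The core move of normalisation by evaluation, interpreting syntax into host-language values and then reading normal forms back out, can be sketched for the plain untyped lambda calculus (not MIL or the computational metalanguage). The term encoding is an illustrative assumption:

```python
# Sketch of normalisation by evaluation: evaluate terms into
# host-language closures, then "reify" semantic values back to
# syntax, naming binders by their nesting depth.

def evaluate(term, env):
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        # A binder becomes a host-language closure.
        return ("fun", lambda v: evaluate(term[2], {**env, term[1]: v}))
    if tag == "app":
        fun = evaluate(term[1], env)
        arg = evaluate(term[2], env)
        if fun[0] == "fun":
            return fun[1](arg)        # beta-reduction done by the host
        return ("napp", fun, arg)     # stuck (neutral) application

def reify(value, depth=0):
    tag = value[0]
    if tag == "fun":
        fresh = ("nvar", depth)       # fresh neutral variable
        return ("lam", depth, reify(value[1](fresh), depth + 1))
    if tag == "nvar":
        return ("var", value[1])
    if tag == "napp":
        return ("app", reify(value[1], depth), reify(value[2], depth))

def normalise(term):
    return reify(evaluate(term, {}))

# (lam x. x) applied to (lam y. y) normalises to the identity.
identity = ("lam", "x", ("var", "x"))
print(normalise(("app", identity, identity)))  # ('lam', 0, ('var', 0))
```

Note that no rewrite rules appear anywhere: normalisation falls out of ordinary evaluation plus reification, which is what makes the technique attractive for compilation.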
Previous implementations of normalisation by evaluation with sums have relied
on continuation-passing style or control operators. We present a new algorithm which
instead uses a single reference cell and a zipper structure. This suggests a possible
alternative way of implementing Filinski’s monadic reflection operations.
In order to obtain benchmark results without having to take into account all of
the features of MIL, we implement two different techniques for eliding language constructs.
The first is not semantics-preserving, but is effective for assessing the efficiency
of normalisation by evaluation algorithms. The second is semantics-preserving,
but less flexible. In common with many intermediate languages, but unlike the computational
metalanguage, MIL requires all non-atomic values to be named. We use either
control operators or state to ensure each non-atomic value is named.
We assess our normalisation by evaluation algorithms by comparing them with a
spectrum of progressively more optimised, rewriting-based normalisation algorithms.
The SML.NET front-end is used to generate MIL code from ML programs, including
the SML.NET compiler itself. Each algorithm is then applied to the generated MIL
code. Normalisation by evaluation always performs faster than the most naïve
algorithms, often by orders of magnitude. Some of the algorithms are slightly faster than
normalisation by evaluation. Closer inspection reveals that these algorithms are in fact
defunctionalised versions of normalisation by evaluation algorithms.
Our normalisation by evaluation algorithms perform unrestricted inlining of functions.
Unrestricted inlining can lead to a super-exponential blow-up in the size of
target code with respect to the source. Furthermore, the worst-case complexity of
compilation with unrestricted inlining is non-elementary in the size of the source code.
SML.NET alleviates both problems by using a restricted form of normalisation based
on Appel and Jim’s shrinking reductions. The original algorithm is quadratic in the
worst case. Using a graph-based representation for terms we implement a compositional
linear algorithm. This speeds up the time taken to perform shrinking reductions
by up to a factor of fourteen, which leads to an improvement of up to forty percent in
total compile time
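The "shrinking" restriction the abstract attributes to Appel and Jim can be sketched on a toy let-language: inline a binding only when it is dead or used exactly once, so every step strictly shrinks the term. This is an illustrative sketch (it ignores variable capture), not the thesis's graph-based linear algorithm:

```python
# Shrinking reduction sketch: inline let-bound values used at most
# once. Terms: ("num", n), ("var", x), ("add", a, b),
# ("let", x, bound, body). Capture avoidance is omitted for brevity.

def count_uses(term, name):
    tag = term[0]
    if tag == "var":
        return 1 if term[1] == name else 0
    if tag == "let":
        inner = 0 if term[1] == name else count_uses(term[3], name)
        return count_uses(term[2], name) + inner
    if tag == "add":
        return count_uses(term[1], name) + count_uses(term[2], name)
    return 0  # literals

def substitute(term, name, repl):
    tag = term[0]
    if tag == "var":
        return repl if term[1] == name else term
    if tag == "let":
        body = term[3] if term[1] == name else substitute(term[3], name, repl)
        return ("let", term[1], substitute(term[2], name, repl), body)
    if tag == "add":
        return ("add", substitute(term[1], name, repl),
                       substitute(term[2], name, repl))
    return term

def shrink(term):
    tag = term[0]
    if tag == "let":
        name, bound, body = term[1], shrink(term[2]), shrink(term[3])
        if count_uses(body, name) <= 1:   # dead or used once: safe to inline
            return shrink(substitute(body, name, bound))
        return ("let", name, bound, body)
    if tag == "add":
        return ("add", shrink(term[1]), shrink(term[2]))
    return term

# let x = 1 in x + 2  shrinks to  1 + 2  (x is used exactly once).
print(shrink(("let", "x", ("num", 1), ("add", ("var", "x"), ("num", 2)))))
# ('add', ('num', 1), ('num', 2))
```

A binding used twice is left alone, which is exactly how the restriction avoids the super-exponential blow-up of unrestricted inlining.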
The Longitudinal Development of Lexical Network Knowledge in L2 Learners: Multiple Methods and Parallel Data
This dissertation analyzes the development of adult second language (L2) English learners’ depth of lexical knowledge over six months through a series of three longitudinal experimental studies. The goal of the project is to provide a better understanding of how the English lexicon develops over time in L2 learners. Methods employed include vocabulary size assessment, a word association task, lexical decision semantic priming, and the computational analysis of subjects’ spoken lexical output. Results found little evidence of longitudinal development in L2 subjects’ lexical network knowledge, with the exception of L2 word associations, which actually became less native-like over time. In addition, results indicated no observed relationship between vocabulary size and the dependent measures obtained in the three studies. This dissertation project has theoretical implications for our understanding of depth of lexical knowledge and its rate of development. The project also has methodological implications for future experimental and/or longitudinal investigations of the L2 lexicon
Proceedings of the 11th Workshop on Nonmonotonic Reasoning
These are the proceedings of the 11th Nonmonotonic Reasoning Workshop. The aim of this series is to bring together active researchers in the broad area of nonmonotonic reasoning, including belief revision, reasoning about actions, planning, logic programming, argumentation, causality, probabilistic and possibilistic approaches to KR, and other related topics. As part of the program of the 11th workshop, we have assessed the status of the field and discussed issues such as: Significant recent achievements in the theory and automation of NMR; Critical short and long term goals for NMR; Emerging new research directions in NMR; Practical applications of NMR; Significance of NMR to knowledge representation and AI in general
Programming Languages and Systems
This open access book constitutes the proceedings of the 29th European Symposium on Programming, ESOP 2020, which was planned to take place in Dublin, Ireland, in April 2020, as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The actual ETAPS 2020 meeting was postponed due to the Corona pandemic. The papers deal with fundamental issues in the specification, design, analysis, and implementation of programming languages and systems
Handbook of Lexical Functional Grammar
Lexical Functional Grammar (LFG) is a nontransformational theory of
linguistic structure, first developed in the 1970s by Joan Bresnan and
Ronald M. Kaplan, which assumes that language is best described and
modeled by parallel structures representing different facets of
linguistic organization and information, related by means of
functional correspondences. This volume has five parts. Part I,
Overview and Introduction, provides an introduction to core syntactic
concepts and representations. Part II, Grammatical Phenomena, reviews
LFG work on a range of grammatical phenomena or constructions. Part
III, Grammatical modules and interfaces, provides an overview of LFG
work on semantics, argument structure, prosody, information structure,
and morphology. Part IV, Linguistic disciplines, reviews LFG work in
the disciplines of historical linguistics, learnability,
psycholinguistics, and second language learning. Part V, Formal and
computational issues and applications, provides an overview of
computational and formal properties of the theory, implementations,
and computational work on parsing, translation, grammar induction, and
treebanks. Part VI, Language families and regions, reviews LFG work
on languages spoken in particular geographical areas or in particular
language families. The final section, Comparing LFG with other
linguistic theories, discusses LFG work in relation to other
theoretical approaches
Proceedings of Sinn und Bedeutung 21
Thesis. A comprehensive clinical and electrocardiographic analysis of hemiblocks was carried out, covering incidence, age, etiology, quantitative evaluation of the diagnostic criteria, relation to atrioventricular conduction disturbances, and prognosis. To this end, 221 hemiblocks found among 7,130 adult male patients in a cardiology and medicine service were studied. The hemiblocks were diagnosed using the criteria set out by Rosenbaum, Castellanos, and Prior and Blount