
    Nominal C-Unification

    Nominal unification is an extension of first-order unification that takes into account the α-equivalence relation generated by binding operators, following the nominal approach. We propose a sound and complete procedure for nominal unification with commutative operators, or nominal C-unification for short, which has been formalised in Coq. The procedure transforms nominal C-unification problems into finite families of simpler fixpoint problems, whose solutions can be generated by algebraic techniques on the combinatorics of permutations.
    Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854)
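    The reduction to fixpoint problems can be made concrete with a small example. The sketch below (OCaml, with an illustrative term datatype; this is not the paper's Coq formalisation) checks whether a candidate term solves a fixpoint constraint π·X ≈? X modulo a commutative symbol. With π = (a b) and a commutative +, the term a + b is a solution, since (a b)·(a + b) = b + a ≈C a + b.

```ocaml
(* A minimal sketch (not the paper's Coq formalisation): checking whether
   a candidate term solves the fixpoint problem  pi.X ~? X  modulo a
   commutative operator.  Names and the term datatype are illustrative. *)

type term =
  | Atom of string                 (* object-level variable symbols *)
  | Comm of string * term * term   (* application of a commutative symbol *)

(* A permutation is a composition of atom swappings (a b). *)
type perm = (string * string) list

let swap (a, b) c = if c = a then b else if c = b then a else c

let rec apply_perm (pi : perm) (t : term) : term =
  match t with
  | Atom c -> Atom (List.fold_right swap pi c)
  | Comm (f, l, r) -> Comm (f, apply_perm pi l, apply_perm pi r)

(* Equality modulo commutativity: f(l,r) ~C f(r,l). *)
let rec eq_c s t =
  match s, t with
  | Atom a, Atom b -> a = b
  | Comm (f, l1, r1), Comm (g, l2, r2) ->
      f = g && ((eq_c l1 l2 && eq_c r1 r2) || (eq_c l1 r2 && eq_c r1 l2))
  | _ -> false

(* Does [t] solve the fixpoint equation  pi.X ~? X  ? *)
let solves_fixpoint pi t = eq_c (apply_perm pi t) t

(* With pi = (a b) and + commutative, X = a + b is a solution. *)
let () =
  assert (solves_fixpoint [ ("a", "b") ] (Comm ("+", Atom "a", Atom "b")))
```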

    Extensions of nominal terms

    This thesis studies two major extensions of nominal terms. In particular, we study an extension with λ-abstraction over nominal unknowns and atoms, and an extension with an arguably better theory of freshness and α-equivalence. Nominal terms possess two levels of variable: atoms a represent variable symbols, and unknowns X are `real' variables. As a syntax, they are designed to facilitate metaprogramming; unknowns are used to program on syntax with variable symbols. Originally, the role of nominal terms was interpreted narrowly: they were seen solely as a syntax for representing partially-specified abstract syntax with binding. The main motivation of this thesis is to extend nominal terms so that they can be used for metaprogramming on proofs, programs, etc., and not just for metaprogramming on abstract syntax with binding. We therefore extend nominal terms in two significant ways: adding λ-abstraction over nominal unknowns and atoms (facilitating functional programming) and improving the theory of α-equivalence that nominal terms possess.

    Neither of the two extensions considered is trivial. The capturing substitution action of nominal unknowns implies that our notion of scope, intuited from working with syntax possessing a non-capturing substitution, such as the λ-calculus, is no longer applicable. As a result, notions of λ-abstraction and α-equivalence must be carefully reconsidered.

    The first research contribution of this thesis is the two-level λ-calculus, intuitively an intertwined pair of λ-calculi. As the name suggests, the two-level λ-calculus has two levels of variable, modelled by nominal atoms and unknowns, respectively. Both levels of variable can be λ-abstracted, and requisite notions of β-reduction are provided. The result is an expressive context-calculus. The traditional problems of handling α-equivalence and the failure of commutation between instantiation and β-reduction in context-calculi are handled through the use of two distinct levels of variable, swappings, and freshness side-conditions on unknowns, i.e. `nominal technology'.

    The second research contribution of this thesis is permissive nominal terms, an alternative form of nominal term. They retain the `nominal' first-order flavour of nominal terms (in fact, their grammars are almost identical) but forego the use of explicit freshness contexts. Instead, permissive nominal terms label unknowns with a permission sort, where permission sorts are infinite and coinfinite sets of atoms. This infinite-coinfinite nature means that permissive nominal terms recover two properties, which we call the `always-fresh' and `always-rename' properties, that nominal terms lack. We argue that these two properties bring the theory of α-equivalence on permissive nominal terms closer to `informal practice'.

    The reader may consider λ-abstraction and α-equivalence so familiar as to be `solved problems'. The work embodied in this thesis stands testament to the fact that this is not the case. Considering λ-abstraction and α-equivalence in the context of two levels of variable poses some new and interesting problems and throws light on some deep questions related to scope and binding.
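    Permission sorts are infinite and coinfinite, yet they admit a finite representation. The OCaml sketch below is illustrative only, assuming the common encoding of a sort as a fixed "base half" of the atoms adjusted by finite sets; it shows why the `always-fresh' property holds: membership is decidable from finite data, and a member of the sort avoiding any finite set of used atoms can always be computed.

```ocaml
(* A sketch of permission sorts under the illustrative encoding
   (base half of the atoms, plus/minus finite sets).  Atoms are
   modelled as integers, with the even integers as the base half. *)

type atom = int

type perm_sort = {
  plus : atom list;   (* finitely many atoms added to the base half *)
  minus : atom list;  (* finitely many atoms removed from it        *)
}

(* Membership: in the base half (even atoms), adjusted by plus/minus. *)
let mem a { plus; minus } =
  (a mod 2 = 0 || List.mem a plus) && not (List.mem a minus)

(* The "always-fresh" property: the sort is infinite, so for any finite
   set of atoms already in use we can find a member avoiding them all.
   Termination holds because [minus] and [used] are finite. *)
let fresh_in (s : perm_sort) (used : atom list) : atom =
  let rec search a =
    if mem a s && not (List.mem a used) then a else search (a + 2)
  in
  search 0
```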

    Formalization of First-Order Syntactic Unification


    Nominal equational problems modulo associativity, commutativity and associativity-commutativity

    Doctoral thesis, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2019.

    Nominal syntax has been used in many application contexts for almost two decades. It is a powerful tool for dealing with variable binding in a concrete manner that can be applied to any specification in which parameters are used to abstract variables, such as in predicates and functions. In nominal syntax, syntactically different objects can have the same semantics modulo α-conversion, as happens in the lambda calculus. Dealing with equality, and in particular with α-equivalence, is essential in formal languages and implementations. This work investigates nominal α-equivalence with associative (A), commutative (C) and associative-commutative (AC) function symbols. Equality-checking, matching and unification modulo A, C and AC are investigated. Regarding equality-checking, nominal α-equivalence modulo A, C and AC is specified in Coq and proved sound. An algorithm implemented in OCaml for equality-checking modulo A, C and AC is automatically extracted from the specification, and experiments are performed using an improved algorithm as well. Upper bounds on the running time for solving nominal equality-checking problems are given. A rule-based algorithm for nominal unification modulo C is specified in Coq and proved sound and complete. By using protected variables, this unification algorithm also solves nominal matching problems modulo C, which is formalised to be sound and complete. The rule-based nominal unification algorithm outputs a finite family of sets of fixpoint nominal equations, each of which might have an infinite set of independent solutions. Therefore, nominal unification modulo C or AC is shown to potentially generate an infinite set of independent solutions. This contrasts with syntactic unification modulo C or AC, which is known to be in the finitary class. An OCaml implementation of the nominal unification algorithm is provided and used to build examples.
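    The claim that a single fixpoint equation can have infinitely many independent solutions is easy to illustrate. Reusing the illustrative term type, eq_c and solves_fixpoint from the nominal C-unification sketch above, the family below gives pairwise distinct ground solutions of (a b)·X ≈? X modulo commutativity of +; since each solution is ground, none is an instance of another.

```ocaml
(* An infinite family of independent solutions of  (a b).X ~? X
   modulo commutative +:
   a+b,  (a+b)+(a+b),  ((a+b)+(a+b))+((a+b)+(a+b)), ...
   (Reuses [term], [Comm], [Atom] and [solves_fixpoint] from the
   sketch given for the nominal C-unification paper above.) *)
let rec solution (n : int) : term =
  if n = 0 then Comm ("+", Atom "a", Atom "b")
  else let t = solution (n - 1) in Comm ("+", t, t)

let () =
  (* every member of the family solves the same fixpoint equation *)
  for n = 0 to 5 do
    assert (solves_fixpoint [ ("a", "b") ] (solution n))
  done
```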

    Fuzzy Logic

    Fuzzy logic is becoming an essential method of solving problems in all domains, and it has a tremendous impact on the design of autonomous intelligent systems. The purpose of this book is to introduce Hybrid Algorithms, Techniques, and Implementations of Fuzzy Logic. The book consists of thirteen chapters highlighting models and principles of fuzzy logic and issues in its techniques and implementations. The intended readers of this book are engineers, researchers, and graduate students interested in fuzzy logic systems.

    Automated Reasoning

    This volume, LNAI 13385, constitutes the refereed proceedings of the 11th International Joint Conference on Automated Reasoning, IJCAR 2022, held in Haifa, Israel, in August 2022. The 32 full research papers and 9 short papers presented together with two invited talks were carefully reviewed and selected from 85 submissions. The papers focus on the following topics: Satisfiability, SMT Solving, Arithmetic; Calculi and Orderings; Knowledge Representation and Justification; Choices, Invariance, Substitutions and Formalization; Modal Logics; Proof Systems and Proof Search; Evolution, Termination and Decision Problems. This is an open access book.

    Utilising Local Model Neural Network Jacobian Information in Neurocontrol

    Student Number: 8315331 - MSc dissertation - School of Electrical and Information Engineering - Faculty of Engineering and the Built Environment.

    In this dissertation an efficient algorithm to calculate the differential of the network output with respect to its inputs is derived for axis-orthogonal Local Model Networks (LMN) and Radial Basis Function (RBF) networks. A new recursive Singular Value Decomposition (SVD) adaptation algorithm, which attempts to circumvent many of the problems found in existing recursive adaptation algorithms, is also derived. Code listings and simulations are presented to demonstrate how the algorithms may be used in on-line adaptive neurocontrol systems. Specifically, the control techniques known as series inverse neural control and instantaneous linearization are highlighted. The presented material illustrates how the approach enhances the flexibility of LMN networks, making them suitable for use in both direct and indirect adaptive control methods. By incorporating this ability into LMN networks, an important characteristic of Multi Layer Perceptron (MLP) networks is obtained whilst retaining the desirable properties of the RBF and LMN approach.
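    The dissertation's derivation targets axis-orthogonal LMN and RBF networks; as a minimal illustration of the kind of quantity involved (not the dissertation's algorithm), the OCaml sketch below computes a Gaussian RBF network's output and its gradient with respect to the input, i.e. the single-output case of the input Jacobian. The names and the shared-width assumption are illustrative.

```ocaml
(* A minimal sketch, not the dissertation's algorithm: output and input
   gradient of a Gaussian RBF network
     y(x) = sum_i w_i * exp (-||x - c_i||^2 / (2 sigma^2)).
   Centres, weights and sigma are illustrative parameters. *)

let sq_dist x c =
  Array.fold_left ( +. ) 0.0
    (Array.mapi (fun j xj -> (xj -. c.(j)) ** 2.0) x)

let rbf sigma x c = exp (-. sq_dist x c /. (2.0 *. sigma *. sigma))

(* Network output for input vector [x]. *)
let output ~weights ~centres ~sigma x =
  Array.fold_left ( +. ) 0.0
    (Array.mapi (fun i c -> weights.(i) *. rbf sigma x c) centres)

(* Gradient of the output w.r.t. the input:
   dy/dx_j = sum_i w_i * phi_i(x) * (c_ij - x_j) / sigma^2. *)
let gradient ~weights ~centres ~sigma x =
  let d = Array.length x in
  let g = Array.make d 0.0 in
  Array.iteri
    (fun i c ->
      let wphi = weights.(i) *. rbf sigma x c in
      for j = 0 to d - 1 do
        g.(j) <- g.(j) +. wphi *. (c.(j) -. x.(j)) /. (sigma *. sigma)
      done)
    centres;
  g
```

    In instantaneous linearization, such an input derivative supplies the locally linear plant model around the current operating point, which is what makes the network usable inside the control loop.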

    Learning Functional Prepositions

    In first language acquisition, what does it mean for a grammatical category to have been acquired, and what are the mechanisms by which children learn functional categories in general? In the context of prepositions (Ps), if the lexical/functional divide cuts through the P category, as has been suggested in the theoretical literature, then constructivist accounts of language acquisition would predict that children develop adult-like competence with the more abstract units, functional Ps, at a slower rate compared to their acquisition of lexical Ps. Nativists instead assume that the features of functional P are made available by Universal Grammar (UG), and are mapped as quickly as, if not faster than, the semantic features of their lexical counterparts. Conversely, if Ps are either all lexical or all functional, on both accounts of acquisition we should observe few differences in learning. Three empirical studies of the development of P were conducted via computer analysis of the English and Spanish sub-corpora of the CHILDES database. Study 1 analyzed errors in child usage of Ps, finding almost no errors of commission in either language, but that the English learners lag in their production of functional Ps relative to lexical Ps. That no such delay was found in the Spanish data suggests that the English pattern is not universal. Studies 2 and 3 applied novel measures of phrasal (P head + nominal complement) productivity to the data. Study 2 examined prepositional phrases (PPs) whose head-complement pairs appeared in both child and adult speech, while Study 3 considered PPs produced by children that never occurred in adult speech. In both studies the productivity of functional Ps for English children developed faster than that of lexical Ps. In Spanish there were few differences, suggesting that children had already mastered both classes of Ps early in acquisition. These empirical results suggest that, at least in English, P is indeed a split category, and that children acquire the syntax of the functional subset very quickly, committing almost no errors. The UG position is thus supported. Next, the dissertation investigates a 'soft nativist' acquisition strategy that combines distributional analysis of the input, minimal a priori knowledge of the possible co-occurrence of morphosyntactic features associated with functional elements, and linguistic knowledge that is presumably acquired via the experience of pragmatic, communicative situations. The output of the analysis consists of a mapping of morphemes to the feature bundles of nominative pronouns for English and Spanish, plus specific claims about the sort of knowledge required from experience. The acquisition model is then extended to adpositions, to examine what, if anything, distributional analysis can tell us about the functional sequences of PPs. The results confirm the theoretical position according to which spatiotemporal Ps are lexical in character, rooting their own extended projections, and that functional Ps express an aspectual sequence in the functional superstructure of the PP.
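    The abstract does not define its productivity measures, so the following OCaml sketch is purely hypothetical: it takes the "productivity" of a P head to be the number of distinct nominal complement types attested with it in a list of (preposition, complement) pairs extracted from a corpus.

```ocaml
(* A hypothetical productivity measure (not the dissertation's): the
   number of distinct complement types attested with each P head,
   computed from (preposition, complement) tokens. *)

module SM = Map.Make (String)
module SS = Set.Make (String)

let productivity (pairs : (string * string) list) : int SM.t =
  let tbl =
    List.fold_left
      (fun m (p, n) ->
        let s = try SM.find p m with Not_found -> SS.empty in
        SM.add p (SS.add n s) m)
      SM.empty pairs
  in
  (* collapse each complement set to its cardinality (type count) *)
  SM.map SS.cardinal tbl

let () =
  let counts = productivity [ ("of", "dog"); ("of", "cat"); ("to", "dog") ] in
  assert (SM.find "of" counts = 2 && SM.find "to" counts = 1)
```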

    Proceedings of the 3rd Annual Conference on Aerospace Computational Control, volume 1

    Conference topics included definition of tool requirements, advanced multibody component representation descriptions, model reduction, parallel computation, real time simulation, control design and analysis software, user interface issues, testing and verification, and applications to spacecraft, robotics, and aircraft

    Statistical Deep Parsing for Spanish

    This document presents the development of a statistical HPSG parser for Spanish. HPSG is a deep linguistic formalism that combines syntactic and semantic information in the same representation, and is capable of elegantly modeling many linguistic phenomena. Our research consists of the following steps: design of the HPSG grammar, construction of the corpus, implementation of the parsing algorithms, and evaluation of the parsers' performance.

    We created a simple yet powerful HPSG grammar for Spanish that models morphosyntactic information of words, syntactic combinatorial valence, and semantic argument structures in its lexical entries. The grammar uses thirteen very broad rules for attaching specifiers, complements, modifiers, clitics, relative clauses and punctuation symbols, and for modeling coordinations. In a simplification from standard HPSG, the only type of long-range dependency we model is the relative clause that modifies a noun phrase, and we use semantic role labeling as our semantic representation.

    We transformed the Spanish AnCora corpus using a semi-automatic process and analyzed it using our grammar implementation, creating a Spanish HPSG corpus of 517,237 words in 17,328 sentences (all of AnCora).

    We implemented several statistical parsing algorithms and trained them over this corpus; in particular, our objective was to test approaches based on neural networks. The implemented strategies are: a bottom-up baseline using bi-lexical comparisons or a multilayer perceptron; a CKY approach that uses the results of a supertagger; and a top-down approach that encodes word sequences using an LSTM network.

    We evaluated the performance of the implemented parsers and compared them with each other and against other existing Spanish parsers. Our LSTM top-down approach seems to be the best-performing parser over our test data, obtaining the highest scores (compared to our strategies and also to external parsers) according to constituency metrics (87.57 unlabeled F1, 82.06 labeled F1), dependency metrics (91.32 UAS, 88.96 LAS), and SRL (87.68 unlabeled, 80.66 labeled), but we must take into consideration that the comparison against the external parsers might be noisy due to the post-processing we needed to do in order to adapt them to our format. We also defined a set of metrics to evaluate the identification of some particular language phenomena, and the LSTM top-down parser outperformed the baselines in almost all of these metrics as well.
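    As a generic illustration of the chart-based strategy behind the thesis's CKY parser (this is not the thesis's implementation, and the grammar representation is illustrative), the OCaml sketch below is a CKY recogniser over a binarised grammar. In the thesis, the categories seeding the chart would come from a supertagger rather than a plain lexicon.

```ocaml
(* A generic CKY recogniser over a binarised grammar (illustrative;
   not the thesis's parser).  chart.(i).(j) holds the categories that
   span words i .. i+j, i.e. a span of length j+1. *)

type grammar = {
  lexical : (string * string) list;          (* category, word       *)
  binary : (string * string * string) list;  (* parent, left, right  *)
}

let cky (g : grammar) (words : string array) (goal : string) : bool =
  let n = Array.length words in
  let chart = Array.make_matrix n n [] in
  (* seed single-word spans from the lexicon (or a supertagger) *)
  for i = 0 to n - 1 do
    chart.(i).(0) <-
      List.filter_map
        (fun (c, w) -> if w = words.(i) then Some c else None)
        g.lexical
  done;
  (* combine adjacent spans bottom-up with the binary rules *)
  for len = 2 to n do
    for i = 0 to n - len do
      for k = 1 to len - 1 do
        List.iter
          (fun (p, l, r) ->
            if List.mem l chart.(i).(k - 1)
               && List.mem r chart.(i + k).(len - k - 1)
               && not (List.mem p chart.(i).(len - 1))
            then chart.(i).(len - 1) <- p :: chart.(i).(len - 1))
          g.binary
      done
    done
  done;
  n > 0 && List.mem goal chart.(0).(n - 1)
```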