Grammatical structures and logical deductions
The three essays presented here concern natural connections between grammatical derivations and structures provided by certain standard grammar formalisms, on the one hand, and deductions in logical systems, on the other hand. In the first essay we analyse the adequacy of Polish notation for higher-order languages. The Ajdukiewicz algorithm (Ajdukiewicz 1935) is discussed in terms of generalized MP-deductions. We exhibit a failure in Ajdukiewicz's original version of the algorithm and give a correct one; we prove that generalized MP-deductions have the frontier property, which is essential for the plausibility of Polish notation. The second essay deals with logical systems corresponding to different grammar formalisms, such as Finite State Acceptors, Context-Free Grammars, Categorial Grammars, and others. We show how logical methods can be used to establish certain linguistically significant properties of formal grammars. The third essay discusses the interplay between Natural Deduction proofs in grammar-oriented logics and semantic structures expressible by typed lambda terms and combinators.
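As a rough illustration of the cancellation idea behind checking syntactic connexion in Polish (functor-first) notation, the sketch below reduces a sequence of categories by repeatedly cancelling a functor against the argument category immediately to its right. The category encoding and the left-to-right strategy are assumptions made for exposition; this is not the corrected algorithm given in the essay.

    # Categories: basic categories are strings; a functor category is a nested
    # (result, argument) pair, so (s/n)/n is written (("s", "n"), "n").
    def is_functor(cat):
        return isinstance(cat, tuple)

    def reduce_once(cats):
        """Cancel the leftmost functor immediately followed by its argument category."""
        for i in range(len(cats) - 1):
            if is_functor(cats[i]) and cats[i][1] == cats[i + 1]:
                return cats[:i] + [cats[i][0]] + cats[i + 2:]
        return None

    def connected(cats, goal="s"):
        """A sequence counts as syntactically connected if it reduces to the goal category."""
        while len(cats) > 1:
            step = reduce_once(cats)
            if step is None:
                return False
            cats = step
        return cats == [goal]

    loves = (("s", "n"), "n")            # transitive verb, (s/n)/n
    print(connected([loves, "n", "n"]))  # "Loves John Mary": (s/n)/n n n -> s/n n -> s, so True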
A Generalised Quantifier Theory of Natural Language in Categorical Compositional Distributional Semantics with Bialgebras
Categorical compositional distributional semantics is a model of natural language; it combines the statistical vector space models of words with the compositional models of grammar. We formalise in this model the generalised quantifier theory of natural language, due to Barwise and Cooper. The underlying setting is a compact closed category with bialgebras. We start from a generative grammar formalisation and develop an abstract categorical compositional semantics for it, then instantiate the abstract setting to sets and relations and to finite dimensional vector spaces and linear maps. We prove the equivalence of the relational instantiation to the truth theoretic semantics of generalised quantifiers. The vector space instantiation formalises the statistical usages of words and enables us to, for the first time, reason about quantified phrases and sentences compositionally in distributional semantics.
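For reference, a minimal sketch of the truth-theoretic side of the equivalence: in the sense of Barwise and Cooper, a generalised quantifier relates a restrictor set to a scope set over a finite universe. The set-based Python encoding below is an assumption made only for illustration.

    # Generalised quantifiers as relations between a restrictor set A and a
    # scope set B: "Q A B" is true iff the relation holds.
    def every(A, B):
        return A <= B                   # every A is B  iff  A is a subset of B

    def some(A, B):
        return bool(A & B)              # some A is B   iff  A and B overlap

    def most(A, B):
        return len(A & B) > len(A - B)  # most A are B  iff  more A's are B than are not

    students = {"ada", "bo", "cyn"}
    sleepers = {"ada", "bo"}
    print(every(students, sleepers))    # False: one student does not sleep
    print(some(students, sleepers))     # True
    print(most(students, sleepers))     # True: two of the three students sleep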
Type-Logical Syntax
A novel logic-based framework for representing the syntax–semantics interface of natural language, applicable to a range of phenomena. In this book, Yusuke Kubota and Robert Levine propose a type-logical version of categorial grammar as a viable alternative model of natural language syntax and semantics. They show that this novel logic-based framework is applicable to a range of phenomena (especially in the domains of coordination and ellipsis) that have proven problematic for traditional approaches. The type-logical syntax the authors propose takes derivations of natural language sentences to be proofs in a particular kind of logic governing the way words and phrases are combined. This logic builds on and unifies two deductive systems from the tradition of categorial grammar; the resulting system, Hybrid Type-Logical Categorial Grammar (Hybrid TLCG), enables comprehensive approaches to coordination (gapping, dependent cluster coordination, and right-node raising) and ellipsis (VP ellipsis, pseudogapping, and extraction/ellipsis interaction). It captures a number of intricate patterns of interaction between scopal operators and seemingly incomplete constituents that are frequently found in these two empirical domains. Kubota and Levine show that the hybrid calculus underlying their framework incorporates key analytic ideas from competing approaches in the generative syntax literature to offer a unified and systematic treatment of data that have posed considerable difficulties for previous accounts. Their account demonstrates that logic is a powerful tool for analyzing the deeper principles underlying the syntax and semantics of natural language.
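A toy illustration (not the Hybrid TLCG calculus itself) of treating a derivation as a proof that simultaneously builds a semantic term: each lexical item pairs a category with a lambda term, and the application rules that combine categories also apply the terms, so a completed derivation delivers a logical form. The lexicon and the category encoding below are assumptions made for exposition.

    # Basic categories are strings; ("/", A, B) seeks a B to its right and yields A,
    # ("\\", A, B) seeks a B to its left and yields A. Signs are (category, term) pairs.
    def forward_apply(fun, arg):
        """A/B combined with a B on its right yields A; the terms are applied too."""
        (cat_f, sem_f), (cat_a, sem_a) = fun, arg
        if isinstance(cat_f, tuple) and cat_f[0] == "/" and cat_f[2] == cat_a:
            return (cat_f[1], sem_f(sem_a))

    def backward_apply(arg, fun):
        """A B on the left combined with B\\A yields A."""
        (cat_a, sem_a), (cat_f, sem_f) = arg, fun
        if isinstance(cat_f, tuple) and cat_f[0] == "\\" and cat_f[2] == cat_a:
            return (cat_f[1], sem_f(sem_a))

    john  = ("np", "john")
    mary  = ("np", "mary")
    likes = (("/", ("\\", "s", "np"), "np"),          # (np\s)/np
             lambda y: lambda x: f"likes({x},{y})")

    vp = forward_apply(likes, mary)                   # likes Mary : np\s
    print(backward_apply(john, vp))                   # ('s', 'likes(john,mary)')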
Learning categorial grammars
In 1967 E. M. Gold published a paper in which the language classes from the Chomsky hierarchy were analyzed in terms of learnability, in the technical sense of identification in the limit. His results were mostly negative, and perhaps because of this his work had little impact on linguistics.
In the early eighties there was renewed interest in the paradigm, mainly because of work by Angluin and Wright. Around the same time, Arikawa and his co-workers refined the paradigm by applying it to so-called Elementary Formal Systems. By making use of this approach, Takeshi Shinohara was able to come up with an impressive result: any class of context-sensitive grammars with a bound on its number of rules is learnable.
Some linguistically motivated work on learnability also appeared from this point on, most notably Wexler & Culicover 1980 and Kanazawa 1994. The latter investigates the learnability of various classes of categorial grammar, inspired by work by Buszkowski and Penn, and raises some interesting questions.
We follow up on this work by exploring complexity issues relevant to learning these classes, answering an open question from Kanazawa 1994, and applying the same kind of approach to obtain (non)learnable classes of Combinatory Categorial Grammars, Tree Adjoining Grammars, Minimalist grammars, Generalized Quantifiers, and some variants of Lambek Grammars. We also discuss work on learning tree languages and its application to learning Dependency Grammars.
Our main conclusions are:
- formal learning theory is relevant to linguistics,
- identification in the limit is feasible for non-trivial classes,
- the `Shinohara approach' (i.e., placing a numerical bound on the complexity of a grammar) can lead to a learnable class, but this completely depends on the specific nature of the formalism and the notion of complexity. We give examples of natural classes of commonly used linguistic formalisms that resist this kind of approach,
- learning is hard work. Our results indicate that learning even `simple' classes of languages requires a lot of computational effort,
- dealing with structure (derivation-, dependency-) languages instead of string languages offers a useful and promising approach to learnability in a linguistic context.
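For readers unfamiliar with the paradigm, a minimal sketch of identification in the limit using the textbook positive case: the class of all finite languages is identified by the learner that always conjectures exactly the set of strings seen so far. The learner and the presentation below are illustrative and are not among the classes studied in this work.

    def finite_language_learner(text):
        """Output a hypothesis (a finite set of strings) after each datum of the text."""
        seen = set()
        for datum in text:
            seen.add(datum)
            yield frozenset(seen)

    target = {"a", "ab", "abb"}
    text = ["ab", "a", "ab", "abb", "a", "abb"]       # a presentation that eventually shows every string

    hypotheses = list(finite_language_learner(text))
    # After the fourth datum every string of the target has appeared, and the
    # hypothesis never changes again: the learner has converged on the target.
    print(hypotheses[-1] == frozenset(target))        # True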
Spurious ambiguity and focalization
Spurious ambiguity is the phenomenon whereby distinct derivations in grammar may assign the same structural reading, resulting in redundancy in the parse search space and inefficiency in parsing. Understanding the problem depends on identifying the essential mathematical structure of derivations. This is trivial in the case of context free grammar, where the parse structures are ordered trees; in the case of type logical categorial grammar, the parse structures are proof nets. However, with respect to multiplicatives, intrinsic proof nets have not yet been given for displacement calculus, and proof nets for additives, which have applications to polymorphism, are not easy to characterize. In this context we approach here multiplicative-additive spurious ambiguity by means of the proof-theoretic technique of focalization.
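To make the phenomenon concrete, the toy example below gives two distinct derivations of the same sentence, one by plain application and one by type raising followed by forward composition in CCG style, which nevertheless yield the same reading. The combinators and lexicon are assumptions made for illustration; the paper's own setting (displacement calculus, proof nets, focalization) is not modelled here.

    john  = "john"
    mary  = "mary"
    likes = lambda y: lambda x: f"likes({x},{y})"     # (np\s)/np

    # Derivation 1: plain application, bracketed as (john (likes mary)).
    reading1 = likes(mary)(john)

    # Derivation 2: type-raise the subject, compose it with the verb, then apply.
    john_raised = lambda p: p(john)                   # np  =>  s/(np\s)
    subj_verb   = lambda y: john_raised(likes(y))     # forward composition: s/np
    reading2    = subj_verb(mary)

    print(reading1 == reading2)                       # True: distinct derivations, same reading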