A Tableau Calculus for Pronoun Resolution
We present a tableau calculus for reasoning in fragments of natural language.
We focus on the problem of pronoun resolution and the way in which it
complicates automated theorem proving for natural language processing. A method
for explicitly manipulating contextual information during deduction is
proposed, where pronouns are resolved against this context during deduction. As
a result, pronoun resolution and deduction can be interleaved in such a way
that pronouns are only resolved if this is licensed by a deduction rule; this
helps us to avoid the combinatorial complexity of total pronoun disambiguation.
Sloppy Identity
Although sloppy interpretation is usually accounted for by theories of
ellipsis, it often arises in non-elliptical contexts. In this paper, a theory
of sloppy interpretation is provided which captures this fact. The underlying
idea is that sloppy interpretation results from a semantic constraint on
parallel structures and the theory is shown to predict sloppy readings for
deaccented and paycheck sentences as well as relational-, event-, and
one-anaphora. It is further shown to capture the interaction of sloppy/strict
ambiguity with quantification and binding.
Quine's interpretation problem and the early development of possible worlds semantics
In this paper, I shall consider the challenge that Quine posed in 1947 to the advocates of quantified modal logic: to provide an explanation, or interpretation, of modal notions that is intuitively clear, allows "quantifying in", and does not presuppose mysterious intensional entities. The modal concepts that Quine and his contemporaries, e.g. Carnap and Ruth Barcan Marcus, were primarily concerned with in the 1940s were the notions of (broadly) logical, or analytical, necessity and possibility, rather than the metaphysical modalities that have since become popular, largely due to the influence of Kripke. In the 1950s, modal logicians responded to Quine's challenge by providing quantified modal logic with model-theoretic semantics of various types. In doing so they also, explicitly or implicitly, addressed Quine's interpretation problem. Here I shall consider the approaches developed by Carnap in the late 1940s, and by Kanger, Hintikka, Montague, and Kripke in the 1950s, and discuss to what extent these approaches were successful in meeting Quine's doubts about the intelligibility of quantified modal logic.
Focused labeled proof systems for modal logic
Focused proofs are sequent calculus proofs that group inference rules into alternating positive and negative phases. These phases can then be used to define macro-level inference rules from Gentzen's original, tiny introduction and structural rules. We show here that the inference rules of labeled proof systems for modal logics can similarly be described as pairs of such phases within the LKF focused proof system for first-order classical logic. We consider Negri's system G3K for the modal logic K, define a translation from labeled modal formulas into first-order polarized formulas, and show a strict correspondence between derivations in the two systems, i.e., each rule application in G3K corresponds to a bipole—a pair of a positive and a negative phase—in LKF. Since geometric axioms (when properly polarized) induce bipoles, this strong correspondence holds for all modal logics whose Kripke frames are characterized by geometric properties. We extend these results to present a focused labeled proof system for this same class of modal logics and show its soundness and completeness. The resulting proof system allows one to define a rich set of normal forms of modal logic proofs.
Large Linguistic Models: Analyzing theoretical linguistic abilities of LLMs
The performance of large language models (LLMs) has recently improved to the
point where the models can perform well on many language tasks. We show here
that for the first time, the models can also generate coherent and valid formal
analyses of linguistic data and illustrate the vast potential of large language
models for analyses of their metalinguistic abilities. LLMs are primarily
trained on language data in the form of text; analyzing and evaluating their
metalinguistic abilities improves our understanding of their general
capabilities and sheds new light on theoretical models in linguistics. In this
paper, we probe into GPT-4's metalinguistic capabilities by focusing on three
subfields of formal linguistics: syntax, phonology, and semantics. We outline a
research program for metalinguistic analyses of large language models, propose
experimental designs, provide general guidelines, discuss limitations, and
offer future directions for this line of research. This line of inquiry also
exemplifies behavioral interpretability of deep learning, where models'
capabilities are probed through explicit prompting rather than by inspecting
their internal representations.
Solving Smullyan puzzles with formal systems