
    Proof-checking mathematical texts in controlled natural language

    The research conducted for this thesis has been guided by the vision of a computer program that could check the correctness of mathematical proofs written in the language found in mathematical textbooks. Given that reliable processing of unrestricted natural language input is beyond the reach of current technology, we focused on the attainable goal of using a controlled natural language (a subset of a natural language defined through a formal grammar) as the input language to such a program. We have developed a prototype of such a computer program, the Naproche system. This thesis centers on the novel logical and linguistic theory needed to define and motivate the controlled natural language and the proof-checking algorithm of the Naproche system. This theory provides means for bridging the wide gap between natural and formal mathematical proofs. We explain how our system makes use of and extends existing linguistic formalisms in order to analyse the peculiarities of the language of mathematics. In this regard, we describe a phenomenon of this language not previously described by logicians or linguists: implicit dynamic function introduction, exemplified by constructs of the form "for every x there is an f(x) such that ...". We show how this function introduction can lead to a paradox analogous to Russell's paradox. To tackle this problem, we developed a novel foundational theory of functions called Ackermann-like Function Theory, which is equiconsistent with ZFC (Zermelo-Fraenkel set theory with the Axiom of Choice) and can be used to impose limitations on implicit dynamic function introduction in order to avoid this paradox. We give a formal account of implicit dynamic function introduction by extending Dynamic Predicate Logic, a formalism developed by linguists to account for the dynamic nature of natural language quantification, to a novel formalism called Higher-Order Dynamic Predicate Logic, whose semantics is based on Ackermann-like Function Theory. Higher-Order Dynamic Predicate Logic also includes a formal account of the linguistic theory of presuppositions, which we use to clarify and formally model the usage of potentially undefined terms (e.g. 1/x, which is undefined for x=0) and of definite descriptions (e.g. "the even prime number") in the language of mathematics. The semantics of the controlled natural language is defined through a translation from the controlled natural language into an extension of Higher-Order Dynamic Predicate Logic called Proof Text Logic. Proof Text Logic extends Higher-Order Dynamic Predicate Logic in two respects that make it suitable for representing the content of mathematical texts: it contains features for representing complete texts rather than single assertions, and instead of being based on Ackermann-like Function Theory, it is based on a richer foundational theory called Class-Map-Tuple-Number Theory, which has not only maps/functions but also classes/sets, tuples, numbers and Booleans as primitives. The proof-checking algorithm checks the deductive correctness of proof texts written in the controlled natural language of the Naproche system. Since the semantics of the controlled natural language is defined through a translation into the Proof Text Logic formalism, the proof-checking algorithm is defined on Proof Text Logic input. The algorithm makes use of automated theorem provers to check the correctness of single proof steps.
In this way, the proof steps in the input text need not be as fine-grained as in formal proof calculi, but may contain several reasoning steps at once, just as is usual in natural mathematical texts. The proof-checking algorithm has to recognize implicit dynamic function introductions in the input text and has to handle the presuppositions of mathematical statements according to the principles of the formal account of presuppositions mentioned above. We prove two soundness and two completeness theorems for the proof-checking algorithm: in each case, one theorem compares the algorithm to the semantics of Proof Text Logic and one theorem compares it to the semantics of standard first-order predicate logic. As a case study for the theory developed in the thesis, we illustrate the working of the Naproche system on a controlled natural language adaptation of the beginning of Edmund Landau's Grundlagen der Analysis.
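    The Russell-style paradox mentioned above can be made concrete with a standard diagonal construction. The following is a minimal sketch in first-order notation, not the thesis's own formulation:

        % Hypothetical illustration of unrestricted implicit dynamic
        % function introduction (read: "for every g there is an f(g)
        % such that f(g) differs from g(g)"):
        \[
        \forall g \;\exists\, f(g) \;\text{such that}\; f(g) \neq g(g)
        \]
        % Instantiating g with f itself yields f(f) \neq f(f), a
        % contradiction; Ackermann-like Function Theory restricts such
        % introductions precisely to block constructions of this kind.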

    RuleCNL: A Controlled Natural Language for Business Rule Specifications

    Business rules represent the primary means by which companies define their business and perform their actions in order to reach their objectives. Thus, they need to be expressed unambiguously, to avoid inconsistencies between business stakeholders, and formally, in order to be machine-processed. A promising solution is the use of a controlled natural language (CNL), which is a good mediator between natural and formal languages. This paper presents RuleCNL, a CNL for defining business rules. Its core feature is the alignment of the business rule definition with the business vocabulary, which ensures traceability and consistency with the business domain. The RuleCNL tool provides editors that assist end-users in the writing process and automatic mappings into the Semantics of Business Vocabulary and Business Rules (SBVR) standard. SBVR is grounded in first-order logic and includes constructs called semantic formulations that structure the meaning of rules.
    Comment: 12 pages, 7 figures, Fourth Workshop on Controlled Natural Language (CNL 2014) Proceedings
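    Since SBVR grounds rules in first-order logic, a rule's semantic formulation can be pictured as a quantified formula with a modal operator. A minimal sketch with an invented rule, not an example taken from the paper:

        % Hypothetical business rule: "It is obligatory that each rental car
        % has exactly one registration number."
        \[
        \mathbf{O}\,\forall x\,\bigl(\mathit{RentalCar}(x) \rightarrow
        \exists!\, y\;\mathit{hasRegistrationNumber}(x, y)\bigr)
        \]
        % "It is obligatory that" becomes a deontic operator O; "exactly one"
        % becomes the unique-existence quantifier.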

    Executable First-Order Queries in the Logic of Information Flows


    HYPE with stochastic events

    The process algebra HYPE was recently proposed as a fine-grained modelling approach for capturing the behaviour of hybrid systems. In the original proposal, each flow or influence affecting a variable is modelled separately, and the overall behaviour of the system then emerges as the composition of these flows. The discrete behaviour of the system is captured by instantaneous actions, which may be urgent, taking effect as soon as some activation condition is satisfied, or non-urgent, meaning that they can tolerate some (unknown) delay before happening. In this paper we refine the notion of non-urgent actions, making such actions governed by a probability distribution. As a consequence, we now give HYPE a semantics in terms of Transition-Driven Stochastic Hybrid Automata, which are a subset of a general class of stochastic processes termed Piecewise Deterministic Markov Processes.
    Comment: In Proceedings QAPL 2011, arXiv:1107.074
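    To picture the refinement, consider a single continuous variable governed by one flow, with a non-urgent event whose delay is now drawn from a distribution. A minimal sketch with assumed notation, not taken from the paper:

        % Between events, X evolves deterministically under its flow; an
        % urgent event fires as soon as its activation condition holds
        % (e.g. X >= c), while a non-urgent event fires after a random
        % delay T, here exponentially distributed:
        \[
        \dot{X}(t) = r, \qquad T \sim \mathrm{Exp}(\lambda), \qquad
        X(T^{+}) = \mathrm{reset}\bigl(X(T^{-})\bigr)
        \]
        % Alternating deterministic flow with such random jumps is the
        % Piecewise Deterministic Markov Process pattern mentioned above.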

    Aiming for Cognitive Equivalence – Mental Models as a Tertium Comparationis for Translation and Empirical Semantics

    This paper introduces my concept of cognitive equivalence (cf. Mandelblit, 1997), an attempt to reconcile elements of Nida’s dynamic equivalence with recent innovations in cognitive linguistics and cognitive psychology, building on the current focus on translators’ mental processes in translation studies (see e.g. Göpferich et al., 2009; Lewandowska-Tomaszczyk, 2010; Halverson, 2014). My approach shares its general impetus with Lewandowska-Tomaszczyk’s concept of re-conceptualization, but is independently derived from findings in cognitive linguistics and simulation theory (see e.g. Langacker, 2008; Feldman, 2006; Barsalou, 1999; Zwaan, 2004). Against this background, I propose a model of translation processing focused on the internal simulation of reader reception and the calibration of these simulations to achieve similarity between source-text (ST) and target-text (TT) impact. The concept of cognitive equivalence is then tested by exploring a conceptual/lexical field (MALE BALDNESS) through the way that English, German and Japanese lexical items in this field are linked to matching visual-conceptual representations by native-speaker informants. The visual data gathered via this empirical method can be used to triangulate the linguistic items involved, enabling an extra-linguistic comparison across languages. Results show that there is a reassuring level of inter-informant agreement within languages, but that the conceptual domain of BALDNESS is linguistically structured in systematically different ways across languages. The findings are interpreted as strengthening the call for a cognition-focused, embodied approach to translation.
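    The inter-informant agreement reported above can be quantified in several ways; the paper does not specify its measure here, so the following is only a minimal sketch with hypothetical data, computing mean pairwise percent agreement over informants' image choices:

        from itertools import combinations

        # Hypothetical data layout: informant -> {lexical item: chosen image id}
        choices = {
            "informant_1": {"bald": "img_3", "balding": "img_2"},
            "informant_2": {"bald": "img_3", "balding": "img_2"},
            "informant_3": {"bald": "img_3", "balding": "img_1"},
        }

        def pairwise_agreement(data):
            """Mean fraction of shared items on which a pair of informants agrees."""
            scores = []
            for a, b in combinations(data.values(), 2):
                shared = set(a) & set(b)
                scores.append(sum(a[item] == b[item] for item in shared) / len(shared))
            return sum(scores) / len(scores)

        print(f"mean pairwise agreement: {pairwise_agreement(choices):.2f}")  # 0.67 here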

    A Formal Model of Semantic Web Service Ontology (WSMO) Execution

    Semantic Web Services have been one of the most significant research areas within the Semantic Web vision, and have been recognized as a promising technology with huge commercial potential. Current Semantic Web Service research focuses on defining models and languages for the semantic markup of all relevant aspects of services, which are accessible through a Web service interface. The Web Service Modelling Ontology (WSMO) is one of the most significant Semantic Web Service frameworks proposed to date. To support standardization of and tooling for WSMO, a formal semantics of the language is highly desirable. As there are a few variants of WSMO and it is still under development, its semantics needs to be formally defined to facilitate easy reuse and future development. In this paper, we present a formal Object-Z semantics of WSMO. Different aspects of the language have been precisely defined within one unified framework. This model provides a formal, unambiguous specification, which can be used to develop tools and facilitate future development.
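    To give a flavour of the approach, the following is a minimal Z-style schema sketch in the zed LaTeX conventions; the names are illustrative assumptions, not the paper's actual Object-Z model of WSMO:

        % A WSMO-style service description pairing a capability with a
        % non-empty set of interfaces (illustrative only):
        \begin{schema}{WebService}
          capability : Capability \\
          interfaces : \power Interface
        \where
          interfaces \neq \emptyset
        \end{schema}
        % Object-Z extends such schemas with class structure, so each
        % language construct can be modelled as a class with state and
        % operation schemas within one unified framework.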

    Entanglement as a semantic resource

    The characteristic holistic features of the quantum-theoretic formalism and the intriguing notion of entanglement can be applied to a field that is far from microphysics: logical semantics. Quantum computational logics are new forms of quantum logic that have been suggested by the theory of quantum logical gates in quantum computation. In the standard semantics of these logics, sentences denote quantum information quantities: systems of qubits (quregisters) or, more generally, mixtures of quregisters (qumixes), while logical connectives are interpreted as special quantum logical gates (which have a characteristically reversible and dynamic behavior). In this framework, states of knowledge may be entangled in such a way that our information about the whole determines our information about the parts; and this procedure cannot, in general, be inverted. In spite of its appealing properties, the standard version of quantum computational semantics is strongly "Hilbert-space dependent". This certainly represents a shortcoming for all applications where real and complex numbers do not generally play any significant role (as happens, for instance, in the case of natural and artistic languages). We propose an abstract version of quantum computational semantics, where abstract qumixes, quregisters and registers are identified with certain special objects (not necessarily living in a Hilbert space), while gates are reversible functions that transform qumixes into qumixes. In this framework, one can give an abstract definition of the notions of superposition and of entangled pieces of information, quite independently of any numerical values. We investigate three different forms of abstract holistic quantum computational logic.
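    The holistic behaviour described above has a standard Hilbert-space illustration (the paper's contribution is precisely to abstract away from this setting):

        % For the entangled two-qubit Bell state
        \[
        \lvert \Phi^{+} \rangle = \tfrac{1}{\sqrt{2}}\bigl(\lvert 00 \rangle + \lvert 11 \rangle\bigr),
        \]
        % the whole system is in a pure state, yet each single qubit is
        % described by the maximally mixed reduced state obtained by
        % partial trace:
        \[
        \rho_{A} = \rho_{B} = \tfrac{1}{2} I .
        \]
        % The global state determines the parts, but the two mixed marginals
        % do not determine the global state: the procedure cannot be inverted.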