
    Towards an Intelligent Tutor for Mathematical Proofs

    Computer-supported learning is an increasingly important form of study since it allows for independent learning and individualized instruction. In this paper, we discuss a novel approach to developing an intelligent tutoring system for teaching textbook-style mathematical proofs. We characterize the particularities of the domain and discuss common ITS design models. Our approach is motivated by phenomena found in a corpus of tutorial dialogs that were collected in a Wizard-of-Oz experiment. We show how an intelligent tutor for textbook-style mathematical proofs can be built on top of an adapted assertion-level proof assistant by reusing representations and proof search strategies originally developed for automated and interactive theorem proving. The resulting prototype was successfully evaluated on a corpus of tutorial dialogs and yields good results. (Comment: In Proceedings THedu'11, arXiv:1202.453)
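    To make "assertion-level" reasoning concrete, here is a minimal Lean sketch of the kind of textbook exercise such a tutor targets; the example is my own illustration, not drawn from the paper or its corpus. Each step applies a hypothesis or definition directly, rather than low-level calculus rules.

```lean
import Mathlib.Data.Set.Basic

-- Hypothetical textbook-style exercise (not from the paper's corpus):
-- every step cites a definition or hypothesis directly, which is the
-- assertion-level granularity at which such a tutor operates.
theorem subset_trans_example {α : Type} {A B C : Set α}
    (hAB : A ⊆ B) (hBC : B ⊆ C) : A ⊆ C := by
  intro x hxA          -- let x be an arbitrary element of A
  exact hBC (hAB hxA)  -- x ∈ B by hAB, hence x ∈ C by hBC
```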

    A Computer-Assisted Proof of the Bellman-Ford Lemma


    Models of Philosophical Thought Experimentation

    The practice of thought experimentation plays a central role in contemporary philosophical methodology. Many philosophers rely on thought experimentation as their primary and even sole procedure for testing theories about the natures of properties and relations. This test procedure involves entertaining hypothetical cases in imaginative thought and then undergoing intuitions about the distribution of properties and relations in them. A theory’s comporting with an intuition is treated as evidence in favour of it, but a clash is treated as evidence against the theory and may even be regarded as falsifying it. The epistemic power of thought experimentation is mysterious. How can experiments carried out within the mind enable us to discover truths about the natures of properties and relations like knowledge, causation, personal identity, reference, meaning, consciousness, beauty, justice, morality, and free will? This epistemological challenge is urgent, but a model of philosophical thought experimentation would seem to be a necessary propaedeutic to any serious discussion of it. An adequate model would make the relevant test procedure explicit, thereby assisting in the identification of points of potential epistemic vulnerability. In this monograph I advance the propaedeutical model-building work already done by Timothy Williamson, Anna-Sara Malmgren, and Jonathan Ichikawa and Benjamin Jarvis. Following the lead of these philosophers, I focus on a single Gettier-style thought experiment and the problem of identifying the real content of the Gettier intuition. My first contribution is to establish the inadequacy of all of the existing models. Each of them, I argue, fails to solve the content problem. It emerges from my discussion, however, that Ichikawa and Jarvis’s truth-in-fiction approach holds out the prospect of a solution. My second contribution is to develop and defend a new way of implementing the general idea behind the truth-in-fiction approach. The model I put forward does a better overall job of modelling Gettier-style thought experiments than any of the existing models. It has none of the defects that render those models inadequate, and I am unable to find any major defects peculiar to it. This should make us feel confident that my model is adequate. Moreover, since the Gettier-style thought experiment I focus on is paradigmatic, we should also feel confident that my model will generalise naturally to other philosophical thought experiments.

    External allomorphy and lexical representations

    Many cases of allomorphic alternation are restricted to specific lexical items but at the same time show a regular phonological distribution. Standard approaches cannot deal with these cases because they must either resort to diacritic features or list regular phonological contexts as idiosyncratic. These problems can be overcome if we assume that allomorphs are lexically organized as a partially ordered set. If no ordering is established, allomorphic choice is determined by the phonology, in particular by the emergence of the unmarked (TETU). In other cases, TETU effects are insufficient, and lexical ordering determines the preference for dominant allomorphs.
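    As a rough illustration of that proposal, the sketch below (my own, with invented names and a toy markedness score; not from the paper) encodes the allomorphs of one lexical item as a partially ordered set: where an ordering is defined, the dominant allomorph wins; where it is not, a markedness score standing in for the TETU constraints decides.

```python
# Hypothetical sketch of lexically ordered allomorph selection; the
# function names and the toy markedness score are my own inventions.

def select_allomorph(allomorphs, order, markedness, context):
    """allomorphs: candidate forms of one lexical item.
    order: set of (dominant, recessive) pairs, a partial order.
    markedness: scores a form in a context (lower = less marked);
    it stands in for emergence-of-the-unmarked (TETU) effects."""
    # Lexical ordering applies first: discard any dominated candidate.
    undominated = [a for a in allomorphs
                   if not any((b, a) in order for b in allomorphs)]
    # Among the remaining unordered candidates, the phonology decides.
    return min(undominated, key=lambda a: markedness(a, context))

# Toy usage: two unordered allomorphs; the score penalises a
# vowel-vowel sequence across the word boundary (hiatus avoidance).
vowels = set("aeiou")
score = lambda form, ctx: int(form[-1] in vowels and ctx[0] in vowels)
print(select_allomorph(["ta", "tal"], set(), score, "apa"))   # -> tal
print(select_allomorph(["ta", "tal"], set(), score, "kapa"))  # -> ta
```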

    Students' language in computer-assisted tutoring of mathematical proofs

    Truth and proof are central to mathematics. Proving (or disproving) seemingly simple statements often turns out to be one of the hardest mathematical tasks. Yet, doing proofs is rarely taught in the classroom. Studies on cognitive difficulties in learning to do proofs have shown that pupils and students not only often do not understand or cannot apply basic formal reasoning techniques and do not know how to use formal mathematical language, but, at a far more fundamental level, they also do not understand what it means to prove a statement or even do not see the purpose of proof at all. Since insight into the importance of proof and doing proofs as such cannot be learnt other than by practice, learning support through individualised tutoring is in demand. This volume presents a part of an interdisciplinary project, set at the intersection of pedagogical science, artificial intelligence, and (computational) linguistics, which investigated issues involved in providing computer-based tutoring of mathematical proofs through dialogue in natural language. The ultimate goal in this context, addressing the above-mentioned need for learning support, is to build intelligent automated tutoring systems for mathematical proofs. The research presented here has focused on the language that students use while interacting with such a system: its linguistic properties and computational modelling. Contributions are made at three levels: first, an analysis of language phenomena found in students' input to a (simulated) proof tutoring system is conducted and the variety of students' verbalisations is quantitatively assessed; second, a general computational processing strategy for informal mathematical language and methods of modelling prominent language phenomena are proposed; and third, the prospects for natural language as an input modality for proof tutoring systems are evaluated based on the collected corpora.

    On Compositional Information Flow Aware Refinement

    The concepts of information flow security and refinement are known to have had a troubled relationship ever since the seminal work of McLean. In this work we study refinements that support changes in data representation and semantics, including the addition of state variables that may induce new observational power or side channels. We propose a new epistemic approach to ignorance-preserving refinement where an abstract model is used as a specification of a system’s permitted information flows, which may include the declassification of secret information. The core idea is to require that refinement steps must not induce observer knowledge that is not already available in the abstract model. Our study is set in the context of a class of shared variable multi-agent models similar to interpreted systems in epistemic logic. We demonstrate the expressiveness of our framework through a series of small examples and compare our approach to existing, stricter notions of information-flow secure refinement based on bisimulations and noninterference preservation. Interestingly, noninterference preservation is not supported “out of the box” in our setting, because refinement steps may introduce new secrets that are independent of secrets already present at the abstract level. To support verification, we first introduce a “cube-shaped” unwinding condition related to conditions recently studied in the context of value-dependent noninterference, kernel verification, and secure compilation. A fundamental problem with ignorance-preserving refinement, caused by the support for general data and observation refinement, is that sequential composability is lost. We propose a solution based on relational pre- and post-conditions and illustrate its use together with unwinding on the oblivious RAM construction of Chung and Pass.
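    The core requirement can be pictured with a small toy. The following Python sketch (my own illustration for deterministic, single-observation systems; the paper's multi-agent, stateful models are far richer) checks the central condition: a refined system must not let the observer rule out secrets that were still possible in the abstract model.

```python
# Toy ignorance-preservation check; helper names are my own inventions.

def possible_secrets(observe, secrets):
    """For each secret, the set of secrets the observer still considers
    possible: those that yield the same observation."""
    return {s: {t for t in secrets if observe(t) == observe(s)}
            for s in secrets}

def ignorance_preserving(abstract, concrete, secrets):
    k_abs = possible_secrets(abstract, secrets)
    k_con = possible_secrets(concrete, secrets)
    # The refinement may not shrink any uncertainty set: the observer
    # must remain at least as ignorant as in the abstract model.
    return all(k_abs[s] <= k_con[s] for s in secrets)

secrets = range(4)
parity  = lambda s: s % 2   # abstract spec: only parity is declassified
leaky   = lambda s: s       # a refinement introducing a side channel
print(ignorance_preserving(parity, parity, secrets))  # True
print(ignorance_preserving(parity, leaky, secrets))   # False
```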

    Constraining lexical phonology: evidence from English vowels

    Standard Generative Phonology is inadequate in at least three respects: it is unable to curtail the abstractness of underlying forms and the complexity of derivations in any principled way; the assumption that related dialects share an identical system of underlying representations leads to an inadequate account of dialect variation; and no coherent model for the incorporation of sound changes into the synchronic grammar is proposed. The purpose of this thesis is to demonstrate that a well-constrained model of Lexical Phonology, which is a generative, derivational successor of the Standard Generative model, need not suffer from these inadequacies. Chapter 1 provides an outline of the development and characteristics of Lexical Phonology and Morphology. In Chapters 2 and 3, the model of Lexical Phonology proposed for English by Halle and Mohanan (1985) is revised: the lexical phonology is limited to two levels; substantially more concrete underlying vowel systems are proposed for RP and General American; and radically revised formulations of certain modern English phonological rules, including the Vowel Shift Rule and j-Insertion, are suggested. These constrained analyses and rules are found to be consistent with internal data, and with external evidence from a number of sources, including dialect differences, diachrony, speech errors and psycholinguistic experiments. In Chapters 4-6, a third reference accent, Scottish Standard English, is introduced. In Chapter 4, the diachronic development and synchronic characteristics of this accent, and the related Scots dialects, are outlined. Chapters 5 and 6 provide a synchronic and diachronic account of the Scottish Vowel Length Rule (SVLR). I argue that SVLR represents a Scots-specific phonologisation of part of a pan-dialectal postlexical lengthening rule, which remains productive in all varieties of English; SVLR itself has acquired certain properties of a lexical rule and has been relocated into the lexicon. In becoming lexical, SVLR has neutralised the long/short distinction for Scots vowels, so that synchronically, the underlying vowel system of Scots/SSE is organised differently from that of other varieties of English. It is established that a constrained lexicalist model necessitates the recognition of underlying dialect variation; demonstrates a connection of lexical and postlexical rules with two distinct types of sound change; gives an illuminating account of the transition of sound changes to synchronic phonological rules; and permits the characterisation of dialect and language variation as a continuum.

    Assertion level proof planning with compiled strategies

    This book presents new techniques that allow the automatic verification and generation of abstract human-style proofs. The core of this approach is an efficient calculus that works directly by applying definitions, theorems, and axioms, which reduces the size of the underlying proof object by a factor of ten. The calculus is extended by the deep inference paradigm, which allows the application of inference rules at arbitrary depth inside logical expressions and provides new proofs that are exponentially shorter and not available in the sequent calculus without cut. In addition, a strategy language for abstract underspecified declarative proof patterns is developed. Together, the complementary methods provide a framework to automate declarative proofs. The benefits of the techniques are illustrated by practical applications.

    This work aims to simplify the formalization of proofs by developing methods to verify and generate informal proofs formally. To this end, an abstract calculus is developed that works directly at the assertion level, which comes comparatively close to human-written proofs. A case study shows that abstract reasoning at the assertion level is advantageous for automated search procedures. In addition, a strategy language is developed that makes it possible to specify underspecified proof patterns within the proof document and to refine proof sketches automatically. Case studies show that complex proof patterns can be specified compactly in the strategy language developed. Together, the complementary methods form a framework for automating declarative proofs at the assertion level, which until now had to be developed largely by hand.
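    To illustrate the deep inference idea in isolation, here is a hypothetical Python toy (my own illustration, not the book's calculus): an inference rule may rewrite a subformula at any depth, whereas the sequent calculus only decomposes the top-level connective.

```python
# Toy deep rewriting on formulas encoded as nested tuples; the encoding
# and the example rule are my own inventions, not the book's calculus.

def rewrite_deep(formula, rule):
    """Apply `rule` once, at the first position (top-down) where it
    fires; `rule` returns a replacement formula or None."""
    replaced = rule(formula)
    if replaced is not None:
        return replaced
    if isinstance(formula, tuple):
        op, *args = formula
        for i, arg in enumerate(args):
            new = rewrite_deep(arg, rule)
            if new != arg:
                return (op, *args[:i], new, *args[i + 1:])
    return formula

# Example rule: contract A ∧ A to A, applied deep inside a disjunction.
def idempotence(f):
    if isinstance(f, tuple) and f[0] == 'and' and f[1] == f[2]:
        return f[1]
    return None

print(rewrite_deep(('or', 'p', ('and', 'q', 'q')), idempotence))
# -> ('or', 'p', 'q')
```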