
    Using edit distance to analyse errors in a natural language to logic translation corpus

    We have assembled a large corpus of student submissions to an automatic grading system, where the subject matter involves the translation of natural language sentences into propositional logic. Of the 2.3 million translation instances in the corpus, 286,000 (approximately 12%) are categorized as being in error. We want to understand the nature of the errors that students make, so that we can develop tools and supporting infrastructure that help students with the problems that these errors represent. With this aim in mind, this paper describes an analysis of a significant proportion of the data, using edit distance between incorrect answers and their corresponding correct solutions, and the associated edit sequences, as a means of organising the data and detecting categories of errors. We demonstrate that a large proportion of errors can be accounted for by means of a small number of relatively simple error types, and that the method draws attention to interesting phenomena in the data set.
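    The edit-distance analysis described above can be sketched as follows. This is a generic Levenshtein distance with a backtrace to recover the edit sequence, applied to a hypothetical incorrect/correct formula pair; it is not the paper's actual tooling, and the example formulas are invented.

```python
def edit_ops(src, tgt):
    """Return (distance, edit sequence) between two strings."""
    m, n = len(src), len(tgt)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if src[i - 1] == tgt[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # insert
                          d[i - 1][j - 1] + cost)  # substitute / keep
    # backtrace from the bottom-right corner to recover one optimal edit sequence
    ops, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (src[i - 1] != tgt[j - 1]):
            if src[i - 1] != tgt[j - 1]:
                ops.append(("sub", src[i - 1], tgt[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(("del", src[i - 1]))
            i -= 1
        else:
            ops.append(("ins", tgt[j - 1]))
            j -= 1
    return d[m][n], list(reversed(ops))

# e.g. a student writing "p & q" where "p | q" was expected:
dist, ops = edit_ops("p & q", "p | q")
# dist == 1; the single substitution "&" -> "|" suggests a connective-choice error
```

    Grouping incorrect answers by such edit sequences is one way short sequences like a single connective substitution could surface as the frequent, simple error types the paper reports.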

    A Review of integrity constraint maintenance and view updating techniques

    Two interrelated problems may arise when updating a database. On one hand, when an update is applied to the database, integrity constraints may become violated. In such a case, the integrity constraint maintenance approach tries to obtain additional updates to keep the integrity constraints satisfied. On the other hand, when updates of derived or view facts are requested, a view updating mechanism must be applied to translate the update request into correct updates of the underlying base facts. This survey reviews the research performed on integrity constraint maintenance and view updating. We propose a general framework to classify and compare methods that tackle integrity constraint maintenance and/or view updating. We then analyze some of these methods in more detail to identify their actual contributions and the main limitations they may present.
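    A toy illustration of the integrity constraint maintenance problem described above, with all relation names, values, and the repair policy invented for the sketch (a real method would choose among several possible compensating updates):

```python
# Base facts: employees and departmental budgets (all data invented).
emp = {"ann": ("it", 40), "bob": ("it", 50)}   # name -> (dept, salary)
budget = {"it": 100}                           # dept -> budget

def dept_cost():
    """Derived (view) fact: total salary cost per department."""
    cost = {}
    for dept, salary in emp.values():
        cost[dept] = cost.get(dept, 0) + salary
    return cost

def ic_holds():
    """Integrity constraint: each department's cost fits its budget."""
    return all(cost <= budget[d] for d, cost in dept_cost().items())

def insert_emp(name, dept, salary):
    """Apply a base update; if the IC is violated, derive a compensating update."""
    emp[name] = (dept, salary)
    if not ic_holds():
        # One possible repair policy: enlarge the budget to cover the cost.
        budget[dept] = dept_cost()[dept]

insert_emp("carol", "it", 30)  # 40 + 50 + 30 = 120 exceeds 100, so the budget is repaired
```

    The choice of compensating update (here: raise the budget rather than reject the insertion or cut a salary) is exactly the kind of design decision on which the surveyed methods differ.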

    Extending SMTCoq, a Certified Checker for SMT (Extended Abstract)

    This extended abstract reports on the current progress of SMTCoq, a communication tool between the Coq proof assistant and external SAT and SMT solvers. Based on a checker for generic first-order certificates implemented and proved correct in Coq, SMTCoq offers facilities both to check external SAT and SMT answers and to improve Coq's automation using such solvers, in a safe way. Currently supporting the SAT solver zChaff, and the SMT solver veriT for the combination of the theories of congruence closure and linear integer arithmetic, SMTCoq is meant to be extendable with a reasonable amount of effort: we present work in progress to support the SMT solver CVC4 and the theory of bit vectors. (In Proceedings HaTT 2016, arXiv:1606.0542)

    Dimensions of Neural-symbolic Integration - A Structured Survey

    Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to strive for applicable implementations and use cases. Recent work has covered a great variety of logics used in artificial intelligence and provides a multitude of techniques for dealing with them within the context of artificial neural networks. We present a comprehensive survey of the field of neural-symbolic integration, including a new classification of systems according to their architectures and abilities.

    The Dimensions of Argumentative Texts and Their Assessment

    The definition and the assessment of the quality of argumentative texts have become an increasingly crucial issue in education, classroom discourse, and argumentation theory. The different methods developed and used in the literature are all characterized by specific perspectives that fail to capture the complexity of the subject matter, which remains ill-defined and not systematically investigated. This paper addresses this problem by building on the four main dimensions of argument quality resulting from the definition of argument and the literature in classroom discourse: dialogicity, accountability, relevance, and textuality (DART). We use and develop the insights from the literature in education and argumentation by integrating the frameworks that capture both the textual and the argumentative nature of argumentative texts. This theoretical background will be used to propose a method for translating the DART dimensions into specific and clear proxies and evaluation criteria.

    Rule learning of the Atomic dataset using Transformers

    Machine learning models are used for a multitude of tasks that require some type of reasoning. Language models are very capable of capturing patterns and regularities found in natural language, but their ability to perform logical reasoning has come under scrutiny. In contrast, systems for automated reasoning are well versed in logic-based reasoning, but require their input to be expressed as logical rules. The problem is that building such systems, and producing adequate rules for them, are time-consuming processes that few have the skill set to perform. We therefore investigate the Transformer architecture's ability to translate natural language sentences into logical rules. We perform neural machine translation experiments on the DKET dataset from the literature, which consists of definitory sentences, and we create a dataset of if-then statements from the Atomic knowledge bank using an algorithm of our own design, on which we also perform experiments.
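    As an illustration of the kind of conversion involved in building such a dataset (the thesis's actual algorithm is not reproduced here; the relation gloss and the sentence template below are assumptions):

```python
# Hypothetical glosses for Atomic-style relation tags; the real Atomic
# relation inventory is larger and the thesis may render them differently.
GLOSS = {
    "xIntent": "PersonX wanted",      # the subject's intention
    "xEffect": "as a result, PersonX will",  # an effect on the subject
}

def to_if_then(event, relation, tail):
    """Render an Atomic-style (event, relation, tail) triple as an if-then statement."""
    return f"If {event}, then {GLOSS[relation]} {tail}."

rule = to_if_then("PersonX pays PersonY a compliment", "xIntent", "to be nice")
# -> "If PersonX pays PersonY a compliment, then PersonX wanted to be nice."
```

    Pairs of such if-then sentences and their corresponding logical rules are the kind of parallel data a Transformer translation model would be trained on.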

    Arguments Whose Strength Depends on Continuous Variation

    Both the traditional Aristotelian and modern symbolic approaches to logic have seen logic in terms of discrete symbol processing. Yet there are several kinds of argument whose validity depends on some topological notion of continuous variation, which is not well captured by discrete symbols. Examples include extrapolation and slippery slope arguments, sorites, fuzzy logic, and those involving closeness of possible worlds. It is argued that the natural first attempts to analyze these notions and explain their relation to reasoning fail, so that ignorance of their nature is profound.
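    A minimal fuzzy-logic sketch of the sorites "heap" example mentioned above, with the grain thresholds invented for illustration:

```python
def heap_degree(grains, lo=10, hi=10_000):
    """Degree in [0, 1] to which a collection of `grains` grains counts as a heap.

    Below `lo` grains it is definitely not a heap, above `hi` it definitely is,
    and in between the truth degree rises linearly (a ramp membership function).
    """
    if grains <= lo:
        return 0.0
    if grains >= hi:
        return 1.0
    return (grains - lo) / (hi - lo)

# Removing one grain changes the truth degree only slightly; there is no sharp
# cutoff anywhere, which is exactly what discrete two-valued logic cannot express.
step = abs(heap_degree(5005) - heap_degree(5004))  # tiny, roughly 1/9990
```

    Whether such a ramp (or any precise membership function) faithfully captures vague predicates is one of the contested questions the paper discusses.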

    How much of commonsense and legal reasoning is formalizable? A review of conceptual obstacles

    Fifty years of effort in artificial intelligence (AI) and the formalization of legal reasoning have produced both successes and failures. Considerable success in organizing and displaying evidence and its interrelationships has been accompanied by failure to achieve the original ambition of AI as applied to law: fully automated legal decision-making. The obstacles to formalizing legal reasoning have proved to be the same ones that make the formalization of commonsense reasoning so difficult, and are most evident where legal reasoning has to meld with the vast web of ordinary human knowledge of the world. Underlying many of the problems is the mismatch between the discreteness of symbol manipulation and the continuous nature of imprecise natural language, of degrees of similarity and analogy, and of probabilities.