
    Improving QED-Tutrix by Automating the Generation of Proofs

    The idea of assisting teachers with technological tools is not new. Mathematics in general, and geometry in particular, pose interesting challenges when developing educational software, both on the education and the computer science side. QED-Tutrix is an intelligent tutor for geometry that offers an interface to help high school students solve demonstration problems. It focuses on specific goals: 1) to allow the student to freely explore the problem and its figure, 2) to accept proof elements in any order, 3) to handle a variety of proofs, which can be customized by the teacher, and 4) to be able to help the student at any step of solving the problem, if the need arises. The software also operates independently of teacher intervention. QED-Tutrix offers an interesting approach to geometry education, but it is currently held back by the lengthy process of implementing new problems, a task that must still be done manually. One of the main focuses of the QED-Tutrix research team is therefore to ease the implementation of new problems by automating the tedious step of finding all possible proofs for a given problem. This automation must respect fundamental constraints in order to create problems compatible with QED-Tutrix: 1) readability of the proofs, 2) accessibility at a high school level, and 3) the possibility for the teacher to modify the parameters defining the "acceptability" of a proof. In this paper we present the results of our preliminary exploration of possible avenues for this task. Automated theorem proving in geometry is a widely studied subject, and various provers exist. However, our constraints are quite specific, and some adaptation would be required to use an existing prover. We have therefore implemented a prototype automated prover to suit our needs. The future goal is to compare performance and usability in our specific use case between the existing provers and our implementation.
    Comment: In Proceedings ThEdu'17, arXiv:1803.0072
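    The paper itself does not include code, but the flavour of such a prototype prover can be suggested with a toy forward-chaining saturation loop. The sketch below is an illustration only, under the assumption that facts are ground statements and rules are premise/conclusion pairs; the predicate names are hypothetical and do not come from QED-Tutrix.

```python
# Toy forward-chaining prover: facts are ground strings, rules map a set of
# premises to one conclusion. All predicate names below are hypothetical.

RULES = [
    # A heavily simplified, propositional stand-in for the SAS congruence rule.
    ({"cong(AB,DE)", "cong(AC,DF)", "cong_angle(A,D)"}, "cong_tri(ABC,DEF)"),
    # Congruent triangles have congruent corresponding sides.
    ({"cong_tri(ABC,DEF)"}, "cong(BC,EF)"),
]

def saturate(givens, rules):
    """Apply rules until no new fact appears; keep a trace of how each fact was derived."""
    facts, trace = set(givens), {}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace[conclusion] = premises
                changed = True
    return facts, trace

def explain(goal, trace):
    """Unwind the trace into an ordered, human-readable proof of `goal`."""
    if goal not in trace:
        return [f"{goal}  (given)"]
    steps = []
    for premise in sorted(trace[goal]):
        steps.extend(explain(premise, trace))
    steps.append(f"{goal}  (from {', '.join(sorted(trace[goal]))})")
    return steps

if __name__ == "__main__":
    givens = {"cong(AB,DE)", "cong(AC,DF)", "cong_angle(A,D)"}
    _, trace = saturate(givens, RULES)
    print("\n".join(explain("cong(BC,EF)", trace)))
```

    Recording which premises produced each derived fact is what allows a readable, step-by-step proof to be printed afterwards, which is in the spirit of the readability constraint mentioned above.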

    The transaction pattern through automating TrAM

    Transaction Agent Modelling (TrAM) has demonstrated how the early requirements of complex enterprise systems can be captured and described in a lucid yet rigorous way. Using Geerts and McCarthy’s REA (Resources-Events-Agents) model as its basis, the TrAM process captures the ‘qualitative’ dimensions of business transactions and business processes. A key part of the process is automated model-checking, which CG has shown to be beneficial in this regard. It enables models to retain high-level business concepts while providing a formal structure at that high level that is lacking in use cases. Using a conceptual catalogue informed by transactions, we illustrate the automation of a transaction pattern from which further specialisations yield a tested specification for system implementation, which we envisage as a multi-agent system in order to reflect the dynamic world of business activity. Such a system would furthermore be able to interoperate across business domains, as they would share the generalised TM as a pattern.
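    To make the REA basis concrete, here is a minimal data-structure rendering of the transaction pattern. It is a generic Python sketch under our own naming, not TrAM or CG notation; the duality check is the only constraint modelled.

```python
# A toy rendering of the REA (Resources-Events-Agents) transaction pattern
# as plain data structures; field names are illustrative, not TrAM notation.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str          # e.g. a buyer or a seller

@dataclass
class Resource:
    description: str   # e.g. "goods" or "payment"

@dataclass
class EconomicEvent:
    resource: Resource   # what flows
    provider: Agent      # who gives it up
    receiver: Agent      # who receives it

@dataclass
class Transaction:
    """REA duality: every outflow event is paired with a compensating inflow."""
    outflow: EconomicEvent
    inflow: EconomicEvent

    def balanced(self) -> bool:
        # Minimal well-formedness check: the two events involve the same pair
        # of agents, with roles reversed.
        return (self.outflow.provider == self.inflow.receiver
                and self.outflow.receiver == self.inflow.provider)

buyer, seller = Agent("buyer"), Agent("seller")
sale = Transaction(
    outflow=EconomicEvent(Resource("goods"), provider=seller, receiver=buyer),
    inflow=EconomicEvent(Resource("payment"), provider=buyer, receiver=seller),
)
assert sale.balanced()
```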

    The "Artificial Mathematician" Objection: Exploring the (Im)possibility of Automating Mathematical Understanding

    Reuben Hersh confided to us that, about forty years ago, the late Paul Cohen predicted to him that at some unspecified point in the future, mathematicians would be replaced by computers. Rather than focus on computers replacing mathematicians, however, our aim is to consider the (im)possibility of human mathematicians being joined by “artificial mathematicians” in the proving practice, not just as a method of inquiry but as a fellow inquirer.

    A Factor Graph Approach to Automated Design of Bayesian Signal Processing Algorithms

    The benefits of automating design cycles for Bayesian inference-based algorithms are becoming increasingly recognized by the machine learning community. As a result, interest in probabilistic programming frameworks has increased considerably over the past few years. This paper explores a specific probabilistic programming paradigm, namely message passing in Forney-style factor graphs (FFGs), in the context of the automated design of efficient Bayesian signal processing algorithms. To this end, we developed "ForneyLab" (https://github.com/biaslab/ForneyLab.jl), a Julia toolbox for message passing-based inference in FFGs. We show by example how ForneyLab enables automatic derivation of Bayesian signal processing algorithms, including algorithms for parameter estimation and model comparison. Crucially, owing to the modular makeup of the FFG framework, both the model specification and the inference methods are readily extensible in ForneyLab. To test this framework, we compared variational message passing as implemented by ForneyLab with automatic differentiation variational inference (ADVI) and Monte Carlo methods as implemented by the state-of-the-art tools "Edward" and "Stan". In terms of performance, extensibility and stability, ForneyLab appears to enjoy an edge over its competitors for automated inference in state-space models.
    Comment: Accepted for publication in the International Journal of Approximate Reasoning
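    ForneyLab itself is a Julia toolbox, and its model-specification API is not reproduced here. As a language-agnostic illustration of the message-passing idea it automates, the Python sketch below performs a single sum-product update for a Gaussian prior and a Gaussian observation meeting at an equality node; the message representation and the numbers are assumptions made for the example.

```python
# Hand-written sum-product update on a tiny Forney-style factor graph:
# a Gaussian prior factor and a Gaussian observation factor connected by one
# equality node. Illustrative Python, not ForneyLab's (Julia) API; ForneyLab
# derives updates like this automatically from a model specification.

from dataclasses import dataclass

@dataclass
class GaussianMessage:
    mean: float
    variance: float

def multiply(a: GaussianMessage, b: GaussianMessage) -> GaussianMessage:
    """Sum-product rule at an equality node: precision-weighted combination
    of two incoming Gaussian messages."""
    prec = 1.0 / a.variance + 1.0 / b.variance
    mean = (a.mean / a.variance + b.mean / b.variance) / prec
    return GaussianMessage(mean=mean, variance=1.0 / prec)

# Prior factor p(x) = N(0, 10) sends a message toward x;
# observation factor p(y=3 | x) = N(x, 1) sends a likelihood message back.
prior_msg = GaussianMessage(mean=0.0, variance=10.0)
likelihood_msg = GaussianMessage(mean=3.0, variance=1.0)

posterior = multiply(prior_msg, likelihood_msg)
print(f"posterior mean={posterior.mean:.3f}, variance={posterior.variance:.3f}")
# Expected output: mean ~= 2.727, variance ~= 0.909
```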

    Tea: A High-level Language and Runtime System for Automating Statistical Analysis

    Though statistical analyses are centered on research questions and hypotheses, current statistical analysis tools are not. Users must first translate their hypotheses into specific statistical tests and then perform API calls with functions and parameters. Doing so accurately requires statistical expertise. To lower this barrier to valid, replicable statistical analysis, we introduce Tea, a high-level declarative language and runtime system. In Tea, users express their study design, any parametric assumptions, and their hypotheses. Tea compiles these high-level specifications into a constraint satisfaction problem that determines the set of valid statistical tests, and then executes them to test the hypothesis. We evaluate Tea using a suite of statistical analyses drawn from popular tutorials. We show that Tea generally matches the choices of experts while automatically switching to non-parametric tests when parametric assumptions are not met. We simulate the effect of mistakes made by non-expert users and show that Tea automatically avoids both false negatives and false positives that could be produced by the application of incorrect statistical tests.
    Comment: 11 pages
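    As a rough illustration of deriving the test from stated assumptions rather than asking the user to pick one, the Python sketch below checks normality and equal-variance preconditions and falls back to a non-parametric test when they fail. The spec dictionary and function names are hypothetical and are not Tea's actual API; scipy is used only to run the underlying tests.

```python
# Minimal assumption-driven test selection, loosely in the spirit of Tea's
# constraint-based approach. The "spec" format below is invented for this
# sketch and is not Tea syntax.

from scipy import stats

spec = {
    "design": {"independent": "condition", "dependent": "response_time"},
    "variable_types": {"condition": "nominal (2 groups)", "response_time": "ratio"},
    "assumptions": {"alpha": 0.05},
    "hypothesis": "group A has a different response_time than group B",
}

def select_and_run(group_a, group_b, alpha=0.05):
    """Pick a two-sample test whose preconditions hold, then run it."""
    normal_a = stats.shapiro(group_a).pvalue > alpha
    normal_b = stats.shapiro(group_b).pvalue > alpha
    equal_var = stats.levene(group_a, group_b).pvalue > alpha

    if normal_a and normal_b and equal_var:
        name, result = "Student's t-test", stats.ttest_ind(group_a, group_b)
    elif normal_a and normal_b:
        name, result = "Welch's t-test", stats.ttest_ind(group_a, group_b, equal_var=False)
    else:
        name, result = "Mann-Whitney U", stats.mannwhitneyu(group_a, group_b)
    return name, result

a = [310, 325, 298, 301, 340, 312, 305, 333]
b = [355, 362, 348, 371, 344, 359, 366, 350]
print(select_and_run(a, b))
```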

    Decision blocks: A tool for automating decision making in CLIPS

    The human capability of making complex decisions is one of the most fascinating facets of human intelligence, especially when vague, judgemental, default or uncertain knowledge is involved. Unfortunately, most existing rule-based forward-chaining languages are not well suited to simulating this aspect of human intelligence, because they lack support for the approximate reasoning techniques this task requires and offer no specific constructs to ease the coding of frequently recurring decision blocks, constructs that would provide better support for the design and implementation of rule-based decision support systems. We introduce a language called BIRBAL, defined on top of CLIPS, for the specification of decision blocks. We survey empirical experiments comparing the length of a CLIPS program with that of the corresponding BIRBAL program for three different applications. The results of these experiments suggest that, for decision-making-intensive applications, a CLIPS program tends to be about three times longer than the corresponding BIRBAL program.
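    The abstract describes bundling approximate-reasoning rules behind a reusable decision-block construct. The Python sketch below illustrates that idea with a certainty-factor style combination; the rule format, the loan example and the combination function are illustrative assumptions, not BIRBAL or CLIPS syntax.

```python
# A toy "decision block": weighted rules vote for alternatives and their
# certainty factors are combined. Names and numbers are hypothetical.

def combine(cf1, cf2):
    """MYCIN-style combination of two positive certainty factors."""
    return cf1 + cf2 * (1.0 - cf1)

def decision_block(evidence, rules):
    """Score each alternative by combining the certainty factors of the rules that fire."""
    scores = {}
    for condition, alternative, cf in rules:
        if condition(evidence):
            scores[alternative] = combine(scores.get(alternative, 0.0), cf)
    # Return the alternative with the highest combined certainty.
    return max(scores.items(), key=lambda kv: kv[1]) if scores else (None, 0.0)

# Hypothetical loan-approval block: each rule is (condition, alternative, certainty).
rules = [
    (lambda e: e["income"] > 50_000, "approve", 0.6),
    (lambda e: e["credit_score"] > 700, "approve", 0.7),
    (lambda e: e["existing_debt"] > 30_000, "reject", 0.8),
]

print(decision_block({"income": 62_000, "credit_score": 710, "existing_debt": 5_000}, rules))
# -> ('approve', 0.88)
```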

    A System for Accessible Artificial Intelligence

    While artificial intelligence (AI) has become widespread, many commercial AI systems are not yet accessible to individual researchers or the general public because of the deep knowledge of the systems required to use them. We believe that AI has matured to the point where it should be an accessible technology for everyone. We present an ongoing project whose ultimate goal is to deliver an open source, user-friendly AI system that is specialized for machine learning analysis of complex data in the biomedical and health care domains. We discuss how genetic programming can aid in this endeavor, and highlight specific examples where genetic programming has automated machine learning analyses in previous projects.
    Comment: 14 pages, 5 figures, submitted to Genetic Programming Theory and Practice 2017 workshop
