
    Using integrated knowledge acquisition to prepare sophisticated expert plans for their re-use in novel situations

    Plans which were constructed by human experts and have been repeatedly executed to the complete satisfaction of some customer in a complex real-world domain contain very valuable planning knowledge. In order to make this compiled knowledge re-usable in novel situations, a specific integrated knowledge acquisition method has been developed: First, a domain theory is established from documentation materials or texts, which is then used as the foundation for explaining how the plan achieves the planning goal. Second, hierarchically structured problem class definitions are obtained from the practitioners' high-level problem conceptualizations. The descriptions of these problem classes also provide operationality criteria for the various levels in the hierarchy. A skeletal plan is then constructed for each problem class with an explanation-based learning procedure. These skeletal plans consist of a sequence of general plan elements, so that each plan element can be independently refined. The skeletal plan thus accounts for the interactions between the various concrete operations of the plan at a general level. The complexity of the planning problem is thereby factored in a domain-specific way, and the compiled knowledge of sophisticated expert plans can be re-used in novel situations.
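    The core reuse idea above can be sketched in a few lines: a skeletal plan is a sequence of general elements, each refined independently against the features of a novel situation. The domain, class names, and refinement rules below are illustrative toy assumptions, not taken from the paper.

    ```python
    # Minimal sketch of skeletal-plan reuse: each general element carries its
    # own situation-dependent refinements and is refined independently.

    class PlanElement:
        """One general step of a skeletal plan."""
        def __init__(self, goal, refinements):
            self.goal = goal
            self.refinements = refinements  # list of (condition, concrete op)

        def refine(self, situation):
            # Pick the first concrete operation whose condition matches.
            for condition, operation in self.refinements:
                if condition(situation):
                    return operation
            raise ValueError(f"no refinement of {self.goal!r} for {situation}")

    def instantiate(skeletal_plan, situation):
        """Refine each general element independently for a novel situation."""
        return [element.refine(situation) for element in skeletal_plan]

    # Toy skeletal plan with two general elements (hypothetical domain).
    heat = PlanElement("heat-material", [
        (lambda s: s["material"] == "steel", "use-furnace"),
        (lambda s: True, "use-torch"),
    ])
    shape = PlanElement("shape-material", [
        (lambda s: s["batch"] > 100, "stamping-press"),
        (lambda s: True, "manual-forge"),
    ])

    print(instantiate([heat, shape], {"material": "steel", "batch": 200}))
    # ['use-furnace', 'stamping-press']
    ```

    Because each element is refined on its own, the expensive step-interaction reasoning stays compiled into the skeletal plan itself, which is the point the abstract makes.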

    Generalisation strategies and representation among last-year primary school students

    Recent research has highlighted the role of functional relationships in introducing elementary school students to algebraic thinking. This functional approach is here considered to study essential components of algebraic thinking, such as generalisation and its representation, and also the strategies used by students and their connection with generalisation. This paper jointly describes the strategies and representations of generalisation used by a group of 33 sixth-year elementary school students, with no prior algebraic training, in two generalisation tasks involving a functional relationship. The strategies applied by the students differed depending on whether they were working on specific or general cases. To answer questions on near specific cases they resorted to counting or additive operational strategies. As higher values or indeterminate quantities were considered, the strategies diversified. The correspondence strategy was the most used and the common approach when students generalised. Students were able to generalise verbally as well as symbolically and varied their strategies flexibly when changing from specific to general cases, showing a clear preference for a functional approach in the latter.
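    The two strategy families described above can be contrasted concretely. The pattern task below (a rule of the form f(n) = 2n + 1) is a hypothetical example, not one of the tasks from the study.

    ```python
    # Additive strategy: extend a known near case by repeated addition.
    # Works for near cases, but scales poorly and does not yield a general rule.
    def additive_strategy(known_n, known_value, target_n, step=2):
        value = known_value
        for _ in range(target_n - known_n):
            value += step
        return value

    # Correspondence strategy: state the direct rule relating input and output.
    # This is the functional generalisation: f(n) = 2n + 1 (illustrative rule).
    def correspondence_strategy(n):
        return 2 * n + 1

    # Near specific case: counting on from f(4) = 9 reaches f(6).
    print(additive_strategy(4, 9, 6))    # 13
    # Far/general case: the correspondence rule answers directly.
    print(correspondence_strategy(100))  # 201
    ```

    The contrast mirrors the finding: additive counting suffices near known cases, while higher or indeterminate values push students toward the correspondence (functional) rule.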

    In defense of compilation: A response to Davis' form and content in model-based reasoning

    In a recent paper entitled 'Form and Content in Model Based Reasoning', Randy Davis argues that model-based reasoning research aimed at compiling task-specific rules from underlying device models is mislabeled, misguided, and diversionary. Some of Davis' claims are examined, and his basic conclusions about the value of compilation research to the model-based reasoning community are challenged. In particular, Davis' claim that model-based reasoning is exempt from the efficiency benefits provided by knowledge compilation techniques is refuted. In addition, several misconceptions about the role of representational form in compilation are clarified. It is concluded that compilation techniques have the potential to make a substantial contribution to solving tractability problems in model-based reasoning.

    Machine learning and its applications in reliability analysis systems

    In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving RAs performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both Neural Network learning and Symbolic learning. In symbolic process learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches have been discussed in creating the RAs. According to the result of our survey, we suggest that currently the best design of RAs is to embed model-based RAs, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still improvements which can be made through the application of Machine Learning. By implanting the 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition and inconsistency checking, and more. To conclude our thesis, we propose an architecture for La MORA.

    Pedagogical Possibilities for the N-Puzzle Problem

    In this paper we present work on a project funded by the National Science Foundation with the goal of unifying the Artificial Intelligence (AI) course around the theme of machine learning. Our work involves the development and testing of an adaptable framework for the presentation of core AI topics that emphasizes the relationship between AI and computer science. Several hands-on laboratory projects that can be closely integrated into an introductory AI course have been developed. We present an overview of one of the projects and describe the associated curricular materials that have been developed. The project uses machine learning as a theme to unify core AI topics in the context of the N-puzzle game. Games provide a rich framework to introduce students to search fundamentals and other core AI concepts. The paper presents several pedagogical possibilities for the N-puzzle game and the rich challenge it offers, and summarizes our experiences using it.
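    The search fundamentals the abstract mentions can be shown on the 8-puzzle (the N-puzzle with N = 8). A* with the Manhattan-distance heuristic is a standard classroom choice for this problem, though not necessarily the exact algorithm used in the course materials.

    ```python
    # 8-puzzle solver: A* search with the Manhattan-distance heuristic.
    import heapq

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the blank tile

    def manhattan(state):
        # Sum of each tile's row + column distance from its goal position.
        return sum(abs(i // 3 - (t - 1) // 3) + abs(i % 3 - (t - 1) % 3)
                   for i, t in enumerate(state) if t != 0)

    def neighbors(state):
        # Yield every state reachable by sliding a tile into the blank.
        b = state.index(0)
        r, c = divmod(b, 3)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 3 and 0 <= nc < 3:
                n = nr * 3 + nc
                s = list(state)
                s[b], s[n] = s[n], s[b]
                yield tuple(s)

    def solve(start):
        """Return the length of an optimal solution, or None if unsolvable."""
        frontier = [(manhattan(start), 0, start)]
        best = {start: 0}
        while frontier:
            f, g, state = heapq.heappop(frontier)
            if state == GOAL:
                return g
            for nxt in neighbors(state):
                if nxt not in best or g + 1 < best[nxt]:
                    best[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + manhattan(nxt), g + 1, nxt))
        return None

    print(solve((1, 2, 3, 4, 0, 6, 7, 5, 8)))  # 2
    ```

    Swapping the heuristic (misplaced tiles, Manhattan distance, or a learned evaluation) is one natural way such a project connects search to the machine-learning theme.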

    A survey of planning and scheduling research at the NASA Ames Research Center

    NASA Ames Research Center has a diverse program in planning and scheduling. This paper highlights some of our research projects as well as some of our applications. Topics addressed include machine learning techniques, action representations, and constraint-based scheduling systems. The applications discussed are planetary rovers, Hubble Space Telescope scheduling, and Pioneer Venus orbit scheduling.

    Purposive discovery of operations

    The Generate, Prune & Prove (GPP) methodology for discovering definitions of mathematical operators is introduced. GPP is a task within the IL exploration discovery system. We developed GPP for use in the discovery of mathematical operators with a wider class of representations than was possible with the previous methods by Lenat and by Shen. GPP utilizes the purpose for which an operator is created to prune the possible definitions. The relevant search spaces are immense, and there is insufficient information for a complete evaluation of the purpose constraint, so it is necessary to evaluate it partially, i.e., to use it for pruning. The constraint is first transformed so that it is operational with respect to the partial information, and then it is applied to examples in order to test the generated candidates for an operator's definition. In the GPP process, once a candidate definition survives this empirical prune, it is passed on to a theorem prover for formal verification. We describe the application of this methodology to the (re)discovery of the definition of multiplication for Conway numbers, a discovery which is difficult for human mathematicians. We successfully model this discovery process utilizing information which was reasonably available at the time of Conway's original discovery. As part of this discovery process, we reduce the size of the search space from a computationally intractable size to 3468 elements.
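    The generate-prune-prove loop can be sketched on a deliberately tiny problem: candidate operator definitions are generated, pruned empirically against examples of a purpose constraint, and only survivors reach a verification step. The candidate space and purpose below are illustrative assumptions, far simpler than the Conway-number setting of the paper, and the "prover" is a stand-in exhaustive check.

    ```python
    # Toy GPP-style loop: generate candidates, prune on examples, then verify.

    candidates = {
        "add":      lambda a, b: a + b,
        "subtract": lambda a, b: a - b,
        "multiply": lambda a, b: a * b,
        "max":      lambda a, b: max(a, b),
    }

    # Purpose constraint (illustrative): the sought operator must distribute
    # over addition, op(a, b + c) == op(a, b) + op(a, c).  Partial evaluation
    # means testing it on a few cheap examples first (the empirical prune).
    examples = [(2, 3, 4), (1, 5, 2), (3, 0, 7)]

    def empirical_prune(op):
        return all(op(a, b + c) == op(a, b) + op(a, c) for a, b, c in examples)

    def verify(op, bound=20):
        """Stand-in for the theorem prover: exhaustive check on a finite range."""
        return all(op(a, b + c) == op(a, b) + op(a, c)
                   for a in range(bound) for b in range(bound) for c in range(bound))

    survivors = [name for name, op in candidates.items() if empirical_prune(op)]
    proved = [name for name in survivors if verify(candidates[name])]
    print(proved)  # ['multiply']
    ```

    Note how the prune earns its keep: `max` passes the distributivity examples (2, 3, 4) and (1, 5, 2) before failing on (3, 0, 7), which is why surviving candidates still need the proof step.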