
    Machine learning and its applications in reliability analysis systems

    In this thesis, we explore some aspects of Machine Learning (ML) and its application to Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss possible applications of ML in improving RA performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both neural network learning and symbolic learning. Within symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems presented in this thesis are mainly designed for maintaining plant safety and are supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to building RAs are discussed. Based on the results of our survey, we suggest that the best current design is to embed a model-based RA, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, further improvements can still be made through the application of Machine Learning: by implanting a 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.
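
    Since the risk-analysis function named above is FMEA, a generic worked example may help. The sketch below uses the standard risk-priority-number formulation (severity × occurrence × detection); it is my own illustration with hypothetical components, not the thesis's model-based MORA.

        # Generic FMEA ranking by risk priority number (RPN); illustrative only.
        from dataclasses import dataclass

        @dataclass
        class FailureMode:
            component: str
            effect: str
            severity: int    # 1-10: how serious the failure effect is
            occurrence: int  # 1-10: how likely the failure cause is
            detection: int   # 1-10: 10 = hardest to detect in time

            @property
            def rpn(self) -> int:
                """Risk priority number used to rank failure modes."""
                return self.severity * self.occurrence * self.detection

        # Hypothetical plant components, for illustration.
        modes = [
            FailureMode("coolant pump", "loss of flow", 9, 3, 4),
            FailureMode("pressure sensor", "stale reading", 6, 5, 7),
        ]
        for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
            print(f"{m.component}: {m.effect} (RPN={m.rpn})")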

    A Formal Framework for Speedup Learning from Problems and Solutions

    Speedup learning seeks to improve the computational efficiency of problem solving with experience. In this paper, we develop a formal framework for learning efficient problem solving from random problems and their solutions. We apply this framework to two different representations of learned knowledge, namely control rules and macro-operators, and prove theorems that identify sufficient conditions for learning in each representation. Our proofs are constructive in that they are accompanied by learning algorithms. Our framework captures both empirical and explanation-based speedup learning in a unified fashion. We illustrate our framework with implementations in two domains: symbolic integration and the Eight Puzzle. This work integrates many strands of experimental and theoretical work in machine learning, including empirical learning of control rules, macro-operator learning, Explanation-Based Learning (EBL), and Probably Approximately Correct (PAC) Learning.
    Comment: See http://www.jair.org/ for any accompanying file
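
    For context on the PAC-style analysis mentioned above, the standard sample-complexity bound for a finite hypothesis class H is the kind of sufficient condition such frameworks build on; this is the textbook bound for consistent learners, not one of the paper's own theorems:

        m \;\ge\; \frac{1}{\epsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)

    With at least m i.i.d. training problems, any hypothesis in H (e.g., a set of control rules) that is consistent with all of them has true error at most \epsilon with probability at least 1-\delta.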

    Explanation-Based Learning: A Survey of Programs and Perspectives

    "Explanation-Based learning" (EBl) is a technique by which an intelligent system can learn by observing examples. EBl systems are characterized by the ability to create justified generalizations from single training instances. They are also distinguished by their reliance on background knowledge of the domain under study. Although EBl is usually viewed as a method for performing generalization, it can be viewed in other ways as well. In particular, EBl can be seen as a method that performs four different learning tasks: generalization, chunking, operationalization and analogy. This paper provides a general introduction to the field of explanation-based learning. It places considerable emphasis on showing how EBl combines the four learning tasks mentioned above. The paper begins by presenting an intuitive example of the EBl technique. It subsequently places EBl in its historical context and describes the relation between EBl and other areas of machine learning. The major part of this paper is a survey of selected EBl programs. The programs have been chosen to show how EBl manifests each of the four learning tasks. Attempts to formalize the EBl technique are also briefly discussed. The paper concludes by discussing the limitations of EBl and the major open questions in the field

    The 1990 progress report and future plans

    This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities range from basic scientific research through engineering development to fielded NASA applications, particularly those applications enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and with AI applications groups at all NASA centers.

    Flexibly Instructable Agents

    This paper presents an approach to learning from situated, interactive tutorial instruction within an ongoing agent. Tutorial instruction is a flexible (and thus powerful) paradigm for teaching tasks because it allows an instructor to communicate whatever types of knowledge an agent might need in whatever situations might arise. To support this flexibility, however, the agent must be able to learn multiple kinds of knowledge from a broad range of instructional interactions. Our approach, called situated explanation, achieves such learning through a combination of analytic and inductive techniques. It combines a form of explanation-based learning that is situated for each instruction with a full suite of contextually guided responses to incomplete explanations. The approach is implemented in an agent called Instructo-Soar that learns hierarchies of new tasks and other domain knowledge from interactive natural language instructions. Instructo-Soar meets three key requirements of flexible instructability that distinguish it from previous systems: (1) it can take known or unknown commands at any instruction point; (2) it can handle instructions that apply to either its current situation or to a hypothetical situation specified in language (as in, for instance, conditional instructions); and (3) it can learn, from instructions, each class of knowledge it uses to perform tasks.
    Comment: See http://www.jair.org/ for any accompanying file

    Learning Efficient Disambiguation

    This dissertation analyses the computational properties of current performance models of natural language parsing, in particular Data Oriented Parsing (DOP), points out some of their major shortcomings, and suggests suitable solutions. It provides proofs that various problems of probabilistic disambiguation are NP-complete under instances of these performance models, and it argues that none of these models accounts for attractive efficiency properties of human language processing in limited domains, e.g., that frequent inputs are usually processed faster than infrequent ones. The central hypothesis of this dissertation is that these shortcomings can be eliminated by specializing the performance models to the limited domains. The dissertation addresses "grammar and model specialization" and presents a new framework, the Ambiguity-Reduction Specialization (ARS) framework, which formulates the necessary and sufficient conditions for successful specialization. The framework is instantiated into specialization algorithms and applied to specializing DOP. The novelties of these learning algorithms are that 1) they limit the hypothesis space to include only "safe" models, 2) they are expressed as constrained optimization formulae that minimize the entropy of the training tree-bank given the specialized grammar, under the constraint that the size of the specialized model does not exceed a predefined maximum, and 3) they enable integrating the specialized model with the original one in a complementary manner. The dissertation reports experiments with initial implementations and compares the resulting Specialized DOP (SDOP) models to the original DOP models, with encouraging results.
    Comment: 222 pages
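
    In notation of my own (the abstract gives no symbols), the constrained optimization in point 2 amounts to selecting, from the space of "safe" specialized models \mathcal{G}_{\mathrm{safe}}, the grammar that minimizes the entropy of the training tree-bank T subject to a size budget M:

        \hat{G} \;=\; \operatorname*{arg\,min}_{G \,\in\, \mathcal{G}_{\mathrm{safe}}} H(T \mid G) \quad \text{subject to} \quad |G| \le M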

    Planning and learning under uncertainty

    Automated Planning is the component of Artificial Intelligence that studies the computational process of synthesizing sets of actions whose execution achieves given objectives. Research on Automated Planning has traditionally focused on solving theoretical problems in controlled environments, in which both the current state of the environment and the outcomes of actions are completely known. The development of real planning applications during the last decade (planning fire extinction operations (Castillo et al., 2006), planning spacecraft activities (Nayak et al., 1999), planning emergency evacuation actions (Muñoz-Avila et al., 1999)) has shown that these two assumptions do not hold in many real-world problems. The planning research community is aware of this issue and has multiplied its efforts in recent years to find new planning systems able to address these kinds of problems. These efforts have created a new field in Automated Planning called planning under uncertainty. Nevertheless, the new systems suffer from two limitations: (1) they require accurate action models, even though defining accurate action models by hand is frequently very complex; and (2) they present scalability problems due to the combinatorial explosion implied by the expressiveness of their action models. This thesis defines a new planning paradigm for building robust plans, in an efficient and scalable way, in domains with uncertainty, even when the action model is incomplete. The thesis is that integrating relational machine learning techniques with the planning and execution processes makes it possible to develop planning systems that automatically enrich their initial knowledge of the environment and therefore find more robust plans. An empirical evaluation illustrates these benefits in comparison with state-of-the-art probabilistic planners, which use static action models.
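
    A minimal sketch of the plan-execute-learn loop the thesis argues for (all names hypothetical; the actual work uses relational machine learning over action models, not the toy per-action success counts used here for brevity):

        # Toy integration of learning with planning and execution; illustrative only.
        import random
        from collections import defaultdict

        success = defaultdict(lambda: [1, 2])    # action -> [successes, trials] prior

        def robustness(plan):
            """Estimated probability that every action in the plan succeeds."""
            p = 1.0
            for action in plan:
                s, t = success[action]
                p *= s / t
            return p

        def choose_plan(candidate_plans):
            """Stand-in for a planner: pick the most robust candidate."""
            return max(candidate_plans, key=robustness)

        def execute(action):
            """Stand-in for the environment: stochastic action outcomes."""
            return random.random() < (0.8 if action == "safe_route" else 0.4)

        candidates = [["safe_route"], ["risky_shortcut"]]
        for _ in range(50):                      # plan, execute, learn, repeat
            for action in choose_plan(candidates):
                ok = execute(action)
                success[action][0] += int(ok)    # enrich the model from experience
                success[action][1] += 1
        print({a: tuple(v) for a, v in success.items()})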

    A Blind Search for Bursts of Very High Energy Gamma Rays with Milagro

    Milagro is a water-Cherenkov detector that observes the extended air showers produced by cosmic gamma rays with energies E > 100 GeV. The effective area of Milagro peaks at energies E ~ 10 TeV, but it remains large even down to a few hundred GeV (~10 m^2 at 100 GeV). The wide field of view (~2 sr) and high duty cycle (>90%) of Milagro make it ideal for continuously monitoring the overhead sky for transient Very High Energy (VHE) emission. This study searched the Milagro data for such emission. Although the search was optimized primarily for detecting emission from Gamma-Ray Bursts (GRBs), it was also sensitive to emission from the last stages of the evaporation of Primordial Black Holes, or to any other phenomenon that produces bursts of VHE gamma rays. Measurements of GRB spectra by satellites up to a few tens of GeV have shown no signs of a cutoff. Even though multiple instruments sensitive to GeV/TeV gamma rays have performed observations of GRBs, there has not yet been a definitive detection of such emission. One of the reasons is that gamma rays with energies E >~ 100 GeV are attenuated by interactions with the extragalactic background light or are absorbed internally at the site of the burst. Many models predict VHE gamma-ray emission from GRBs, so a detection or a constraint of such emission can provide useful information on the mechanism and environment of GRBs. This study performed a blind search of the last five years of Milagro data for bursts of VHE gamma rays with durations ranging from 100 μs to 316 s. No GRB localization was provided by an external instrument; instead, the whole dataset was thoroughly searched in time, space, and duration. No significant events were detected, and upper limits were placed on the VHE emission from GRBs.
    Comment: Ph.D. Dissertation, Department of Physics, University of Maryland, College Park, US
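
    A minimal sketch of one ingredient of such a blind search, assuming a known constant background rate (my own illustration, not the dissertation's analysis pipeline): slide a window of fixed duration over the event times and flag windows whose counts are improbable under a Poisson background. The real search additionally scans sky position and a grid of durations, and must account for the enormous number of trials.

        # Scan one burst duration for event excesses over a Poisson background.
        import numpy as np
        from scipy.stats import poisson

        def scan_duration(event_times, duration, bg_rate, p_threshold=1e-8):
            """Return (start, count, p) for windows with a significant excess."""
            times = np.sort(event_times)
            expected = bg_rate * duration        # mean background counts per window
            hits = []
            for start in times:                  # windows anchored on event times
                n = np.searchsorted(times, start + duration) - np.searchsorted(times, start)
                p = poisson.sf(n - 1, expected)  # P(>= n counts from background)
                if p < p_threshold:
                    hits.append((start, n, p))
            return hits

        # Toy usage: ~10 Hz background with an injected 0.5 s burst at t = 50 s.
        rng = np.random.default_rng(0)
        bg = rng.uniform(0.0, 100.0, 1000)
        burst = 50.0 + rng.uniform(0.0, 0.5, 40)
        print(len(scan_duration(np.concatenate([bg, burst]), 0.5, bg_rate=10.0)))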