47 research outputs found
Machine learning and its applications in reliability analysis systems
In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving the performance of RAs, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both Neural Network learning and Symbolic learning. Within symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to building RAs are discussed. According to the results of our survey, we suggest that the best current design of RAs is to embed a model-based RA, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still improvements that can be made through the application of Machine Learning. By implanting the 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.
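The FMEA risk-analysis function the abstract mentions is conventionally built around the Risk Priority Number (RPN), the product of severity, occurrence, and detection scores. A minimal sketch follows; the components, failure modes, and scores are invented for illustration and are not from the thesis.

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: conventional FMEA scoring, each factor 1-10."""
    return severity * occurrence * detection

failure_modes = [
    # (component, failure mode, severity, occurrence, detection) -- illustrative
    ("pump",   "seal leak",         7, 4, 3),
    ("valve",  "stuck closed",      9, 2, 5),
    ("sensor", "drift out of spec", 5, 6, 2),
]

# Rank failure modes by RPN so the riskiest are addressed first.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[2:]), reverse=True)
for component, mode, s, o, d in ranked:
    print(f"{component:6s} {mode:18s} RPN={rpn(s, o, d)}")
```

A learning RA in the thesis's sense would go further, acquiring and revising these scores automatically rather than taking a fixed table as input.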
Flexibly Instructable Agents
This paper presents an approach to learning from situated, interactive
tutorial instruction within an ongoing agent. Tutorial instruction is a
flexible (and thus powerful) paradigm for teaching tasks because it allows an
instructor to communicate whatever types of knowledge an agent might need in
whatever situations might arise. To support this flexibility, however, the
agent must be able to learn multiple kinds of knowledge from a broad range of
instructional interactions. Our approach, called situated explanation, achieves
such learning through a combination of analytic and inductive techniques. It
combines a form of explanation-based learning that is situated for each
instruction with a full suite of contextually guided responses to incomplete
explanations. The approach is implemented in an agent called Instructo-Soar
that learns hierarchies of new tasks and other domain knowledge from
interactive natural language instructions. Instructo-Soar meets three key
requirements of flexible instructability that distinguish it from previous
systems: (1) it can take known or unknown commands at any instruction point;
(2) it can handle instructions that apply to either its current situation or to
a hypothetical situation specified in language (as in, for instance,
conditional instructions); and (3) it can learn, from instructions, each class
of knowledge it uses to perform tasks.
A comparative survey of integrated learning systems
This paper presents the duction framework for unifying the three basic forms of inference - deduction, abduction, and induction - by specifying the possible relationships and influences among them in the context of integrated learning. Special assumptive forms of inference are defined that extend the use of these inference methods, and the properties of these forms are explored. A comparison is made to a related inference-based learning framework. Finally, several existing integrated learning programs are examined from the perspective of the duction framework.
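The three inference forms the duction framework unifies can be told apart on a single toy rule. The rule base and facts below are invented for illustration, not drawn from the paper.

```python
# One rule: "if rain then wet_grass", encoded as (antecedent, consequent).
rule = ("rain", "wet_grass")

def deduce(rules, facts):
    """Deduction: from a rule and its antecedent, conclude the consequent."""
    return {c for a, c in rules if a in facts}

def abduce(rules, observation):
    """Abduction: from a rule and an observed consequent, hypothesise the antecedent."""
    return {a for a, c in rules if c == observation}

def induce(examples):
    """Induction: from observed (cause, effect) pairs, propose candidate rules."""
    return set(examples)  # trivially, every observed pairing becomes a rule

print(deduce([rule], {"rain"}))         # {'wet_grass'}
print(abduce([rule], "wet_grass"))      # {'rain'}
print(induce([("rain", "wet_grass")]))  # {('rain', 'wet_grass')}
```

The interesting part of an integrated learner is the traffic between these functions, e.g. induced rules feeding deduction, and abduced hypotheses guiding which examples to seek.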
A Formal Framework for Speedup Learning from Problems and Solutions
Speedup learning seeks to improve the computational efficiency of problem
solving with experience. In this paper, we develop a formal framework for
learning efficient problem solving from random problems and their solutions. We
apply this framework to two different representations of learned knowledge,
namely control rules and macro-operators, and prove theorems that identify
sufficient conditions for learning in each representation. Our proofs are
constructive in that they are accompanied with learning algorithms. Our
framework captures both empirical and explanation-based speedup learning in a
unified fashion. We illustrate our framework with implementations in two
domains: symbolic integration and Eight Puzzle. This work integrates many
strands of experimental and theoretical work in machine learning, including
empirical learning of control rules, macro-operator learning, Explanation-Based
Learning (EBL), and Probably Approximately Correct (PAC) Learning.
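One of the two learned-knowledge representations above, macro-operators, can be sketched compactly: a macro is an action sequence extracted from a solved training problem and replayed on structurally similar problems, trading search for retrieval. The toy domain here (integer states with increment and double operators) is an assumption for illustration, not the paper's Eight Puzzle domain.

```python
from collections import deque

PRIMITIVES = {"inc": lambda s: s + 1, "dbl": lambda s: s * 2}

def bfs(start, goal, ops):
    """Breadth-first search; returns the operator-name sequence reaching goal."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, f in ops.items():
            nxt = f(state)
            if nxt <= goal and nxt not in seen:  # prune overshoots in this domain
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Training phase: solve one problem by search and keep the solution as a macro.
macro = bfs(1, 10, PRIMITIVES)

def apply_macro(state, macro):
    """Replay a stored operator sequence without any search."""
    for name in macro:
        state = PRIMITIVES[name](state)
    return state

# Speedup: on a problem with the same structure, the macro applies directly.
print(apply_macro(1, macro))  # 10, with zero search at solve time
```

The formal framework's question is when such stored solutions provably transfer; this sketch only shows the mechanism.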
Learning in Tele-autonomous Systems using Soar
Robo-Soar is a high-level robot arm control system implemented in Soar. Robo-Soar learns to perform simple block manipulation tasks using advice from a human. Following learning, the system is able to perform similar tasks without external guidance. Robo-Soar corrects its knowledge by accepting advice about the relevance of features in its domain, using a unique integration of analytic and empirical learning techniques.
Toward Intelligent Machine Learning Algorithms
Coordinated Science Laboratory was formerly known as Control Systems Laboratory. National Science Foundation / NSF IST-85-11170; Office of Naval Research / N00014-82-K-0186; Defense Advanced Research Projects Agency / N00014-87-K-0874; Texas Instruments, Inc.
Robo-Soar: An Integration of External Interaction, Planning, and Learning using Soar
This chapter reports progress in extending the Soar architecture to tasks that involve interaction with external environments. The tasks are performed using a Puma arm and a camera in a system called Robo-Soar. The tasks require the integration of a variety of capabilities, including problem solving with incomplete knowledge, reactivity, planning, guidance from external advice, and learning to improve the efficiency and correctness of problem solving. All of these capabilities are achieved without the addition of special-purpose modules or subsystems to Soar.
Explanation-Based Learning: A Survey of Programs and Perspectives
"Explanation-Based learning" (EBl) is a technique by which an intelligent system can learn by observing examples. EBl systems are characterized by the ability to create justified generalizations from single training instances. They are also distinguished by their reliance on background knowledge of the domain under study. Although EBl is usually viewed as a method for performing generalization, it can be viewed in other ways as well. In particular, EBl can be seen as a method that performs four different learning tasks: generalization, chunking, operationalization and analogy. This paper provides a general introduction to the field of explanation-based learning. It places considerable emphasis on showing how EBl combines the four learning tasks mentioned above. The paper begins by presenting an intuitive example of the EBl technique. It subsequently places EBl in its historical context and describes the relation between EBl and other areas of machine learning. The major part of this paper is a survey of selected EBl programs. The programs have been chosen to show how EBl manifests each of the four learning tasks. Attempts to formalize the EBl technique are also briefly discussed. The paper concludes by discussing the limitations of EBl and the major open questions in the field
The 1990 progress report and future plans
This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research to engineering development and fielded NASA applications, particularly those applications enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and with AI applications groups at all NASA centers.