
    An architecture for object-oriented intelligent control of power systems in space

    A control system for autonomous distribution and control of electrical power during space missions is being developed. This system should free astronauts from localizing faults and reconfiguring loads when problems occur in the power distribution and generation components. The control system uses an object-oriented simulation model of the power system and first-principles knowledge to detect, identify, and isolate faults. Each power system component is represented as a separate object with knowledge of its normal behavior. The reasoning process takes place at three levels of abstraction: the Physical Component Model (PCM) level, the Electrical Equivalent Model (EEM) level, and the Functional System Model (FSM) level, with the PCM the lowest level of abstraction and the FSM the highest. At the EEM level the power system components are reasoned about as their electrical equivalents, e.g., a resistive load is treated as a resistor, whereas at the PCM level detailed knowledge about the component's specific characteristics is taken into account. The FSM level models the system at the subsystem level, a level appropriate for reconfiguration and scheduling. The control system operates in two modes simultaneously: a reactive mode and a proactive mode. In the reactive mode the control system receives measurement data from the power system and compares these values with values determined through simulation to detect the existence of a fault. The nature of the fault is then identified through a model-based reasoning process using mainly the EEM; compound component models are constructed at the EEM level and used in the fault identification process. In the proactive mode the reasoning takes place at the PCM level: individual components determine their future health status using a physical model and measured historical data, and if a change in health status appears imminent, the component warns the control system of its impending failure. The fault isolation process uses the FSM level as its reasoning base.
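The reactive mode described above amounts to comparing measured values against simulated expectations and flagging components whose residual is too large. A minimal sketch of that loop, with invented component names, values, and an illustrative 5% tolerance (none of these specifics come from the original system):

```python
# Hypothetical sketch of a reactive fault-detection pass: compare each
# measurement to the value predicted by the simulation model and flag
# components whose relative residual exceeds a tolerance.

def detect_faults(measurements, simulate, tolerance=0.05):
    """Return components whose measured value deviates from simulation."""
    faulty = []
    for component, measured in measurements.items():
        expected = simulate(component)
        # Relative residual between measurement and model prediction
        residual = abs(measured - expected) / max(abs(expected), 1e-9)
        if residual > tolerance:
            faulty.append(component)
    return faulty

# Example: at the EEM level a resistive load is just a resistor, so its
# expected draw is easy to simulate; load_2 deviates well beyond 5%.
expected_values = {"load_1": 2.0, "load_2": 1.5}
measured_values = {"load_1": 2.01, "load_2": 2.4}
faults = detect_faults(measured_values, lambda c: expected_values[c])
print(faults)  # only load_2 is flagged
```

The simulation callback stands in for the object-oriented component models; in the real system each component object would supply its own expected behavior.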

    Complexity modelling for case knowledge maintenance in case-based reasoning.

    Case-based reasoning solves new problems by re-using the solutions of previously solved similar problems, and is popular because it removes many of the knowledge engineering demands of conventional knowledge-based systems. The content of the case knowledge container is critical to the performance of case-based classification systems, yet the knowledge engineer is given little support in selecting suitable techniques to maintain and monitor the case base. This research investigates the coverage, competence and problem-solving capacity of case knowledge with the aim of developing techniques to model and maintain the case base. We present a novel technique that creates a model of the case base by measuring the uncertainty in local areas of the problem space, based on the local mix of solutions present. The model provides insight into the structure of a case base by means of a complexity profile that can assist maintenance decision-making and provide a benchmark for assessing future changes to the case base. The distribution of cases in the case base is critical to the performance of a case-based reasoning system. We argue that classification boundaries represent important regions of the problem space and develop two complexity-guided algorithms which use boundary identification techniques to actively discover cases close to boundaries. We introduce a complexity-guided redundancy reduction algorithm which uses a case complexity threshold to retain cases close to boundaries and delete cases that form single-class clusters. The algorithm offers control over the balance between maintaining competence and reducing case base size. The performance of a case-based reasoning system relies on the integrity of its case base, but in real-life applications the available data invariably contains erroneous, noisy cases. Automated removal of these noisy cases can improve system accuracy. In addition, error rates can often be reduced by removing cases to give smoother decision boundaries between classes. We show that the optimal level of boundary smoothing is domain dependent; our approach to error reduction therefore reacts to the characteristics of the domain by setting an appropriate level of smoothing. We introduce a novel algorithm which identifies and removes both noisy and boundary cases with the aid of a local distance ratio. A prototype interface has been developed that shows how the modelling and maintenance approaches can be used in practice in an interactive manner. The interface allows the knowledge engineer to make informed maintenance choices without the need for extensive evaluation effort while, at the same time, retaining control over the process. One of the strengths of our approach is in applying a consistent, integrated method to case base maintenance, providing a transparent process that offers a degree of explanation.
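The complexity-guided redundancy reduction idea can be illustrated with a toy sketch (not the authors' implementation): here a case's local complexity is taken as the fraction of its k nearest neighbours carrying a different class label, so cases inside single-class clusters score zero and are deleted, while cases near a class boundary survive. The 1-D case base and threshold are invented for illustration.

```python
# Toy complexity-guided redundancy reduction over a 1-D case base.
# local_complexity(i) = fraction of the k nearest neighbours of case i
# whose label differs from case i's label.

def local_complexity(cases, labels, idx, k=3):
    x, y = cases[idx], labels[idx]
    dists = sorted(
        (abs(x - cases[j]), j) for j in range(len(cases)) if j != idx
    )
    neighbours = [j for _, j in dists[:k]]
    return sum(labels[j] != y for j in neighbours) / k

def reduce_case_base(cases, labels, threshold=0.0, k=3):
    # Retain only cases whose local complexity exceeds the threshold,
    # i.e. cases close to a classification boundary.
    keep = [
        i for i in range(len(cases))
        if local_complexity(cases, labels, i, k) > threshold
    ]
    return [cases[i] for i in keep], [labels[i] for i in keep]

# Two class clusters meeting near 5.0; only the boundary pair survives.
cases = [1.0, 1.2, 1.4, 4.8, 5.1, 9.0, 9.2, 9.4]
labels = ["A", "A", "A", "A", "B", "B", "B", "B"]
kept, kept_labels = reduce_case_base(cases, labels)
print(kept)  # the cases nearest the A/B boundary
```

Raising the threshold deletes more of the case base, which mirrors the paper's point that the threshold controls the trade-off between competence and case base size.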

    Designing for practice-based context-awareness in ubiquitous e-health environments

    Existing approaches for supporting context-aware knowledge sharing in ubiquitous healthcare give little attention to practice-based structures of knowledge representation. They guide knowledge re-use at an abstract level and hardly incorporate details of the actionable tasks and processes necessary for accomplishing work in a real-world context. This paper presents a context-aware model for supporting clinical knowledge sharing across organizational and geographical boundaries in ubiquitous e-health. The model draws on activity and situation awareness theories as well as the Belief-Desire-Intention and Case-Based Reasoning techniques in intelligent systems, with the goal of enabling clinicians in disparate locations to gain a common representation of relevant situational information in each other's work contexts based on the notion of practice. We discuss the conceptual design of the model, present a formal approach for representing practice as context in a ubiquitous healthcare environment, and describe an application scenario and a prototype system to evaluate the proposed approach.

    Knowledge transfer in cognitive systems theory: models, computation, and explanation

    Knowledge transfer in cognitive systems can be explicated in terms of structure mapping and control. The structure of an effective model enables adaptive control for the system's intended domain of application. Knowledge is transferred by a system when control of a new domain is enabled by mapping the structure of a previously effective model. I advocate for a model-based view of computation which recognizes effective structure mapping at a low level. Artificial neural network systems are furthermore viewed as model-based, where effective models are learned through feedback. Thus, many of the most popular artificial neural network systems are best understood in light of the cybernetic tradition as error-controlled regulators. Knowledge transfer with pre-trained networks (transfer learning) can, when automated like other machine learning methods, be seen as an advancement towards artificial general intelligence. I argue this is convincing because it is akin to automating a general systems methodology of knowledge transfer in scientific reasoning. Analogical reasoning is typical in such a methodology, and some accounts view analogical cognition as the core of cognition which provides adaptive benefits through efficient knowledge transfer. I then discuss one modern example of analogical reasoning in physics, and how an extended Bayesian view might model confirmation given a structural mapping between two systems. In light of my account of knowledge transfer, I finally assess the case of quantum-like models in cognition, and whether the transfer of quantum principles is appropriate. I conclude by throwing my support behind a general systems philosophy of science framework which emphasizes the importance of structure, and which rejects a controversial view of scientific explanation in favor of a view of explanation as enabling control.

    Implementation of Case-Based Reasoning for a Question-Answering System on Dog Diseases

    This paper describes a question-answering system built on the Case-Based Reasoning algorithm. Case-Based Reasoning solves new problems by comparing them with previously solved ones; here, it answers new questions by drawing on an existing document collection. The system applies several methods to process documents into knowledge so that its answers become more relevant. The methods used are Modified K-Nearest Neighbour (M-KNN), the Vector Space Model (VSM), and Paragraph-Based Passage retrieval. M-KNN is used to classify diseases in dogs, VSM is used to retrieve the documents most relevant to the query, and Paragraph-Based Passage extraction then produces answers from the retrieved documents. On 228 training documents, the system achieves an accuracy of 92% with k = 3.
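The VSM-plus-kNN part of such a pipeline can be sketched in a few lines. This is a generic illustration, not the paper's M-KNN: documents are ranked by cosine similarity of term-frequency vectors, and the top-k labels are voted on. The documents, labels, and query below are invented.

```python
# Sketch of VSM retrieval (cosine similarity over term-frequency vectors)
# followed by a plain k-nearest-neighbour vote over the retrieved labels.
from collections import Counter
from math import sqrt

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(query, documents, labels, k=3):
    qv = Counter(query.lower().split())
    scored = sorted(
        ((cosine(qv, Counter(doc.lower().split())), label)
         for doc, label in zip(documents, labels)),
        reverse=True,
    )
    # Majority vote among the k most similar documents
    votes = Counter(label for _, label in scored[:k])
    return votes.most_common(1)[0][0]

documents = [
    "dog scratching skin rash caused by mites",
    "dog coughing loudly kennel cough infection",
    "skin rash and itching from mites treatment",
]
labels = ["scabies", "kennel cough", "scabies"]
print(classify("dog skin rash mites", documents, labels))
```

A passage-extraction stage would then pull the answering paragraph from the winning documents; that step is omitted here.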

    Case Based Reasoning and TRIZ : a coupling for Innovative conception in Chemical Engineering

    With the evolution of the world market, researchers and engineers have to propose technical innovations. Nevertheless, the Chemical Engineering community has shown comparatively little interest in innovation compared with other engineering fields. In this paper, an approach to accelerate inventive preliminary design for Chemical Engineering is presented. This approach uses the Case-Based Reasoning (CBR) method to model, capture, store and make available the knowledge deployed during design. CBR is an Artificial Intelligence method well suited to routine design: its main assumption is that a new design problem can be solved with the help of past successful ones. Consequently, the problem-solving process is based on past successful solutions, so design is accelerated, but creativity is limited and not stimulated. Our approach extends the CBR method from routine design to inventive design. One of the main drawbacks of CBR is that it is restricted to one particular domain of application; to propose inventive solutions, the level of abstraction of problem resolution must be increased. For this reason CBR is coupled with the TRIZ theory (Russian acronym for the Theory of Inventive Problem Solving). TRIZ is a problem-solving method that increases the ability to solve creative problems thanks to its capacity to give access to best practices across all technical domains. The proposed synergy between CBR and TRIZ combines the main advantages of CBR (the ability to store and rapidly reuse knowledge) with those of TRIZ (no trade-offs during resolution, inventive solutions). Based on this synergy, a tool has been developed and an illustrative example is treated.
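The CBR retrieval step that the approach builds on can be sketched generically: a new design problem is matched against stored cases by weighted attribute similarity, and the closest past solution is reused as a starting point. The attributes, weights, and case base below are invented for illustration; the paper's actual case representation is not shown.

```python
# Generic CBR "retrieve" step: weighted attribute matching over a case base.

def similarity(problem, case, weights):
    # Score 1.0 per exactly-matching attribute, scaled by its weight.
    score = sum(
        w * (1.0 if problem.get(attr) == case["problem"].get(attr) else 0.0)
        for attr, w in weights.items()
    )
    return score / sum(weights.values())

def retrieve(problem, case_base, weights):
    return max(case_base, key=lambda c: similarity(problem, c, weights))

case_base = [
    {"problem": {"operation": "separation", "phase": "liquid"},
     "solution": "distillation column"},
    {"problem": {"operation": "separation", "phase": "solid"},
     "solution": "filtration unit"},
]
weights = {"operation": 1.0, "phase": 2.0}
best = retrieve({"operation": "separation", "phase": "liquid"},
                case_base, weights)
print(best["solution"])  # the most similar past design is reused
```

The paper's contribution lies in what happens when no retrieved case is inventive enough: the problem is then re-expressed at a higher level of abstraction and handed to TRIZ, which is outside the scope of this sketch.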

    Possibilistic Uncertainty Handling for Answer Set Programming

    In this work, we introduce a new framework able to deal with reasoning that is at the same time nonmonotonic and uncertain. In order to take into account a certainty level associated with each piece of knowledge, we use possibility theory to extend the nonmonotonic stable model semantics of logic programs with default negation. By means of a possibility distribution we define a clear semantics for such programs by introducing the notion of a possibilistic stable model. We also propose a syntactic process based on a fix-point operator to compute these particular models, which represent the deductions of the program and their certainty. We then show how this introduction of a certainty level on each rule of a program can be used to restore consistency when the program has no model at all. Furthermore, we explain how possibilistic stable models can be computed using available software for Answer Set Programming, and we describe the main lines of the system we have developed to achieve this goal.
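For the negation-free fragment, the fix-point computation can be illustrated with a small sketch: each rule carries a necessity degree, and a derived atom's certainty is the maximum over its rules of the minimum of the rule degree and its body atoms' certainties. This simplified operator ignores default negation entirely (the paper's framework handles it via stable models), and the example program is invented.

```python
# Simplified possibilistic fix-point operator for definite (negation-free)
# rules. Each rule is (head, body_atoms, necessity_degree); an atom's
# certainty is max over its rules of min(rule degree, body certainties).

def possibilistic_fixpoint(rules):
    certainty = {}
    changed = True
    while changed:
        changed = False
        for head, body, degree in rules:
            if all(b in certainty for b in body):
                d = min([degree] + [certainty[b] for b in body])
                if d > certainty.get(head, 0.0):
                    certainty[head] = d
                    changed = True
    return certainty

rules = [
    ("wet", [], 0.9),            # fact: wet, with necessity 0.9
    ("slippery", ["wet"], 0.7),  # wet implies slippery, necessity 0.7
    ("danger", ["slippery"], 1.0),
]
print(possibilistic_fixpoint(rules))
# danger is only as certain as its weakest link: min(1.0, 0.7)
```

Iterating to a fixed point mirrors the syntactic process described in the abstract; the certainty of each deduced atom propagates the weakest necessity degree along its derivation.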

    Discovering a Domain Knowledge Representation for Image Grouping: Multimodal Data Modeling, Fusion, and Interactive Learning

    In visually-oriented specialized medical domains such as dermatology and radiology, physicians explore interesting image cases from medical image repositories for comparative case studies to aid clinical diagnoses, educate medical trainees, and support medical research. However, general image classification and retrieval approaches fail at grouping medical images from the physicians' viewpoint, because fully-automated learning techniques cannot yet bridge the gap between image features and domain-specific content in the absence of expert knowledge. Understanding how experts get information from medical images is therefore an important research topic. As a prior study, we conducted data elicitation experiments in which physicians were instructed to inspect each medical image towards a diagnosis while describing the image content to a student seated nearby. Experts' eye movements and their verbal descriptions of the image content were recorded to capture various aspects of expert image understanding. This dissertation aims at an intuitive approach to extracting expert knowledge: finding patterns in expert data elicited from image-based diagnoses. These patterns are useful for understanding both the characteristics of the medical images and the experts' cognitive reasoning processes. The transformation from the viewed raw image features to their interpretation as domain-specific concepts requires experts' domain knowledge and cognitive reasoning. This dissertation approximates this transformation using a matrix factorization-based framework, which helps project multiple expert-derived data modalities onto high-level abstractions. To combine additional expert interventions with computational processing capabilities, an interactive machine learning paradigm is developed that treats experts as an integral part of the learning process: experts refine the medical image groups presented by the learned model locally, to incrementally re-learn the model globally. This paradigm avoids onerous expert annotations for model training, while aligning the learned model with experts' sense-making.
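The matrix-factorization idea can be illustrated with a generic non-negative matrix factorization (NMF) sketch; this is not the dissertation's framework, just the standard multiplicative-update algorithm on an invented matrix. Two stacked modalities (e.g. gaze features and verbal-description features, as rows) over a set of images (columns) are factorized as X ≈ W H, so H projects images onto a small set of latent concepts.

```python
# Pure-Python NMF via Lee-Seung multiplicative updates: X (n x m) is
# approximated by W (n x rank) times H (rank x m), all entries non-negative.
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def frobenius_error(X, W, H):
    R = matmul(W, H)
    return sum((X[i][j] - R[i][j]) ** 2
               for i in range(len(X)) for j in range(len(X[0]))) ** 0.5

def nmf(X, rank=2, iters=300, seed=0, eps=1e-9):
    rng = random.Random(seed)
    n, m = len(X), len(X[0])
    W = [[rng.random() for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, X), matmul(matmul(WT, W), H)
        H = [[H[a][j] * num[a][j] / (den[a][j] + eps)
              for j in range(m)] for a in range(rank)]
        HT = transpose(H)
        num, den = matmul(X, HT), matmul(W, matmul(H, HT))
        W = [[W[i][a] * num[i][a] / (den[i][a] + eps)
              for a in range(rank)] for i in range(n)]
    return W, H

# Invented rank-2 "stacked modality" matrix: 4 features x 3 images.
X = [[1, 2, 0], [0, 1, 3], [1, 3, 3], [2, 5, 3]]
W, H = nmf(X, rank=2)
print(frobenius_error(X, W, H))  # small residual: X is exactly rank 2
```

In an interactive setting, expert corrections to the image groups would be folded back in as constraints on the factors before re-fitting, which is the step the dissertation's paradigm adds on top of plain factorization.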