
    Toward Reasoning Methods for Automatic Mechanical Repair

    A knowledge representation scheme, QUORUM (Qualitative reasoning Of Repair and Understanding of Mechanisms), has been constructed to apply qualitative techniques to the mechanical domain, an area that has been neglected in the qualitative reasoning field. In addition, QUORUM aims to provide foundations for building a repair expert system. The central problem in constructing such a representation is identifying a feasible ontology with which we can express the behavior of mechanical devices and, more importantly, the faulty behaviors of a device and their causes. Unlike most other approaches, our ontology employs the notions of force and energy transfer and motion propagation. We discuss how the overall behavior of a device can be derived from knowledge of its structure and topology, and how faulty behaviors can be predicted from information about the perturbation of some of the device's original conditions. The necessary predicates and functions are constructed to express the physical properties of a wide variety of basic and complex mechanisms, and the connection relationships among their parts. Examples analyzed with QUORUM include a pair of gears, a spring-driven ratchet mechanism, and a pendulum clock. An algorithm for the propagation of force, motion, and causality is proposed and examined.
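
    To make the propagation idea concrete, the following is a minimal sketch of how a qualitative rotation might propagate through a device topology. The part names, connection types, and rules are illustrative assumptions, not QUORUM's actual predicates and functions.

        # Minimal sketch of qualitative motion propagation over a device topology.
        # Connection types and rules are illustrative assumptions, not QUORUM's
        # actual ontology.
        from collections import deque

        # How each connection type transforms a qualitative rotation
        # (+1 = clockwise, -1 = counterclockwise, 0 = no motion transmitted).
        RULES = {
            "meshed_gears": lambda d: -d,   # an external gear mesh reverses rotation
            "rigid_shaft":  lambda d: d,    # a shaft coupling preserves rotation
            "broken":       lambda d: 0,    # a faulty connection transmits nothing
        }

        def propagate(connections, source, direction):
            """Propagate a qualitative rotation from `source` through the device.

            connections: list of (part_a, part_b, connection_type) triples.
            Returns {part: qualitative direction} for every moving reachable part.
            """
            adj = {}
            for a, b, kind in connections:            # undirected topology graph
                adj.setdefault(a, []).append((b, kind))
                adj.setdefault(b, []).append((a, kind))
            state, queue = {source: direction}, deque([source])
            while queue:
                part = queue.popleft()
                for neighbor, kind in adj.get(part, []):
                    d = RULES[kind](state[part])
                    if neighbor not in state and d != 0:
                        state[neighbor] = d
                        queue.append(neighbor)
            return state

        # A pair of gears on shafts, as in one of the QUORUM examples; changing
        # a connection to "broken" predicts the faulty behavior (motion stops).
        device = [("motor", "gear_a", "rigid_shaft"),
                  ("gear_a", "gear_b", "meshed_gears"),
                  ("gear_b", "load", "rigid_shaft")]
        print(propagate(device, "motor", +1))
        # {'motor': 1, 'gear_a': 1, 'gear_b': -1, 'load': -1}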

    Breaking down barriers in cooperative fault management: Temporal and functional information displays

    At the highest level, the fundamental question addressed by this research is how to aid human operators engaged in dynamic fault management. In dynamic fault management there is some underlying dynamic process (an engineered or physiological process, referred to as the monitored process, or MP) whose state changes over time and whose behavior must be monitored and controlled. In these applications (dynamic, real-time systems), a vast array of sensor data is available to provide information on the state of the MP. Faults disturb the MP, and diagnosis must be performed in parallel with responses that maintain process integrity and correct the underlying problem. These situations frequently involve time pressure, multiple interacting goals, high consequences of failure, and multiple interleaved tasks.
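
    As a rough sketch of this setting (the process model, thresholds, and diagnostic rule below are invented for illustration), sensor data from the MP arrives continuously while the corrective response and the diagnosis of the underlying cause proceed in parallel:

        # Sketch of dynamic fault management: readings from the monitored process
        # (MP) arrive continuously, and diagnosis runs alongside the corrective
        # response rather than after it. Model, thresholds, and rule are invented.
        import random

        random.seed(0)

        def read_sensors(t):
            """Stand-in for the MP's sensor feed: pressure drifts upward after t=5."""
            drift = 0.4 * max(0, t - 5)
            return {"pressure": 1.0 + drift + random.gauss(0, 0.02)}

        def control_response(state):
            """Immediate action to maintain process integrity."""
            return "open_relief_valve" if state["pressure"] > 1.5 else "none"

        def diagnose(history):
            """Diagnosis alongside the response: look for a sustained rising trend."""
            last3 = history[-3:]
            if len(history) >= 3 and all(b > a for a, b in zip(last3, last3[1:])):
                return "suspected downstream blockage (rising pressure trend)"
            return None

        history = []
        for t in range(10):
            state = read_sensors(t)
            history.append(state["pressure"])
            action = control_response(state)    # keep the MP safe now...
            finding = diagnose(history)         # ...while diagnosing the cause
            print(f"t={t} p={state['pressure']:.2f} action={action} diagnosis={finding}")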

    Developing Methods of Obtaining Quality Failure Information from Complex Systems

    The complexity of most engineering systems is constantly growing due to ever-increasing technological advancement. This results in a corresponding need for methods that adequately account for the reliability of such systems based on failure information from the components that make them up. This dissertation presents an approach to validating qualitative function-failure results from model abstraction details. The impact of the level of detail available to a system designer during the conceptual stages of design is considered for failure-space exploration in a complex system. Specifically, the study develops an efficient approach to the detailed function and behavior modeling required for complex system analyses. In addition, a comprehensive review and documentation of existing function-failure analysis methodologies is synthesized into identified structural groupings. Using simulations, known governing equations are evaluated for component and system models to study responses to faults, accounting for detailed failure scenarios, component behaviors, fault propagation paths, and overall system performance. The components were simulated at nominal states and at varying degrees of fault representing actual modes of operation. Information on product design and provisions for the expected working conditions of components was used in the simulations to address areas normally overlooked during installation. The results of the system model simulations were investigated using clustering analysis to develop an efficient grouping method and a measure of confidence for the obtained results. The intellectual merit of this work is the use of a simulation-based approach to studying how generated failure scenarios reveal component fault interactions, leading to a better understanding of fault propagation within design models. The information from using varying-fidelity models for system analysis helps in identifying models that are sufficient at the conceptual design stage to highlight potential faults, reducing the cost, manpower, and time spent during system design. A broader impact of the project is to help design engineers identify critical components, quantify the risks associated with using particular components in their prototypes early in the design process, and improve fault-tolerant system designs. This research looks toward eventually establishing a baseline for validating and comparing theories of complex systems analysis.
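
    As a rough illustration of the simulate-then-cluster idea (the component model, fault modes, and response features here are hypothetical stand-ins, not the dissertation's models), one might inject parameter faults into a simple governing equation and cluster the simulated responses:

        # Inject parameter faults into a first-order component model, then cluster
        # the simulated responses; well-separated clusters suggest the model is
        # detailed enough to distinguish the fault modes. All values are invented.
        import numpy as np
        from sklearn.cluster import KMeans

        def simulate(gain, tau, t=np.linspace(0, 10, 200)):
            """Step response of a first-order system: y(t) = gain*(1 - exp(-t/tau))."""
            return gain * (1.0 - np.exp(-t / tau))

        # Fault scenarios as perturbations of the nominal parameters (1.0, 1.0).
        scenarios = {
            "nominal":      (1.0, 1.0),
            "partial_wear": (0.7, 1.5),   # degraded gain, slower response
            "severe_fault": (0.2, 4.0),   # heavily degraded component
        }

        # Run each scenario with small random variation; summarize each run by
        # two features: final value and time to reach 63% of it.
        rng = np.random.default_rng(0)
        features, labels = [], []
        for name, (gain, tau) in scenarios.items():
            for _ in range(20):
                y = simulate(gain * rng.normal(1, 0.05), tau * rng.normal(1, 0.05))
                t63 = np.argmax(y >= 0.63 * y[-1]) * (10 / 199)  # crude tau estimate
                features.append([y[-1], t63])
                labels.append(name)

        clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
            np.array(features))
        for name in scenarios:
            idx = [i for i, lab in enumerate(labels) if lab == name]
            print(name, "->", {int(clusters[i]) for i in idx})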

    Plant-Wide Diagnosis: Cause-and-Effect Analysis Using Process Connectivity and Directionality Information

    Production plants used in the modern process industry must produce products that meet stringent environmental, quality, and profitability constraints. In such integrated plants, non-linearity and strong dynamic interactions among process units complicate root-cause diagnosis of plant-wide disturbances, because disturbances may propagate to units at some distance from the primary source of the upset. Similarly, advanced process control strategies, backup and recovery systems, recycle streams, and heat integration may hamper detection and diagnosis. It is important to track down the root cause of a plant-wide disturbance because, once corrective action is taken at the source, secondary propagated effects can be eliminated quickly with minimum effort and reduced downtime, with a positive impact on process efficiency, productivity, and profitability. To diagnose the root cause of disturbances that manifest plant-wide, it is crucial to incorporate knowledge about the overall process topology, or the interrelated physical structure of the plant, such as is contained in Piping and Instrumentation Diagrams (P&IDs). Traditionally, process control engineers have referred to the physical structure of the plant by visual inspection and manual tracing of fault propagation paths within process drawings on printed P&IDs, in order to draw logical conclusions from the results of data-driven analysis. This manual approach, however, is prone to error and quickly becomes complicated in real processes. The aim of this thesis, therefore, is to establish innovative techniques for the electronic capture and manipulation of process schematic information from large plants, such as refineries, in order to provide an automated means of diagnosing plant-wide performance problems. The thesis also describes the design and implementation of a computer application that integrates (i) process connectivity and directionality information from intelligent P&IDs, (ii) results from data-driven cause-and-effect analysis of process measurements, and (iii) process know-how, to help process control engineers and plant operators gain process insight. This work used intelligent P&IDs created with AVEVA® P&ID, a Computer Aided Design (CAD) tool, and exported as an ISO 15926-compliant, platform- and vendor-independent, text-based XML description of the plant. The XML output was processed by a software tool, developed in the Microsoft® .NET environment for this research project, to computationally generate a connectivity matrix showing plant items and their connections. The connectivity matrix can be exported to the Excel® spreadsheet application and has served as a precursor to other research work. The final version of the tool links the statistical results of cause-and-effect analysis of process data with the connectivity matrix, using the connectivity information to simplify the cause-and-effect analysis and extract insight from it. Process know-how and understanding are incorporated to generate logical conclusions. The thesis presents a case study of an atmospheric crude heating unit as an illustrative example of the key concepts, and also describes an industrial case study involving refinery operations.
In the industrial case study, in addition to confirming the root-cause candidate, the software tool was tasked with determining the physical sequence of the fault propagation path within the plant. This sequence was then compared with the hypothesis about the disturbance propagation sequence generated by a purely data-driven method. The results show a high degree of overlap, which helps validate the statistical data-driven technique and makes it easy to identify spurious results from the data-driven multivariable analysis. This significantly increases control engineers' confidence in the data-driven methods used for root-cause diagnosis. The thesis concludes with a discussion of the approach and presents ideas for further development of the methods.
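
    A minimal sketch of the connectivity-matrix idea follows; the plant items and connections are hypothetical, whereas the actual tool derives them from the ISO 15926 XML exported from AVEVA® P&ID. A directed adjacency matrix is built from item-to-item connections, and candidate fault propagation paths are traced downstream from a suspected root cause.

        # Build a connectivity matrix from directed plant-item connections and
        # trace downstream fault propagation paths. Items and connections are
        # hypothetical; the real tool parses ISO 15926 XML P&ID exports.
        import numpy as np

        items = ["feed_pump", "preheater", "furnace", "column", "condenser"]
        connections = [("feed_pump", "preheater"), ("preheater", "furnace"),
                       ("furnace", "column"), ("column", "condenser")]

        index = {name: i for i, name in enumerate(items)}
        n = len(items)

        # connectivity[i, j] == 1 means material/signal flows from item i to item j.
        connectivity = np.zeros((n, n), dtype=int)
        for src, dst in connections:
            connectivity[index[src], index[dst]] = 1

        def downstream_paths(start, matrix, names):
            """Enumerate all simple directed paths from `start` (iterative DFS)."""
            paths, stack = [], [[names.index(start)]]
            while stack:
                path = stack.pop()
                extended = False
                for nxt in np.flatnonzero(matrix[path[-1]]):
                    if nxt not in path:        # skip cycles from recycle streams
                        stack.append(path + [int(nxt)])
                        extended = True
                if not extended:
                    paths.append([names[i] for i in path])
            return paths

        # Candidate propagation paths if the preheater is the suspected root cause;
        # these can be compared with the ordering found by data-driven analysis.
        print(downstream_paths("preheater", connectivity, items))
        # [['preheater', 'furnace', 'column', 'condenser']]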

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA (extravehicular activity), increase crew productivity through the reduction of routine operations, increase Space Station autonomy, and augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Modeling and Simulation Methodologies for Digital Twin in Industry 4.0

    The concept of Industry 4.0 represents an innovative vision of the factory of the future. The principles of this new paradigm are based on interoperability and data exchange between different pieces of industrial equipment. In this context, Cyber-Physical Systems (CPSs) play one of the main roles in this revolution. The combination of models with the integration of real data coming from the field makes it possible to obtain a virtual copy of the real plant, also called a Digital Twin. The entire factory can be seen as a set of CPSs, and the resulting system is also called a Cyber-Physical Production System (CPPS). This CPPS represents the Digital Twin of the factory, with which it is possible to analyze the real factory. The interoperability between the real industrial equipment and the Digital Twin makes it possible to generate predictions concerning product quality. More specifically, these analyses relate to the variability of production quality, prediction of the maintenance cycle, accurate estimation of energy consumption, and other extra-functional properties of the system. Several tools [2] allow a production line to be modeled, considering different aspects of the factory (e.g., geometrical properties, information flows, etc.). However, these simulators do not natively provide any solution for the design integration of CPSs, making precise analysis of the real factory impossible. Furthermore, to the best of our knowledge, there is no solution for the clean integration of data coming from real equipment into the CPS models that compose the entire production line. In this context, this thesis aims to define a unified methodology to design and simulate the Digital Twin of a plant, integrating data coming from real equipment. In detail, the presented methodologies focus mainly on: the integration of heterogeneous models in production line simulators; the integration of heterogeneous models with ad hoc simulation strategies; and a multi-level simulation approach for CPSs with the integration of real sensor data into the models. Together, these contributions produce an environment that allows simulation of the plant based not only on synthetic data but also on real data coming from the equipment.
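
    As a toy illustration of coupling a simulation model with field data (the thermal model, blending gain, and sensor stream are assumptions for this sketch, not the thesis's methodology), a Digital Twin can step its model forward and then nudge the state toward incoming measurements so that the virtual copy tracks the physical equipment:

        # Toy digital-twin loop: advance a simple thermal model of a machine, then
        # blend in real sensor readings so the virtual state tracks the physical
        # one. The model, gain, and measurement stream are illustrative.

        class ThermalTwin:
            """First-order thermal model: temperature rises with a heating term
            while the machine runs and relaxes toward ambient otherwise."""

            def __init__(self, temp=20.0, ambient=20.0, heat_rate=2.0, loss=0.1):
                self.temp = temp
                self.ambient = ambient
                self.heat_rate = heat_rate
                self.loss = loss

            def step(self, dt, running):
                """Advance the simulated temperature by dt seconds."""
                heating = self.heat_rate if running else 0.0
                self.temp += dt * (heating - self.loss * (self.temp - self.ambient))

            def assimilate(self, measured_temp, gain=0.3):
                """Nudge the model state toward a real sensor reading.

                gain=0 ignores the sensor (pure simulation); gain=1 trusts it fully.
                """
                self.temp += gain * (measured_temp - self.temp)

        # Stand-in for a stream of real measurements (in practice these would
        # arrive over a fieldbus or a protocol such as OPC UA or MQTT).
        measurements = [21.0, 24.5, 28.0, 31.2, 33.9]

        twin = ThermalTwin()
        for z in measurements:
            twin.step(dt=1.0, running=True)   # model prediction
            twin.assimilate(z)                # correction from the field
            print(f"twin temperature: {twin.temp:.2f} (sensor: {z})")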

    Knowledge based techniques in plant design for safety


    A study of novice programmer performance and programming pedagogy.

    Identifying and mitigating the difficulties experienced by novice programmers is an active area of research that draws on a number of fields. The aim of this research was to perform a holistic study into the causes of poor performance in novice programmers and to develop teaching approaches to mitigate them. A grounded action methodology was adopted so that the primary concepts of programming cognitive psychology and their relationships could be established in a systematic and formal manner. To further investigate novice programmer behaviour, two sub-studies were conducted into programming performance and ability. The first sub-study was a novel application of the FP-Tree algorithm to determine whether novice programmers demonstrated predictable patterns of behaviour. This was the first study to data-mine programming behavioural characteristics rather than learners' background information such as age and gender. Using the algorithm, patterns of behaviour were generated and associated with the students' ability. No patterns of behaviour were identified, and it was not possible to predict student results using this method. This suggests that novice programmers demonstrate no set patterns of programming behaviour that can be used to determine their ability, although problem solving was found to be an important characteristic. There was therefore no evidence that performance could be improved by adopting pedagogies that promote simple changes in programming behaviour, beyond the provision of specific problem-solving instruction. A second sub-study, using Raven's Matrices, determined that cognitive psychology, specifically working memory, plays an important role in novice programmer ability. The implication is that programming pedagogies must take into consideration the cognitive psychology of programming and the cognitive load imposed on learners. Abstracted Construct Instruction was developed based on these findings and forms a new pedagogy for teaching programming that promotes the recall of abstract patterns while reducing the cognitive demands associated with developing code. Cognitive load is determined by the student's ability to ignore irrelevant surface features of the written problem and to cross-reference between the problem domain and their mental program model. The former is dealt with by producing tersely written exercises that eliminate distractors, while for the latter the teaching of problem solving is delayed until the student's program model has formed. While this delays the development of problem-solving skills, the problem-solving abilities of students taught with this pedagogy were found to be comparable to those of students taught using a more traditional approach. Furthermore, monitoring students' understanding of these patterns enabled micromanagement of the learning process, and hence explanations were provided for novice behaviours such as difficulty using arrays, inert knowledge, and “code thrashing”. For teaching more complex problem solving, scaffolding of practice was investigated through a program framework that the students could develop in stages. However, personalising the level of scaffolding required was complicated and proved difficult to achieve in practice. In both cases, these new teaching approaches evolved as part of a grounded theory study, and a clear progression of teaching practice was demonstrated, with appropriate evaluation at each stage, in accordance with action research.
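
    A sketch of the kind of frequent-pattern mining the first sub-study describes, using the FP-growth implementation from the mlxtend library (which builds an FP-tree internally); the behavioural markers and student records below are invented for illustration, not the study's data:

        # Mine frequent patterns of programming behaviour with FP-growth. The
        # per-student markers are invented; the study mined real logged behaviour
        # and found no patterns that predicted ability.
        import pandas as pd
        from mlxtend.preprocessing import TransactionEncoder
        from mlxtend.frequent_patterns import fpgrowth

        # One "transaction" per student: observed behavioural markers plus an
        # ability band derived from the final result.
        students = [
            ["frequent_compiles", "trial_and_error", "low_ability"],
            ["frequent_compiles", "plans_on_paper", "high_ability"],
            ["trial_and_error", "code_thrashing", "low_ability"],
            ["plans_on_paper", "incremental_testing", "high_ability"],
            ["frequent_compiles", "trial_and_error", "code_thrashing", "low_ability"],
        ]

        # One-hot encode the transactions into a boolean DataFrame.
        te = TransactionEncoder()
        onehot = pd.DataFrame(te.fit(students).transform(students),
                              columns=te.columns_)

        # Frequent itemsets appearing in at least 40% of the students; itemsets
        # containing an ability band would hint at predictive behaviour patterns.
        patterns = fpgrowth(onehot, min_support=0.4, use_colnames=True)
        print(patterns.sort_values("support", ascending=False))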

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of India's more than one billion citizens.