
    Learning models of plant behavior for anomaly detection and condition monitoring

    Providing engineers and asset managers with a tool which can diagnose faults within transformers can greatly assist decision making on issues such as maintenance, performance and safety. However, the onus has always been on personnel to accurately decide how serious a problem is and how urgently maintenance is required. Given the large volumes of data involved, it is possible that faults may not be noticed until serious damage has occurred. This paper proposes the integration of a newly developed anomaly detection technique with an existing diagnosis system. By learning a Hidden Markov Model of healthy transformer behavior, unexpected operation, such as when a fault develops, can be flagged for attention. Faults can then be diagnosed using the existing system and maintenance scheduled as required, all at a much earlier stage than would previously have been possible.
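The paper's models are not reproduced here, but the core idea, scoring new data against an HMM trained on healthy plant behaviour and flagging low-likelihood windows, can be sketched as follows (the two-state model, symbol alphabet and threshold are illustrative assumptions, not the paper's):

```python
import math

def hmm_loglik(obs, pi, A, B):
    # Forward algorithm with per-step scaling for a discrete HMM.
    # obs: list of symbol indices; pi: initial state probabilities;
    # A[j][i]: transition j->i; B[i][o]: emission of symbol o in state i.
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    s = sum(alpha)
    loglik = math.log(s)
    alpha = [a / s for a in alpha]
    for o in obs[1:]:
        alpha = [B[i][o] * sum(alpha[j] * A[j][i] for j in range(n))
                 for i in range(n)]
        s = sum(alpha)
        loglik += math.log(s)
        alpha = [a / s for a in alpha]
    return loglik

def is_anomalous(obs, pi, A, B, threshold):
    # Flag a window whose average per-step log-likelihood under the
    # healthy-behaviour model falls below a threshold learned from
    # healthy operating data.
    return hmm_loglik(obs, pi, A, B) / len(obs) < threshold
```

A window that the healthy model explains well scores a high log-likelihood; a developing fault produces observation patterns the model assigns low probability, pushing the score below the threshold and triggering the existing diagnosis system.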

    International conference on software engineering and knowledge engineering: Session chair

    The Thirtieth International Conference on Software Engineering and Knowledge Engineering (SEKE 2018) will be held at the Hotel Pullman, San Francisco Bay, USA, from July 1 to July 3, 2018. SEKE 2018 will also be dedicated to the memory of Professor Lotfi Zadeh, a great scholar, pioneer and leader in fuzzy set theory and soft computing. The conference aims at bringing together experts in software engineering and knowledge engineering to discuss relevant results in either software engineering or knowledge engineering or both. Special emphasis will be put on the transfer of methods between the two domains. The theme this year is soft computing in software engineering & knowledge engineering. Submissions of papers and demos are both welcome.

    Bayesian Belief Network Model Quantification Using Distribution-Based Node Probability and Experienced Data Updates for Software Reliability Assessment

    Since digital instrumentation and control systems are expected to play an essential role in safety systems in nuclear power plants (NPPs), the need to incorporate software failures into NPP probabilistic risk assessment has arisen. Based on a Bayesian belief network (BBN) model developed to estimate the number of software faults over the software development lifecycle, we performed a pilot study of software reliability quantification using the BBN model by aggregating different experts' opinions. In this paper, we suggest the distribution-based node probability table (D-NPT) development method, which efficiently represents diverse expert elicitations in the form of statistical distributions and provides a mathematical quantification scheme. In addition, handbook data on U.S. software development and the V&V and testing results for two nuclear safety software packages were used for a Bayesian update of the D-NPTs, in order to reduce the BBN parameter uncertainty arising from experts' different backgrounds and levels of experience. To analyze the effect of diverse expert opinions on the BBN parameter uncertainties, sensitivity studies were conducted by eliminating the NPT estimates that differed significantly among expert opinions. The proposed approach demonstrates a framework that can effectively and systematically integrate different kinds of available source information to quantify BBN NPTs for NPP software reliability assessment.
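The paper's D-NPT tables are not reproduced here, but the underlying pattern, pooling expert elicitations into a statistical distribution and then updating it with observed test data, can be sketched with a conjugate Beta-binomial model (the pooling rule, pseudo-count weight and all numbers are illustrative assumptions):

```python
def pool_experts(estimates, weight=10.0):
    # Convert each expert's point estimate of a node probability into
    # Beta pseudo-counts and pool them by summing the parameters.
    # 'weight' is a hypothetical per-expert confidence weight.
    a = sum(weight * p for p in estimates)
    b = sum(weight * (1.0 - p) for p in estimates)
    return a, b

def bayes_update(a, b, failures, trials):
    # Conjugate Beta-binomial update of the pooled distribution with
    # observed V&V / testing outcomes, reducing parameter uncertainty.
    return a + failures, b + (trials - failures)

def posterior_mean(a, b):
    # Point estimate to place in the node probability table.
    return a / (a + b)
```

For example, pooling three experts' estimates and then observing zero failures in 100 test runs pulls the NPT entry downward from the elicitation-only value, which mirrors how test evidence tempers divergent expert opinions.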

    Search for transient ultralight dark matter signatures with networks of precision measurement devices using a Bayesian statistics method

    We analyze the prospects of employing a distributed global network of precision measurement devices as a dark matter and exotic physics observatory. In particular, we consider the atomic clocks of the Global Positioning System (GPS), consisting of a constellation of 32 medium-Earth-orbit satellites equipped with either Cs or Rb microwave clocks and a number of Earth-based receiver stations, some of which employ highly stable H-maser atomic clocks. High-accuracy timing data is available for almost two decades. By analyzing the satellite and terrestrial atomic clock data, it is possible to search for transient signatures of exotic physics, such as "clumpy" dark matter and dark energy, effectively transforming the GPS constellation into a 50,000 km aperture sensor array. Here we characterize the noise of the GPS satellite atomic clocks, describe the search method based on Bayesian statistics, and test the method using simulated clock data. We present the projected discovery reach using our method, and demonstrate that it can surpass the existing constraints by several orders of magnitude for certain models. Our method is not limited in scope to GPS or atomic clock networks, and can also be applied to other networks of precision measurement devices.
    Comment: See also Supplementary Information located in an ancillary file.
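The paper's full Bayesian pipeline is not reproduced here, but its core ingredient, weighing a transient-signal hypothesis against a noise-only hypothesis on clock residuals, can be sketched for the simplified case of Gaussian white noise and a fixed, known transient template (the full method would marginalise over template parameters; everything below is an illustrative assumption):

```python
def log_likelihood_ratio(data, template, sigma):
    # ln[ p(data | transient present) / p(data | noise only) ] for
    # Gaussian white clock noise of standard deviation sigma and a
    # fixed transient template. Positive values favour the signal
    # hypothesis; this is the kernel inside a Bayesian odds ratio.
    cross = sum(d * s for d, s in zip(data, template))
    energy = sum(s * s for s in template)
    return (cross - 0.5 * energy) / sigma ** 2
```

Sliding such a template across the residuals of many clocks, and demanding the correlated timing pattern a dark-matter "clump" sweeping through the constellation would produce, is what turns the network into a single large-aperture sensor.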

    An investigation into hazard-centric analysis of complex autonomous systems

    This thesis proposes the hypothesis that a conventional, and essentially manual, HAZOP process can be improved with information obtained from model-based dynamic simulation, using a Monte Carlo approach, to update a Bayesian belief model representing the expected relations between causes and effects, thereby producing an enhanced HAZOP. The work considers how the expertise of a hazard and operability study team might be augmented with access to behavioural models, simulations and belief inference models. This incorporates models of dynamically complex system behaviour, considering where these might contribute to the expertise of a hazard and operability study team, and how they might bolster trust in the portrayal of system behaviour. Using a questionnaire containing behavioural outputs from a representative systems model, responses were collected from a group with relevant domain expertise. From this it is argued that the quality of analysis depends upon the experience and expertise of the participants, but that it might be artificially augmented using probabilistic data derived from a system dynamics model. Consequently, Monte Carlo simulations of an improved exemplar system dynamics model are used to condition a behavioural inference model and also to generate measures of emergence associated with the deviation parameter used in the study. A Bayesian approach to probability is adopted where particular events and combinations of circumstances are effectively unique or hypothetical, and perhaps irreproducible in practice. It is therefore shown that a Bayesian model, representing beliefs expressed in a hazard and operability study and conditioned by the likely occurrence of flaw events causing specific deviant behaviour observed in the system dynamics, may combine intuitive estimates based upon experience and expertise with quantitative statistical information representing plausible evidence of safety constraint violation.
A further behavioural measure identifies potential emergent behaviour by way of a Lyapunov exponent. Together these improvements enhance the awareness of potential hazard cases.
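The thesis's models are not reproduced here, but the pattern of conditioning a belief model with Monte Carlo simulation output can be sketched with a toy example: repeated simulated runs estimate how often a flaw drives the deviation parameter past a safety limit, and Bayes' rule then inverts that into a belief about the flaw given an observed deviation (the Gaussian stand-in for the system dynamics model and all numbers are illustrative assumptions):

```python
import random

def simulate_deviation(flaw_present, rng):
    # Toy stand-in for one system-dynamics run: a flaw shifts the
    # mean of the monitored deviation parameter.
    mu = 1.5 if flaw_present else 0.0
    return rng.gauss(mu, 1.0)

def mc_conditional(flaw_present, limit, n, seed):
    # Monte Carlo estimate of P(deviation exceeds limit | flaw state),
    # i.e. one conditional-probability entry of the belief model.
    rng = random.Random(seed)
    hits = sum(simulate_deviation(flaw_present, rng) > limit
               for _ in range(n))
    return hits / n

def posterior_flaw(prior, p_dev_given_flaw, p_dev_given_ok):
    # Bayes' rule: belief in the flaw after a deviation is observed.
    num = prior * p_dev_given_flaw
    return num / (num + (1.0 - prior) * p_dev_given_ok)
```

The same shape scales up to a full HAZOP belief network: each simulated flaw scenario populates conditional probabilities that would otherwise rest solely on the study team's intuition.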

    Deep learning in automated ultrasonic NDE -- developments, axioms and opportunities

    The analysis of ultrasonic NDE data has traditionally been addressed by a trained operator manually interpreting data with the support of rudimentary automation tools. Recently, many demonstrations of deep learning (DL) techniques that address individual NDE tasks (data pre-processing, defect detection, defect characterisation, and property measurement) have started to emerge in the research community. These methods have the potential to offer high flexibility, efficiency, and accuracy, subject to the availability of sufficient training data. Moreover, they enable the automation of complex processes that span one or more NDE steps (e.g. detection, characterisation, and sizing). There is, however, a lack of consensus on the direction and requirements that these new methods should follow. These elements are critical to achieving automation of ultrasonic NDE driven by artificial intelligence that the research community, industry, and regulatory bodies can embrace. This paper reviews the state of the art of autonomous ultrasonic NDE enabled by DL methodologies. The review is organised by the NDE tasks that are addressed by means of DL approaches. Key remaining challenges for each task are noted. Basic axiomatic principles for DL methods in NDE are identified based on the literature review, relevant international regulations, and current industrial needs. By placing DL methods in the context of general NDE automation levels, this paper aims to provide a roadmap for future research and development in the area.
    Comment: Accepted version to be published in NDT & E International

    APPLICATION AND REFINEMENTS OF THE REPS THEORY FOR SAFETY CRITICAL SOFTWARE

    With the replacement of old analog control systems by software-based digital control systems, there is an urgent need for a method to quantitatively and accurately assess the reliability of safety-critical software systems. This research proposes a systematic software metric-based reliability prediction method. The method starts with the measurement of a metric. Measurement results are then either directly linked to software defects through inspections and peer reviews, or indirectly linked to software defects through empirical software engineering models. Three types of defect characteristics can be obtained, namely, 1) the number of defects remaining, 2) the number and the exact location of the defects found, and 3) the number and the exact location of defects found in an earlier version. Three models, Musa's exponential model, the PIE model and a mixed Musa-PIE model, are then used to link each of the three categories of defect characteristics with reliability, respectively. In addition, the use of the PIE model requires mapping identified defects to an Extended Finite State Machine (EFSM) model. A procedure that can assist in the construction of the EFSM model and increase its repeatability is also provided. This metric-based software reliability prediction method is then applied to safety-critical software used in the nuclear industry, using eleven software metrics. Reliability prediction results are compared with the real reliability assessed using operational failure data. Experiences and lessons learned from the application are discussed. Based on the results and findings, four software metrics are recommended. This dissertation then focuses on one of the four recommended metrics, Test Coverage.
A reliability prediction model based on Test Coverage is discussed in detail, and this model is further refined to take into consideration more realistic conditions, such as imperfect debugging and the use of multiple testing phases.
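The dissertation's refined model is not reproduced here, but the general shape of a coverage-based prediction in the Musa exponential style can be sketched: a hypothetical defect-coverage relation estimates how many faults testing at a given structural coverage exposes, and the failure intensity is taken proportional to the faults remaining (the saturation constant, fault-exposure ratio and functional form are all illustrative assumptions, not the dissertation's calibrated model):

```python
import math

def defects_found(n_total, coverage, b=4.0):
    # Hypothetical defect-coverage relation: fault detection saturates
    # exponentially as structural test coverage approaches 1.
    return n_total * (1.0 - math.exp(-b * coverage))

def reliability(n_total, coverage, phi=1e-4, missions=1.0, b=4.0):
    # Musa-style exponential model: failure intensity is proportional
    # (fault-exposure ratio phi) to the defects remaining after
    # coverage-driven testing; reliability over a mission time follows.
    remaining = n_total - defects_found(n_total, coverage, b)
    return math.exp(-phi * remaining * missions)
```

Refinements of the kind the dissertation describes would, for instance, discount `defects_found` by an imperfect-debugging probability and chain the update across multiple testing phases.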