38,818 research outputs found

    FLAGS : a methodology for adaptive anomaly detection and root cause analysis on sensor data streams by fusing expert knowledge with machine learning

    Anomalies and faults can be detected, and their causes verified, using both data-driven and knowledge-driven techniques. Data-driven techniques can adapt their internal functioning based on the raw input data but fail to explain why a detection occurred. Knowledge-driven techniques inherently deliver the cause of the faults they detect but require too much human effort to set up. In this paper, we introduce FLAGS, the Fused-AI interpretabLe Anomaly Generation System, which combines both techniques in one methodology to overcome their limitations and optimizes them based on limited user feedback. Semantic knowledge is incorporated into a machine learning technique to enhance expressivity. At the same time, feedback about the faults and anomalies that occurred is fed back as input to increase adaptiveness using semantic rule mining methods. The methodology is evaluated on a predictive maintenance case for trains. We show that it reduces train downtime and provides more insight into frequently occurring problems. (C) 2020 The Authors. Published by Elsevier B.V.
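    The fusion the abstract describes can be illustrated with a minimal sketch: an adaptive data-driven score flags anomalies, while hand-written expert rules attach a cause to each detection. All names, thresholds, and rules below are invented for illustration; they are not the FLAGS system itself.

```python
from statistics import mean, stdev

# Data-driven part: a z-score over a sliding window adapts to the raw data.
def zscore(window, value):
    m, s = mean(window), stdev(window)
    return 0.0 if s == 0 else abs(value - m) / s

# Knowledge-driven part: expert rules mapping symptoms to causes
# (hypothetical sensors and thresholds, purely illustrative).
EXPERT_RULES = [
    (lambda r: r["temp"] > 90, "overheating bearing"),
    (lambda r: r["vibration"] > 5.0, "axle imbalance"),
]

def detect(window, reading, threshold=3.0):
    """Flag an anomaly data-driven, then explain it knowledge-driven."""
    score = zscore(window, reading["temp"])
    if score < threshold:
        return None
    causes = [why for pred, why in EXPERT_RULES if pred(reading)]
    return {"score": round(score, 2), "causes": causes or ["unknown"]}
```

    In a feedback-driven system along these lines, user confirmations or rejections of past detections would be used to adjust the threshold and mine new rules.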

    Forming peculiarities and manifestation of tectonic faults in soft rocks

    The distribution of tectonic structures in soft rocks confirms the presence of horizontal tectonic forces in the formation of faults, based on the manifestation of their morphological features. Mathematical processing of the experimental data yielded linear dependences of the fault amplitude on the length of the tectonic dislocation in the wedge-out zone. The actual position of the line where the fault plane crosses the seam was considered while studying the distribution of fault-related fracturing. Analysis of the data confirms that the distribution of faulting has an undulating character, and observations show that the deviation of this crossing line from the middle line follows the normal law of random-variable distribution. The studies and the obtained results thus allow mining operations in fault areas to be planned with an assessment of their utility

    Interpretation of Bouguer Anomaly to Determine Fault and Subsurface Structure at Blawan-ijen Geothermal Area

    A gravity survey was acquired with a Gravimeter Lacoste & Romberg G-1035 at the Blawan-Ijen geothermal area, as a focused follow-up to previous research. The residual Bouguer anomaly data were obtained after applying gravity data reduction, reduction to a horizontal plane, and upward continuation. Interpretation of the Bouguer anomaly shows the occurrence of new faults and their relative movement. The Blawan fault (F1), F2, F3, and F6 are normal faults; the Blawan fault is the main fault controlling the hot springs of the Blawan-Ijen geothermal area. F4 and F5 are oblique faults forming a graben at the Banyupahit River, and F7 is a reverse fault. The subsurface model shows that the Blawan-Ijen geothermal area is dominated by the Ijen caldera-forming ignimbrite (ρ1 = 2.670 g/cm3), embedded shale and sand (ρ2 = 2.644 g/cm3) as Blawan lake sediments, magma intrusions (ρ3 = 2.814 g/cm3 and ρ7 = 2.821 g/cm3), andesite rock (ρ4 = 2.448 g/cm3) as the geothermal reservoir, pyroclastic air-fall deposits (ρ5 = 2.613 g/cm3) from Mt. Blau, and a lava flow (ρ6 = 2.890 g/cm3)

    Practical Model-Based Diagnosis with Qualitative Possibilistic Uncertainty

    An approach to fault isolation that exploits vastly incomplete models is presented. It relies on separate descriptions of each component's behavior, together with the links between them, which enables the reasoning to focus on the relevant part of the system. As normal observations need no explanation, the behavior of the components is limited to anomaly propagation. Diagnostic solutions are disorders (fault modes or abnormal signatures) that are consistent with the observations, as well as abductive explanations. An ordinal representation of uncertainty based on possibility theory provides a simple exception-tolerant description of the component behaviors. We can, for instance, distinguish between effects that are more or less certainly present (or absent) in general and effects that are more or less certainly present (or absent) when a given anomaly is present. A realistic example illustrates the benefits of this approach. Comment: Appears in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (UAI 1995)
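    The ordinal flavor of this style of reasoning can be sketched with a toy consistency check: each fault mode lists how certainly it produces each effect (1.0 = certain, lower values = "more or less certain"), and a candidate fault is penalized when an effect it certainly causes is observed absent. The fault names, effects, and degrees are invented for illustration and are not taken from the paper.

```python
# Hypothetical fault modes with ordinal certainty degrees for their effects.
FAULTS = {
    "pump_leak":   {"low_pressure": 1.0, "noise": 0.6},
    "valve_stuck": {"low_pressure": 0.8, "no_flow": 1.0},
}

def plausibility(fault, observed_present, observed_absent):
    """Possibility of a fault given observations (ordinal min-combination).

    An effect the fault causes with certainty c, yet observed absent,
    caps the fault's possibility at 1 - c. observed_present is kept for
    symmetry; a fuller version would also reward explained observations.
    """
    effects = FAULTS[fault]
    pi = 1.0
    for e in observed_absent:
        if e in effects:
            pi = min(pi, 1.0 - effects[e])
    return pi

def diagnose(present, absent):
    """Rank fault candidates by possibility, most plausible first."""
    return sorted(((f, plausibility(f, present, absent)) for f in FAULTS),
                  key=lambda fp: fp[1], reverse=True)
```

    For example, observing low pressure with flow confirmed normal rules out the stuck valve (its certain effect is missing) while leaving the leak fully possible.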

    Optimal discrimination between transient and permanent faults

    An important practical problem in fault diagnosis is discriminating between permanent faults and transient faults. In many computer systems, the majority of errors are due to transient faults. Many heuristic methods have been used for discriminating between transient and permanent faults; however, we have found no previous work stating this decision problem in clear probabilistic terms. We present an optimal procedure for discriminating between transient and permanent faults, based on applying Bayesian inference to the observed events (correct and erroneous results). We describe how the assessed probability that a module is permanently faulty must vary with the observed symptoms. We describe and demonstrate our proposed method on a simple application problem, building the appropriate equations and showing numerical examples. The method can be implemented as a run-time diagnosis algorithm at little computational cost; it can also be used to evaluate any heuristic diagnostic procedure by comparison
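    The Bayesian update the abstract describes can be sketched directly: maintain P(permanent) for a module and revise it after each observed result. The two error likelihoods below are illustrative assumptions, not values from the paper.

```python
# Assumed likelihoods (hypothetical): how often a result is erroneous
# under each hypothesis about the module.
P_ERR_IF_PERMANENT = 0.9   # a permanently faulty module usually errs
P_ERR_IF_TRANSIENT = 0.05  # after a transient fault, errors are rare

def update(p_perm, erroneous):
    """One Bayes step given an observed result (True = erroneous)."""
    if erroneous:
        like_p, like_t = P_ERR_IF_PERMANENT, P_ERR_IF_TRANSIENT
    else:
        like_p, like_t = 1 - P_ERR_IF_PERMANENT, 1 - P_ERR_IF_TRANSIENT
    num = like_p * p_perm
    return num / (num + like_t * (1 - p_perm))

def assess(prior, results):
    """Fold a sequence of observed results into a posterior P(permanent)."""
    p = prior
    for r in results:
        p = update(p, r)
    return p
```

    Repeated erroneous results drive the posterior toward "permanent", while a run of correct results drives it toward "transient", matching the paper's point that the assessed probability must track the observed symptoms.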

    Software-implemented fault insertion: An FTMP example

    This report presents a model for fault insertion through software; describes its implementation on a fault-tolerant computer, FTMP; presents a summary of fault detection, identification, and reconfiguration data collected with software-implemented fault insertion; and compares the results to hardware fault insertion data. Experimental results show detection time to be a function of the time of insertion and the system workload. For fault detection time, there is no correlation between software-inserted and hardware-inserted faults; this is because hardware-inserted faults must manifest as errors before detection, whereas software-inserted faults immediately exercise the error detection mechanisms. In summary, software-implemented fault insertion can be used as an evaluation technique for the fault-handling capabilities of a system in fault detection, identification, and recovery. Although software-inserted faults do not map directly to hardware-inserted faults, experiments show that software-implemented fault insertion is capable of emulating hardware fault insertion, with greater ease and automation
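    The core idea, corrupting state through software and watching the detection mechanism fire immediately, can be shown in miniature. The parity word here is an illustrative stand-in for FTMP's real error-detection hardware, not a model of it.

```python
def parity(word):
    """Even/odd parity of an integer's bit pattern."""
    return bin(word).count("1") % 2

class Memory:
    """A stored word guarded by a software parity check."""

    def __init__(self, word):
        self.word = word
        self.check = parity(word)

    def insert_fault(self, bit):
        """Software fault insertion: flip one bit of the stored word."""
        self.word ^= 1 << bit

    def detect(self):
        """Error detection: recompute parity and compare to the check word."""
        return parity(self.word) != self.check
```

    Because the corruption is written directly into the guarded state, the very next check detects it, which mirrors the report's observation that software-inserted faults exercise the detection mechanisms at once, without first having to manifest as errors.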

    Reliability and maintainability assessment factors for reliable fault-tolerant systems

    A long-term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10-year period, is a combined analytic and simulative technique. The analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III; the simulative component is the Gate Logic Software Simulator capability, or GLOSS. Presented are the numerous factors that can potentially degrade system reliability and the ways in which those factors peculiar to highly reliable fault-tolerant systems are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics