
    Fault Detection and Isolation Expert System and Kernel Smoothing Techniques to Monitor the Continuous Automated Vault Inventory System (CAVIS)

    The Continuous Automated Vault Inventory System (CAVIS™) is a system designed to continually monitor the status of special nuclear materials (SNM) at the Oak Ridge-based Y-12 facility. CAVIS consists of an integrated package of low-cost sensors used to continuously monitor the weight and radiation attributes of the stored items. The system detects a “change-in-state” of the special nuclear material and generates an appropriate alarm. Unfortunately, CAVIS is susceptible to false alarms that do not coincide with the removal of special nuclear material. These false alarms may be due to the stochastic nature of the measurements, to failing components, or to external sources in the vicinity of the facility. The response to a false alarm may be an inventory check, which entails the physical verification of the attributes of the SNM, so it is desirable to limit this costly response. This thesis presents the development of a monitoring system for CAVIS to eliminate the costly responses caused by false alarms. The system merges advanced statistical algorithms, such as the sequential probability ratio test (SPRT), which extract features related to changes in the CAVIS sensors, with an expert system that forms a hypothesis on the root cause of any anomaly. In addition, kernel-averaging techniques have been developed as a regional anomaly-monitoring module. This thesis presents the development of the expert system and the kernel-averaging features of the fault detection and isolation system. The implementation of these techniques will enable monitoring of the CAVIS system and the development of alternative hypotheses about the root cause of spurious CAVIS alarms. These alternative hypotheses can be investigated prior to any inventory check, thus reducing cost and lessening radiation exposure.
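
    The abstract names the sequential probability ratio test (SPRT) as the feature-extraction step. The sketch below is a minimal, illustrative SPRT for detecting a mean shift in a single sensor channel, assuming Gaussian noise with known variance; the thresholds, shift size and variable names are assumptions for illustration, not the thesis implementation.

```python
import math

def sprt_mean_shift(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Sequential probability ratio test for a shift in mean from mu0 to mu1.

    Returns ("H1", n) if a shift is declared after n samples,
    ("H0", n) if the no-change hypothesis is accepted after n samples, or
    ("undecided", n) if the data run out first.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 (change) at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 (no change) at or below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment for Gaussian observations
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# Example: a synthetic weight channel that has drifted from 10.0 kg to about 9.7 kg
shifted = [9.72, 9.68, 9.71, 9.69, 9.70, 9.73, 9.67]
print(sprt_mean_shift(shifted, mu0=10.0, mu1=9.7, sigma=0.1))  # -> ("H1", n)
```

    In a monitoring setting the test would typically be restarted after each decision, so the sensor stream is screened continuously rather than once.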

    APHRODITE: an Anomaly-based Architecture for False Positive Reduction

    We present APHRODITE, an architecture designed to reduce false positives in network intrusion detection systems. APHRODITE works by detecting anomalies in the output traffic and correlating them with the alerts raised by the NIDS working on the input traffic. Benchmarks show a substantial reduction of false positives, and that APHRODITE is effective even after a "quick setup", i.e. in the realistic case in which it has not been trained and set up optimally.
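
    As a rough illustration of the correlation idea described above (not the authors' implementation), the sketch below keeps only those NIDS alerts on the input traffic that are followed, within a short window, by an anomaly in the output traffic of the same host; the alert/anomaly structures and the window length are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    timestamp: float   # when the NIDS flagged the input traffic
    src: str           # suspected target host

@dataclass
class Anomaly:
    timestamp: float   # when the output-traffic monitor saw a deviation
    host: str          # host whose outgoing traffic deviated

def correlate(alerts, anomalies, window=30.0):
    """Keep an alert only if the same host shows an output-traffic anomaly
    within `window` seconds after the alert; otherwise treat it as a false positive."""
    confirmed = []
    for alert in alerts:
        if any(a.host == alert.src and 0 <= a.timestamp - alert.timestamp <= window
               for a in anomalies):
            confirmed.append(alert)
    return confirmed

# Example: only the first alert is confirmed by the output-traffic monitor
alerts = [Alert(100.0, "10.0.0.5"), Alert(200.0, "10.0.0.9")]
anomalies = [Anomaly(112.0, "10.0.0.5")]
print(correlate(alerts, anomalies))
```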

    Process Performance Analysis in Large-Scale Systems Integrating Different Sources of Information

    Process auditing using historical data can identify causes of poor performance and reveal opportunities to improve process operation. To date, the data used has been limited to process measurements; however, other sources hold complementary information about the process behavior. This paper proposes a new approach to root-cause diagnosis which also takes advantage of the information in utility, mechanical and electrical data, alarms, and diagrams. Its benefit is demonstrated in an industrial case study that tackles an important challenge in root-cause analysis: large-scale systems. This paper also defines specifications for a semi-automated tool to implement the proposed approach. © 2012 IFAC

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual, temporally distributed events within a multiple data stream environment is explored, and a range of techniques is reviewed, covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events; this approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detects when a misuse has occurred; this approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
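
    To make the rule-based correlation idea concrete, here is a minimal, hypothetical sketch (not taken from the report): a rule fires when a given sequence of event types from the same subscriber is observed within a time window. The event fields, rule contents and thresholds are illustrative assumptions.

```python
from collections import defaultdict

# A "rule" is a named sequence of event types that, seen in order from one
# subscriber within `window` seconds, is treated as suspected misuse.
RULES = {
    "cloned-SIM suspicion": (["call_start", "call_start"], 60.0),               # overlapping calls
    "PBX fraud suspicion": (["login_fail", "login_fail", "login_ok", "intl_call"], 300.0),
}

def correlate(events):
    """events: list of (timestamp, subscriber_id, event_type), assumed time-sorted."""
    per_subscriber = defaultdict(list)
    for ts, sub, etype in events:
        per_subscriber[sub].append((ts, etype))

    findings = []
    for sub, stream in per_subscriber.items():
        for name, (pattern, window) in RULES.items():
            # Naive scan: try to match the pattern starting at each event
            for i in range(len(stream)):
                j, k = i, 0
                while j < len(stream) and k < len(pattern):
                    if stream[j][1] == pattern[k] and stream[j][0] - stream[i][0] <= window:
                        k += 1
                    j += 1
                if k == len(pattern):
                    findings.append((sub, name, stream[i][0]))
                    break
    return findings

events = [(0, "A", "call_start"), (20, "A", "call_start"), (40, "B", "call_end")]
print(correlate(events))   # -> [("A", "cloned-SIM suspicion", 0)]
```

    The knowledge-base maintenance problem mentioned above shows up directly here: every new misuse scenario requires another hand-written entry in RULES.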

    Application of Combined Reliability Centered Maintenance and Risk Based Inspection Method to Improve the Effectiveness of Maintenance Management of Compressor ZR 5 Intercooler

    The compressor plays a prominent role in a compressed air system: if the compressor itself fails, the air supply to the production line is interrupted. One of the major problems of the ZR 5 compressor occurs in the intercooler. The intercooler is the compressor component that cools the compressed air inside the compressor so that the compressor does not overheat. If overheating occurs, the alarm on the compressor display shows "High Air Temperature on LP Element Outlet"; consequently, the compressor shuts down automatically and must be replaced by another standby compressor. Even though routine intercooler maintenance is already performed periodically according to the maintenance manual (running-hour/time based), this alarm indication is frequently shown prior to the next scheduled maintenance. PM analysis (field observation) and root cause analysis identified the causes of the alarm indication: low cooling-water flow rate caused by a clogged strainer (filter) in the pipeline, imperfect contact between cooling water and compressed air due to poor cooling-water quality, delayed detection of the problem, and lack of skill in analysing alarm trouble data. The proposed countermeasures are as follows: relocate the strainer so that maintenance can be carried out easily; install a new pipeline with a new cooling-water source (from treated reuse water to public water) that better complies with the water requirements; introduce a stochastic method (replacement and inspection model) to predict effective inspection, cleaning and replacement of the intercooler; and provide maintenance management training, including problem diagnosis and maintenance of the compressor cooling-water system. However, the countermeasures above only address the abnormalities found during the observation of the compressor cooling-water system. By applying the combined RCM and RBI method, other potential causes that might result in the same problem can be identified, so that the problem does not recur. The countermeasures (corrective actions) and preventive actions should be analysed for their pros and cons (cost-benefit analysis) so that this activity provides important information for management. Based on the CBA, this activity has a payback period of 3 months. The activity will involve an external party (contractor) to perform the corrective actions and an internal party (operators) to conduct the maintenance tasks as scheduled.
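
    As a hedged illustration of the kind of stochastic replacement/inspection model the abstract refers to (not the study's actual model), the sketch below evaluates the classic age-replacement cost-rate formula under an assumed Weibull failure distribution for the intercooler; all costs, Weibull parameters and candidate intervals are made-up numbers.

```python
import math

def weibull_reliability(t, beta=2.5, eta=4000.0):
    """R(t) for a Weibull life distribution (shape beta, scale eta, in running hours)."""
    return math.exp(-((t / eta) ** beta))

def cost_rate(T, c_preventive=500.0, c_failure=5000.0, steps=1000):
    """Expected cost per running hour under age replacement at interval T:

    C(T) = (c_p * R(T) + c_f * (1 - R(T))) / integral_0^T R(t) dt
    """
    dt = T / steps
    mean_cycle_length = sum(weibull_reliability(i * dt) * dt for i in range(steps))
    expected_cycle_cost = (c_preventive * weibull_reliability(T)
                           + c_failure * (1 - weibull_reliability(T)))
    return expected_cycle_cost / mean_cycle_length

# Scan candidate cleaning/replacement intervals and pick the cheapest one
candidates = range(500, 8001, 500)
best = min(candidates, key=cost_rate)
print(best, round(cost_rate(best), 4))
```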

    SENATUS: An Approach to Joint Traffic Anomaly Detection and Root Cause Analysis

    In this paper, we propose a novel approach, called SENATUS, for joint traffic anomaly detection and root-cause analysis. Inspired by the concept of a senate, the proposed approach is divided into three stages: election, voting and decision. In the election stage, a small number of senator flows are chosen to approximately represent the total (usually huge) set of traffic flows. In the voting stage, anomaly detection is applied to the senator flows and the detected anomalies are correlated to identify the most likely anomalous time bins. Finally, in the decision stage, a machine learning technique is applied to the senator flows of each anomalous time bin to find the root cause of the anomalies. We evaluate SENATUS using traffic traces collected from the pan-European network GEANT, and compare it against another approach which detects anomalies using lossless compression of traffic histograms. We show the effectiveness of SENATUS in diagnosing anomaly types such as network scans and DoS/DDoS attacks.
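
    The sketch below is an illustrative skeleton of the three stages described in the abstract; the selection heuristic (top flows by byte volume), the z-score voting rule and the decision-tree classifier are stand-ins, not the method from the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def election(flow_matrix, flow_ids, n_senators=50):
    """Pick the flows with the largest total volume as 'senator' flows.
    flow_matrix: flows x time-bins array of traffic volumes."""
    order = np.argsort(flow_matrix.sum(axis=1))[::-1][:n_senators]
    return flow_matrix[order], [flow_ids[i] for i in order]

def voting(senators, z_thresh=3.0, min_votes=5):
    """Each senator flow 'votes' for time bins where its volume deviates strongly
    from its own mean; bins collecting enough votes are declared anomalous."""
    mu = senators.mean(axis=1, keepdims=True)
    sd = senators.std(axis=1, keepdims=True) + 1e-9
    votes = (np.abs(senators - mu) / sd > z_thresh).sum(axis=0)
    return np.where(votes >= min_votes)[0]

def decision(train_features, train_labels, bin_features):
    """Classify the root cause of each flagged bin (e.g. scan vs. DDoS)
    from per-bin flow features, given some labelled history."""
    clf = DecisionTreeClassifier().fit(train_features, train_labels)
    return clf.predict(bin_features)
```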

    Identifying how automation can lose its intended benefit along the development process: a research plan

    Doctoral Consortium Presentation. © The Authors 2009. Automation is usually considered to improve performance in virtually any domain. However, it can fail to deliver the benefit intended by the managers and designers advocating the introduction of the tool. In safety-critical domains this problem is significant not only because the unexpected effects of automation might prevent its widespread usage, but also because they might turn out to be a contributor to incidents and accidents. Research on failures of automation to deliver the intended benefit has focused mainly on human-automation interaction. This paper presents a PhD research plan that aims at characterizing the decisions, taken under production pressure by those involved in the development process of automation for safety-critical domains, in order to identify where and when the benefit the automation is supposed to deliver can be lost along the development process. We tentatively call such decisions drift, and the final objective is to develop principles that will make it possible to identify and compensate for possible sources of drift in the development of new automation. The research is based on case studies and is currently entering Year 2.