
    A kernel density estimate-based approach to component goodness modeling

    Intermittent fault localization approaches account for the fact that faulty components may fail intermittently by considering a parameter (known as goodness) that quantifies the probability that a faulty component still exhibits correct behavior. Current state-of-the-art approaches (1) assume that this goodness probability is context independent and (2) provide no means of integrating past diagnosis experience into the diagnostic mechanism. In this paper, we present a novel approach, coined Non-linear Feedback-based Goodness Estimate (NFGE), that uses kernel density estimation (KDE) to address these limitations. We evaluated the approach with both synthetic and real data, yielding lower estimation errors and thus better diagnostic performance.
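    A minimal sketch of the underlying idea, not the authors' NFGE implementation: two kernel density estimates, fitted over past observations of a context feature for runs in which the component behaved correctly and for runs in which it failed, yield a context-dependent goodness probability. The feature values and the use of SciPy's Gaussian KDE are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Illustrative past observations of a 1-D "context" feature (e.g. input size)
# for runs in which the intermittently faulty component behaved correctly vs. failed.
ctx_pass = np.array([0.2, 0.3, 0.35, 0.5, 0.55, 0.6])   # component behaved correctly
ctx_fail = np.array([0.7, 0.75, 0.8, 0.9, 0.95])        # component failed

kde_pass = gaussian_kde(ctx_pass)   # density of contexts with correct behaviour
kde_fail = gaussian_kde(ctx_fail)   # density of contexts with failures

def goodness(ctx):
    """Context-dependent goodness: P(correct | context), estimated from the two KDEs."""
    p_pass = kde_pass(ctx)[0] * len(ctx_pass)   # density x class count ~ joint probability
    p_fail = kde_fail(ctx)[0] * len(ctx_fail)
    return p_pass / (p_pass + p_fail)

print(goodness(0.4), goodness(0.85))  # high goodness in "easy" contexts, low near past failures
```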

    Care 3, Phase 1, volume 1

    A computer program to aid in assessing the reliability of fault-tolerant avionics systems was developed. A simple mathematical expression was used to evaluate the reliability of any redundant configuration over any interval during which the failure rates and coverage parameters remained unaffected by configuration changes. Provision was made for convolving such expressions in order to evaluate the reliability of a dual-mode system. A coverage model was also developed to determine the various relevant coverage coefficients as a function of the available hardware and software fault detector characteristics and the subsequent isolation and recovery delay statistics.
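    As a hedged illustration of the kind of expression involved (not the CARE 3 formulation itself), the reliability of a k-out-of-n redundant configuration of identical components with constant failure rate over an interval, assuming no repair and perfect coverage, can be computed as follows:

```python
import math

def k_out_of_n_reliability(n, k, lam, t):
    """Reliability of a k-out-of-n redundant configuration of identical components
    with constant failure rate lam over the interval [0, t] (no repair, perfect coverage)."""
    r = math.exp(-lam * t)  # single-component reliability over [0, t]
    # System survives if at most n - k components fail.
    return sum(math.comb(n, i) * (1 - r) ** i * r ** (n - i) for i in range(n - k + 1))

# Triple modular redundancy (2-out-of-3) over a 10-hour interval, lambda = 1e-3 per hour
print(k_out_of_n_reliability(3, 2, 1e-3, 10.0))
```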

    Enhancing reasoning approaches to diagnose functional and non-functional errors

    Most approaches to automatic software diagnosis abstract the system under analysis in terms of component activity and correct/incorrect behaviour (collectively known as spectra). While this binary error abstraction has been shown to be capable of diagnosing functional errors, it yields suboptimal accuracy when diagnosing non-functional errors. The main reason for this limitation is the lack of mechanisms for encoding error symptoms (such as performance degradation) in such a binary scheme. In this paper, we propose a novel approach to diagnosing both functional and non-functional errors by incorporating concepts from fuzzy logic into the classic, Bayesian reasoning approaches to error diagnosis. An empirical evaluation on 27,000 synthetic scenarios demonstrates that the proposed fuzzy-logic-based approach considerably improves diagnostic accuracy (by 20% on average, with 99% statistical significance) compared to the classic, state-of-the-art approach.
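    A minimal sketch of how fuzzy error degrees can be folded into a Bayesian, spectrum-based scoring of single-fault candidates; the activity matrix, error degrees, and goodness value below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Activity matrix: rows = runs, columns = components (1 = component involved in the run)
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 0, 0]])
# Fuzzy error per run in [0, 1] (e.g. normalized response-time degradation),
# instead of the classic binary pass/fail vector.
e = np.array([0.9, 0.1, 0.8, 0.05])

def posterior(component, goodness=0.1, prior=1 / 3):
    """Likelihood-based score of a single-fault candidate under fuzzy errors."""
    p = prior
    for run in range(A.shape[0]):
        if A[run, component]:
            # Blend the failing and passing likelihoods by the fuzzy error degree.
            p *= e[run] * (1 - goodness) + (1 - e[run]) * goodness
    return p

scores = [posterior(c) for c in range(A.shape[1])]
print(scores)  # component 1 scores highest: it is active in the most degraded runs
```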

    Distributed Intermittent Fault Diagnosis in Wireless Sensor Network Using Likelihood Ratio Test

    Nowadays, sensor nodes are deployed in hostile environments for various military and commercial applications. Faulty sensor nodes adversely affect the network if they are not diagnosed and their fault status is not communicated to the other nodes. Fault diagnosis is difficult when a node behaves faultily at some times and provides good data at others. The intermittent disturbances may be random or spike-like, occurring at regular or irregular intervals. In the literature, fault diagnosis algorithms are based on statistical methods using repeated testing or on machine learning. To avoid complex, time-consuming repeated testing and computationally expensive machine learning methods, we propose a one-shot likelihood ratio test (LRT) to determine the fault status of a sensor node. The proposed method computes statistics of the received data over a certain period of time and then compares the likelihood ratio with a threshold associated with a given tolerance limit. Simulation results on a real-time data set show that the new method provides better detection accuracy (DA) with a lower false positive rate (FPR) and false alarm rate (FAR) than the modified three-sigma test. The LRT-based hybrid fault diagnosis method detects the fault status of a sensor node in a wireless sensor network (WSN) on real measured data with 100% DA, 0% FAR, and 0% FPR when the proportion of data coming from the faulty node exceeds 25%.
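    A minimal sketch of a one-shot likelihood ratio test over a window of readings, assuming Gaussian models for healthy and faulty behavior; the distributions, window, and threshold are illustrative, not the parameters used in the paper:

```python
import numpy as np
from scipy.stats import norm

def lrt_fault_status(samples, mu_ok, sigma_ok, mu_fault, sigma_fault, threshold=0.0):
    """One-shot likelihood ratio test on a window of sensor readings.
    Returns True if the 'faulty' model explains the data better than the 'healthy' one."""
    ll_ok = norm.logpdf(samples, mu_ok, sigma_ok).sum()
    ll_fault = norm.logpdf(samples, mu_fault, sigma_fault).sum()
    return (ll_fault - ll_ok) > threshold

rng = np.random.default_rng(0)
window = rng.normal(25.0, 0.5, size=100)        # mostly healthy readings...
window[::4] += rng.normal(0.0, 5.0, size=25)    # ...with intermittent spikes
print(lrt_fault_status(window, 25.0, 0.5, 25.0, 5.0))  # True: intermittent fault flagged
```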

    Automatic systems diagnosis without behavioral models

    Recent feedback obtained while applying model-based diagnosis (MBD) in industry suggests that the costs involved in behavioral modeling (both expertise and labor) can outweigh the benefits of MBD as a high-performance diagnosis approach. In this paper, we propose an automatic approach, called ANTARES, that completely avoids behavioral modeling. Forgoing modeling sacrifices diagnostic accuracy, as the size of the ambiguity group (i.e., the set of components that cannot be discriminated due to lack of information) increases, which in turn increases the misdiagnosis penalty. ANTARES reduces the ambiguity group size by considering each component's false negative rate (FNR), which is estimated using an analytical expression. Furthermore, we study the performance of ANTARES on a number of logic circuits taken from the 74XXX/ISCAS benchmark suite. Our results clearly indicate that sacrificing modeling information degrades diagnosis quality. However, considering FNR information improves the quality, attaining the diagnostic performance of an MBD approach.
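    A hedged sketch of one way a component's FNR can be estimated from spectra, as the fraction of runs that exercised the component yet still passed; this is an illustrative estimator, not necessarily the analytical expression used by ANTARES:

```python
def estimate_fnr(activity, errors, component):
    """Maximum-likelihood estimate of a component's false negative rate:
    the fraction of runs that exercised the component yet still passed.
    (Illustrative estimator, not necessarily the expression used by ANTARES.)"""
    covering = [e for a, e in zip(activity, errors) if a[component]]
    if not covering:
        return None  # component never exercised: FNR cannot be estimated
    return sum(1 for e in covering if e == 0) / len(covering)

# Rows: runs; columns: components. errors[i] = 1 means run i failed.
activity = [[1, 1, 0], [1, 0, 1], [1, 1, 1], [0, 1, 1]]
errors   = [1, 0, 1, 0]
print(estimate_fnr(activity, errors, 0))  # component 0 passed in 1 of 3 covering runs -> 0.33
```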

    New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs

    The relevance of electronics to the safety of everyday devices has only been growing, as an ever larger share of their functionality is assigned to them. This, of course, comes with a constant need for higher performance to fulfill such functionality requirements while keeping power and cost low. In this scenario, industry is struggling to provide a technology that meets all the performance, power, and price specifications, at the cost of increased vulnerability to several types of known faults or the appearance of new ones. To address the new and growing faults in these systems, designers have been using traditional techniques from safety-critical applications, which generally offer suboptimal results. In fact, modern embedded architectures offer the possibility of optimizing dependability properties by enabling the interaction of the hardware, firmware, and software levels in the process. However, that point has not yet been successfully reached. Advances at every level in that direction are much needed if flexible, robust, resilient, and cost-effective fault tolerance is desired. The work presented here focuses on the hardware level, with the background consideration of a potential integration into a holistic approach. The efforts in this thesis have focused on several issues: (i) introducing additional fault models as required for adequate representativeness of the physical effects emerging in modern manufacturing technologies, (ii) providing tools and methods to efficiently inject both the proposed fault models and classical ones, (iii) analyzing the optimal method for assessing the robustness of systems through extensive fault injection and subsequent correlation with higher-level layers, in an effort to cut development time and cost, (iv) providing new detection methodologies to cope with the challenges posed by the proposed fault models, (v) proposing mitigation strategies aimed at tackling such new threat scenarios, and (vi) devising an automated methodology for deploying many fault tolerance mechanisms in a systematic, robust way. The outcome of the thesis is a suite of tools and methods that help the designer of critical systems develop robust, validated, on-time designs tailored to the application.
    Espinosa García, J. (2016). New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/73146

    Cooperative fault detection and isolation in a surveillance sensor network: a case study

    This work focuses on Fault Detection and Isolation (FDI) among the sensors of a surveillance network. A review of the main characteristics of faults in sensor networks and of the associated diagnosis techniques is first proposed. An extensive study has then been performed on the case study of persistent monitoring of an area by a sensor network that provides binary measurements of the occurrence of the events to be detected (intrusions). The performance of a reference FDI method, with and without simultaneous intrusions, has been quantified through Monte Carlo simulations. The combination of static and mobile sensors has also been considered and shows a significant performance improvement for the detection of faults and intrusions in this context.
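    A minimal Monte Carlo sketch of this kind of evaluation, assuming binary sensors with given false-alarm and missed-detection rates, a single stuck-at-1 fault, and a simple isolate-the-most-deviant-sensor rule; none of these choices is taken from the paper's reference FDI method:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_steps, n_trials = 10, 200, 500
p_fa, p_md = 0.05, 0.10  # per-sensor false-alarm / missed-detection probabilities

correct = 0
for _ in range(n_trials):
    faulty = rng.integers(n_sensors)         # one sensor is stuck at '1'
    intrusion = rng.random(n_steps) < 0.2    # ground-truth event sequence
    # Binary reports: detect with prob. 1 - p_md during intrusions, false-alarm otherwise.
    obs = np.where(intrusion[None, :],
                   rng.random((n_sensors, n_steps)) > p_md,
                   rng.random((n_sensors, n_steps)) < p_fa).astype(int)
    obs[faulty, :] = 1                               # stuck-at-1 fault
    majority = obs.sum(axis=0) > n_sensors / 2       # consensus estimate of the event
    disagreement = np.abs(obs - majority).sum(axis=1)
    correct += disagreement.argmax() == faulty       # isolate the most deviant sensor

print("isolation rate:", correct / n_trials)
```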

    An efficient distributed algorithm for computing minimal hitting sets

    Computing minimal hitting sets for a collection of sets is an important problem in many domains (e.g., spectrum-based fault localization). Since the problem is NP-hard, exhaustive algorithms are usually prohibitive for real-world, often large, problems. In practice, heuristic-based approaches trade completeness for time efficiency. An example of such a heuristic approach is STACCATO, which was proposed in the context of reasoning-based fault localization. In this paper, we propose an efficient distributed algorithm, dubbed MHS2, that renders the sequential search algorithm STACCATO suitable for distributed, Map-Reduce environments. The results show that MHS2 scales to larger systems (when compared to STACCATO), while entailing either marginal or small run-time overhead.
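    For context, a minimal (exhaustive, exponential) enumeration of minimal hitting sets is sketched below; STACCATO and MHS2 replace this brute-force search with heuristic, and in MHS2's case distributed, strategies:

```python
from itertools import combinations

def minimal_hitting_sets(conflicts, max_size=4):
    """Enumerate minimal hitting sets of a collection of sets, exhaustively and in
    order of increasing size (exponential; only illustrates the problem being solved)."""
    universe = sorted(set().union(*conflicts))
    found = []
    for size in range(1, max_size + 1):
        for cand in combinations(universe, size):
            s = set(cand)
            # Keep the candidate if it intersects every conflict and no smaller
            # already-found hitting set is contained in it (minimality).
            if all(s & c for c in conflicts) and not any(set(m) <= s for m in found):
                found.append(cand)
    return found

# Each set is a conflict (e.g. the components covered by a failing run).
conflicts = [{1, 2}, {2, 3}, {1, 3}]
print(minimal_hitting_sets(conflicts))  # [(1, 2), (1, 3), (2, 3)]
```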

    Model-Based Fault Diagnosis in Information Poor Processes

    A theory of model-based fault diagnosis is proposed that is suitable for non-linear plants that are information poor. That is, there is a bare minimum of sensors available to operate the process without recourse to analytical redundancy, the sensors output at frequencies that are likely to be low relative to the dynamics of the plant, and there is considerable uncertainty surrounding any mathematical models that are available. Other approaches are likely to be more suitable for information-rich plants. However, the theory should be of at least philosophical interest to the diagnostician who assumes that he is dealing with such a plant, if only because it should lead him to question whether his plant actually satisfies the criteria necessary to support this assumption.

    Development of a Prognostic Method for the Production of Undeclared Enriched Uranium

    As global demand for nuclear energy and threats to nuclear security increase, the need to verify the peaceful application of nuclear materials and technology also rises. In accordance with the Nuclear Nonproliferation Treaty, the International Atomic Energy Agency is tasked with verifying the declared enrichment activities of member states. Owing to the increased cost of inspecting and verifying a globally growing nuclear energy industry, remote process monitoring has been proposed as part of a next-generation, information-driven safeguards program. To further enhance this safeguards approach, it is proposed that process monitoring data may be used not only to verify the past but also to anticipate the future via prognostic analysis. While prognostic methods exist for health monitoring of physical processes, the literature lacks methods to predict the outcome of decision-based events, such as the production of undeclared enriched uranium. This dissertation introduces a method to predict the time at which a significant quantity of unaccounted material is expected to be diverted during an enrichment process. The method uses a particle filter to model the data and provide a Type III (degradation-based) prognostic estimate of the time to diversion of a significant quantity. Measurement noise for the particle filter is estimated from historical data and may be updated with Bayesian estimates from the analyzed data. Dynamic noise estimates are updated based on observed changes in the process data. The reliability of the prognostic model for a given range of data is validated via information complexity scores and goodness-of-fit statistics. The developed prognostic method is tested using data produced at the Oak Ridge Mock Feed and Withdrawal Facility, a 1:100-scale test platform for developing gas centrifuge remote monitoring techniques. Four case studies are considered: no diversion, slow diversion, fast diversion, and intermittent diversion. All intervals of diversion and non-diversion were correctly identified, and the significant-quantity diversion time was accurately estimated. A diversion of 0.8 kg over 85 minutes was detected after 10 minutes and, after 46 minutes and 40 seconds, was predicted to take 84 minutes and 10 seconds, with an uncertainty of 2 minutes and 52 seconds.
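    A minimal sketch of a particle-filter prognostic of this flavor, in which particles carry a hypothesized diversion rate, are updated from material-balance deficits, and are extrapolated to a significant-quantity threshold; the parameters, noise levels, and synthetic data below are illustrative assumptions, not the dissertation's model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles, sq = 2000, 0.8                  # significant quantity [kg], illustrative
rate = rng.normal(0.0, 0.002, n_particles)   # particles: hypothesized diversion rate [kg/min]
mass = np.zeros(n_particles)                 # accumulated unaccounted material per particle

def step(rate, mass, observed_mb, meas_sigma=0.01, proc_sigma=1e-4):
    """One particle-filter update from an observed material-balance deficit [kg]."""
    rate = rate + rng.normal(0.0, proc_sigma, n_particles)       # process noise on the rate
    mass = mass + rate                                           # propagate a 1-minute step
    w = np.exp(-0.5 * ((observed_mb - mass) / meas_sigma) ** 2)  # Gaussian measurement likelihood
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)              # resample by weight
    return rate[idx], mass[idx]

for t in range(60):                           # one hour of minute-by-minute balances
    rate, mass = step(rate, mass, observed_mb=0.009 * (t + 1))   # synthetic deficit, ~9 g/min

# Type III prognostic: extrapolate each particle to the significant-quantity threshold.
time_to_sq = np.full(n_particles, np.inf)
pos = rate > 0
time_to_sq[pos] = (sq - mass[pos]) / rate[pos]
print(np.percentile(time_to_sq[np.isfinite(time_to_sq)], [5, 50, 95]))  # minutes remaining
```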