
    A model-based reasoning architecture for system-level fault diagnosis

    This dissertation presents a model-based reasoning architecture with a twofold purpose: to detect and classify component faults from observable system behavior, and to generate fault propagation models so as to estimate current operational risks more accurately. It incorporates a novel approach to system-level diagnostics by addressing the need to reason about low-level, inaccessible components from observable high-level system behavior. In the field of complex system maintenance it can be an invaluable aid to human operators. The first step is the compilation of a database of functional descriptions and associated fault-specific features for each system component. The system is then analyzed to extract structural information, which, together with the functional database, is used to create the structural and functional models. A fault-symptom matrix is constructed from the functional model and the features database. The fault threshold levels for these symptoms are based on nominal baseline data. From the fault-symptom matrix and these thresholds, a diagnostic decision tree is formulated in order to intelligently query the system health. For each fault candidate, a fault propagation tree is generated from the structural model. Finally, the overall system health status report includes both the faulty components and the associated at-risk components, as predicted by the fault propagation model. Ph.D. Committee Chair: Vachtsevanos, George; Committee Member: Liang, Steven; Committee Member: Michaels, Thomas; Committee Member: Vela, Patricio; Committee Member: Wardi, Yora
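
    The fault-symptom matrix and threshold-based isolation described above can be illustrated with a minimal sketch in Python; the component names, symptoms, and threshold values below are hypothetical placeholders, not the dissertation's actual models.

        # Minimal sketch of threshold-based fault isolation from a fault-symptom matrix.
        # Component names, symptoms, and threshold values are hypothetical placeholders.

        # Rows: candidate faults; values: the symptoms each fault is expected to trigger.
        FAULT_SYMPTOM_MATRIX = {
            "bearing_wear":    {"vibration_rms", "temperature_rise"},
            "pump_cavitation": {"vibration_rms", "pressure_drop"},
            "seal_leak":       {"pressure_drop"},
        }

        # Symptom thresholds derived from nominal baseline data (placeholder values).
        THRESHOLDS = {"vibration_rms": 0.8, "temperature_rise": 10.0, "pressure_drop": 0.5}

        def active_symptoms(measurements):
            """Return the symptoms whose measured feature exceeds its baseline threshold."""
            return {s for s, limit in THRESHOLDS.items() if measurements.get(s, 0.0) > limit}

        def rank_fault_candidates(measurements):
            """Rank faults by how well their expected symptoms match the observed symptoms."""
            observed = active_symptoms(measurements)
            scores = {
                fault: len(expected & observed) / len(expected)
                for fault, expected in FAULT_SYMPTOM_MATRIX.items()
            }
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        print(rank_fault_candidates({"vibration_rms": 1.1, "temperature_rise": 12.0}))
        # -> bearing_wear matches both of its expected symptoms and is ranked first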

    CBR and MBR techniques: review for an application in the emergencies domain

    The purpose of this document is to provide an in-depth analysis of current reasoning-engine practice and of the integration strategies between Case-Based Reasoning (CBR) and Model-Based Reasoning (MBR) that will be used in the design and development of the RIMSAT system. RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to: (a) provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions, and (b) enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location. In other words, RIMSAT aims to design and implement a decision support system that applies Case-Based Reasoning as well as Model-Based Reasoning technology to the management of emergency situations. This document is part of a deliverable for the RIMSAT project, and although it has been written in close contact with the requirements of the project, it provides an overview broad enough to constitute a state of the art in integration strategies between CBR and MBR technologies. Postprint (published version)
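
    As a rough illustration of one retrieve-then-fall-back interplay between CBR and MBR that such a system could use, consider the sketch below; the case structure, similarity measure, and placeholder model-based reasoner are illustrative assumptions, not the RIMSAT design.

        # Illustrative sketch of a CBR retrieval step with a fallback to model-based reasoning
        # (hypothetical case structure and similarity measure; not the RIMSAT architecture).

        CASE_BASE = [
            {"features": {"fire": 1, "chemical": 0, "casualties": 1}, "plan": "evacuate_and_contain"},
            {"features": {"fire": 0, "chemical": 1, "casualties": 0}, "plan": "isolate_and_ventilate"},
        ]

        def similarity(a, b):
            """Fraction of feature values on which two situations agree."""
            keys = set(a) | set(b)
            return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

        def model_based_plan(situation):
            """Placeholder for a model-based reasoner used when no past case is close enough."""
            return "derive_plan_from_first_principles"

        def propose_plan(situation, threshold=0.7):
            best = max(CASE_BASE, key=lambda case: similarity(case["features"], situation))
            if similarity(best["features"], situation) >= threshold:
                return best["plan"]              # reuse the most similar past case
            return model_based_plan(situation)   # fall back to model-based reasoning

        print(propose_plan({"fire": 1, "chemical": 0, "casualties": 1}))  # -> evacuate_and_contain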

    Mailbox Abstractions for Static Analysis of Actor Programs

    Properties such as the absence of errors or bounds on mailbox sizes are hard to deduce statically for actor-based programs. This is because actor-based programs exhibit several sources of unboundedness, in addition to the non-determinism that is inherent to the concurrent execution of actors. We developed a static technique based on abstract interpretation to soundly reason, in a finite amount of time, about the possible executions of an actor-based program. We use our technique to statically verify the absence of errors in actor-based programs, and to compute upper bounds on the actors' mailboxes. Sound abstraction of these mailboxes is crucial to the precision of any such technique. We provide several mailbox abstractions and categorize them according to the extent to which they preserve message ordering and the multiplicity of messages in a mailbox. We formally prove the soundness of each mailbox abstraction, and empirically evaluate their precision and performance trade-offs on a corpus of benchmark programs. The results show that our technique can statically verify the absence of errors for more benchmark programs than the state-of-the-art analysis.
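
    To make the ordering/multiplicity trade-off concrete, the sketch below contrasts two simple mailbox abstractions: a set abstraction that loses both ordering and multiplicity, and a bounded-list abstraction that keeps both up to a bound before widening. The class names and the bound are illustrative; the paper's actual abstract domains and lattice operations are richer.

        # Two simple mailbox abstractions with different precision trade-offs
        # (illustrative sketch; not the abstractions defined in the paper).

        class SetMailbox:
            """Abstracts a mailbox as a set of message tags: ordering and multiplicity are lost."""
            def __init__(self):
                self.messages = frozenset()
            def enqueue(self, tag):
                self.messages = self.messages | {tag}
            def join(self, other):
                """Least upper bound: union of the possible message tags."""
                self.messages = self.messages | other.messages

        class BoundedListMailbox:
            """Keeps ordering and multiplicity up to a bound k, then widens to 'top'."""
            def __init__(self, k=3):
                self.k = k
                self.messages = []   # concrete prefix while under the bound
                self.is_top = False  # 'top' = any mailbox content is possible
            def enqueue(self, tag):
                if self.is_top:
                    return
                self.messages.append(tag)
                if len(self.messages) > self.k:
                    self.is_top = True  # precision is lost once the bound is exceeded

        s = SetMailbox()
        m = BoundedListMailbox(k=2)
        for tag in ("ping", "pong", "ping"):
            s.enqueue(tag)
            m.enqueue(tag)
        print(sorted(s.messages))  # ['ping', 'pong']: duplicates and order are not tracked
        print(m.is_top)            # True: the bound of 2 was exceeded, so the list widened to top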

    Optimizing regression testing with AHP-TOPSIS metric system for effective technical debt evaluation

    Regression testing is essential to ensure that the actual software product conforms to the expected requirements following modification. However, it can be costly and time-consuming. To address this issue, various approaches have been proposed for selecting test cases that provide adequate coverage of the modified software. Nonetheless, problems related to omitting and/or rerunning unnecessary test cases continue to pose challenges, particularly with regard to technical debt (TD) resulting from code-coverage shortcomings and/or overtesting. In the case of testing-related shortcomings, incurring TD may result in cost and time savings in the short run, but it can lead to future maintenance and testing expenses. Most prior studies have treated test case selection as a single-objective or two-objective optimization problem. This study introduces a multi-objective decision-making approach to quantify and evaluate TD in regression testing. The proposed approach combines the analytic hierarchy process (AHP) with the technique for order preference by similarity to ideal solution (TOPSIS) to select the most suitable test cases in terms of objective values defined by test cost, code coverage, and test risk. This approach effectively manages software regression-testing problems. The AHP method was used to eliminate subjective bias when optimizing the objective weights, while the TOPSIS method was employed to evaluate and select test-case alternatives with respect to TD. The effectiveness of this approach was compared with that of a specific multi-objective optimization method and a standard coverage methodology. Unlike other approaches, our proposed approach always accepts solutions based on balanced decisions, weighing modifications, risk analysis, and testing costs against potential technical debt. The results demonstrate that our proposed approach reduces both TD and regression testing effort.
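
    A minimal sketch of the TOPSIS ranking step over the three objectives (test cost, code coverage, test risk) is given below; the weights stand in for AHP-derived priorities and all numbers are hypothetical, not values from the study.

        # Minimal TOPSIS ranking of test cases over (cost, coverage, risk).
        # Weights stand in for AHP-derived priorities; all numbers are hypothetical.
        import numpy as np

        # Rows: test cases T1-T3; columns: cost, coverage, risk.
        decision = np.array([
            [4.0, 0.80, 0.2],
            [2.0, 0.60, 0.5],
            [5.0, 0.95, 0.1],
        ])
        weights = np.array([0.3, 0.5, 0.2])       # e.g. the output of an AHP pairwise comparison
        benefit = np.array([False, True, False])  # coverage is a benefit; cost and risk are costs

        # 1. Vector-normalise each criterion, then apply the weights.
        weighted = decision / np.linalg.norm(decision, axis=0) * weights

        # 2. Ideal and anti-ideal solutions, respecting each criterion's direction.
        ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
        anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

        # 3. Closeness coefficient: larger values are closer to the ideal test case.
        d_pos = np.linalg.norm(weighted - ideal, axis=1)
        d_neg = np.linalg.norm(weighted - anti, axis=1)
        closeness = d_neg / (d_pos + d_neg)
        print(closeness.argsort()[::-1])  # test-case indices, best candidate first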

    Cosmic strings and their induced non-Gaussianities in the cosmic microwave background

    Motivated by the fact that cosmological perturbations of inflationary quantum origin were born Gaussian, the search for non-Gaussianities in the cosmic microwave background (CMB) anisotropies is considered the privileged probe of non-linear physics in the early universe. Cosmic strings are active sources of gravitational perturbations and incessantly produce non-Gaussian distortions in the CMB. Even if, on the currently observed angular scales, they can only contribute a small fraction of the CMB angular power spectrum, cosmic strings could actually be the main source of its non-Gaussianities. In this article, after having reviewed the basic cosmological properties of a string network, we present the signatures Nambu-Goto cosmic strings would induce in various observables, ranging from the one-point function of the temperature anisotropies to the bispectrum and trispectrum. It is shown that string imprints are significantly different from those expected from the primordial type of non-Gaussianity and could therefore be easily distinguished. Comment: 50 pages, 20 figures, uses iopart. Misprints corrected, references added, matches published version.
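
    For reference, the correlators mentioned in the abstract are the standard multipole-space statistics of the temperature anisotropy coefficients a_{lm}; the definitions below are the textbook ones and are not equations taken from the article.

        % Standard CMB correlators (reference definitions, not reproduced from the article).
        \begin{align}
          \langle a_{\ell m}\, a^{*}_{\ell' m'} \rangle &= C_{\ell}\,\delta_{\ell\ell'}\,\delta_{m m'}
            && \text{(angular power spectrum)} \\
          B^{m_1 m_2 m_3}_{\ell_1 \ell_2 \ell_3} &=
            \langle a_{\ell_1 m_1}\, a_{\ell_2 m_2}\, a_{\ell_3 m_3} \rangle
            && \text{(bispectrum)} \\
          T^{m_1 m_2 m_3 m_4}_{\ell_1 \ell_2 \ell_3 \ell_4} &=
            \langle a_{\ell_1 m_1}\, a_{\ell_2 m_2}\, a_{\ell_3 m_3}\, a_{\ell_4 m_4} \rangle_{c}
            && \text{(connected trispectrum)}
        \end{align}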

    Modeling rarefied gas flows through fibrous filters with the lattice Boltzmann method (Modélisation des écoulements de gaz raréfiés au travers de filtres fibreux par la méthode de Boltzmann sur réseau)

    Suspensions of fine particles in air (also called aerosols) are harmful to human health and the environment. The filtration of airborne particles (or the separation of these particles from the air) is therefore a process of crucial importance. Fibrous filters are generally chosen for their high performance and compactness. The addition of nanofibers (<1 μm) deposited on a layer of microfibers or mixed with microfibers has been proposed to improve these filters. Single-fiber theory is often used to predict the performance of aerosol filters. However, this theory assumes that the fibers of a filter all have the same diameter and therefore ignores the potential impacts of a multilayer structure. Direct numerical simulation of gas flows through fibrous media must be used to account for the interactions between the fibers. In addition, the rarefaction effects that occur around nanofibers must be considered to quantitatively predict the performance of the filter media.
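
    The rarefaction effects mentioned above are usually characterized by the fiber-scale Knudsen number; a minimal sketch is given below, where the mean free path is a commonly quoted approximate value for air at ambient conditions and the fiber diameters are hypothetical, not values from the thesis.

        # Sketch: fiber-scale Knudsen number, the usual indicator of rarefaction effects
        # around nanofibers (illustrative values; not results from the thesis).

        MEAN_FREE_PATH_AIR = 66e-9  # m, approximate mean free path of air at ambient conditions

        def knudsen(fiber_diameter_m, mean_free_path_m=MEAN_FREE_PATH_AIR):
            """Kn = lambda / d_f: ratio of the gas mean free path to the fiber diameter."""
            return mean_free_path_m / fiber_diameter_m

        for d in (10e-6, 1e-6, 100e-9):  # microfiber, fine fiber, nanofiber
            kn = knudsen(d)
            regime = "continuum" if kn < 0.01 else "slip" if kn < 0.1 else "transition"
            print(f"d = {d:.0e} m  ->  Kn = {kn:.3f} ({regime} regime)")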

    Three Decades of Deception Techniques in Active Cyber Defense -- Retrospect and Outlook

    Deception techniques have been widely seen as a game changer in cyber defense. In this paper, we review representative techniques in honeypots, honeytokens, and moving target defense, spanning from the late 1980s to the year 2021. Techniques from these three domains complement each other and may be leveraged to build a holistic deception-based defense. However, to the best of our knowledge, there has not been a work that provides a systematic retrospect of these three domains together and investigates their integrated usage for orchestrated deceptions. Our paper aims to fill this gap. By utilizing a tailored cyber kill chain model, which reflects the current threat landscape, and a four-layer deception stack, a two-dimensional taxonomy is developed, based on which the deception techniques are classified. The taxonomy answers which phases of a cyber attack campaign the techniques can disrupt and which layers of the deception stack they belong to. Cyber defenders may use the taxonomy as a reference to design an organized and comprehensive deception plan, or to prioritize deception efforts for a budget-conscious solution. We also discuss two important points for achieving active and resilient cyber defense, namely deception in depth and deception lifecycle, where several notable proposals are illustrated. Finally, some outlooks on future research directions are presented, including dynamic integration of different deception techniques, quantified deception effects and deception operation cost, hardware-supported deception techniques, as well as techniques developed based on a better understanding of the human element. Comment: 19 pages
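
    The two-dimensional taxonomy (kill-chain phase versus deception-stack layer) can be represented as a simple mapping, as in the sketch below; the phase names, layer names, and technique-to-cell assignments are generic placeholders, not the paper's exact taxonomy.

        # Illustrative two-dimensional classification: kill-chain phase x deception-stack layer.
        # Phase names, layer names, and assignments are placeholders, not the paper's taxonomy.
        from collections import defaultdict

        # (technique, kill-chain phases it can disrupt, deception-stack layer it belongs to)
        TECHNIQUES = [
            ("honeypot",      {"reconnaissance", "lateral_movement"}, "network"),
            ("honeytoken",    {"credential_access", "exfiltration"},  "data"),
            ("moving_target", {"reconnaissance", "exploitation"},     "system"),
        ]

        taxonomy = defaultdict(list)  # (phase, layer) -> techniques covering that cell
        for name, phases, layer in TECHNIQUES:
            for phase in phases:
                taxonomy[(phase, layer)].append(name)

        # Which techniques could disrupt reconnaissance, and at which layer of the stack?
        for (phase, layer), names in sorted(taxonomy.items()):
            if phase == "reconnaissance":
                print(f"{phase} / {layer}: {', '.join(names)}")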