
    Understanding Learned Models by Identifying Important Features at the Right Resolution

    In many application domains, it is important to characterize how complex learned models make their decisions across the distribution of instances. One way to do this is to identify the features, and the interactions among them, that contribute to a model's predictive accuracy. We present a model-agnostic approach to this task that makes the following specific contributions. Our approach (i) tests feature groups, in addition to base features, and tries to determine the level of resolution at which important features can be identified, (ii) uses hypothesis testing to rigorously assess the effect of each feature on the model's loss, (iii) employs a hierarchical approach to control the false discovery rate when testing feature groups and individual base features for importance, and (iv) uses hypothesis testing to identify important interactions among features and feature groups. We evaluate our approach by analyzing random forest and LSTM neural network models learned in two challenging biomedical applications.

    Comment: First two authors contributed equally to this work. Accepted for presentation at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19).
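
    As a rough illustration of steps (ii) and (iii), here is a minimal sketch, assuming a fitted probabilistic classifier: each feature group is tested by jointly permuting its columns and checking whether the model's loss degrades, and Benjamini-Hochberg FDR control is applied across the resulting p-values. All names (model, X, y, the one-sided permutation p-value) are illustrative assumptions; the paper's actual test statistics and hierarchical procedure may differ.

        import numpy as np
        from sklearn.metrics import log_loss

        def group_pvalue(model, X, y, cols, n_perm=500, seed=0):
            """One-sided permutation p-value for H0: the feature group `cols`
            does not affect the model's loss (illustrative statistic)."""
            rng = np.random.default_rng(seed)
            base = log_loss(y, model.predict_proba(X))
            not_worse = 0
            for _ in range(n_perm):
                Xp = X.copy()
                Xp[:, cols] = X[rng.permutation(len(X))][:, cols]  # permute the group jointly
                if log_loss(y, model.predict_proba(Xp)) <= base:
                    not_worse += 1            # permuting did not hurt the model
            return (not_worse + 1) / (n_perm + 1)

        def benjamini_hochberg(pvals, alpha=0.05):
            """Boolean mask of hypotheses rejected at FDR level alpha."""
            p = np.asarray(pvals)
            order = np.argsort(p)
            passed = p[order] <= alpha * np.arange(1, len(p) + 1) / len(p)
            k = passed.nonzero()[0].max() + 1 if passed.any() else 0
            reject = np.zeros(len(p), dtype=bool)
            reject[order[:k]] = True
            return reject

    A hierarchical variant would test coarse groups first and descend into a group's base features only when the group itself is rejected.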

    An Integration of FDI and DX Techniques for Determining the Minimal Diagnosis in an Automatic Way

    Two communities work in parallel on model-based diagnosis: FDI and DX. In this work, an integration of FDI and DX techniques is proposed in which only the information relevant to identifying the minimal diagnosis is used. In the first step, the system is divided into clusters of components, and each cluster is separated into nodes. The minimal and necessary set of contexts is then obtained for each cluster. These two steps automatically reduce the computational complexity, since only the essential contexts are generated. In the last step, a signature matrix and a set of rules are used to obtain the minimal diagnosis. The evaluation of the signature matrix is performed on-line; the rest of the process is entirely off-line.

    Ministerio de Ciencia y Tecnología DPI2003-07146-C02-0
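
    For a flavour of the last step, here is a minimal sketch, assuming each fired row of the signature matrix yields a conflict (the set of components able to explain it): minimal diagnoses are then the minimal hitting sets of these conflicts. The matrix, component names, and brute-force enumeration are illustrative assumptions, not the paper's actual procedure.

        from itertools import combinations

        # Illustrative signature matrix: residual -> components that can explain it firing.
        signature = {"r1": {"c1", "c2"}, "r2": {"c2", "c3"}, "r3": {"c1", "c3"}}

        def minimal_diagnoses(conflicts, components):
            """Enumerate minimal hitting sets of the conflicts, smallest first."""
            found = []
            for size in range(1, len(components) + 1):
                for cand in map(set, combinations(sorted(components), size)):
                    if any(d <= cand for d in found):
                        continue              # superset of a diagnosis: not minimal
                    if all(cand & c for c in conflicts):
                        found.append(cand)    # candidate hits every conflict
            return found

        fired = ["r1", "r2"]                  # residuals observed on-line
        print(minimal_diagnoses([signature[r] for r in fired], {"c1", "c2", "c3"}))
        # -> [{'c2'}, {'c1', 'c3'}]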

    A Topological-Based Method for Allocating Sensors by Using CSP Techniques

    Model-based diagnosis enables the isolation of faults in a system. The diagnosis process uses a set of sensors (observations) and a model of the system in order to explain a faulty behaviour. In this work, a new approach is proposed with the aim of reducing the computational complexity of isolating faults in a system. The key idea is the addition of a set of new sensors that improves the diagnosability of the system. The methodology is based on constraint programming, together with a greedy method that reduces the computational cost of the CSP resolution. Our approach maintains the requirements of the user (detectability, diagnosability, ...).

    Ministerio de Ciencia y Tecnología DPI2003-07146-C02-0
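
    The following is a minimal sketch of the greedy ingredient only, under assumed data: given candidate sensors and the fault pairs each one can distinguish, it repeatedly picks the sensor that separates the most still-confused pairs (a set-cover heuristic). The CSP formulation itself is omitted, and all names are hypothetical.

        def greedy_sensor_allocation(candidates, required_pairs):
            """candidates: sensor -> set of fault pairs that sensor distinguishes.
            Returns a small sensor set separating every required pair."""
            chosen, uncovered = [], set(required_pairs)
            while uncovered:
                best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
                if not candidates[best] & uncovered:
                    raise ValueError("remaining fault pairs cannot be separated")
                chosen.append(best)
                uncovered -= candidates[best]
            return chosen

        sensors = {"s1": {("f1", "f2"), ("f1", "f3")},
                   "s2": {("f2", "f3")},
                   "s3": {("f1", "f2")}}
        print(greedy_sensor_allocation(sensors, {("f1", "f2"), ("f1", "f3"), ("f2", "f3")}))
        # -> ['s1', 's2']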

    Online Fault Classification in HPC Systems through Machine Learning

    As High-Performance Computing (HPC) systems strive towards the exascale goal, studies suggest that they will experience excessive failure rates. For this reason, detecting and classifying faults in HPC systems as they occur, and initiating corrective actions before they can turn into failures, will be essential for continued operation. In this paper, we propose a fault classification method for HPC systems based on machine learning that has been designed specifically to operate on live streamed data. We cast the problem and its solution within the realistic operating constraints of online use. Our results show that almost perfect classification accuracy can be reached for different fault types with low computational overhead and minimal delay. We base our study on a local dataset, which we make publicly available and which was acquired by injecting faults into an in-house experimental HPC system.

    Comment: Accepted for publication at the Euro-Par 2019 conference.
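
    A minimal sketch of such an online loop, assuming a classifier trained offline on labeled fault-injection data: summary statistics are computed over a sliding window of streamed node metrics and classified as each new sample arrives. Window length, feature set, and fault labels are assumptions, not the paper's actual configuration.

        from collections import deque
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        WINDOW = 60                           # e.g. 60 one-second metric samples

        def window_features(window):
            a = np.asarray(window)            # shape: (WINDOW, n_metrics)
            return np.concatenate([a.mean(0), a.std(0), a.min(0), a.max(0)])

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        # clf.fit(X_train, y_train)           # trained offline on fault-injection runs

        def classify_stream(metric_stream, clf):
            """Yield a fault label for every new sample once the window is full."""
            buf = deque(maxlen=WINDOW)
            for sample in metric_stream:      # sample: one vector of node metrics
                buf.append(sample)
                if len(buf) == WINDOW:
                    yield clf.predict(window_features(buf).reshape(1, -1))[0]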

    Analysis of cross-correlations in electroencephalogram signals as an approach to proactive diagnosis of schizophrenia

    We apply flicker-noise spectroscopy (FNS), a time series analysis method operating on structure functions and power spectrum estimates, to study the clinical electroencephalogram (EEG) signals recorded in children/adolescents (11 to 14 years of age) with diagnosed schizophrenia-spectrum symptoms at the National Center for Psychiatric Health (NCPH) of the Russian Academy of Medical Sciences. The EEG signals for these subjects were compared with the signals for a control sample of chronically depressed children/adolescents. The purpose of the study is to look for diagnostic signs of subjects' susceptibility to schizophrenia in the FNS parameters for specific electrodes and in the cross-correlations between the signals simultaneously measured at different points on the scalp. Our analysis of EEG signals from scalp-mounted electrodes at locations F3 and F4, which are symmetrically positioned in the left and right frontal areas of the cerebral cortex, respectively, demonstrates an essential role of frequency-phase synchronization, a phenomenon representing specific correlations between the characteristic frequencies and phases of excitations in the brain. We introduce quantitative measures of frequency-phase synchronization and systematize the values of the FNS parameters for the EEG data. The comparison of our results with the medical diagnoses for 84 subjects performed at NCPH makes it possible to group the EEG signals into four categories corresponding to different risk levels of subjects' susceptibility to schizophrenia. We suggest that the introduced quantitative characteristics and classification of cross-correlations may be used for the diagnosis of schizophrenia at the early stages of its development.

    Comment: 36 pages, 6 figures, 2 tables; to be published in "Physica A".
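
    To make the ingredients concrete, here is a minimal sketch, in generic signal-processing terms, of a second-order structure function for one channel and a normalized lagged cross-correlation between two channels (e.g. F3 and F4). The actual FNS parameterization used in the paper is considerably richer.

        import numpy as np

        def structure_function(x, max_lag):
            """Second-order structure function: Phi(tau) = <[x(t+tau) - x(t)]^2>."""
            return np.array([np.mean((x[tau:] - x[:-tau]) ** 2)
                             for tau in range(1, max_lag + 1)])

        def cross_correlation(x, y, max_lag):
            """Normalized cross-correlation of two channels at lags -max_lag..max_lag."""
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            n = len(x)
            out = {}
            for tau in range(-max_lag, max_lag + 1):
                xs = x[max(0, -tau): n - max(0, tau)]
                ys = y[max(0, tau): n - max(0, -tau)]
                out[tau] = float(np.mean(xs * ys))  # a peak away from tau = 0 suggests a phase lag
            return out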

    Preferential Multi-Context Systems

    Multi-context systems (MCS), presented by Brewka and Eiter, can be considered a promising way to interlink decentralized and heterogeneous knowledge contexts. In this paper, we propose preferential multi-context systems (PMCS), which provide a framework for incorporating a total preorder relation over contexts in a multi-context system. In a given PMCS, the contexts are divided into several parts according to the total preorder relation over them; moreover, information is only allowed to flow from a context to contexts in the same part or in less preferred parts. As such, the first l preferred parts of a PMCS always fully capture the information exchange between the contexts of these parts, and thus compose another meaningful PMCS, termed the l-section of that PMCS. We generalize the equilibrium semantics for an MCS to the (maximal) l_{\leq}-equilibrium, which represents belief states that are at least acceptable for the l-section of a PMCS. We also investigate inconsistency analysis in PMCS and related computational complexity issues.
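
    A toy sketch of the two structural ideas, with all names assumed: contexts carry a preference part (1 = most preferred), information may flow only toward the same or less preferred parts, and the l-section keeps the first l parts together with the flows permitted among them.

        contexts = {"c1": 1, "c2": 1, "c3": 2, "c4": 3}   # context -> preference part
        bridges = [("c1", "c2"), ("c1", "c3"), ("c3", "c2"), ("c4", "c1")]  # (source, target)

        def allowed_flows(contexts, bridges):
            """Keep flows from a context to contexts in the same or less preferred parts."""
            return [(s, t) for s, t in bridges if contexts[t] >= contexts[s]]

        def l_section(contexts, bridges, l):
            """Contexts of the first l parts, plus the permitted flows among them."""
            kept = {c for c, part in contexts.items() if part <= l}
            return kept, [(s, t) for s, t in allowed_flows(contexts, bridges)
                          if s in kept and t in kept]

        print(l_section(contexts, bridges, 2))
        # -> ({'c1', 'c2', 'c3'}, [('c1', 'c2'), ('c1', 'c3')])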