18,392 research outputs found

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
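Of the statistical techniques the abstract names, importance sampling is perhaps the easiest to illustrate in isolation. A minimal sketch follows: it estimates a small failure probability by drawing from a deliberately biased distribution and reweighting each hit by the likelihood ratio, so rare failures appear often enough to estimate cheaply. The function name and all parameter values are illustrative, not taken from the paper.

```python
import random

def importance_sampling_estimate(p_fail=1e-4, bias=0.05, n=20000, seed=1):
    """Estimate P(failure) for a component with true failure probability
    p_fail by sampling failures with the inflated probability `bias` and
    reweighting each observed failure by the likelihood ratio p_fail/bias."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        if rng.random() < bias:        # draw from the biased distribution
            total += p_fail / bias     # likelihood-ratio weight for a failure
        # non-failures contribute 0 to the failure indicator, so no term
    return total / n
```

With naive Monte Carlo, 20,000 trials at p_fail = 1e-4 would typically see only one or two failures; the biased sampler sees roughly a thousand, so the reweighted estimate concentrates tightly around the true value.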

    Accuracy of transcranial magnetic stimulation and a Bayesian latent class model for diagnosis of spinal cord dysfunction in horses

    Background: Spinal cord dysfunction/compression and ataxia are common in horses. Presumptive diagnosis is most commonly based on neurological examination and cervical radiography, but interest in the diagnostic value of transcranial magnetic stimulation (TMS) with recording of magnetic motor evoked potentials has increased. The challenge in evaluating diagnostic tests for spinal cord dysfunction is the absence of a gold standard in the living animal. Objectives: To compare the diagnostic accuracy of TMS, cervical radiography, and neurological examination. Animals: One hundred seventy-four horses admitted to the clinic for neurological examination. Methods: Retrospective comparison of neurological examination, cervical radiography, and different TMS criteria, using Bayesian latent class modeling to account for the absence of a gold standard. Results: The Bayesian estimate of the prevalence (95% CI) of spinal cord dysfunction was 58.1% (48.3%-68.3%). Sensitivity and specificity of neurological examination were 97.6% (91.4%-99.9%) and 74.7% (61.0%-96.3%); for radiography they were 43.0% (32.3%-54.6%) and 77.3% (67.1%-86.1%), respectively. Transcranial magnetic stimulation reached a sensitivity and specificity of 87.5% (68.2%-99.2%) and 97.4% (90.4%-99.9%). For TMS, the highest accuracy was obtained using the minimum latency time for the pelvic limbs (Youden's index = 0.85). In all evaluated models, cervical radiography performed poorest. Clinical Relevance: Transcranial magnetic stimulation-magnetic motor evoked potential (TMS-MMEP) was the best test to diagnose spinal cord disease and the neurological examination the second best, whereas the accuracy of cervical radiography was low. Selecting animals based on neurological examination (highest sensitivity) and confirming disease by TMS-MMEP (highest specificity) would currently be the optimal diagnostic strategy.
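The Youden's index the abstract reports is a standard summary of a diagnostic test: J = sensitivity + specificity − 1, ranging from 0 (useless) to 1 (perfect). A small sketch using the abstract's own point estimates (the function name is illustrative):

```python
def youden_index(sensitivity, specificity):
    """Youden's J statistic: sensitivity + specificity - 1,
    where both inputs are proportions in [0, 1]."""
    return sensitivity + specificity - 1

# Point estimates from the abstract (as proportions):
j_tms   = youden_index(0.875, 0.974)  # TMS-MMEP
j_neuro = youden_index(0.976, 0.747)  # neurological examination
j_rad   = youden_index(0.430, 0.773)  # cervical radiography
```

The ordering of the three values (TMS highest, radiography lowest) mirrors the abstract's conclusion; the best TMS criterion in the study reached J = 0.85.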

    ExplainIt! -- A declarative root-cause analysis engine for time series data (extended version)

    We present ExplainIt!, a declarative, unsupervised root-cause analysis engine that uses time series monitoring data from large complex systems such as data centres. ExplainIt! empowers operators to succinctly specify a large number of causal hypotheses to search for causes of interesting events. ExplainIt! then ranks these hypotheses, reducing the number of causal dependencies from hundreds of thousands to a handful for human understanding. We show how a declarative language, such as SQL, can be effective in declaratively enumerating hypotheses that probe the structure of an unknown probabilistic graphical causal model of the underlying system. Our thesis is that databases are in a unique position to enable users to rapidly explore the possible causal mechanisms in data collected from diverse sources. We empirically demonstrate how ExplainIt! has helped us resolve over 30 performance issues in a commercial product since late 2014, of which we discuss a few cases in detail. Comment: SIGMOD Industry Track 201
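The core loop the abstract describes, scoring many candidate causes of a symptom time series and surfacing only the top few, can be sketched with a deliberately simplified stand-in: ranking candidates by absolute Pearson correlation with the symptom. This is not the paper's actual algorithm (ExplainIt! ranks hypotheses against a probabilistic graphical causal model, specified via SQL); the function names and data are illustrative.

```python
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def rank_hypotheses(symptom, candidates):
    """Score each candidate cause series by |correlation| with the symptom
    and return (name, score) pairs, strongest first."""
    scored = [(name, abs(pearson(series, symptom)))
              for name, series in candidates.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

Even this naive scorer shows the shape of the workflow: operators enumerate many candidate series cheaply, and the engine's job is to collapse them into a short, ranked list a human can inspect.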