A Human Error Analysis of Commercial Aviation Accidents Using the Human Factors Analysis and Classification System (HFACS)
The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based upon Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military. Specifically, HFACS was applied to commercial aviation accident records maintained by the National Transportation Safety Board (NTSB). Using accidents that occurred between January 1990 and December 1996, it was demonstrated that HFACS reliably accommodated all human causal factors associated with the commercial accidents examined. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research. These results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena.
Learning Discriminative Bayesian Networks from High-dimensional Continuous Neuroimaging Data
Owing to their causal semantics, Bayesian networks (BNs) have been widely employed to discover underlying data relationships in exploratory studies such as brain research. Despite their success in modeling the probability distribution of variables, a BN is inherently a generative model, which is not necessarily discriminative; as a result, subtle but critical network changes of investigative value across populations may be missed. In this paper, we propose to improve the discriminative power of BN models for continuous variables from two different perspectives, yielding two general discriminative learning frameworks for Gaussian Bayesian networks (GBNs). In the first framework, we employ the Fisher kernel to bridge the generative models of GBNs and the discriminative SVM classifiers, converting GBN parameter learning into Fisher kernel learning by minimizing a generalization error bound of SVMs. In the second framework, we employ the max-margin criterion and build it directly upon GBN models to explicitly optimize their classification performance. The advantages and disadvantages of the two frameworks are discussed and experimentally compared. Both demonstrate strong power in learning discriminative GBN parameters for neuroimaging-based brain network analysis while maintaining reasonable representation capacity. The contributions of this paper also include a new Directed Acyclic Graph (DAG) constraint with a theoretical guarantee that ensures the graph validity of the learned GBN. Comment: 16 pages and 5 figures for the article (excluding appendix).
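The Fisher-kernel idea in the first framework can be illustrated on a toy problem: take a fitted generative Gaussian model, compute each sample's Fisher score (the gradient of the log-likelihood with respect to the model parameters), and use the inner product of scores as an SVM kernel. This is only a minimal sketch with an independent-Gaussian model standing in for a full GBN, and the toy data and parameters are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy two-class data (hypothetical; stands in for two populations).
X0 = rng.normal(0.0, 1.0, size=(100, 2))
X1 = rng.normal(0.5, 1.0, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Fit one generative Gaussian (diagonal covariance) to all data; a real
# GBN would model conditional dependencies between the variables.
mu, sigma = X.mean(axis=0), X.std(axis=0)

def fisher_score(x):
    # Gradient of log N(x | mu, sigma^2) w.r.t. (mu, sigma).
    d_mu = (x - mu) / sigma**2
    d_sigma = ((x - mu)**2 - sigma**2) / sigma**3
    return np.concatenate([d_mu, d_sigma])

U = np.array([fisher_score(x) for x in X])  # Fisher scores, one row per sample
K = U @ U.T                                 # linear Fisher kernel matrix

clf = SVC(kernel="precomputed").fit(K, y)
acc = clf.score(K, y)
print(f"training accuracy with Fisher kernel: {acc:.2f}")
```

The paper goes further by learning the generative parameters themselves so as to minimize an SVM generalization bound; the sketch above only shows the generative-to-discriminative bridge.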
Estimation of interventional effects of features on prediction
The interpretability of prediction mechanisms with respect to the underlying prediction problem is often unclear. While several studies have focused on developing prediction models with meaningful parameters, the causal relationships between the predictors and the actual prediction have not been considered. Here, we connect the underlying causal structure of a data generation process with the causal structure of a prediction mechanism. To achieve this, we propose a framework that identifies the feature with the greatest causal influence on the prediction and estimates the causal intervention on a feature necessary to obtain a desired prediction. The general concept of the framework places no restrictions on data linearity; however, we focus on an implementation for linear data here. The applicability of the framework is evaluated using artificial data and demonstrated using real-world data. Comment: To appear in Proc. IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2017).
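For a linear predictor, the intervention on a single feature that attains a desired prediction has a closed form, which conveys the flavor of the estimation task. The sketch below holds the other features fixed and ignores any causal propagation between features, which is a simplification of the paper's framework; the weights and target value are illustrative assumptions.

```python
import numpy as np

# Hypothetical linear predictor f(x) = w @ x + b (weights are assumptions).
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict(x):
    return w @ x + b

x = np.array([1.0, 2.0, 3.0])
target = 5.0

# Intervene on the feature with the largest coefficient magnitude.
# For a linear model, the required shift is (target - f(x)) / w_j,
# assuming the other features do not respond to the intervention.
j = int(np.argmax(np.abs(w)))
delta = (target - predict(x)) / w[j]
x_int = x.copy()
x_int[j] += delta

print(predict(x), predict(x_int))  # second value equals the target
```

In the paper's setting, an intervention on one feature can also change its causal descendants, so the required shift generally differs from this one-feature closed form.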
Letting Go of “Natural Kind”: Toward a Multidimensional Framework of Nonarbitrary Classification
This article uses the case study of ethnobiological classification to develop a positive and a negative thesis about the state of natural kind debates. On the one hand, I argue that current accounts of natural kinds can be integrated in a multidimensional framework that advances understanding of classificatory practices in ethnobiology. On the other hand, I argue that such a multidimensional framework does not leave any substantial work for the notion “natural kind” and that attempts to formulate a general account of naturalness have become an obstacle to understanding classificatory practices
Supervised estimation of Granger-based causality between time series
Brain effective connectivity aims to detect causal interactions between distinct brain units; it is typically studied through the analysis of direct measurements of neural activity, e.g., magneto/electroencephalography (M/EEG) signals. The literature on methods for causal inference is vast. It includes model-based methods, in which a generative model of the data is assumed, and model-free methods, which infer causality directly from the probability distribution of the underlying stochastic process. Here, we first focus on the model-based methods developed from the Granger criterion of causality, which assumes an autoregressive model of the data. Second, we introduce a new perspective that approaches the problem in a way typical of the machine learning literature. We then formulate causality detection as a supervised learning task by proposing a classification-based approach: a classifier is trained to identify causal interactions between time series for the chosen model, using a proposed feature space. In this paper, we are interested in comparing this classification-based approach with the standard Geweke measure of causality in the time domain through a simulation study. To this end, we customized our approach to the case of a multivariate autoregressive (MAR) model and designed a feature space containing causality measures based on the ideas of precedence and predictability in time. Two variations of the supervised method are proposed and compared to a standard Granger causal analysis method. The simulation results show that the supervised method outperforms the standard approach; in particular, it is more robust to noise. As evidence of the efficacy of the proposed method, we report the details of our submission to the causality detection competition of Biomag2014, where the proposed method reached second place. Moreover, as an empirical application, we applied the supervised approach to a dataset of neural recordings from rats, obtaining a substantial reduction in the false-positive rate.
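The baseline against which the supervised method is compared, Geweke's time-domain measure of Granger causality, can be sketched in a few lines: fit an autoregressive model of y from its own past, then from its own past plus x's past, and take the log ratio of residual variances. The simulated coupling below is an illustrative assumption, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate x -> y: y depends on the lagged value of x (toy AR(1) system).
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def residual_var(target, lagged_predictors):
    # Least-squares AR fit with an intercept; returns residual variance.
    A = np.column_stack(lagged_predictors + [np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.var(target - A @ coef)

# Restricted model: y predicted from its own past only.
v_r = residual_var(y[1:], [y[:-1]])
# Full model: y predicted from its own past plus x's past.
v_f = residual_var(y[1:], [y[:-1], x[:-1]])

# Geweke's measure: positive when x's past improves the prediction of y.
F_xy = np.log(v_r / v_f)
print(f"Granger causality x->y: {F_xy:.2f}")
```

The supervised approach of the paper replaces the fixed threshold on such measures with a classifier trained on a feature space of precedence- and predictability-based statistics.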
A Distribution-Free Independence Test for High Dimension Data
Testing for independence is of fundamental importance in modern data analysis, with broad applications in variable selection, graphical models, and causal inference. When the data are high-dimensional and the potential dependence signal is sparse, independence testing becomes very challenging without distributional or structural assumptions. In this paper we propose a general framework for independence testing: first fit a classifier that distinguishes the joint and product distributions, and then test the significance of the fitted classifier. This framework allows us to borrow the strength of the most advanced classification algorithms from the modern machine learning community, making it applicable to high-dimensional, complex data. By combining a sample split with a fixed permutation, our test statistic has a universal, fixed Gaussian null distribution that is independent of the underlying data distribution. Extensive simulations demonstrate the advantages of the newly proposed test over existing methods. We further apply the new test to a single-cell data set to test the independence between two types of single-cell sequencing measurements, whose high dimensionality and sparsity make existing methods hard to apply.
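The classification step of such a framework can be sketched as follows: label paired observations (x_i, y_i) as "joint" samples, label pairs with a permuted y as approximate "product" samples, fit a classifier on one half, and check whether its held-out accuracy exceeds the chance level of 0.5. This is a minimal illustration only; the classifier, the interaction feature, and the toy data are assumptions, and the paper's actual test statistic (with its fixed Gaussian null) is built on top of this step, not shown here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Dependent toy pair: Y is a noisy function of X.
X = rng.normal(size=n)
Y = X + 0.3 * rng.normal(size=n)

def features(x, y):
    # Include the interaction x*y so a linear classifier can pick up
    # the correlation that separates joint from product samples.
    return np.column_stack([x, y, x * y])

# Class 1: joint samples (x_i, y_i); class 0: product samples (Y permuted).
Z = np.vstack([features(X, Y), features(X, Y[rng.permutation(n)])])
labels = np.array([1] * n + [0] * n)

# Sample split: train on one half, evaluate on the held-out half.
idx = rng.permutation(2 * n)
train, test = idx[:n], idx[n:]
clf = LogisticRegression().fit(Z[train], labels[train])
acc = clf.score(Z[test], labels[test])

print(f"held-out accuracy: {acc:.2f}  (about 0.5 under independence)")
```

Under independence the permuted and unpermuted samples are indistinguishable, so held-out accuracy hovers near 0.5; dependence pushes it above chance, which is what the formal test quantifies.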