Non-linear Causal Inference using Gaussianity Measures
We provide theoretical and empirical evidence for a type of asymmetry between
causes and effects that is present when these are related via linear models
contaminated with additive non-Gaussian noise. Assuming that the causes and the
effects have the same distribution, we show that the distribution of the
residuals of a linear fit in the anti-causal direction is closer to a Gaussian
than the distribution of the residuals in the causal direction. This
Gaussianization effect is characterized by reduction of the magnitude of the
high-order cumulants and by an increment of the differential entropy of the
residuals. The problem of non-linear causal inference is addressed by
performing an embedding in an expanded feature space, in which the relation
between causes and effects can be assumed to be linear. The effectiveness of a
method to discriminate between causes and effects based on this type of
asymmetry is illustrated in a variety of experiments using different measures
of Gaussianity. The proposed method is shown to be competitive with
state-of-the-art techniques for causal inference.
Comment: 35 pages, 9 figures
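The asymmetry described above can be illustrated with a toy sketch (not the paper's implementation): fit a linear model in each direction and compare a simple Gaussianity measure of the residuals, here the absolute excess kurtosis. The simulated model, the noise scale, and the choice of kurtosis as the measure are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)

# Simulate a linear causal model x -> y with additive non-Gaussian
# (uniform) noise; both cause and noise are symmetric and zero-mean.
n = 5000
x = rng.uniform(-1, 1, n)
y = 0.8 * x + 0.5 * rng.uniform(-1, 1, n)

def residual_gaussianity(a, b):
    """Least-squares fit b ~ w*a; return |excess kurtosis| of residuals.
    Smaller values mean the residuals look more Gaussian."""
    w = np.dot(a, b) / np.dot(a, a)
    r = b - w * a
    return abs(kurtosis(r))  # excess kurtosis is 0 for a Gaussian

causal = residual_gaussianity(x, y)      # fit in the causal direction
anticausal = residual_gaussianity(y, x)  # fit in the anti-causal direction

# The Gaussianization effect predicts more Gaussian residuals (smaller
# |excess kurtosis|) in the anti-causal fit.
print("inferred direction:", "x -> y" if causal > anticausal else "y -> x")
```

Here the causal residuals are simply the uniform noise term, while the anti-causal residuals mix two independent uniforms and are therefore closer to Gaussian.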
In Search of Non-Gaussian Components of a High-Dimensional Distribution
Finding non-Gaussian components of high-dimensional data is an important preprocessing step for efficient information processing. This article proposes a new linear method to identify the "non-Gaussian subspace" within a very general semi-parametric framework. Our proposed method, called NGCA (Non-Gaussian Component Analysis), is essentially based on a linear operator which, to any arbitrary nonlinear (smooth) function, associates a vector which belongs to the low-dimensional non-Gaussian target subspace up to an estimation error. By applying this operator to a family of different nonlinear functions, one obtains a family of different vectors lying in a vicinity of the target space. As a final step, the target space itself is estimated by applying PCA to this family of vectors. We show that this procedure is consistent in the sense that the estimation error tends to zero at a parametric rate, uniformly over the family. Numerical examples demonstrate the usefulness of our method.
Keywords: non-Gaussian components, dimension reduction
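The procedure outlined in the abstract can be sketched as follows. This is an illustrative assumption-laden toy, not the authors' code: the data are generated already whitened (in general one whitens first), the nonlinear family is cubics along random directions, and the operator used is beta(h) = E[x h(x)] - E[grad h(x)], which vanishes on purely Gaussian directions and so lands near the non-Gaussian subspace.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: first k coordinates non-Gaussian (symmetric binary),
# remaining coordinates standard Gaussian; all zero-mean, unit-variance,
# independent, so the data are already white.
n, d, k = 4000, 6, 2
X = rng.standard_normal((n, d))
X[:, :k] = rng.choice([-1.0, 1.0], size=(n, k))

def beta(Xw, w, f, df):
    """Empirical beta(h) for h(x) = f(w . x) on whitened data:
    E[x f(w.x)] - E[f'(w.x)] w.  Up to estimation error this vector
    lies in the non-Gaussian subspace."""
    s = Xw @ w
    return Xw.T @ f(s) / len(s) - w * df(s).mean()

# Apply the operator to a family of nonlinear functions (cubics along
# random directions) to collect many vectors near the target subspace.
vectors = []
for _ in range(300):
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    vectors.append(beta(X, w, lambda s: s**3, lambda s: 3 * s**2))
B = np.array(vectors)

# Final step: PCA (via SVD) on the collected vectors; the top-k right
# singular vectors estimate the non-Gaussian subspace.
_, _, Vt = np.linalg.svd(B, full_matrices=False)
estimate = Vt[:k]

# The estimated subspace should concentrate on the first k coordinates.
print(np.round(np.abs(estimate), 2))
```

With the cubic nonlinearity, beta is exactly zero in population on the Gaussian coordinates (by Stein's identity) and proportional to the excess kurtosis on the non-Gaussian ones, which is what makes the final PCA step work.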