Kernel method for nonlinear Granger causality
Important information on the structure of complex systems, consisting of more than one component, can be obtained by measuring the extent to which the individual components exchange information with each other. Such knowledge is needed to reach a deeper comprehension of phenomena ranging from turbulent fluids to neural networks, as well as complex physiological signals. The linear Granger approach to detecting cause-effect relationships between time series has emerged in recent years as a leading statistical technique for this task. Here we generalize Granger causality to the nonlinear case using the theory of reproducing kernel Hilbert spaces. Our method performs linear Granger causality in the feature space of suitable kernel functions, allowing an arbitrary degree of nonlinearity. We develop a new strategy to cope with the problem of overfitting, based on the geometry of reproducing kernel Hilbert spaces. Applications to coupled chaotic maps and physiological data sets are presented.
Comment: Revised version, accepted for publication in Physical Review Letters
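The linear Granger test that this abstract generalizes can be sketched in a few lines: compare the residual variance of an autoregressive model of y with and without the past of x. A minimal sketch in NumPy; the function name and fixed model order are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def granger_improvement(x, y, p=2):
    """Linear Granger causality sketch: does the past of x help predict y?

    Returns the relative reduction in residual variance when p lags of x
    are added to an order-p autoregressive model of y. Hypothetical
    helper for illustration only.
    """
    n = len(y)
    Y = y[p:]
    # Lagged design matrices: own past of y, then own past plus past of x.
    own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    both = np.column_stack([own] + [x[p - k:n - k] for k in range(1, p + 1)])
    res_own = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    res_both = Y - both @ np.linalg.lstsq(both, Y, rcond=None)[0]
    return 1.0 - np.var(res_both) / np.var(res_own)
```

For a coupled pair where x drives y, the improvement for the direction x → y should exceed the near-zero improvement for y → x; the kernel method of the paper replaces the linear regressions with regressions in the feature space of a kernel.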
Making Indefinite Kernel Learning Practical
In this paper we embed evolutionary computation into statistical learning theory. First, we outline the connection between large margin optimization and statistical learning and see why this paradigm is successful for many pattern recognition problems. We then embed evolutionary computation into the most prominent representative of this class of learning methods, namely Support Vector Machines (SVMs). In contrast to former applications of evolutionary algorithms to SVMs, we do not only optimize the method or kernel parameters. Rather, we use evolution strategies to directly solve the posed constrained optimization problem. Transforming the problem into the Wolfe dual reduces the total runtime and allows the usage of kernel functions just as for traditional SVMs. We show that evolutionary SVMs are at least as accurate as their quadratic programming counterparts on eight real-world benchmark data sets in terms of generalization performance. They always outperform traditional approaches in terms of the original optimization problem. Additionally, the proposed algorithm is more generic than existing traditional solutions, since it also works for non-positive semidefinite or indefinite kernel functions. The evolutionary SVM variants frequently outperform their quadratic programming competitors in cases where such an indefinite kernel function is used.
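The core idea, an evolution strategy searching the Wolfe dual directly so that no positive semidefiniteness of the kernel is required, can be sketched as follows. The (1+λ) scheme, the penalty weight on the equality constraint, and the step-size rule are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def es_svm_dual(K, y, C=1.0, pop=20, gens=200, seed=0):
    """(1+lambda) evolution strategy on the SVM Wolfe dual (sketch).

    Maximizes W(a) = sum(a) - 0.5 * a^T (yy^T * K) a under the box
    constraints 0 <= a_i <= C; the equality constraint sum(a_i y_i) = 0
    is handled by a quadratic penalty. No convexity is assumed, so an
    indefinite K is acceptable. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    Q = np.outer(y, y) * K

    def fitness(a):
        return a.sum() - 0.5 * a @ Q @ a - 10.0 * (a @ y) ** 2

    best = rng.uniform(0, C, n)
    sigma = 0.3 * C
    for _ in range(gens):
        # Mutate the parent, clip offspring back into the feasible box.
        kids = np.clip(best + sigma * rng.standard_normal((pop, n)), 0, C)
        scores = np.array([fitness(k) for k in kids])
        i = scores.argmax()
        if scores[i] > fitness(best):
            best = kids[i]
        else:
            sigma *= 0.9  # shrink the step size when no child improves
    return best
```

The resulting dual variables give decision values f(x_j) = Σ_i a_i y_i K(x_i, x_j), exactly as for a QP-trained SVM, which is why arbitrary kernel matrices can be plugged in.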
Increasing power for voxel-wise genome-wide association studies: the random field theory, least square kernel machines and fast permutation procedures
Imaging traits are thought to have more direct links to genetic variation than diagnostic measures based on cognitive or clinical assessments, and provide a powerful substrate to examine the influence of genetics on human brains. Although imaging genetics has attracted growing attention and interest, most brain-wide genome-wide association studies focus on voxel-wise single-locus approaches, without taking advantage of the spatial information in images or combining the effects of multiple genetic variants. In this paper we present a fast implementation of voxel- and cluster-wise inferences based on random field theory to make full use of the spatial information in images. The approach is combined with a multi-locus model based on least square kernel machines to associate the joint effect of several single nucleotide polymorphisms (SNPs) with imaging traits. A fast permutation procedure is also proposed which significantly reduces the number of permutations needed relative to the standard empirical method and provides accurate small p-value estimates based on parametric tail approximation. We explored the relation between 448,294 single nucleotide polymorphisms and 18,043 genes in 31,662 voxels of the entire brain across 740 elderly subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Structural MRI scans were analyzed using tensor-based morphometry (TBM) to compute 3D maps of regional brain volume differences relative to an average template image based on healthy elderly subjects. We find the method to be more sensitive than voxel-wise single-locus approaches. A number of genes were identified as having significant associations with volumetric changes. The most strongly associated gene was GRIN2B, which encodes the N-methyl-D-aspartate (NMDA) glutamate receptor NR2B subunit and affects both the parietal and temporal lobes in human brains. Its role in Alzheimer's disease has been widely acknowledged and studied, suggesting the validity of the approach.
The various advantages over existing approaches indicate the great potential of this novel framework to detect genetic influences on human brains.
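The baseline that the fast procedure accelerates, the standard empirical permutation p-value for a single SNP-voxel association, can be sketched as follows. The correlation statistic and function name are illustrative; the paper's parametric tail approximation is not reproduced here:

```python
import numpy as np

def perm_pvalue(g, v, n_perm=999, seed=0):
    """Empirical permutation p-value for a genotype-trait association (sketch).

    Uses the absolute Pearson correlation between genotype scores g and an
    imaging trait v as the test statistic and the standard unbiased
    (b + 1) / (B + 1) estimator. Illustrative only: the paper replaces the
    need for huge B by fitting a parametric tail to the null distribution.
    """
    rng = np.random.default_rng(seed)

    def stat(a, b):
        return abs(np.corrcoef(a, b)[0, 1])

    obs = stat(g, v)
    exceed = sum(stat(rng.permutation(g), v) >= obs for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)
```

The smallest p-value this estimator can report is 1/(B+1), which is exactly why genome-wide thresholds make plain permutation expensive and tail approximation attractive.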
Kernel Methods and their derivatives: Concept and perspectives for the Earth system sciences
Kernel methods are powerful machine learning techniques which implement generic non-linear functions to solve complex tasks in a simple way. They have a solid mathematical background and exhibit excellent performance in practice. However, kernel machines are still considered black-box models, as the feature mapping is not directly accessible and difficult to interpret. The aim of this work is to show that it is indeed possible to interpret the functions learned by various kernel methods despite their complexity. Specifically, we show that derivatives of these functions have a simple mathematical formulation, are easy to compute, and can be applied to many different problems. We note that the model function derivative in kernel machines is proportional to the kernel function derivative. We provide the explicit analytic form of the first and second derivatives of the most common kernel functions with respect to the inputs, as well as generic formulas to compute higher-order derivatives. We use them to analyze the most widely used supervised and unsupervised kernel learning methods: Gaussian Processes for regression, Support Vector Machines for classification, Kernel Entropy Component Analysis for density estimation, and the Hilbert-Schmidt Independence Criterion for estimating the dependency between random variables. In all cases we express the derivative of the learned function as a linear combination of the kernel function derivative. Moreover, we provide intuitive explanations through illustrative toy examples and show how to improve the interpretation of real applications in the context of spatiotemporal Earth system data cubes. This work reflects on the observation that function derivatives may play a crucial role in kernel methods analysis and understanding.
Comment: 21 pages, 10 figures, PLOS One Journal
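The central observation, that the derivative of the learned function is the same linear combination applied to the kernel derivative, is easy to illustrate for Gaussian Process regression with an RBF kernel, where dk(x, x_i)/dx = -(x - x_i)/ℓ² · k(x, x_i). A minimal sketch with illustrative function names:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """RBF kernel matrix between row-vector sets a (m, d) and b (n, d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_fit(X, y, ell=1.0, noise=1e-2):
    """GP regression weights alpha = (K + sigma^2 I)^{-1} y."""
    K = rbf(X, X, ell)
    return np.linalg.solve(K + noise * np.eye(len(X)), y)

def gp_mean_and_grad(x, X, alpha, ell=1.0):
    """Predictive mean f(x) = sum_i alpha_i k(x, x_i) and its gradient.

    The gradient reuses the SAME weights alpha on the kernel derivative:
    df/dx = sum_i alpha_i * (x_i - x)/ell^2 * k(x, x_i).
    """
    k = rbf(x[None, :], X, ell)[0]                 # k(x, x_i), shape (n,)
    f = k @ alpha
    grad = ((X - x) / ell**2 * k[:, None]).T @ alpha
    return f, grad
```

The analytic gradient can be checked against a finite difference of the predictive mean, which is how such formulas are typically validated on toy data.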
The analysis of facial beauty: an emerging area of research in pattern analysis
Much research presented recently supports the idea that the human perception of attractiveness is data-driven and largely irrespective of the perceiver. This suggests using pattern analysis techniques for beauty analysis. Several scientific papers on this subject are appearing in image processing, computer vision and pattern analysis contexts, or use techniques from these areas. In this paper, we survey the recent studies on automatic analysis of facial beauty, and discuss research lines and practical applications.
Pattern recognition and machine learning for magnetic resonance images with kernel methods
The aim of this thesis is to apply a particular category of machine learning and
pattern recognition algorithms, namely the kernel methods, to both functional and
anatomical magnetic resonance images (MRI). This work specifically focused on
supervised learning methods. Both methodological and practical aspects are described
in this thesis.
Kernel methods have a computational advantage for high-dimensional data and are
therefore ideal for imaging data. The procedures can be broadly divided into
two components: the construction of the kernels and the actual kernel algorithms
themselves. Pre-processed functional or anatomical images can be transformed into
a linear kernel or a non-linear kernel. We introduce both kernel regression and kernel
classification algorithms in two main categories: probabilistic methods and
non-probabilistic methods. For practical applications, kernel classification methods
were applied to decode the cognitive or sensory states of the subject from the fMRI
signal and were also applied to discriminate patients with neurological diseases from
normal people using anatomical MRI. Kernel regression methods were used to predict
the regressors in the design of fMRI experiments, and clinical ratings from the
anatomical scans.
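The two components named above, kernel construction from images and the kernel algorithm itself, can be sketched for the simplest case: a linear kernel over flattened scans and a kernel ridge "classifier". The function names and toy regularizer are illustrative assumptions, not the thesis code:

```python
import numpy as np

def linear_kernel(X):
    """Gram matrix K = X X^T from flattened images, shape (n_subjects, n_voxels)."""
    return X @ X.T

def kernel_ridge_fit(K, y, lam=1.0):
    """Kernel ridge weights alpha = (K + lam I)^{-1} y, labels y in {-1, +1}."""
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def kernel_ridge_predict(K_test_train, alpha):
    """Signed decision values for test images: sign(K_test,train @ alpha)."""
    return np.sign(K_test_train @ alpha)
```

The appeal for imaging is that K is n_subjects × n_subjects regardless of the number of voxels, so a study with hundreds of subjects and millions of voxels stays cheap once the kernel is built; non-linear kernels simply replace the Gram-matrix construction.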