Positive Definite Kernels in Machine Learning
This survey is an introduction to positive definite kernels and the set of
methods they have inspired in the machine learning literature, namely kernel
methods. We first discuss some properties of positive definite kernels as well
as reproducing kernel Hilbert spaces, the natural extension of the set of
functions associated with a kernel defined
on a space. We discuss at length the construction of kernel
functions that take advantage of well-known statistical models. We provide an
overview of numerous data-analysis methods which take advantage of reproducing
kernel Hilbert spaces and discuss the idea of combining several kernels to
improve the performance on certain tasks. We also provide a short cookbook of
different kernels which are particularly useful for certain data types such as
images, graphs or speech segments.
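As a minimal illustration of the survey's central object, the sketch below builds the Gram matrix of a Gaussian RBF kernel (a standard positive definite kernel, chosen here for illustration rather than taken from the survey's statistical-model-based constructions) and checks that it is positive semi-definite:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian RBF kernel, a standard positive definite kernel."""
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

# Build the Gram matrix for a small random sample and verify it is
# positive semi-definite (all eigenvalues >= 0, up to round-off).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
K = np.array([[rbf_kernel(a, b) for b in X] for a in X])
eigvals = np.linalg.eigvalsh(K)
```

Positive semi-definiteness of every such Gram matrix is exactly what makes the reproducing kernel Hilbert space construction possible.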
Spectral comparison of large urban graphs
The spectrum of an axial graph is proposed as a means of comparison between spaces,
particularly for measuring differences between very large and complex graphs. A number of methods have
been used in recent years for comparative analysis within large sets of urban areas, both to
investigate properties of specific known types of street network or to propose a taxonomy of urban
morphology based on an analytical technique. In many cases, a single or small range of predefined,
scalar measures such as metric distance, integration, control or clustering coefficient have
been used to compare the graphs. While these measures are well understood theoretically, their
low dimensionality determines the range of observations that can ultimately be drawn from the data.
Spectral analysis represents each space as a high dimensional vector; metric distances
between these vectors indicate the overall difference between two spaces, and
subspaces may be extracted that correspond to certain features. Here it is used for comparison of entire
urban graphs, to determine similarities (and differences) in their overall structure.
Results are shown of a comparison of 152 cities distributed around the world. The clustering of
cities of similar properties in a high dimensional space is discussed. Principal and nonlinear
components of the data set indicate significant correlations in the graph similarities between cities
and their proximity to one another, suggesting that cultural features based on location are evident in
the city form and that these can be quantified by the proposed method. Results of classification
tests show that a city’s location can be estimated based purely on its form.
The high dimensionality of the spectra is beneficial in data-mining applications that can
draw correlations with other data sets such as land use information. It is shown how further
processing by supervised learning allows the extraction of relevant features. A methodological
comparison is also drawn with statistical studies that use a strong correlation between human
genetic markers and geographical location of populations to derive detailed reconstructions of
prehistoric migration. Thus, it is suggested that the method may be utilised for mapping the transfer
of cultural memes by measuring similarities between cities.
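The spectral comparison described above can be sketched generically (the paper works with axial graphs of real cities; the toy adjacency matrices below, a 4-cycle and a 4-path standing in for two street networks, are assumptions for illustration):

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the graph Laplacian L = D - A, sorted ascending."""
    deg = np.diag(adj.sum(axis=1))
    return np.sort(np.linalg.eigvalsh(deg - adj))

def spectral_distance(adj_a, adj_b):
    """Euclidean distance between the (equal-length) Laplacian spectra."""
    return np.linalg.norm(laplacian_spectrum(adj_a) - laplacian_spectrum(adj_b))

# Two toy "street networks" on four nodes: a cycle and a path.
cycle = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
d = spectral_distance(cycle, path)
```

Each city is thus reduced to a spectrum vector, and distances between spectra support the clustering and classification experiments reported above.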
Riemannian tangent space mapping and elastic net regularization for cost-effective EEG markers of brain atrophy in Alzheimer's disease
The diagnosis of Alzheimer's disease (AD) in routine clinical practice is
most commonly based on subjective clinical interpretations. Quantitative
electroencephalography (QEEG) measures have been shown to reflect
neurodegenerative processes in AD and might qualify as affordable, and thereby
widely available, markers that make AD assessment more objective.
Here, we present a novel framework combining Riemannian tangent space mapping
and elastic net regression for the development of brain atrophy markers. While
most AD QEEG studies are based on small sample sizes and psychological test
scores as outcome measures, here we train and test our models using data of one
of the largest prospective EEG AD trials ever conducted, including MRI
biomarkers of brain atrophy. (Presented at the NIPS 2017 Workshop on Machine Learning for Health.)
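A minimal sketch of the pipeline described in the abstract, assuming synthetic SPD covariance matrices and a random placeholder outcome in place of the trial's EEG covariances and MRI atrophy markers; the Euclidean-mean reference point and regularisation strength are likewise illustrative choices, not the paper's:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def _sqrtm_inv(C):
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

def _logm(C):
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(w)) @ V.T

def tangent_vector(C, C_ref):
    """Map an SPD covariance matrix C to the Riemannian tangent space
    at C_ref and vectorize the resulting symmetric matrix."""
    P = _sqrtm_inv(C_ref)
    S = _logm(P @ C @ P)
    return S[np.triu_indices_from(S)]

rng = np.random.default_rng(1)

def random_spd(d=4):
    A = rng.normal(size=(d, d))
    return A @ A.T + d * np.eye(d)

covs = [random_spd() for _ in range(40)]        # stand-ins for EEG covariances
C_ref = sum(covs) / len(covs)                    # simple Euclidean-mean reference
X = np.array([tangent_vector(C, C_ref) for C in covs])
y = rng.normal(size=len(covs))                   # placeholder atrophy score
model = ElasticNet(alpha=0.1).fit(X, y)          # elastic net on tangent features
```

The tangent-space step turns covariance matrices into ordinary feature vectors, on which the elastic net can then select a sparse, correlated set of markers.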
Robust Face Recognition With Kernelized Locality-Sensitive Group Sparsity Representation
In this paper, a novel joint sparse representation method is proposed for robust face recognition. We embed both group sparsity and kernelized locality-sensitive constraints into the framework of sparse representation. The group sparsity constraint is designed to utilize the grouped structure information in the training data. The local similarity between test and training data is measured in the kernel space instead of the Euclidean space. As a result, the embedded nonlinear information can be effectively captured, leading to a more discriminative representation. We show that, by integrating the kernelized locality-sensitive constraint and the group sparsity constraint, the embedded structure information can be better explored, and significant performance improvement can be achieved. Experiments on the ORL, AR, extended Yale B, and LFW data sets verify the superiority of our method, and experiments on two unconstrained data sets, the LFW and the IJB-A, show that the utilization of sparsity can improve recognition performance, especially on data sets with large pose variation.
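The group sparsity component can be sketched with a plain proximal-gradient solver for the l2,1-regularised coding problem; the paper's kernelized locality-sensitive weighting is omitted here, and the dictionary, class groups, and test sample below are synthetic assumptions:

```python
import numpy as np

def group_soft_threshold(v, lam):
    """Block soft-thresholding: proximal operator of the l2 group norm."""
    n = np.linalg.norm(v)
    return np.zeros_like(v) if n <= lam else (1 - lam / n) * v

def group_sparse_code(D, y, groups, lam=0.1, n_iter=200):
    """Minimise 0.5*||y - D x||^2 + lam * sum_g ||x_g||_2
    by proximal gradient descent."""
    x = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant
    for _ in range(n_iter):
        z = x - step * (D.T @ (D @ x - y))   # gradient step on the data fit
        for g in groups:                     # proximal step, group by group
            z[g] = group_soft_threshold(z[g], step * lam)
        x = z
    return x

rng = np.random.default_rng(2)
D = rng.normal(size=(20, 12))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
groups = [range(0, 4), range(4, 8), range(8, 12)]  # one group per class
y = 1.5 * D[:, 0] + 0.5 * D[:, 1]            # test sample from class 0
x = group_sparse_code(D, y, groups, lam=0.05)
```

Because the penalty acts on whole groups, the recovered code concentrates its energy on the class that generated the test sample, which is what makes the representation discriminative for classification.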