773 research outputs found
Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods
Feature extraction and dimensionality reduction are important tasks in many
fields of science dealing with signal processing and analysis. The relevance of
these techniques is increasing as current sensory devices are developed with
ever higher resolution, and problems involving multimodal data sources become
more common. A plethora of feature extraction methods are available in the
literature collectively grouped under the field of Multivariate Analysis (MVA).
This paper provides a uniform treatment of several methods: Principal Component
Analysis (PCA), Partial Least Squares (PLS), Canonical Correlation Analysis
(CCA) and Orthonormalized PLS (OPLS), as well as their non-linear extensions
derived by means of the theory of reproducing kernel Hilbert spaces. We also
review their connections to other methods for classification and statistical
dependence estimation, and introduce some recent developments to deal with the
extreme cases of large-scale and low-sized problems. To illustrate the wide
applicability of these methods in both classification and regression problems,
we analyze their performance in a benchmark of publicly available data sets,
and pay special attention to specific real applications involving audio
processing for music genre prediction and hyperspectral satellite images for
Earth and climate monitoring.
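The kernel extensions surveyed above all follow the same recipe: replace inner products with a kernel matrix, center it in feature space, and solve an eigenproblem. As an illustrative sketch only (not the paper's implementation), the following shows the simplest member of the family, kernel PCA with an RBF kernel; the function names and the choice of gamma are my own assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of the RBF kernel from pairwise squared distances
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project X onto its leading kernel principal components."""
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    # Center the kernel matrix in feature space
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # eigh returns eigenvalues in ascending order; take the largest
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors by sqrt(eigenvalue) to get the projections
    return vecs * np.sqrt(np.maximum(vals, 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
Z = kernel_pca(X, n_components=2)
```

The kernel variants of PLS, CCA, and OPLS differ mainly in which (cross-)covariance operators enter the eigenproblem, but reuse the same centered Gram matrix machinery.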
Kernel discriminant analysis and clustering with parsimonious Gaussian process models
This work presents a family of parsimonious Gaussian process models which allow one to build, from a finite sample, a model-based classifier in an infinite-dimensional space. The proposed parsimonious models are obtained by constraining the eigen-decomposition of the Gaussian processes modeling each class. In particular, this allows the use of non-linear mapping functions which project the observations into infinite-dimensional spaces. It is also demonstrated that the classifier can be built directly from the observation space through a kernel function. The proposed classification method is thus able to classify data of various types, such as categorical data, functional data, or networks. Furthermore, it is possible to classify mixed data by combining different kernels. The methodology is also extended to the unsupervised classification case. Experimental results on various data sets demonstrate the effectiveness of the proposed method.
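The mixed-data idea rests on a standard fact: a convex combination of valid kernels is again a valid kernel. As a minimal sketch (my own toy example, not the paper's models), one can combine an RBF kernel on the numeric attributes with a simple matching kernel on the categorical attributes; the helper names and the 0.5/0.5 weighting are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # RBF kernel on numeric features
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def delta_kernel(C):
    # Fraction of matching categorical attributes (overlap kernel)
    return np.mean(C[:, None, :] == C[None, :, :], axis=2)

# Hypothetical mixed data: numeric part X, categorical part C
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
C = rng.integers(0, 4, size=(10, 2))

# A convex combination of two valid kernels is itself a valid kernel
K = 0.5 * rbf_kernel(X) + 0.5 * delta_kernel(C)
```

The combined Gram matrix K can then be handed to any kernel classifier; the mixing weights could also be tuned by cross-validation.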
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we encourage remote sensing scientists to bring their expertise into deep learning, and to use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization.

Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.
Graph Embedding via High Dimensional Model Representation for Hyperspectral Images
Learning the manifold structure of remote sensing images is of paramount
relevance for modeling and understanding processes, as well as to encapsulate
the high dimensionality in a reduced set of informative features for subsequent
classification, regression, or unmixing. Manifold learning methods have shown
excellent performance to deal with hyperspectral image (HSI) analysis but,
unless specifically designed, they cannot provide an explicit embedding map
readily applicable to out-of-sample data. A common assumption to deal with the
problem is that the transformation between the high-dimensional input space and
the (typically low) latent space is linear. This is a particularly strong
assumption, especially when dealing with hyperspectral images due to the
well-known nonlinear nature of the data. To address this problem, a manifold learning method based on High Dimensional Model Representation (HDMR) is proposed, which provides an explicit nonlinear embedding function for projecting out-of-sample data into the latent space. The proposed method is compared to other manifold learning methods as well as their linear counterparts, and achieves promising classification accuracy on a representative set of hyperspectral images.

Comment: This is an accepted version of work to be published in the IEEE Transactions on Geoscience and Remote Sensing. 11 pages.
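The out-of-sample issue the abstract raises is easy to state concretely: an explicit embedding map, once fitted, applies directly to new pixels, whereas many manifold learners only produce coordinates for the training set. The following toy sketch (my own illustration, using a linear PCA map rather than the paper's HDMR-based nonlinear map) shows the explicit-map pattern; all names are hypothetical.

```python
import numpy as np

def fit_linear_embedding(X, k=2):
    """Learn an explicit linear map to a k-dimensional latent space (PCA)."""
    mu = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are principal directions
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T  # (d, k) projection matrix

    def embed(X_new):
        # The explicit map applies directly to out-of-sample data
        return (X_new - mu) @ W
    return embed

rng = np.random.default_rng(2)
X_train = rng.normal(size=(100, 10))  # stand-in for training pixels
X_test = rng.normal(size=(5, 10))     # out-of-sample pixels
embed = fit_linear_embedding(X_train, k=2)
Z = embed(X_test)
```

The HDMR-based method replaces the single matrix W with a sum of low-order component functions, keeping this plug-in property while capturing the nonlinear structure of hyperspectral data.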