Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods
Feature extraction and dimensionality reduction are important tasks in many
fields of science dealing with signal processing and analysis. The relevance of
these techniques is increasing as current sensory devices are developed with
ever higher resolution, and problems involving multimodal data sources become
more common. A plethora of feature extraction methods are available in the
literature collectively grouped under the field of Multivariate Analysis (MVA).
This paper provides a uniform treatment of several methods: Principal Component
Analysis (PCA), Partial Least Squares (PLS), Canonical Correlation Analysis
(CCA) and Orthonormalized PLS (OPLS), as well as their non-linear extensions
derived by means of the theory of reproducing kernel Hilbert spaces. We also
review their connections to other methods for classification and statistical
dependence estimation, and introduce some recent developments to deal with the
extreme cases of large-scale and low-sized problems. To illustrate the wide
applicability of these methods in both classification and regression problems,
we analyze their performance in a benchmark of publicly available data sets,
and pay special attention to specific real applications involving audio
processing for music genre prediction and hyperspectral satellite images for
Earth and climate monitoring.
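As a minimal illustration of the kernel trick these methods share, the sketch below implements kernel PCA with an RBF kernel in plain NumPy. The function name, the `gamma` default, and the centering details are our illustrative choices, not the paper's implementation.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project samples onto the leading eigenvectors of the
    double-centered RBF kernel matrix (kernel PCA sketch)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    K = np.exp(-gamma * d2)                          # RBF (Gaussian) kernel matrix
    n = K.shape[0]
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J               # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                  # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]      # keep the largest ones
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                               # projected samples
```

With `gamma` close to zero the kernel matrix approaches a rank-one constant matrix and the method degenerates toward linear PCA behavior, which is one way to see the linear methods of the paper as special cases.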
Matrix completion and extrapolation via kernel regression
Matrix completion and extrapolation (MCEX) are dealt with here over
reproducing kernel Hilbert spaces (RKHSs) in order to account for prior
information present in the available data. Aiming at a faster and
low-complexity solver, the task is formulated as a kernel ridge regression. The
resultant MCEX algorithm can also afford online implementation, while the class
of kernel functions also encompasses several existing approaches to MC with
prior information. Numerical tests on synthetic and real datasets show that the
novel approach performs faster than widespread methods such as alternating
least squares (ALS) or stochastic gradient descent (SGD), and that the recovery
error is reduced, especially when dealing with noisy data.
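A minimal sketch of the underlying idea, making no claim about the paper's actual solver: treat each observed entry (i, j) as a training point, place an RBF kernel over the index pairs, and fill the missing entries by kernel ridge regression. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """RBF kernel between two sets of points (one point per row)."""
    d2 = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :] - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def krr_complete(M, mask, lam=1e-2, gamma=0.5):
    """Fill entries of M where mask is False via kernel ridge
    regression trained on the observed (i, j) index pairs."""
    obs = np.argwhere(mask)                          # observed coordinates
    y = M[mask]
    K = rbf_kernel(obs, obs, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)   # KRR dual weights
    all_idx = np.argwhere(np.ones_like(mask))        # every coordinate, row-major
    pred = rbf_kernel(all_idx, obs, gamma) @ alpha
    out = M.astype(float).copy()
    out[~mask] = pred.reshape(M.shape)[~mask]        # keep observed values intact
    return out
```

The RKHS prior here is smoothness across indices; the paper's point is that richer kernels over rows and columns can encode other kinds of side information.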
A scalable saliency-based Feature selection method with instance level information
Classic feature selection techniques remove those features that are either
irrelevant or redundant, achieving a subset of relevant features that help to
provide a better knowledge extraction. This allows the creation of compact
models that are easier to interpret. Most of these techniques work over the
whole dataset, but they cannot inform the user when only instance-level
information is needed: given any example, classic feature selection algorithms
do not indicate which features are most relevant for that sample. This work aims
to overcome this handicap by developing a novel feature selection method,
called Saliency-based Feature Selection (SFS), based on deep-learning saliency
techniques. Our experimental results show that this algorithm can be used
successfully not only in neural networks, but also with any architecture
trained by gradient descent techniques.
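A hedged sketch of the gradient-saliency idea behind such methods: score each feature by the magnitude of the model output's gradient with respect to that feature. Central finite differences stand in here for backpropagation through a trained network; the function name and return convention are our assumptions.

```python
import numpy as np

def saliency_scores(predict_fn, X, eps=1e-4):
    """Rank input features by the magnitude of the model's gradient
    with respect to each feature, estimated by central finite
    differences. Returns (per-instance, global) saliency arrays."""
    n, d = X.shape
    grads = np.zeros((n, d))
    for j in range(d):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += eps
        Xm[:, j] -= eps
        grads[:, j] = (predict_fn(Xp) - predict_fn(Xm)) / (2.0 * eps)
    per_instance = np.abs(grads)            # instance-level relevance
    return per_instance, per_instance.mean(axis=0)
```

For a toy linear model `3*x0 - 0.5*x2`, the global scores recover the coefficient magnitudes, and the per-instance array gives exactly the sample-level relevance that classic whole-dataset selectors cannot provide.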
Deep Learning for Single Image Super-Resolution: A Brief Review
Single image super-resolution (SISR) is a notoriously challenging ill-posed
problem, which aims to obtain a high-resolution (HR) output from one of its
low-resolution (LR) versions. To solve the SISR problem, recently powerful deep
learning algorithms have been employed and achieved the state-of-the-art
performance. In this survey, we review representative deep learning-based SISR
methods, and group them into two categories according to their major
contributions to two essential aspects of SISR: the exploration of efficient
neural network architectures for SISR, and the development of effective
optimization objectives for deep SISR learning. For each category, a baseline
is first established, and several critical limitations of the baseline are
summarized. Then representative works on overcoming these limitations are
presented based on their original contents as well as our critical
understandings and analyses, and relevant comparisons are conducted from a
variety of perspectives. Finally, we conclude this review with some vital
current challenges and future trends in SISR leveraging deep learning algorithms.
Comment: Accepted by IEEE Transactions on Multimedia (TMM).
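SISR methods are routinely compared by peak signal-to-noise ratio (PSNR) between the HR ground truth and the SR output. A minimal NumPy version of this standard metric (the function name and the `max_val` default are our choices):

```python
import numpy as np

def psnr(hr, sr, max_val=255.0):
    """Peak signal-to-noise ratio, in decibels, between a
    high-resolution ground truth and a super-resolved output."""
    mse = np.mean((np.asarray(hr, float) - np.asarray(sr, float)) ** 2)
    if mse == 0.0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Optimizing the mean squared error directly maximizes PSNR, which is one reason MSE-trained baselines score well on it while often looking over-smoothed; this tension motivates the alternative optimization objectives the survey reviews.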
Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction
Deep learning for regression tasks on medical imaging data has shown
promising results. However, compared to other approaches, its performance is
strongly linked to the dataset size. In this study, we evaluate
3D-convolutional neural networks (CNNs) and classical regression methods with
hand-crafted features for survival time regression of patients with high grade
brain tumors. The tested CNNs for regression showed promising but unstable
results. The best performing deep learning approach reached an accuracy of
51.5% on held-out samples of the training set. All tested deep learning
experiments were outperformed by a Support Vector Classifier (SVC) using 30
radiomic features. The investigated features included intensity, shape,
location and deep features. The submitted method to the BraTS 2018 survival
prediction challenge is an ensemble of SVCs, which reached a cross-validated
accuracy of 72.2% on the BraTS 2018 training set, 57.1% on the validation set,
and 42.9% on the testing set. The results suggest that more training data is
necessary for a stable performance of a CNN model for direct regression from
magnetic resonance images, and that non-imaging clinical patient information is
crucial along with imaging information.
Comment: Contribution to The International Multimodal Brain Tumor Segmentation (BraTS) Challenge 2018, survival prediction task
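The submitted method is an ensemble of SVCs; as a generic sketch of the aggregation step only, here is hard majority voting over the ensemble's class predictions. The voting rule is our assumption, and no claim is made about the authors' exact combination scheme.

```python
import numpy as np

def majority_vote(predictions):
    """Combine hard class labels from several classifiers by
    majority vote. `predictions` has shape (n_models, n_samples);
    ties go to the lowest class index (np.bincount/argmax behavior)."""
    preds = np.asarray(predictions)
    n_classes = int(preds.max()) + 1
    out = np.empty(preds.shape[1], dtype=int)
    for j in range(preds.shape[1]):
        out[j] = np.bincount(preds[:, j], minlength=n_classes).argmax()
    return out
```

For survival prediction framed as classification (e.g. short/mid/long survivor classes), each SVC in the ensemble would contribute one row of `predictions`.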