Low-Rank Discriminative Least Squares Regression for Image Classification
Recent least squares regression (LSR) methods mainly try to learn slack
regression targets to replace strict zero-one labels. However, the differences
among intra-class targets can also be amplified when enlarging the distance
between different classes, and crudely pursuing relaxed targets may lead to
overfitting. To solve the above problems, we propose a low-rank
discriminative least squares regression model (LRDLSR) for multi-class image
classification. Specifically, LRDLSR imposes a class-wise low-rank constraint
on the intra-class regression targets to encourage their compactness and
similarity. Moreover, LRDLSR introduces an additional regularization term on
the learned targets to avoid overfitting. These two improvements
help to learn a more discriminative projection for regression and thus
achieve better classification performance. Experimental results on a range
of image databases demonstrate the effectiveness of the proposed LRDLSR method.
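The abstract above describes alternating between learning a regression projection and pulling each class's relaxed targets toward a low-rank, mutually similar structure. A minimal sketch of that idea follows, assuming a ridge-style regression step and class-wise singular value thresholding as the low-rank operator; the function and parameter names are illustrative, not the authors' actual formulation.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm, used here to encourage low-rank class-wise targets."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def lrdlsr_sketch(X, y, n_classes, lam=1e-2, tau=0.5, n_iter=10):
    """Alternate between (1) ridge regression onto the current targets
    and (2) class-wise singular value thresholding of the predicted
    targets, starting from one-hot labels (illustrative sketch only)."""
    n, d = X.shape
    T = np.eye(n_classes)[y].astype(float)  # one-hot initial targets
    for _ in range(n_iter):
        # (1) closed-form ridge solution W = (X^T X + lam I)^{-1} X^T T
        W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)
        # (2) pull each class's targets toward a shared low-rank structure
        P = X @ W
        for c in range(n_classes):
            idx = y == c
            T[idx] = svt(P[idx], tau)
    return W, T
```

The low-rank step shrinks the within-class variation of the targets, which is the compactness property the abstract emphasizes.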
Highly Efficient Regression for Scalable Person Re-Identification
Existing person re-identification (re-id) models scale poorly to the large
data required in real-world applications due to: (1) complexity: they employ
complex models for optimal performance, resulting in high computational cost
when training at large scale; (2) inadaptability: once trained, they are
unsuitable for incremental updates to incorporate newly available data. This
work proposes a truly scalable solution to re-id by addressing both problems.
Specifically, a Highly Efficient Regression (HER) model is formulated by
embedding Fisher's criterion into a ridge regression model for very fast
re-id model learning with scalable memory/storage usage. Importantly, this new
HER model supports faster-than-real-time incremental model updates, thereby
making real-time active learning with a human in the loop feasible in re-id.
Extensive experiments show that such a simple and fast model not only
notably outperforms state-of-the-art re-id methods, but is also more
scalable to large data, with additional benefits to active learning for
reducing human labelling effort in re-id deployment.
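The key to the fast incremental updates described above is that ridge regression can be kept in sufficient-statistic form, so new samples update the model without retraining from scratch. The sketch below shows only that generic mechanism; the Fisher-criterion embedding that defines HER itself is omitted, and all names are assumptions for illustration.

```python
import numpy as np

class IncrementalRidge:
    """Ridge regression stored as sufficient statistics
    (S = X^T X + lam I and b = X^T Y), so each new batch of samples
    is folded in with one matrix accumulation instead of a full refit."""
    def __init__(self, d, k, lam=1e-2):
        self.S = lam * np.eye(d)   # accumulates X^T X plus the ridge term
        self.b = np.zeros((d, k))  # accumulates X^T Y
    def update(self, X, Y):
        self.S += X.T @ X
        self.b += X.T @ Y
    def weights(self):
        # closed-form ridge solution for all data seen so far
        return np.linalg.solve(self.S, self.b)
```

Because the statistics are additive, training on data in two chunks yields the same weights as training on it all at once, which is what makes human-in-the-loop updates cheap.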
Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods
Feature extraction and dimensionality reduction are important tasks in many
fields of science dealing with signal processing and analysis. The relevance of
these techniques is increasing as current sensory devices are developed with
ever higher resolution, and problems involving multimodal data sources become
more common. A plethora of feature extraction methods are available in the
literature collectively grouped under the field of Multivariate Analysis (MVA).
This paper provides a uniform treatment of several methods: Principal Component
Analysis (PCA), Partial Least Squares (PLS), Canonical Correlation Analysis
(CCA) and Orthonormalized PLS (OPLS), as well as their non-linear extensions
derived by means of the theory of reproducing kernel Hilbert spaces. We also
review their connections to other methods for classification and statistical
dependence estimation, and introduce some recent developments to deal with the
extreme cases of large-scale and small-sample-size problems. To illustrate the wide
applicability of these methods in both classification and regression problems,
we analyze their performance in a benchmark of publicly available data sets,
and pay special attention to specific real applications involving audio
processing for music genre prediction and hyperspectral satellite images for
Earth and climate monitoring.
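Of the methods surveyed above, the kernel extensions all follow the same recipe: form a Gram matrix, center it in feature space, and solve an eigenproblem. A minimal sketch for kernel PCA with an RBF kernel, written from that general recipe rather than from the tutorial's own code, looks like this:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: center the Gram matrix in
    feature space, then project onto its top eigenvectors — the
    nonlinear analogue of PCA via the reproducing-kernel trick."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # double-centering in feature space
    w, V = np.linalg.eigh(Kc)            # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]
    # projections of the training points onto the principal directions
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

Swapping the eigenproblem (e.g., for a generalized eigenproblem coupling two views) yields kernel CCA or kernel OPLS from the same template.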
Generative discriminative models for multivariate inference and statistical mapping in medical imaging
This paper presents a general framework for obtaining interpretable
multivariate discriminative models that allow efficient statistical inference
for neuroimage analysis. The framework, termed generative discriminative
machine (GDM), augments discriminative models with a generative regularization
term. We demonstrate that the proposed formulation can be optimized in closed
form and in dual space, allowing efficient computation for high dimensional
neuroimaging datasets. Furthermore, we provide an analytic estimation of the
null distribution of the model parameters, which enables efficient statistical
inference and p-value computation without the need for permutation testing. We
compared the proposed method with both purely generative and discriminative
learning methods in two large structural magnetic resonance imaging (sMRI)
datasets of Alzheimer's disease (AD) (n=415) and Schizophrenia (n=853). Using
the AD dataset, we demonstrated the ability of GDM to robustly handle
confounding variations. Using Schizophrenia dataset, we demonstrated the
ability of GDM to handle multi-site studies. Taken together, the results
underline the potential of the proposed approach for neuroimaging analyses.Comment: To appear in MICCAI 2018 proceeding
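The core idea above, a discriminative objective augmented with a generative regularization term that still admits a closed-form solution, can be sketched as follows. The choice of a ridge-style loss and of the class-mean difference as the generative direction are assumptions made for illustration; the actual GDM formulation differs in its details.

```python
import numpy as np

def gdm_sketch(X, y, lam=1e-2, mu=1.0):
    """Sketch of a generative-discriminative machine for binary labels:
    minimize ||Xw - t||^2 + lam*||w||^2 + mu*||w - w_gen||^2, where
    w_gen is a generative direction (here, the class-mean difference).
    The problem stays quadratic, so it solves in closed form."""
    d = X.shape[1]
    w_gen = X[y == 1].mean(0) - X[y == 0].mean(0)  # generative direction
    t = 2.0 * y - 1.0                               # +/-1 regression targets
    A = X.T @ X + (lam + mu) * np.eye(d)
    return np.linalg.solve(A, X.T @ t + mu * w_gen)
```

The generative term pulls the discriminative weights toward an interpretable, model-based direction, which is the mechanism the abstract credits for robustness to confounds.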