9,545 research outputs found
Subspace-Based Holistic Registration for Low-Resolution Facial Images
Subspace-based holistic registration is introduced as an alternative to landmark-based face registration, which performs poorly on low-resolution images such as those obtained in camera surveillance applications. The proposed registration method finds the alignment by maximizing the similarity score between a probe and a gallery image. We use a novel probabilistic framework for both user-independent and user-specific face registration. The similarity is calculated using the probability that the face image is correctly aligned in a face subspace, but we additionally take into account the probability that the face is misaligned, based on the residual error in the dimensions perpendicular to the face subspace. We perform extensive experiments on the FRGCv2 database to evaluate the impact that the face registration methods have on face recognition. Subspace-based holistic registration on low-resolution images can improve face recognition in comparison with landmark-based registration on high-resolution images. The performance of the tested face recognition methods after subspace-based holistic registration on a low-resolution version of the FRGC database is similar to that after manual registration.
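The score driving the alignment search combines the fit inside a face subspace with the residual error perpendicular to it. A minimal sketch of that idea using a plain PCA-style subspace (not the paper's probabilistic model; all data and dimensions here are illustrative):

```python
import numpy as np

def subspace_alignment_score(face_vec, mean, basis):
    """Score a candidate alignment against a face subspace.

    A well-aligned face crop reconstructs well from the subspace, so the
    residual perpendicular to it is small. `basis` holds orthonormal
    eigenface-like vectors as columns (illustrative, not the paper's model).
    """
    centered = face_vec - mean
    coeffs = basis.T @ centered                  # coordinates in the subspace
    recon = basis @ coeffs                       # component inside the subspace
    residual = np.linalg.norm(centered - recon)  # distance from the subspace
    return -residual                             # higher score = better alignment

# Toy registration search: pick the candidate crop with the best score.
rng = np.random.default_rng(0)
mean = np.zeros(16)
basis, _ = np.linalg.qr(rng.normal(size=(16, 4)))   # random orthonormal subspace
candidates = [basis @ rng.normal(size=4),           # lies in the subspace
              rng.normal(size=16)]                  # generic off-subspace crop
best = max(range(len(candidates)),
           key=lambda i: subspace_alignment_score(candidates[i], mean, basis))
```

The in-subspace candidate wins because its perpendicular residual is (numerically) zero.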
MIRACLE-FI at ImageCLEFphoto 2008: Experiences in merging text-based and content-based retrievals
This paper describes the participation of the MIRACLE consortium in the ImageCLEF Photographic Retrieval task of ImageCLEF 2008. In this new participation of the group, our first purpose is to evaluate our own tools for text-based and content-based retrieval, using different similarity metrics and the aggregation OWA operator to fuse the three topic images. Building on MIRACLE's experience from last year, we implemented a new merging module that combines the text-based and content-based information in three different ways: FILTER-N, ENRICH and TEXT-FILTER. The first two approaches try to improve the text-based baseline results using the content-based result lists; the last one was used to select the images relevant to the content-based module. No clustering strategies were analyzed. In total, 41 runs were submitted: 1 for the text-based baseline, 10 content-based runs, and 30 mixed experiments merging text- and content-based results. Overall, the results can be considered nearly acceptable compared with the best results of other groups. The results obtained from text-based retrieval are better than the content-based ones. By merging textual and visual retrieval, we improve on the text-based baseline when applying the ENRICH merging algorithm, although the visual results are lower than the textual ones. Building on these results, we plan to improve the merged results with clustering methods applied to this image collection.
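The OWA (Ordered Weighted Averaging) operator used to fuse the three topic images attaches weights to rank positions rather than to particular inputs, so the fusion is symmetric in its arguments. A minimal sketch with made-up weights:

```python
def owa(scores, weights):
    """Ordered Weighted Averaging: sort the inputs, then take a weighted
    sum where weights belong to rank positions, not to specific inputs."""
    assert abs(sum(weights) - 1.0) < 1e-9 and len(scores) == len(weights)
    return sum(w * s for w, s in zip(weights, sorted(scores, reverse=True)))

# Fusing the similarity scores of three topic images for one collection image;
# the weights here are illustrative ("optimistic": the best match dominates).
fused = owa([0.9, 0.2, 0.4], [0.6, 0.3, 0.1])
```

Because only ranks matter, permuting the three scores leaves the fused value unchanged.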
Gait Recognition from Motion Capture Data
Gait recognition from motion capture data, as a pattern classification discipline, can be improved by the use of machine learning. This paper contributes to the state of the art with a statistical approach for extracting robust gait features directly from raw data by a modification of Linear Discriminant Analysis with the Maximum Margin Criterion. Experiments on the CMU MoCap database show that the suggested method outperforms thirteen relevant methods based on geometric features and a method that learns the features by a combination of Principal Component Analysis and Linear Discriminant Analysis. The methods are evaluated in terms of the distribution of biometric templates in their respective feature spaces, expressed in a number of class-separability coefficients and classification metrics. Results also indicate a high portability of the learned features; that is, we can learn which aspects of walking people generally differ in and extract those as general gait features. Recognizing people without needing group-specific features is convenient, as particular people might not always provide annotated learning data. As a contribution to reproducible research, our evaluation framework and database have been made publicly available. This research makes motion capture technology directly applicable to human recognition.
Comment: Preprint. Full paper accepted at the ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), special issue on Representation, Analysis and Recognition of 3D Humans. 18 pages. arXiv admin note: substantial text overlap with arXiv:1701.00995, arXiv:1609.04392, arXiv:1609.0693
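The Maximum Margin Criterion replaces classical LDA's ratio objective, which needs the inverse of the within-class scatter, with the difference tr(S_b − S_w), which needs no inversion. A minimal sketch of that criterion (not the paper's exact modification; data and dimensions are illustrative):

```python
import numpy as np

def mmc_features(X, y, n_dims):
    """Maximum Margin Criterion feature extraction: project onto the
    leading eigenvectors of (S_b - S_w), avoiding the inversion that
    classical LDA's S_w^{-1} S_b objective requires."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)   # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)     # within-class scatter
    vals, vecs = np.linalg.eigh(Sb - Sw)  # symmetric; ascending eigenvalues
    W = vecs[:, ::-1][:, :n_dims]         # keep the leading eigenvectors
    return X @ W

# Toy usage: two well-separated classes of 5-D samples reduced to 2 dims.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5)) + np.repeat([0, 3], 10)[:, None]
y = np.repeat([0, 1], 10)
Z = mmc_features(X, y, 2)
```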
Quality criteria benchmark for hyperspectral imagery
Hyperspectral data have attracted growing interest over the past few years. However, applications for hyperspectral data are still in their infancy, as handling the significant size of the data presents a challenge for the user community. Efficient compression techniques are required, and lossy compression, specifically, will have a role to play, provided its impact on remote sensing applications remains insignificant. To assess the data quality, suitable distortion measures relevant to end-user applications are required. Quality criteria are also of major interest for the conception and development of new sensors, to define their requirements and specifications. This paper proposes a method to evaluate quality criteria in the context of hyperspectral images. The purpose is to provide quality criteria relevant to the impact of degradations on several classification applications. Different quality criteria are considered. Some are traditionally used in image and video coding and are adapted here to hyperspectral images; others are specific to hyperspectral data. We also propose the adaptation of two advanced criteria in the presence of different simulated degradations on AVIRIS hyperspectral images. Finally, five criteria are selected to give an accurate representation of the nature and the level of the degradation affecting hyperspectral data.
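One classical criterion in this family is the spectral angle between original and degraded spectra, which is sensitive to changes in spectral shape rather than brightness. A minimal sketch (an illustrative example of a hyperspectral-specific criterion, not necessarily among the paper's five selected ones):

```python
import numpy as np

def mean_spectral_angle(ref, deg):
    """Mean spectral angle between a reference cube and a degraded one.

    Cubes are (pixels, bands); the angle per pixel is the arccos of the
    normalized dot product of the two spectra. 0 means identical shapes.
    """
    dots = np.sum(ref * deg, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(deg, axis=1)
    return float(np.mean(np.arccos(np.clip(dots / norms, -1.0, 1.0))))

# Toy usage on synthetic positive "reflectance" spectra.
rng = np.random.default_rng(2)
cube = rng.random((50, 8)) + 0.1
noisy = cube + 0.01 * rng.random((50, 8))
msa_same = mean_spectral_angle(cube, cube)
msa_noisy = mean_spectral_angle(cube, noisy)
```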
SPLODE: Semi-Probabilistic Point and Line Odometry with Depth Estimation from RGB-D Camera Motion
Active depth cameras suffer from several limitations, which cause incomplete and noisy depth maps and may consequently affect the performance of RGB-D odometry. To address this issue, this paper presents a visual odometry method based on point and line features that leverages both measurements from a depth sensor and depth estimates from camera motion. Depth estimates are generated continuously by a probabilistic depth estimation framework for both types of features, to compensate for the lack of depth measurements and inaccurate feature depth associations. The framework explicitly models the uncertainty of triangulating depth from both point and line observations in order to validate and obtain precise estimates. Furthermore, depth measurements are exploited by propagating them through a depth map registration module and by using a frame-to-frame motion estimation method that considers 3D-to-2D and 2D-to-3D reprojection errors independently. Results on RGB-D sequences captured in large indoor and outdoor scenes, where depth sensor limitations are critical, show that the combination of depth measurements and estimates through our approach is able to overcome the absence and inaccuracy of depth measurements.
Comment: IROS 201
European exchange traded funds trading with locally weighted support vector regression
In this paper, two different Locally Weighted Support Vector Regression (wSVR) algorithms are generated and applied to the task of forecasting and trading five European Exchange Traded Funds. The trading application covers the recent European Monetary Union debt crisis. The performance of the proposed models is benchmarked against traditional Support Vector Regression (SVR) models. The Radial Basis Function, the Wavelet and the Mahalanobis kernels are explored and tested as SVR kernels. Finally, a novel statistical SVR input selection procedure is introduced, based on principal component analysis and the Hansen, Lunde, and Nason (2011) model confidence test. The results demonstrate the superiority of the wSVR models over the traditional SVRs and of the ν-SVR over the ε-SVR algorithms. We note that the performance of all models varies and deteriorates considerably at the peak of the debt crisis. In terms of the kernels, our results do not confirm the belief that the Radial Basis Function is the optimal choice for financial series.
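The local weighting idea behind wSVR gives training samples near the query point more influence on the fit. A minimal sketch of that weighting scheme using plain weighted least squares in place of SVR (a simplified stand-in, not the paper's wSVR algorithms; the bandwidth and data are illustrative):

```python
import numpy as np

def locally_weighted_predict(X, y, x_query, bandwidth=1.0):
    """Locally weighted least squares: each training sample gets a
    Gaussian weight based on its distance to the query, and a linear
    model is fit to the weighted data for this query alone."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    Xb = np.hstack([X, np.ones((len(X), 1))])      # add an intercept column
    W = np.diag(w)
    theta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)  # weighted normal eqs.
    return np.append(x_query, 1.0) @ theta

# Toy usage: exactly linear data is recovered regardless of the weights.
X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
y = 3.0 * X.ravel()
pred = locally_weighted_predict(X, y, np.array([0.5]))
```

In the SVR setting the same distance-based weights would instead scale each sample's slack penalty.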
Parsimonious Mahalanobis Kernel for the Classification of High Dimensional Data
The classification of high-dimensional data with kernel methods is considered in this article. Exploiting the emptiness property of high-dimensional spaces, a kernel based on the Mahalanobis distance is proposed. The computation of the Mahalanobis distance requires the inversion of a covariance matrix. In high-dimensional spaces, the estimated covariance matrix is ill-conditioned and its inversion is unstable or impossible. Using a parsimonious statistical model, namely the High Dimensional Discriminant Analysis model, the specific signal and noise subspaces are estimated for each considered class, making the inverse of the class-specific covariance matrix explicit and stable, and leading to the definition of a parsimonious Mahalanobis kernel. An SVM-based framework is used for selecting the hyperparameters of the parsimonious Mahalanobis kernel by optimizing the so-called radius-margin bound. Experimental results on three high-dimensional data sets show that the proposed kernel is suitable for classifying high-dimensional data, providing better classification accuracies than the conventional Gaussian kernel.
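The Mahalanobis kernel has the Gaussian form exp(−(x−y)ᵀΣ⁻¹(x−y)), and the whole difficulty is stabilizing Σ⁻¹; the paper does this with the HDDA model. A minimal sketch that instead uses simple shrinkage regularization for stability (an illustrative stand-in, not the paper's parsimonious model):

```python
import numpy as np

def mahalanobis_kernel(X, Y, cov, shrinkage=0.1):
    """Mahalanobis RBF kernel with a shrinkage-regularized covariance.

    Blending the sample covariance with a scaled identity keeps the
    inverse well-conditioned even when the raw estimate is nearly
    singular (a simple alternative to the paper's HDDA-based inverse).
    """
    d = cov.shape[0]
    reg = (1 - shrinkage) * cov + shrinkage * (np.trace(cov) / d) * np.eye(d)
    P = np.linalg.inv(reg)
    K = np.empty((len(X), len(Y)))
    for i, x in enumerate(X):
        diff = Y - x
        K[i] = np.exp(-np.sum(diff @ P * diff, axis=1))  # quadratic forms
    return K

# Toy usage: a valid kernel matrix is symmetric with a unit diagonal.
rng = np.random.default_rng(3)
X = rng.normal(size=(6, 3))
K = mahalanobis_kernel(X, X, np.cov(X.T))
```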
Non-sparse Linear Representations for Visual Tracking with Online Reservoir Metric Learning
Most sparse linear representation-based trackers need to solve a
computationally expensive L1-regularized optimization problem. To address this
problem, we propose a visual tracker based on non-sparse linear
representations, which admit an efficient closed-form solution without
sacrificing accuracy. Moreover, in order to capture the correlation information
between different feature dimensions, we learn a Mahalanobis distance metric in
an online fashion and incorporate the learned metric into the optimization
problem for obtaining the linear representation. We show that online metric
learning using proximity comparison significantly improves the robustness of
the tracking, especially on those sequences exhibiting drastic appearance
changes. Furthermore, in order to prevent the unbounded growth in the number of
training samples for the metric learning, we design a time-weighted reservoir
sampling method to maintain and update limited-sized foreground and background
sample buffers for balancing sample diversity and adaptability. Experimental
results on challenging videos demonstrate the effectiveness and robustness of
the proposed tracker.
Comment: Appearing in IEEE Conf. Computer Vision and Pattern Recognition, 201
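Time-weighted reservoir sampling keeps a bounded sample buffer that favors recent frames while letting informative old samples persist. A minimal sketch using the Efraimidis-Spirakis weighted reservoir scheme (an assumed mechanism for illustration; the paper's exact sampler may differ):

```python
import heapq
import random

def weighted_reservoir(stream, k, seed=0):
    """Efraimidis-Spirakis A-Res: each (item, weight) pair draws the key
    u**(1/w) with u ~ Uniform(0, 1); keeping the k largest keys samples k
    items without replacement, with probability proportional to weight."""
    rng = random.Random(seed)
    heap = []  # min-heap of (key, item); the smallest key is evicted first
    for item, w in stream:
        key = rng.random() ** (1.0 / w)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

# Weights grow with the frame index, so the buffer is biased toward
# recent frames without ever exceeding its fixed size.
buf = weighted_reservoir(((i, 1.05 ** i) for i in range(200)), k=10)
```

Bounding the buffer this way caps the metric-learning cost per frame regardless of sequence length.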