Accurate and Robust Scale Recovery for Monocular Visual Odometry Based on Plane Geometry
Scale ambiguity is a fundamental problem in monocular visual odometry.
Typical solutions include loop closure detection and environment information
mining. For applications like self-driving cars, loop closure is not always
available, hence mining prior knowledge from the environment becomes a more
promising approach. In this paper, with the assumption of a constant height of
the camera above the ground, we develop a light-weight scale recovery framework
leveraging an accurate and robust estimation of the ground plane. The framework
includes a ground point extraction algorithm for selecting high-quality points
on the ground plane, and a ground point aggregation algorithm for joining the
extracted ground points in a local sliding window. Based on the aggregated
data, the scale is finally recovered by solving a least-squares problem using a
RANSAC-based optimizer. Sufficient data and a robust optimizer enable a highly
accurate scale recovery. Experiments on the KITTI dataset show that the
proposed framework can achieve state-of-the-art accuracy in terms of
translation errors, while maintaining competitive performance on the rotation
error. Owing to its light-weight design, our framework also runs at a high
frequency of 20 Hz on the dataset.
Comment: Submitting to IEEE International Conference on Robotics and Automation 202
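The scale-recovery step described above (fit the ground plane to extracted ground points with a RANSAC-based least-squares optimizer, then ratio the known camera height against the estimated plane distance) can be sketched as follows. Function names, thresholds, and the synthetic setup are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_plane_ransac(points, iters=100, thresh=0.01, rng=None):
    """Fit a plane n.x + d = 0 (||n|| = 1) to 3D points with RANSAC,
    then refine it by least squares on the inlier set."""
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if best_inliers is None:
        raise ValueError("no valid plane hypothesis found")
    # Least-squares refinement: the normal is the singular vector of the
    # centred inlier cloud with the smallest singular value.
    pts = points[best_inliers]
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, -n @ centroid

def recover_scale(ground_points, real_height):
    """Scale factor mapping the up-to-scale estimated camera height onto
    the known metric height above the ground plane."""
    _, d = fit_plane_ransac(ground_points)
    return real_height / abs(d)   # |d| = camera-origin-to-plane distance
```

With synthetic ground points 2.0 (arbitrary) units below the camera and a true camera height of 1.7 m, the recovered scale is about 0.85.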
Scale-Adaptive Neural Dense Features: Learning via Hierarchical Context Aggregation
How do computers and intelligent agents view the world around them? Feature
extraction and representation constitute one of the basic building blocks towards
answering this question. Traditionally, this has been done with carefully
engineered hand-crafted techniques such as HOG, SIFT or ORB. However, there is
no "one size fits all" approach that satisfies all requirements. In recent
years, the rising popularity of deep learning has resulted in a myriad of
end-to-end solutions to many computer vision problems. These approaches, while
successful, tend to lack scalability and cannot easily exploit information
learned by other systems. Instead, we propose SAND features, a dedicated deep
learning solution to feature extraction capable of providing hierarchical
context information. This is achieved by employing sparse relative labels
indicating relationships of similarity/dissimilarity between image locations.
The nature of these labels results in an almost infinite set of dissimilar
examples to choose from. We demonstrate how the selection of negative examples
during training can be used to modify the feature space and vary its
properties. To demonstrate the generality of this approach, we apply the
proposed features to a multitude of tasks, each requiring different properties.
This includes disparity estimation, semantic segmentation, self-localisation
and SLAM. In all cases, we show how incorporating SAND features results in
better or comparable results to the baseline, whilst requiring little to no
additional training. Code can be found at:
https://github.com/jspenmar/SAND_features
Comment: CVPR201
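The role that negative-example selection plays in shaping the feature space can be illustrated with a minimal numpy sketch of a contrastive objective over sparse similar/dissimilar pairs. This is a generic formulation under stated assumptions, not the SAND training code:

```python
import numpy as np

def contrastive_loss(feats, pos_pairs, neg_pairs, margin=1.0):
    """Hinge-style contrastive loss over per-location descriptors.

    feats: (N, D) descriptors at sampled image locations.
    pos_pairs / neg_pairs: (M, 2) index pairs labelled similar / dissimilar.
    Similar pairs are pulled together; dissimilar pairs are pushed apart
    until they are at least `margin` apart.
    """
    d_pos = np.linalg.norm(feats[pos_pairs[:, 0]] - feats[pos_pairs[:, 1]], axis=1)
    d_neg = np.linalg.norm(feats[neg_pairs[:, 0]] - feats[neg_pairs[:, 1]], axis=1)
    return d_pos.mean() + np.maximum(0.0, margin - d_neg).mean()

def sample_negatives(n_locations, pos_pairs, n_neg, rng=None):
    """Draw dissimilar pairs from the (almost unlimited) pool of location
    pairs not labelled similar. Biasing this sampler, e.g. toward nearby
    pixels, is what varies the properties of the learned feature space."""
    rng = np.random.default_rng(rng)
    pos_set = {tuple(sorted(p)) for p in pos_pairs.tolist()}
    out = []
    while len(out) < n_neg:
        i, j = rng.integers(0, n_locations, 2)
        if i != j and tuple(sorted((i, j))) not in pos_set:
            out.append((i, j))
    return np.array(out)
```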
Deep Adaptive Feature Embedding with Local Sample Distributions for Person Re-identification
Person re-identification (re-id) aims to match pedestrians observed by
disjoint camera views. It attracts increasing attention in computer vision due
to its importance to surveillance systems. To combat the major challenge of
cross-view visual variations, deep embedding approaches have been proposed to
learn a compact feature space from images such that Euclidean distances
correspond to cross-view similarity. However, the global Euclidean
distance cannot faithfully characterize the ideal similarity in a complex
visual feature space because features of pedestrian images exhibit unknown
distributions due to large variations in poses, illumination and occlusion.
Moreover, intra-personal training samples within a local range can robustly
guide deep embedding against uncontrolled variations, which, however, cannot be
captured by a global Euclidean distance. In this paper, we study the problem of
person re-id by proposing a novel sampling strategy to mine suitable positives
(i.e. intra-class) within a local range to improve the deep embedding in the
context of large intra-class variations. Our method is capable of learning a
deep similarity metric adaptive to local sample structure by minimizing each
sample's local distances while propagating through the relationship between
samples to attain the whole intra-class minimization. To this end, a novel
objective function is proposed to jointly optimize similarity metric learning,
local positive mining and robust deep embedding. This yields local
discriminations by selecting local-ranged positive samples, and the learned
features are robust to dramatic intra-class variations. Experiments on
benchmarks show state-of-the-art results achieved by our method.
Comment: Published in Pattern Recognition
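The local positive mining idea, restricting positives to a neighbourhood of the anchor rather than the whole identity class, can be sketched as below. The function name and the k-nearest-neighbour selection rule are illustrative assumptions, not the paper's exact objective:

```python
import numpy as np

def mine_local_positives(feats, labels, anchor, k=5):
    """For one anchor, keep only the k nearest same-identity samples.

    Restricting positives to a local range avoids forcing the embedding to
    collapse drastically different views of the same person (pose, lighting,
    occlusion) onto a single point.
    """
    dists = np.linalg.norm(feats - feats[anchor], axis=1)
    same = np.where((labels == labels[anchor]) &
                    (np.arange(len(labels)) != anchor))[0]
    order = same[np.argsort(dists[same])]   # same-identity indices, nearest first
    return order[:k]
```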
Robust audio indexing for Dutch spoken-word collections
Whereas the growth of storage capacity is in accordance with widely acknowledged predictions, the possibilities to index and access the archives created are lagging behind. This is especially the case in the oral-history domain, and much of the rich content in these collections risks remaining inaccessible for lack of robust search technologies. This paper addresses the history and development of robust audio indexing technology for searching Dutch spoken-word collections, and compares Dutch audio indexing in the well-studied broadcast-news domain with an oral-history case study. It is concluded that, despite significant advances in Dutch audio indexing technology and demonstrated applicability in several domains, further research is indispensable for the successful automatic disclosure of spoken-word collections.
Deep Boosting: Layered Feature Mining for General Image Classification
Constructing effective representations is a critical but challenging problem
in multimedia understanding. Traditional handcrafted features often rely on
domain knowledge, limiting the performance of existing methods. This paper
discusses a novel computational architecture for general image feature mining,
which assembles the primitive filters (i.e. Gabor wavelets) into compositional
features in a layer-wise manner. In each layer, we produce a number of base
classifiers (i.e. regression stumps) associated with the generated features,
and discover informative compositions by using the boosting algorithm. The
output compositional features of each layer are treated as the base components
to build up the next layer. Our framework is able to generate expressive image
representations while inducing highly discriminative functions for image
classification. The experiments are conducted on several public datasets, and
we demonstrate superior performance over state-of-the-art approaches.
Comment: 6 pages, 4 figures, ICME 201
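The per-layer selection of informative compositions can be sketched with a plain discrete-AdaBoost loop over regression stumps applied to filter responses. The stump parametrisation and round count here are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def fit_stump(responses, y, w):
    """Pick the (feature, threshold, polarity) stump with lowest weighted
    error. responses: (n_samples, n_features) filter outputs; y in {-1,+1}."""
    best = (np.inf, None)
    for f in range(responses.shape[1]):
        for t in np.unique(responses[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (responses[:, f] - t) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, (f, t, pol))
    return best

def boost(responses, y, rounds=10):
    """Discrete AdaBoost: each round reweights the samples and selects the
    most informative base classifier on the current feature responses."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        err, (f, t, pol) = fit_stump(responses, y, w)
        err = max(err, 1e-12)                     # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (responses[:, f] - t) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)            # upweight mistakes
        w /= w.sum()
        ensemble.append((alpha, (f, t, pol)))
    return ensemble

def predict(ensemble, responses):
    score = np.zeros(len(responses))
    for alpha, (f, t, pol) in ensemble:
        score += alpha * np.where(pol * (responses[:, f] - t) > 0, 1, -1)
    return np.sign(score)
```

In the layered setting of the abstract, the responses of one layer's selected compositions would serve as the inputs to the next layer's stumps.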
Robust Classification for Imprecise Environments
In real-world environments it usually is difficult to specify target
operating conditions precisely, for example, target misclassification costs.
This uncertainty makes building robust classification systems problematic. We
show that it is possible to build a hybrid classifier that will perform at
least as well as the best available classifier for any target conditions. In
some cases, the performance of the hybrid actually can surpass that of the best
known classifier. This robust performance extends across a wide variety of
comparison frameworks, including the optimization of metrics such as accuracy,
expected cost, lift, precision, recall, and workforce utilization. The hybrid
also is efficient to build, to store, and to update. The hybrid is based on a
method for the comparison of classifier performance that is robust to imprecise
class distributions and misclassification costs. The ROC convex hull (ROCCH)
method combines techniques from ROC analysis, decision analysis and
computational geometry, and adapts them to the particulars of analyzing learned
classifiers. The method is efficient and incremental, minimizes the management
of classifier performance data, and allows for clear visual comparisons and
sensitivity analyses. Finally, we point to empirical evidence that a robust
hybrid classifier indeed is needed for many real-world problems.
Comment: 24 pages, 12 figures. To be published in Machine Learning Journal.
For related papers, see http://www.hpl.hp.com/personal/Tom_Fawcett/ROCCH
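The ROCCH construction itself reduces to an upper convex hull over the classifiers' (FPR, TPR) operating points, anchored at the trivial classifiers (0, 0) and (1, 1). A minimal monotone-chain sketch (illustrative, not the authors' code):

```python
def roc_convex_hull(points):
    """Upper convex hull of (FPR, TPR) classifier operating points.

    Any classifier strictly below the hull is suboptimal for every class
    distribution and misclassification-cost setting, so only hull members
    need to be kept in the hybrid.
    """
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for p in pts:
        # Drop previous points that would make the chain turn upward
        # (i.e. points lying on or below the new hull segment).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull
```

For example, of the operating points (0.1, 0.5), (0.3, 0.6), (0.4, 0.8) and (0.5, 0.55), only the first and third survive on the hull.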