CoMaL Tracking: Tracking Points at the Object Boundaries
Traditional point-tracking algorithms such as the KLT rely on local 2D
information aggregation for feature detection and tracking, which degrades
their performance at the boundaries that separate multiple objects.
Recently, CoMaL features were proposed to handle this case. However, the
original work used a simple tracking framework in which points are re-detected
in each frame and matched; this is inefficient and can lose points that are
not re-detected in the next frame. We propose a novel tracking algorithm to
accurately and efficiently track CoMaL points. To do so, the level line
segment associated with each CoMaL point is matched to MSER segments in the
next frame using shape-based matching, and the matches are further filtered
using
texture-based matching. Experiments show improvements over a simple
re-detect-and-match framework as well as KLT in terms of speed/accuracy on
different real-world applications, especially at the object boundaries.
Comment: 10 pages, 10 figures, to appear in 1st Joint BMTT-PETS Workshop on Tracking and Surveillance, CVPR 201
Analysis of feature detector and descriptor combinations with a localization experiment for various performance metrics
The purpose of this study is to provide a detailed performance comparison of
feature detector/descriptor methods, particularly when their various
combinations are used for image-matching. The localization experiments of a
mobile robot in an indoor environment are presented as a case study. In these
experiments, 3090 query images and 127 dataset images were used. This study
includes five methods for feature detectors (features from accelerated segment
test (FAST), oriented FAST and rotated binary robust independent elementary
features (BRIEF) (ORB), speeded-up robust features (SURF), scale invariant
feature transform (SIFT), and binary robust invariant scalable keypoints
(BRISK)) and five other methods for feature descriptors (BRIEF, BRISK, SIFT,
SURF, and ORB). These methods were used in 23 different combinations, yielding
meaningful and consistent comparison results under the performance criteria
defined in this study. Each method was applied independently, serving as
either the feature detector or the descriptor. The performance analysis shows
the discriminative power of various
combinations of detector and descriptor methods. The analysis is completed
using five parameters: (i) accuracy, (ii) time, (iii) angle difference between
keypoints, (iv) number of correct matches, and (v) distance between correctly
matched keypoints. In a range of 60°, covering five rotational pose points
for our system, the FAST-SURF combination had the lowest distance and angle
difference values and the highest number of matched keypoints. SIFT-SURF was
the most accurate combination with a 98.41% correct classification rate. The
fastest algorithm was ORB-BRIEF, with a total running time of 21,303.30 s to
match 560 images captured during motion with 127 dataset images.
Comment: 11 pages, 3 figures, 1 table
Comparison of affine-invariant local detectors and descriptors
In this paper we summarize recent progress on local photometric invariants. These invariants can be used to find correspondences in the presence of significant viewpoint changes. We evaluate the performance of region detectors and descriptors. We compare several methods for detecting affine regions [4, 9, 11, 18, 17], evaluating the repeatability of the detected regions, the accuracy of the detectors, and their invariance to geometric as well as photometric image transformations. Furthermore, we compare several local descriptors [3, 5, 8, 14, 19], which are evaluated in terms of two properties: robustness and distinctiveness. The evaluation is carried out for different image transformations and scene types. We observe that the ranking of the detectors and descriptors remains the same regardless of the scene type or image transformation.
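The repeatability measure used in such detector evaluations can be sketched as follows: map the first image's detections through the known homography relating the two views, and count those that land near a detection in the second image. This is a simplified point-distance version of the overlap-error criterion, with made-up coordinates for illustration:

```python
import numpy as np

def repeatability(pts1, pts2, H, tol=2.5):
    """Fraction of pts1 that, after warping by homography H,
    fall within `tol` pixels of some detection in pts2."""
    ones = np.ones((len(pts1), 1))
    proj = np.hstack([pts1, ones]) @ H.T   # homogeneous warp
    proj = proj[:, :2] / proj[:, 2:3]      # back to Cartesian
    # Pairwise distances between warped detections and pts2.
    d = np.linalg.norm(proj[:, None, :] - pts2[None, :, :], axis=2)
    return np.mean(d.min(axis=1) <= tol)

# Toy example: a pure 10-px horizontal shift; one region is "lost" in image 2.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
pts1 = np.array([[5.0, 5.0], [40.0, 20.0], [70.0, 60.0]])
pts2 = np.array([[15.0, 5.0], [50.0, 20.0]])
score = repeatability(pts1, pts2, H)   # -> 2/3
```

The full evaluation replaces the point-distance test with an ellipse overlap-error threshold on the affine regions, but the counting logic is the same.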