Visual Object Tracking: The Initialisation Problem
Model initialisation is an important component of object tracking. Tracking
algorithms are generally provided with the first frame of a sequence and a
bounding box (BB) indicating the location of the object. This BB may contain a
large number of background pixels in addition to the object and can lead to
parts-based tracking algorithms initialising their object models in background
regions of the BB. In this paper, we tackle this as a missing labels problem,
marking pixels sufficiently away from the BB as belonging to the background and
learning the labels of the unknown pixels. Three techniques, One-Class SVM
(OC-SVM), Sampled-Based Background Model (SBBM) (a novel background model based
on pixel samples), and Learning Based Digital Matting (LBDM), are adapted to
the problem. These are evaluated with leave-one-video-out cross-validation on
the VOT2016 tracking benchmark. Our evaluation shows both OC-SVMs and SBBM are
capable of providing a good level of segmentation accuracy but are too
parameter-dependent to be used in real-world scenarios. We show that LBDM
achieves significantly increased performance with parameters selected by cross
validation and we show that it is robust to parameter variation.
Comment: 15th Conference on Computer and Robot Vision (CRV 2018). Source code
available at https://github.com/georgedeath/initialisation-proble
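The abstract above does not spell out how SBBM classifies the unknown pixels inside the bounding box, so the following is only a minimal sketch of a sample-based classification step in the spirit of that idea: background pixel colours sampled from outside the BB form the model, and an unknown pixel is labelled background when enough samples lie within a colour-distance radius. The function name, the distance measure, and both thresholds (`radius`, `min_matches`) are our own illustrative choices, not taken from the paper.

```python
def classify_pixels(bg_samples, unknown_pixels, radius=20.0, min_matches=2):
    """Label each unknown pixel as background (0) or object (1).

    bg_samples:     list of RGB triples sampled sufficiently far from the BB.
    unknown_pixels: list of RGB triples inside the BB whose label is unknown.
    A pixel 'matches' a background sample when the Euclidean distance
    between the two RGB triples is below `radius`; a pixel with fewer than
    `min_matches` matching samples is treated as part of the object.
    """
    labels = []
    for px in unknown_pixels:
        matches = 0
        for s in bg_samples:
            dist = sum((a - b) ** 2 for a, b in zip(px, s)) ** 0.5
            if dist < radius:
                matches += 1
                if matches >= min_matches:
                    break
        labels.append(0 if matches >= min_matches else 1)
    return labels
```

In this toy form the model is a flat list of colour samples; a per-region or per-location sample set, and the cross-validated parameter selection the abstract describes, would be needed for the reported accuracy.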
Egocentric Hand Detection Via Dynamic Region Growing
Egocentric videos, which mainly record the activities carried out by the
users of wearable cameras, have drawn much research attention in recent
years. Due to their lengthy content, a large number of ego-related applications
have been developed to summarize the captured videos. As users are
accustomed to interacting with target objects using their own hands, and
their hands usually appear within their visual field during the interaction,
an egocentric hand detection step is involved in tasks like gesture
recognition, action recognition and social interaction understanding. In this
work, we propose a dynamic region growing approach for hand region detection in
egocentric videos, by jointly considering hand-related motion and egocentric
cues. We first determine seed regions that most likely belong to the hand, by
analyzing the motion patterns across successive frames. The hand regions can
then be located by extending from the seed regions, according to the scores
computed for the adjacent superpixels. These scores are derived from four
egocentric cues: contrast, location, position consistency and appearance
continuity. We discuss how to apply the proposed method in real-life scenarios,
where multiple hands irregularly appear and disappear from the videos.
Experimental results on public datasets show that the proposed method achieves
superior performance compared with the state-of-the-art methods, especially in
complicated scenarios.
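The growing step described above, extending from seed regions to adjacent superpixels according to their scores, can be sketched as a greedy traversal of the superpixel adjacency graph. This is an assumption-laden illustration: the abstract does not give the growing rule, so the function name, the single combined score per superpixel, and the fixed threshold are all our own simplifications of the four-cue scoring the paper actually uses.

```python
from collections import deque

def grow_hand_region(adjacency, scores, seeds, threshold=0.5):
    """Greedy region growing over a superpixel adjacency graph.

    adjacency: dict mapping a superpixel id to the ids of its neighbours.
    scores:    dict mapping a superpixel id to a combined cue score in [0, 1]
               (standing in for the contrast/location/position-consistency/
               appearance-continuity cues named in the abstract).
    seeds:     superpixels already judged to belong to the hand.
    Neighbouring superpixels whose score exceeds `threshold` are absorbed,
    and the frontier expands until no qualifying neighbour remains.
    """
    region = set(seeds)
    frontier = deque(seeds)
    while frontier:
        sp = frontier.popleft()
        for nb in adjacency.get(sp, ()):
            if nb not in region and scores.get(nb, 0.0) > threshold:
                region.add(nb)
                frontier.append(nb)
    return region
```

A low-scoring superpixel blocks growth through itself but not around it, since every absorbed superpixel re-exposes its own neighbours; handling hands that appear and disappear, as in the real-life scenarios the paper discusses, would additionally require re-seeding per frame.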