Visual Feature Learning
Categorization is a fundamental problem in many computer vision applications, e.g., image
classification, pedestrian detection, and face recognition. The robustness of a categorization
system relies heavily on the quality of the features by which data are represented. Prior
work on feature extraction can be organized into levels which, in bottom-up order,
are low-level features (e.g., pixels and gradients) and middle/high-level features (e.g., the
BoW model and sparse coding). Low-level features can be extracted directly from images
or videos, while middle/high-level features are constructed on top of low-level features and are
designed to enhance the capability of categorization systems under different considerations
(e.g., guaranteeing domain invariance or improving discriminative power).
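The low-level/mid-level distinction can be made concrete with the BoW model mentioned above: local low-level descriptors are quantized against a learned visual vocabulary, and the image is represented by a histogram of vocabulary assignments. A minimal sketch, assuming SIFT-like descriptors are stood in for by random vectors (all names here are illustrative, not from the thesis):

```python
import numpy as np

def kmeans(descriptors, k, iters=10, seed=0):
    """Plain k-means to learn a visual vocabulary (illustrative, not optimized)."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        dist = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Quantize low-level descriptors against the vocabulary and count votes."""
    dist = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()  # L1-normalized mid-level BoW feature

# toy data: 200 random 8-D "descriptors" standing in for real local features
rng = np.random.default_rng(1)
desc = rng.normal(size=(200, 8))
vocab = kmeans(desc, k=16)
feat = bow_histogram(desc, vocab)
print(feat.shape)  # (16,)
```

The resulting fixed-length histogram is what a classifier consumes, regardless of how many local descriptors the image produced.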
This thesis focuses on visual feature learning. The remaining challenges in designing
visual features lie in intra-class variation, occlusions, illumination and viewpoint
changes, and insufficient prior knowledge. To address these challenges, I present several
visual feature learning methods covering the following sub-topics: (i)
I start by introducing a segmentation-based object recognition system. (ii) When training
data are insufficient, I seek data from other resources, including images or videos in a
different domain, actions captured from a different viewpoint, and information in a different
media form. To transfer such resources appropriately into the target categorization
system, four transfer learning-based feature learning methods are presented,
where cross-view, cross-domain, and cross-modality scenarios are each addressed.
(iii) Finally, I present a random forest-based feature fusion method for multi-view
action recognition.
Accurate Long-Term Multiple People Tracking Using Video and Body-Worn IMUs
Most modern approaches to video-based multiple people tracking rely on human appearance to exploit similarities between person detections. Consequently, tracking accuracy degrades if this kind of information is not discriminative or if people change apparel. In contrast, we present a method to fuse video information with additional motion signals from body-worn inertial measurement units (IMUs). In particular, we propose a neural network to relate person detections to IMU orientations, and formulate a graph labeling problem to obtain a tracking solution that is globally consistent with the video and inertial recordings. The fusion of visual and inertial cues provides several advantages. The association of detection boxes in the video with IMU devices is based on motion, which is independent of a person's outward appearance. Furthermore, inertial sensors provide motion information irrespective of visual occlusions. Hence, once detections in the video are associated with an IMU device, intermediate positions can be reconstructed from the corresponding inertial sensor data, which would be unstable using video only. Since no dataset exists for this new setting, we release a dataset of challenging tracking sequences containing video and IMU recordings together with ground-truth annotations. We evaluate our approach on our new dataset, achieving an average IDF1 score of 91.2%. The proposed method is applicable to any situation that allows one to equip people with inertial sensors.
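The IDF1 score reported above measures how consistently predicted identities match ground-truth identities: trajectories are matched one-to-one so as to maximize identity true positives (IDTP), and IDF1 = 2·IDTP / (2·IDTP + IDFP + IDFN). A simplified sketch, assuming each track is a dict mapping frame to a detection key (the real metric matches detections by bounding-box overlap; the function and data here are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def idf1(gt_tracks, pred_tracks):
    """Simplified IDF1: a frame counts as an ID true positive when the
    matched ground-truth/predicted tracks report the same detection key."""
    gt_ids, pr_ids = list(gt_tracks), list(pred_tracks)
    overlap = np.zeros((len(gt_ids), len(pr_ids)))
    for i, g in enumerate(gt_ids):
        for j, p in enumerate(pr_ids):
            overlap[i, j] = sum(
                1 for f, det in gt_tracks[g].items()
                if pred_tracks[p].get(f) == det)
    # one-to-one track matching that maximizes total overlap (IDTP)
    rows, cols = linear_sum_assignment(-overlap)
    idtp = overlap[rows, cols].sum()
    n_gt = sum(len(t) for t in gt_tracks.values())
    n_pr = sum(len(t) for t in pred_tracks.values())
    idfn, idfp = n_gt - idtp, n_pr - idtp
    return 2 * idtp / (2 * idtp + idfp + idfn)

# toy example: one identity switch halfway through a 10-frame track
gt = {"A": {f: f for f in range(10)}}
pred = {"x": {f: f for f in range(5)}, "y": {f: f for f in range(5, 10)}}
print(round(idf1(gt, pred), 3))  # 0.5
```

Because a ground-truth track may be matched to only one predicted track, a single identity switch halves the credit here, which is why IDF1 is sensitive to the long-term consistency the IMU fusion is designed to preserve.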