Vision systems with the human in the loop
The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment, and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories, which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically based usability experiments is stressed.
Online Domain Adaptation for Multi-Object Tracking
Automatically detecting, labeling, and tracking objects in videos depends first and foremost on accurate category-level object detectors. These might, however, not always be available in practice, as acquiring high-quality, large-scale labeled training datasets is either too costly or impractical for all possible real-world application scenarios. A scalable solution consists in re-using object detectors pre-trained on generic datasets. This work is the first to investigate the problem of on-line domain adaptation of object detectors for causal multi-object tracking (MOT). We propose to alleviate the dataset bias by adapting detectors from category to instances, and back: (i) we jointly learn all target models by adapting them from the pre-trained one, and (ii) we also adapt the pre-trained model on-line. We introduce an on-line multi-task learning algorithm to efficiently share parameters and reduce drift, while gradually improving recall. Our approach is applicable to any linear object detector, and we evaluate both cheap "mini-Fisher Vectors" and expensive "off-the-shelf" ConvNet features. We quantitatively measure the benefit of our domain adaptation strategy on the KITTI tracking benchmark and on a new dataset (PASCAL-to-KITTI) we introduce to study the domain mismatch problem in MOT.
Comment: To appear at BMVC 201
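The category-to-instance adaptation idea lends itself to a compact illustration. What follows is a minimal, hypothetical Python sketch (not the authors' code) of online multi-task learning over linear detectors: each tracked instance gets its own weight vector initialized from a pre-trained category model, both are updated online, and a coupling term ties instance models back to the category model to share parameters and limit drift. The class and parameter names (OnlineMultiTaskDetector, lr, coupling) and the hinge-loss update are illustrative assumptions.

import numpy as np

# Illustrative sketch of online multi-task adaptation for linear detectors.
class OnlineMultiTaskDetector:
    def __init__(self, dim, lr=0.01, coupling=0.1):
        self.w_cat = np.zeros(dim)   # shared, pre-trained category model
        self.targets = {}            # one weight vector per tracked instance
        self.lr = lr                 # SGD step size (assumed value)
        self.coupling = coupling     # instance/category tying strength (assumed)

    def add_target(self, tid):
        # A new instance model starts from the shared category model.
        self.targets[tid] = self.w_cat.copy()

    def update(self, tid, x, y):
        # One online step on feature vector x with label y in {-1, +1}.
        w = self.targets[tid]
        if y * w.dot(x) < 1.0:       # hinge loss is active: take a gradient step
            w += self.lr * y * x
        # Multi-task coupling: pull the instance model toward the category
        # model, and nudge the category model toward its instances, which
        # shares parameters across tasks and limits drift.
        w -= self.lr * self.coupling * (w - self.w_cat)
        self.w_cat += self.lr * self.coupling * (w - self.w_cat) / len(self.targets)

    def score(self, tid, x):
        return self.targets[tid].dot(x)

det = OnlineMultiTaskDetector(dim=128)
det.add_target(0)
det.update(0, np.random.randn(128), +1)  # positive sample from a tracked box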
Feature Mapping for Learning Fast and Accurate 3D Pose Inference from Synthetic Images
We propose a simple and efficient method for exploiting synthetic images when training a Deep Network to predict a 3D pose from an image. The ability to use synthetic images for training a Deep Network is extremely valuable, as it is easy to create a virtually infinite training set made of such images, while capturing and annotating real images can be very cumbersome. However, synthetic images do not resemble real images exactly, and using them for training can result in suboptimal performance. It was recently shown that for exemplar-based approaches, it is possible to learn a mapping from the exemplar representations of real images to the exemplar representations of synthetic images. In this paper, we show that this approach is more general, and that a network can also be applied after the mapping to infer a 3D pose: at run time, given a real image of the target object, we first compute the features for the image, map them to the feature space of synthetic images, and finally use the resulting features as input to another network which predicts the 3D pose. Since this network can be trained very effectively by using synthetic images, it performs very well in practice, and inference is faster and more accurate than with an exemplar-based approach. We demonstrate our approach on the LINEMOD dataset for 3D object pose estimation from color images, and on the NYU dataset for 3D hand pose estimation from depth maps. We show that it allows us to outperform the state-of-the-art on both datasets.
Comment: CVPR 201
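To make the run-time pipeline concrete, here is a minimal, hypothetical PyTorch sketch of the described inference path: features of a real image are mapped into the feature space of synthetic images, and a pose network trained on synthetic data consumes the mapped features. The layer sizes, input shape, and names (feature_net, mapping_net, pose_net) are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

feat_dim, pose_dim = 256, 6             # assumed feature and pose dimensions

feature_net = nn.Sequential(            # stand-in feature extractor
    nn.Flatten(), nn.Linear(64 * 64, feat_dim), nn.ReLU())
mapping_net = nn.Sequential(            # maps real features to the synthetic feature space
    nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
pose_net = nn.Sequential(               # pose regressor trained on synthetic features
    nn.Linear(feat_dim, pose_dim))

def infer_pose(real_image):
    # Run-time path: real features -> mapped ("synthetic") features -> 3D pose.
    f_real = feature_net(real_image)
    f_mapped = mapping_net(f_real)      # bridges the real/synthetic domain gap
    return pose_net(f_mapped)

# Training (not shown here): mapping_net would fit pairs of real/synthetic
# features of the same object and pose; pose_net would fit (feature, pose)
# pairs drawn from the virtually infinite synthetic training set.
pose = infer_pose(torch.randn(1, 1, 64, 64))  # dummy 64x64 image for illustration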