Training Object Class Detectors from Eye Tracking Data
Abstract. Training an object class detector typically requires a large set of images annotated with bounding-boxes, which is expensive and time consuming to create. We propose a novel approach to annotate object locations which can substantially reduce annotation time. We first track the eye movements of annotators instructed to find the object and then propose a technique for deriving object bounding-boxes from these fixations. To validate our idea, we collected eye tracking data for the trainval part of 10 object classes of Pascal VOC 2012 (6,270 images, 5 observers). Our technique correctly produces bounding-boxes in 50% of the images, while reducing the total annotation time by a factor of 6.8× compared to drawing bounding-boxes. Any standard object class detector can be trained on the bounding-boxes predicted by our model. Our large-scale eye tracking dataset is available at groups.inf.ed.ac.uk/calvin/eyetrackdataset/.
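As a rough illustration of how a box might be derived from fixations, the sketch below simply fits a box around the core cluster of fixation points. The function name, the percentile filter and the margin are assumptions made for illustration; the paper's actual technique also uses appearance cues and learned parameters.

```python
import numpy as np

def box_from_fixations(fixations, img_w, img_h, margin=0.1):
    """Illustrative baseline: fit a box around the core cluster of fixations.

    fixations: iterable of (x, y) fixation coordinates in pixels.
    Returns (x_min, y_min, x_max, y_max) clipped to the image bounds.
    """
    pts = np.asarray(fixations, dtype=float)
    # Keep fixations near the median position to suppress stray glances
    # at background context.
    dist = np.linalg.norm(pts - np.median(pts, axis=0), axis=1)
    core = pts[dist <= np.percentile(dist, 75)]
    x_min, y_min = core.min(axis=0)
    x_max, y_max = core.max(axis=0)
    # Expand by a margin: fixations tend to land inside the object,
    # so the raw envelope usually undershoots the true box.
    w, h = x_max - x_min, y_max - y_min
    x_min, x_max = x_min - margin * w, x_max + margin * w
    y_min, y_max = y_min - margin * h, y_max + margin * h
    return (max(0.0, x_min), max(0.0, y_min),
            min(float(img_w), x_max), min(float(img_h), y_max))
```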
Efficient human annotation schemes for training object class detectors
A central task in computer vision is detecting object classes such as cars and horses
in complex scenes. Training an object class detector typically requires a large set of
images labeled with tight bounding boxes around every object instance. Obtaining
such data requires human annotation, which is very expensive and time consuming.
Alternatively, researchers have tried to train models in a weakly supervised setting (i.e.,
given only image-level labels), which is much cheaper but leads to weaker detectors.
In this thesis, we propose new and efficient human annotation schemes for training
object class detectors that bypass the need for drawing bounding boxes and reduce the
annotation cost while still obtaining high quality object detectors.
First, we propose to train object class detectors from eye tracking data. Instead
of drawing tight bounding boxes, the annotators only need to look at the image and
find the target object. We track the eye movements of annotators while they perform
this visual search task and we propose a technique for deriving object bounding boxes
from these eye fixations. To validate our idea, we augment an existing object detection
dataset with eye tracking data.
Second, we propose a scheme for training object class detectors which only requires
annotators to verify bounding boxes produced automatically by the learning
algorithm. Our scheme introduces human verification as a new step into a standard
weakly supervised framework which typically iterates between re-training object detectors
and re-localizing objects in the training images. We use the verification signal
to improve both re-training and re-localization.
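A minimal sketch of such a loop is given below, assuming placeholder detector.localize, detector.retrain and annotator.verify interfaces; these names and the loop structure are illustrative, not the exact procedure from the thesis.

```python
def train_with_verification(image_ids, detector, annotator, max_iters=5):
    """Sketch of a weakly supervised loop with a human verification step.

    detector.localize, detector.retrain and annotator.verify are placeholder
    interfaces for illustration, not the thesis's actual API.
    """
    accepted = {}  # image id -> box that a human has positively verified
    for _ in range(max_iters):
        for img_id in image_ids:
            if img_id in accepted:
                continue                        # already verified
            box = detector.localize(img_id)     # re-localization step
            if annotator.verify(img_id, box):   # human verification signal
                accepted[img_id] = box          # treat as pseudo ground truth
        detector.retrain(accepted)              # re-training step
    return detector, accepted
```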
Third, we propose another scheme where annotators are asked to click on the center
of an imaginary bounding box, which tightly encloses the object. We then incorporate
these clicks into a weakly supervised object localization technique, to jointly localize
object bounding boxes over all training images. Both our center-clicking and human
verification schemes deliver detectors performing almost as well as those trained in a
fully supervised setting.
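One simple way to exploit such a center click is to re-score a weakly supervised localizer's candidate boxes by how close their centers fall to the click, as in the sketch below. The Gaussian tolerance sigma and the multiplicative re-scoring are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def rescore_with_center_click(boxes, scores, click, sigma=30.0):
    """Illustrative re-scoring of candidate boxes with one center click.

    boxes: (N, 4) array of (x_min, y_min, x_max, y_max) proposals.
    scores: (N,) appearance scores from a weakly supervised localizer.
    click: (x, y) annotator click on the imagined box center.
    sigma: tolerance in pixels; an assumed value, not from the thesis.
    """
    boxes = np.asarray(boxes, dtype=float)
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2.0,
                        (boxes[:, 1] + boxes[:, 3]) / 2.0], axis=1)
    dist = np.linalg.norm(centers - np.asarray(click, dtype=float), axis=1)
    # Down-weight boxes whose center lies far from the clicked point.
    return np.asarray(scores, dtype=float) * np.exp(-dist**2 / (2 * sigma**2))
```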
Finally, we propose extreme clicking. We ask the annotator to click on four physical
points on the object: the top, bottom, left- and right-most points. This task is more
natural than the traditional way of drawing boxes and these points are easy to find. Our
experiments show that annotating objects with extreme clicking is 5× faster than the
traditional way of drawing boxes and it leads to boxes of the same quality as the original
ground truth drawn the traditional way. Moreover, we use the resulting extreme
points to obtain more accurate segmentations than those derived from bounding boxes.
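Turning the four extreme clicks into a box is straightforward, since the left- and right-most points give the horizontal extent and the top and bottom points give the vertical extent. The helper below is a minimal sketch of that conversion, assuming image coordinates with the y axis pointing down.

```python
def box_from_extreme_clicks(top, bottom, left, right):
    """Convert four extreme clicks into a bounding box.

    Each argument is an (x, y) point on the object: its top-most,
    bottom-most, left-most and right-most physical point, with the
    y axis pointing down as in image coordinates.
    """
    x_min, x_max = left[0], right[0]
    y_min, y_max = top[1], bottom[1]
    return (x_min, y_min, x_max, y_max)
```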
GazeDPM: Early Integration of Gaze Information in Deformable Part Models
An increasing number of works explore collaborative human-computer systems in
which human gaze is used to enhance computer vision systems. For object
detection, these efforts have so far been restricted to late integration approaches
that have inherent limitations, such as increased precision without an increase in
recall. We propose an early integration approach in a deformable part model,
which constitutes a joint formulation over gaze and visual data. We show that
our GazeDPM method improves over the state-of-the-art DPM baseline by 4% and a
recent method for gaze-supported object detection by 3% on the public POET
dataset. Our approach additionally provides introspection of the learnt models,
can reveal salient image structures, and allows us to investigate the interplay
between gaze attracting and repelling areas, the importance of view-specific
models, as well as viewers' personal biases in gaze patterns. We finally study
important practical aspects of our approach, such as the impact of using
saliency maps instead of real fixations, the impact of the number of fixations,
as well as robustness to gaze estimation error.
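A common way to feed fixations to a vision model is to rasterize them into a smooth density map that can be scored jointly with the visual features. The sketch below shows only such a gaze channel; the smoothing bandwidth and normalization are assumptions for illustration and this is not the GazeDPM formulation itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations, img_h, img_w, sigma=20.0):
    """Rasterize fixations into a smooth gaze density map (illustrative).

    An early-integration model can append such a map to its visual feature
    channels so gaze and appearance are scored jointly; this sketch covers
    only the gaze channel, not the part-based model itself.
    """
    density = np.zeros((img_h, img_w), dtype=float)
    for x, y in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < img_w and 0 <= yi < img_h:
            density[yi, xi] += 1.0
    density = gaussian_filter(density, sigma=sigma)  # spatial smoothing
    if density.max() > 0:
        density /= density.max()  # normalize to [0, 1]
    return density
```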
Complexer-YOLO: Real-Time 3D Object Detection and Tracking on Semantic Point Clouds
Accurate detection of 3D objects is a fundamental problem in computer vision
and has an enormous impact on autonomous cars, augmented/virtual reality and
many applications in robotics. In this work we present a novel fusion of a neural
network based state-of-the-art 3D detector and visual semantic segmentation in
the context of autonomous driving. Additionally, we introduce the
Scale-Rotation-Translation score (SRTs), a fast and highly parameterizable
evaluation metric for comparing object detections, which speeds up our
inference time by up to 20% and halves training time. On top of this, we apply
state-of-the-art online multi-target feature tracking on the object
measurements to further increase accuracy and robustness by utilizing temporal
information. Our experiments on KITTI show that we achieve the same results as the
state of the art in all related categories, while maintaining the performance
and accuracy trade-off and still running in real time. Furthermore, our model is
the first one that fuses visual semantics with 3D object detection.
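The abstract does not spell the metric out, but an SRT-style score can be thought of as a weighted combination of scale, rotation and translation agreement between a detection and a ground-truth box. The sketch below is a hedged illustration of that idea; the weights and the exact form of each term are assumptions, not the published SRTs definition.

```python
import numpy as np

def srt_style_score(det, gt, w_s=0.3, w_r=0.3, w_t=0.4, t_max=2.0):
    """Hedged sketch of a scale-rotation-translation style detection score.

    det and gt are dicts with 'center' (x, y, z) in metres, 'size' (l, w, h)
    and 'yaw' in radians. The weights and the exact form of each term are
    assumptions for illustration, not the published SRTs definition.
    """
    # Translation term: 1 at zero offset, 0 once centers are t_max apart.
    d = np.linalg.norm(np.subtract(det["center"], gt["center"]))
    s_t = max(0.0, 1.0 - d / t_max)
    # Scale term: ratio of the smaller to the larger box volume.
    v_det, v_gt = np.prod(det["size"]), np.prod(gt["size"])
    s_s = min(v_det, v_gt) / max(v_det, v_gt)
    # Rotation term: cosine agreement of the yaw angles.
    s_r = 0.5 * (1.0 + np.cos(det["yaw"] - gt["yaw"]))
    return w_s * s_s + w_r * s_r + w_t * s_t
```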
Learning Intelligent Dialogs for Bounding Box Annotation
We introduce Intelligent Annotation Dialogs for bounding box annotation. We
train an agent to automatically choose a sequence of actions for a human
annotator to produce a bounding box in a minimal amount of time. Specifically,
we consider two actions: box verification, where the annotator verifies a box
generated by an object detector, and manual box drawing. We explore two kinds
of agents, one based on predicting the probability that a box will be
positively verified, and the other based on reinforcement learning. We
demonstrate that (1) our agents are able to learn efficient annotation
strategies in several scenarios, automatically adapting to the image
difficulty, the desired quality of the boxes, and the detector strength; (2) in
all scenarios the resulting annotation dialogs speed up annotation compared to
manual box drawing alone and box verification alone, while also outperforming
any fixed combination of verification and drawing in most scenarios; (3) in a
realistic scenario where the detector is iteratively re-trained, our agents
evolve a series of strategies that reflect the shifting trade-off between
verification and drawing as the detector grows stronger.
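The probability-based agent can be illustrated with a simple expected-time rule: verify first when the predicted acceptance probability makes verification plus a possible fallback drawing cheaper than drawing outright. The per-action times in the sketch below are assumed values, not the measurements used in the paper.

```python
def choose_action(p_accept, t_verify=1.8, t_draw=7.0):
    """Pick the annotation action with the lower expected time (illustrative).

    p_accept: predicted probability that the detector's box passes human
    verification. The per-action times are assumed values in seconds, not
    the measurements reported in the paper.
    """
    # Verifying costs t_verify; if the box is rejected we still draw one.
    expected_verify = t_verify + (1.0 - p_accept) * t_draw
    return "verify" if expected_verify < t_draw else "draw"
```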
Gaze Embeddings for Zero-Shot Image Classification
Zero-shot image classification using auxiliary information, such as
attributes describing discriminative object properties, requires time-consuming
annotation by domain experts. We instead propose a method that relies on human
gaze as auxiliary information, exploiting that even non-expert users have a
natural ability to judge class membership. We present a data collection
paradigm that involves a discrimination task to increase the information
content obtained from gaze data. Our method extracts discriminative descriptors
from the data and learns a compatibility function between image and gaze using
three novel gaze embeddings: Gaze Histograms (GH), Gaze Features with Grid
(GFG) and Gaze Features with Sequence (GFS). We introduce two new
gaze-annotated datasets for fine-grained image classification and show that
human gaze data is indeed class discriminative, provides a competitive
alternative to expert-annotated attributes, and outperforms other baselines for
zero-shot image classification.
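Of the three embeddings, the gaze histogram is the simplest to illustrate: count fixations in a spatial grid over the image and normalize the counts. The sketch below assumes a 4×4 grid and L1 normalization, which are illustrative choices rather than the paper's configuration.

```python
import numpy as np

def gaze_histogram(fixations, img_w, img_h, grid=(4, 4)):
    """Simple gaze-histogram embedding over a spatial grid (illustrative).

    Counts fixations per grid cell and L1-normalizes the flattened counts.
    The 4x4 grid is an assumed choice, not the paper's configuration.
    """
    hist = np.zeros(grid, dtype=float)
    for x, y in fixations:
        gx = min(int(x / img_w * grid[1]), grid[1] - 1)
        gy = min(int(y / img_h * grid[0]), grid[0] - 1)
        hist[gy, gx] += 1.0
    total = hist.sum()
    return hist.ravel() / total if total > 0 else hist.ravel()
```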