Online Object Tracking with Proposal Selection
Tracking-by-detection approaches are some of the most successful object
trackers in recent years. Their success is largely determined by the detector
model they learn initially and then update over time. However, under
challenging conditions where an object can undergo transformations, e.g.,
severe rotation, these methods are found to be lacking. In this paper, we
address this problem by formulating it as a proposal selection task and making
two contributions. The first one is introducing novel proposals estimated from
the geometric transformations undergone by the object, and building a rich
candidate set for predicting the object location. The second one is devising a
novel selection strategy using multiple cues, i.e., detection score and
edgeness score computed from state-of-the-art object edges and motion
boundaries. We extensively evaluate our approach on the visual object tracking
2014 challenge and online tracking benchmark datasets, and show the best
performance.Comment: ICCV 201
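The selection strategy described above combines multiple cues per candidate. A minimal sketch of such a multi-cue selection, assuming a simple weighted combination of the detection and edgeness scores (the paper's exact combination rule may differ, and the box format and `alpha` weight here are illustrative):

```python
import numpy as np

def select_proposal(proposals, det_scores, edge_scores, alpha=0.5):
    """Pick the proposal maximizing a weighted combination of cues.

    proposals   : list of (x, y, w, h) candidate boxes (hypothetical format)
    det_scores  : per-proposal detector confidences in [0, 1]
    edge_scores : per-proposal edgeness scores in [0, 1]
    alpha       : cue weight (an assumption, not from the paper)
    """
    det = np.asarray(det_scores, dtype=float)
    edge = np.asarray(edge_scores, dtype=float)
    combined = alpha * det + (1.0 - alpha) * edge
    best = int(np.argmax(combined))
    return proposals[best], float(combined[best])

# Toy example: three candidate boxes with conflicting cues
boxes = [(10, 10, 40, 40), (12, 11, 38, 42), (50, 60, 30, 30)]
best_box, score = select_proposal(boxes, [0.9, 0.7, 0.2], [0.4, 0.8, 0.9])
```

Note how the second box wins: neither of its cues is the single highest, but it scores best once both are taken into account, which is the point of using multiple cues.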
Multi-Scale 3D Scene Flow from Binocular Stereo Sequences
Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in the intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods allow. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization – two problems commonly associated with the basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach.
National Science Foundation (CNS-0202067, IIS-0208876); Office of Naval Research (N00014-03-1-0108
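The key idea of keeping probability distributions rather than point estimates can be illustrated with a toy fusion step, assuming the two cues are treated as independent evidence over a shared set of candidate hypotheses (a strong simplification of the paper's model; the function name and uniform fallback are illustrative):

```python
import numpy as np

def fuse_distributions(p_flow, p_disp):
    """Combine per-hypothesis probabilities from two cues.

    p_flow, p_disp : (K,) probabilities over K candidate hypotheses
    Returns a normalized joint distribution; if the cues assign no
    common mass, falls back to a uniform distribution.
    """
    joint = np.asarray(p_flow, dtype=float) * np.asarray(p_disp, dtype=float)
    total = joint.sum()
    if total <= 0:
        # Cues fully disagree: express total uncertainty
        return np.full(len(joint), 1.0 / len(joint))
    return joint / total

# Flow slightly prefers hypothesis 0, disparity prefers hypothesis 1;
# the fused estimate lets the more concentrated cue dominate.
p = fuse_distributions([0.7, 0.2, 0.1], [0.1, 0.6, 0.3])
```

Carrying distributions through intermediate stages like this is what lets the method weigh uncertain evidence instead of committing early to a possibly wrong point estimate.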
End-to-End Learning of Representations for Asynchronous Event-Based Data
Event cameras are vision sensors that record asynchronous streams of
per-pixel brightness changes, referred to as "events". They have appealing
advantages over frame-based cameras for computer vision, including high
temporal resolution, high dynamic range, and no motion blur. Due to the sparse,
non-uniform spatiotemporal layout of the event signal, pattern recognition
algorithms typically aggregate events into a grid-based representation and
subsequently process it by a standard vision pipeline, e.g., Convolutional
Neural Network (CNN). In this work, we introduce a general framework to convert
event streams into grid-based representations through a sequence of
differentiable operations. Our framework comes with two main advantages: (i) it
allows learning the input event representation together with the task-dedicated
network in an end-to-end manner, and (ii) it lays out a taxonomy that unifies the
majority of extant event representations in the literature and identifies novel
ones. Empirically, we show that our approach to learning the event
representation end-to-end yields an improvement of approximately 12% on optical
flow estimation and object recognition over state-of-the-art methods.
Comment: To appear at ICCV 201
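The grid conversion the abstract describes can be sketched with fixed linear temporal kernels, a non-learned stand-in for the differentiable operations in the paper (the function name and voxel-grid layout here are illustrative, not the authors' API):

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate events into a (num_bins, H, W) grid, splitting each
    event between its two nearest temporal bins (linear interpolation).

    events : (N, 4) array of (x, y, t, p) rows, polarity p in {-1, +1}
    """
    grid = np.zeros((num_bins, height, width), dtype=float)
    t = events[:, 2]
    # Normalize timestamps to the range [0, num_bins - 1]
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    for (x, y, _, p), tn in zip(events, t_norm):
        lo = int(np.floor(tn))
        hi = min(lo + 1, num_bins - 1)
        w_hi = tn - lo
        grid[lo, int(y), int(x)] += p * (1.0 - w_hi)
        grid[hi, int(y), int(x)] += p * w_hi
    return grid

# Three events: two at the same pixel with opposite polarity, one elsewhere
events = np.array([[1, 2, 0.00, +1],
                   [1, 2, 0.50, -1],
                   [3, 0, 1.00, +1]], dtype=float)
grid = events_to_voxel_grid(events, num_bins=3, height=4, width=4)
```

In the learned framework, the hard interpolation kernel above is replaced by trainable functions, which is what makes the representation optimizable end-to-end with the downstream network.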
Convolutional neural network architecture for geometric matching
We address the problem of determining correspondences between two images in
agreement with a geometric model such as an affine or thin-plate spline
transformation, and estimating its parameters. The contributions of this work
are three-fold. First, we propose a convolutional neural network architecture
for geometric matching. The architecture is based on three main components that
mimic the standard steps of feature extraction, matching and simultaneous
inlier detection and model parameter estimation, while being trainable
end-to-end. Second, we demonstrate that the network parameters can be trained
from synthetically generated imagery without the need for manual annotation and
that our matching layer significantly increases generalization capabilities to
never-before-seen images. Finally, we show that the same model can perform both
instance-level and category-level matching, giving state-of-the-art results on
the challenging Proposal Flow dataset.
Comment: In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017
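The matching component in such architectures is typically a dense correlation between feature maps. A minimal numpy stand-in for that step, assuming L2-normalized per-location descriptors (this is a generic correlation layer, not the authors' code):

```python
import numpy as np

def correlation_layer(feat_a, feat_b):
    """Dense feature correlation between two (C, H, W) feature maps.

    Returns a (H*W, H, W) volume: entry [i, y, x] is the similarity
    between position i (flattened) of feat_a and position (y, x) of
    feat_b.
    """
    c, h, w = feat_a.shape
    a = feat_a.reshape(c, h * w)   # (C, HW) descriptors of image A
    b = feat_b.reshape(c, h * w)   # (C, HW) descriptors of image B
    corr = a.T @ b                 # (HW, HW) all pairwise dot products
    return corr.reshape(h * w, h, w)

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 4, 4))
f /= np.linalg.norm(f, axis=0, keepdims=True)   # unit-norm descriptors
vol = correlation_layer(f, f)                   # self-correlation
```

Because the descriptors are unit-normalized, correlating a feature map with itself yields 1.0 wherever a position is matched to itself, which is a handy sanity check on the layer.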
SCNet: Learning Semantic Correspondence
This paper addresses the problem of establishing semantic correspondences
between images depicting different instances of the same object or scene
category. Previous approaches focus on either combining a spatial regularizer
with hand-crafted features, or learning a correspondence model for appearance
only. We propose instead a convolutional neural network architecture, called
SCNet, for learning a geometrically plausible model for semantic
correspondence. SCNet uses region proposals as matching primitives, and
explicitly incorporates geometric consistency in its loss function. It is
trained on image pairs obtained from the PASCAL VOC 2007 keypoint dataset, and
a comparative evaluation on several standard benchmarks demonstrates that the
proposed approach substantially outperforms both recent deep learning
architectures and previous methods based on hand-crafted features.
Comment: ICCV 201
Lucid Data Dreaming for Video Object Segmentation
Convolutional networks reach top quality in pixel-level video object
segmentation but require a large amount of training data (1k~100k) to deliver
such results. We propose a new training strategy which achieves
state-of-the-art results across three evaluation datasets while using 20x~1000x
less annotated data than competing methods. Our approach is suitable for both
single and multiple object segmentation. Instead of using large training sets
hoping to generalize across domains, we generate in-domain training data using
the provided annotation on the first frame of each video to synthesize ("lucid
dream") plausible future video frames. In-domain per-video training data allows
us to train high quality appearance- and motion-based models, as well as tune
the post-processing stage. This approach allows us to reach competitive results
even when training from only a single annotated frame, without ImageNet
pre-training. Our results indicate that using a larger training set is not
automatically better, and that for the video object segmentation task a smaller
training set that is closer to the target domain is more effective. This
changes the mindset regarding how many training samples and how much general
"objectness" knowledge are required for the video object segmentation task.
Comment: Accepted in International Journal of Computer Vision (IJCV
AgriColMap: Aerial-Ground Collaborative 3D Mapping for Precision Farming
The combination of aerial survey capabilities of Unmanned Aerial Vehicles
with targeted intervention abilities of agricultural Unmanned Ground Vehicles
can significantly improve the effectiveness of robotic systems applied to
precision agriculture. In this context, building and updating a common map of
the field is an essential but challenging task. The maps built using robots of
different types show differences in size, resolution and scale, the associated
geolocation data may be inaccurate and biased, while the repetitiveness of both
visual appearance and geometric structures found within agricultural contexts
render classical map merging techniques ineffective. In this paper we propose
AgriColMap, a novel map registration pipeline that leverages a grid-based
multimodal environment representation which includes a vegetation index map and
a Digital Surface Model. We cast the data association problem between maps
built from UAVs and UGVs as a multimodal, large displacement dense optical flow
estimation. The dominant, coherent flows, selected using a voting scheme, are
used as point-to-point correspondences to infer a preliminary non-rigid
alignment between the maps. A final refinement is then performed, by exploiting
only meaningful parts of the registered maps. We evaluate our system using real
world data for three fields with different crop species. The results show that our
method outperforms several state-of-the-art map registration and matching
techniques by a large margin, and has a higher tolerance to large initial
misalignments. We release an implementation of the proposed approach, along with
the acquired datasets, with this paper.
Comment: Published in IEEE Robotics and Automation Letters, 201
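The "dominant, coherent flows selected using a voting scheme" step can be sketched as quantizing flow vectors into coarse cells and keeping only those that fall in the most-voted cell (a simplified reading of the abstract; the bin size and function name are illustrative):

```python
import numpy as np

def dominant_flows(flows, bin_size=1.0):
    """Keep flow vectors agreeing with the dominant motion.

    flows : (N, 2) displacement vectors (dx, dy)
    Each vector votes for a quantized cell; vectors outside the
    winning cell are rejected as incoherent.
    """
    flows = np.asarray(flows, dtype=float)
    bins = np.floor(flows / bin_size).astype(int)
    # Count votes per quantized cell and pick the winner
    cells, counts = np.unique(bins, axis=0, return_counts=True)
    dominant = cells[np.argmax(counts)]
    in_cell = (bins == dominant).all(axis=1)
    return flows[in_cell]

# Three coherent correspondences plus one outlier
flows = np.array([[2.1, 0.2], [2.3, 0.4], [2.2, 0.1], [-5.0, 3.0]])
coherent = dominant_flows(flows)
```

The surviving correspondences are then suitable as point-to-point constraints for the preliminary non-rigid alignment the abstract describes, since the voting has already rejected motions inconsistent with the dominant one.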