Occlusion Aware Unsupervised Learning of Optical Flow
It has recently been shown that a convolutional neural network can learn
optical flow estimation through unsupervised learning. However, a relatively
large performance gap remains between unsupervised methods and their
supervised counterparts. Occlusion and large motion are among the major
factors that limit current unsupervised optical flow methods. In this work we
introduce a new method that models occlusion explicitly and a new warping
scheme that facilitates the learning of large motion. Our method shows
promising results on the Flying Chairs, MPI-Sintel and KITTI benchmark
datasets. In particular, on the KITTI dataset, where abundant unlabeled
samples exist, our unsupervised method outperforms its counterpart trained
with supervised learning.
Comment: CVPR 2018 camera-ready
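To make the occlusion-modelling idea concrete, here is a minimal NumPy sketch of how an occlusion estimate obtained from a forward-backward consistency check can mask the photometric loss used for unsupervised flow training. The consistency check and its thresholds are common choices in the literature and are only assumptions here, not necessarily the paper's exact formulation.

```python
# Hedged sketch: occlusion-masked photometric loss for unsupervised flow.
# The occlusion estimate comes from a standard forward-backward consistency
# check; the paper's own occlusion model may differ.
import numpy as np

def warp(image, flow):
    """Backward-warp `image` (H, W) with `flow` (H, W, 2), nearest neighbour."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xw = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    yw = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return image[yw, xw]

def occlusion_mask(flow_fw, flow_bw, alpha=0.01, beta=0.5):
    """Mark pixels where forward and backward flow disagree as occluded."""
    # Warp the backward flow into the first frame using the forward flow.
    bw_warped = np.stack([warp(flow_bw[..., c], flow_fw) for c in range(2)], -1)
    diff = np.sum((flow_fw + bw_warped) ** 2, axis=-1)
    thresh = alpha * (np.sum(flow_fw ** 2, -1) + np.sum(bw_warped ** 2, -1)) + beta
    return diff > thresh  # True = occluded

def photometric_loss(img1, img2, flow_fw, flow_bw):
    """Average brightness-constancy error over non-occluded pixels only."""
    warped2 = warp(img2, flow_fw)
    occ = occlusion_mask(flow_fw, flow_bw)
    err = np.abs(img1 - warped2)
    return err[~occ].mean() if (~occ).any() else 0.0
```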
Generalized Boundaries from Multiple Image Interpretations
Boundary detection is essential for a variety of computer vision tasks such
as segmentation and recognition. In this paper we propose a unified formulation
and a novel algorithm that are applicable to the detection of different types
of boundaries, such as intensity edges, occlusion boundaries or object category
specific boundaries. Our formulation leads to a simple method with
state-of-the-art performance and significantly lower computational cost than
existing methods. We evaluate our algorithm on different types of boundaries,
from low-level boundaries extracted in natural images, to occlusion boundaries
obtained using motion cues and RGB-D cameras, to boundaries from
soft-segmentation. We also propose a novel method for figure/ground
soft-segmentation that can be used in conjunction with our boundary detection
method and improve its accuracy at almost no extra computational cost.
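As a rough illustration of combining several image interpretations into a single boundary map, the sketch below simply sums weighted gradient magnitudes across layers (e.g. intensity, depth, soft segmentation). It is a naive baseline for multi-layer boundary detection, not the paper's unified formulation; the layer list and weights are placeholders.

```python
# Hedged sketch: a naive way to pool boundary evidence from several
# "interpretation layers" into one map. Not the paper's algorithm.
import numpy as np

def gradient_magnitude(layer):
    gy, gx = np.gradient(layer.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2)

def combined_boundaries(layers, weights=None):
    """`layers`: list of (H, W) arrays; returns a boundary map scaled to [0, 1]."""
    weights = weights or [1.0] * len(layers)
    combined = sum(w * gradient_magnitude(l) for w, l in zip(weights, layers))
    m = combined.max()
    return combined / m if m > 0 else combined
```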
A Deep-structured Conditional Random Field Model for Object Silhouette Tracking
In this work, we introduce a deep-structured conditional random field
(DS-CRF) model for the purpose of state-based object silhouette tracking. The
proposed DS-CRF model consists of a series of state layers, where each state
layer spatially characterizes the object silhouette at a particular point in
time. The interactions between adjacent state layers are established by
inter-layer connectivity dynamically determined based on inter-frame optical
flow. By incorporating both spatial and temporal context in a dynamic fashion
within such a deep-structured probabilistic graphical model, the proposed
DS-CRF model allows us to develop a framework that can accurately and
efficiently track object silhouettes that can change greatly over time, as well
as under different situations such as occlusion and multiple targets within the
scene. Experimental results on video surveillance datasets containing
different scenarios such as occlusion and multiple targets showed that the
proposed DS-CRF approach provides strong object silhouette tracking performance
when compared to baseline methods such as mean-shift tracking, as well as
state-of-the-art methods such as context tracking and boosted particle
filtering.
Comment: 17 pages
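One plausible reading of the flow-driven inter-layer connectivity is sketched below: each site in the current state layer is linked to the site in the previous layer that the inter-frame optical flow points back to. The function name and the nearest-neighbour rounding are illustrative assumptions; the DS-CRF potentials themselves are not reproduced.

```python
# Hedged sketch: deriving inter-layer connections between two state layers
# from inter-frame optical flow. Illustration only, not the DS-CRF model.
import numpy as np

def flow_guided_links(flow, h, w):
    """Return ((y, x), (y_prev, x_prev)) pairs linking each site in the
    current state layer to its flow-predicted site in the previous layer.
    `flow` is the (H, W, 2) displacement from the previous frame to the
    current one."""
    links = []
    for y in range(h):
        for x in range(w):
            dx, dy = flow[y, x]
            xp = int(np.clip(round(x - dx), 0, w - 1))
            yp = int(np.clip(round(y - dy), 0, h - 1))
            links.append(((y, x), (yp, xp)))
    return links
```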
Optical Flow in Mostly Rigid Scenes
The optical flow of natural scenes is a combination of the motion of the
observer and the independent motion of objects. Existing algorithms typically
focus on either recovering motion and structure under the assumption of a
purely static world or optical flow for general unconstrained scenes. We
combine these approaches in an optical flow algorithm that estimates an
explicit segmentation of moving objects from appearance and physical
constraints. In static regions we take advantage of strong constraints to
jointly estimate the camera motion and the 3D structure of the scene over
multiple frames. This allows us to also regularize the structure instead of the
motion. Our formulation uses a Plane+Parallax framework, which works even under
small baselines, and reduces the motion estimation to a one-dimensional search
problem, resulting in more accurate estimation. In moving regions the flow is
treated as unconstrained, and computed with an existing optical flow method.
The resulting Mostly-Rigid Flow (MR-Flow) method achieves state-of-the-art
results on both the MPI-Sintel and KITTI-2015 benchmarks.
Comment: 15 pages, 10 figures; accepted for publication at CVPR 201
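The sketch below illustrates, under simplifying assumptions, why Plane+Parallax reduces motion estimation to a one-dimensional search: once the motion of a reference plane has been removed by a homography, a static point's residual displacement lies along the line through the pixel and the epipole, so only one scalar per pixel remains to be estimated. The brute-force photometric search, the pre-warped second image, and the given epipole are assumptions for illustration, not the paper's multi-frame optimization.

```python
# Hedged sketch: 1D residual-parallax search after plane stabilization.
# `img2_warped` is the second image already warped by the reference-plane
# homography; `epipole` is the epipole in image 1 coordinates (assumed known).
import numpy as np

def sample(img, x, y):
    h, w = img.shape
    return img[int(np.clip(round(y), 0, h - 1)), int(np.clip(round(x), 0, w - 1))]

def parallax_search(img1, img2_warped, epipole, num_steps=64, max_mag=20.0):
    """For each pixel, search the residual displacement magnitude along the
    direction toward the epipole that best matches the two images."""
    h, w = img1.shape
    gammas = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            d = np.array([epipole[0] - x, epipole[1] - y], float)
            n = np.linalg.norm(d)
            if n < 1e-6:
                continue
            d /= n
            best, best_err = 0.0, abs(img1[y, x] - img2_warped[y, x])
            for g in np.linspace(-max_mag, max_mag, num_steps):
                err = abs(img1[y, x] - sample(img2_warped, x + g * d[0], y + g * d[1]))
                if err < best_err:
                    best, best_err = g, err
            gammas[y, x] = best
    return gammas
```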
A Fusion Approach for Multi-Frame Optical Flow Estimation
To date, top-performing optical flow estimation methods only take pairs of
consecutive frames into account. While elegant and appealing, the idea of using
more than two frames has not yet produced state-of-the-art results. We present
a simple, yet effective fusion approach for multi-frame optical flow that
benefits from longer-term temporal cues. Our method first warps the optical
flow from previous frames to the current, thereby yielding multiple plausible
estimates. It then fuses the complementary information carried by these
estimates into a new optical flow field. At the time of writing, our method
ranks first among published results on the MPI Sintel and KITTI 2015
benchmarks. Our models will be available at https://github.com/NVlabs/PWC-Net.
Comment: Work accepted at IEEE Winter Conference on Applications of Computer
Vision (WACV 2019)
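A much-simplified version of the warp-then-fuse idea is sketched below: the flow estimated for the previous frame pair is looked up through the backward flow (a constant-velocity assumption) to give a second hypothesis, and the two hypotheses are fused per pixel by brightness-constancy error. The paper learns the fusion with a network; the hand-crafted per-pixel selection here is only an illustrative stand-in.

```python
# Hedged sketch: fuse a current flow estimate with a hypothesis carried over
# from the previous frame pair. Simplified stand-in for the learned fusion.
import numpy as np

def warp2(field, flow):
    """Backward-warp a (H, W) or (H, W, C) field with flow (H, W, 2)."""
    h, w = field.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xw = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    yw = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return field[yw, xw]

def fuse_flows(img_t, img_t1, flow_cur, flow_prev, flow_bw):
    """flow_cur: estimate t -> t+1; flow_prev: t-1 -> t; flow_bw: t -> t-1.
    Under a constant-velocity assumption, the previous flow looked up at the
    corresponding pixel of frame t-1 is a second hypothesis for t -> t+1."""
    hyp_prev = warp2(flow_prev, flow_bw)
    err_cur = np.abs(img_t - warp2(img_t1, flow_cur))
    err_prev = np.abs(img_t - warp2(img_t1, hyp_prev))
    take_prev = (err_prev < err_cur)[..., None]
    return np.where(take_prev, hyp_prev, flow_cur)
```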
Learning Articulated Motions From Visual Demonstration
Many functional elements of human homes and workplaces consist of rigid
components which are connected through one or more sliding or rotating
linkages. Examples include doors and drawers of cabinets and appliances;
laptops; and swivel office chairs. A robotic mobile manipulator would benefit
from the ability to acquire kinematic models of such objects from observation.
This paper describes a method by which a robot can acquire an object model by
capturing depth imagery of the object as a human moves it through its range of
motion. We envision that, in the future, a machine newly introduced to an
environment could be shown by its human user the articulated objects particular
to that environment, inferring from these "visual demonstrations" enough
information to actuate each object independently of the user.
Our method employs sparse (markerless) feature tracking, motion segmentation,
component pose estimation, and articulation learning; it does not require prior
object models. Using the method, a robot can observe an object being exercised,
infer a kinematic model incorporating rigid, prismatic and revolute joints,
then use the model to predict the object's motion from a novel vantage point.
We evaluate the method's performance, and compare it to that of a previously
published technique, for a variety of household objects.
Comment: Published in Robotics: Science and Systems X, Berkeley, CA. ISBN:
978-0-9923747-0-
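As a sketch of the final articulation-learning step, the snippet below classifies a linkage as rigid, revolute or prismatic from a track of relative poses between two segmented parts, using the spread of rotation angles and translation magnitudes. The thresholds and the decision rule are illustrative assumptions, not the paper's model-selection procedure.

```python
# Hedged sketch: label a linkage from observed relative part poses.
# Thresholds and rule are illustrative, not the published method.
import numpy as np

def rotation_angle(R):
    """Angle (radians) of a 3x3 rotation matrix."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def classify_joint(relative_poses, rot_thresh=0.15, trans_thresh=0.02):
    """`relative_poses`: list of (R, t) of one rigid part expressed in the
    frame of the other. Returns 'rigid', 'revolute', or 'prismatic'."""
    angles = np.array([rotation_angle(R) for R, _ in relative_poses])
    trans = np.array([np.linalg.norm(t) for _, t in relative_poses])
    if np.ptp(angles) < rot_thresh and np.ptp(trans) < trans_thresh:
        return "rigid"
    if np.ptp(angles) >= rot_thresh:
        return "revolute"   # motion dominated by rotation about a joint axis
    return "prismatic"      # translation along a sliding axis only
```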