SENSE: a Shared Encoder Network for Scene-flow Estimation
We introduce a compact network for holistic scene flow estimation, called
SENSE, which shares common encoder features among four closely-related tasks:
optical flow estimation, disparity estimation from stereo, occlusion
estimation, and semantic segmentation. Our key insight is that sharing features
makes the network more compact, induces better feature representations, and can
better exploit interactions among these tasks to handle partially labeled data.
With a shared encoder, we can flexibly add decoders for different tasks during
training. This modular design leads to a compact and efficient model at
inference time. Exploiting the interactions among these tasks allows us to
introduce distillation and self-supervised losses in addition to supervised
losses, which can better handle partially labeled real-world data. SENSE
achieves state-of-the-art results on several optical flow benchmarks and runs
as fast as networks specifically designed for optical flow. It also compares
favorably against the state of the art on stereo and scene flow, while
consuming much less memory. Comment: ICCV 2019 Oral.
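To make the shared-encoder idea concrete, here is a minimal PyTorch sketch of one encoder feeding several task-specific decoders. The module sizes, decoder heads, and class counts are illustrative assumptions and do not reflect the actual SENSE architecture.

```python
# Minimal sketch of a shared-encoder, multi-decoder design (illustrative only;
# layer sizes and decoder heads are assumptions, not the SENSE architecture).
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.layers(x)

class TaskDecoder(nn.Module):
    def __init__(self, feat_ch=64, out_ch=1):
        super().__init__()
        self.head = nn.Conv2d(feat_ch, out_ch, 3, padding=1)

    def forward(self, feat):
        return self.head(feat)

encoder = SharedEncoder()
decoders = nn.ModuleDict({
    "flow": TaskDecoder(out_ch=2),           # optical flow (u, v)
    "disparity": TaskDecoder(out_ch=1),      # stereo disparity
    "occlusion": TaskDecoder(out_ch=1),      # occlusion mask
    "segmentation": TaskDecoder(out_ch=19),  # semantic classes (example count)
})

x = torch.randn(1, 3, 256, 512)
feat = encoder(x)  # computed once, reused by every task decoder
outputs = {task: dec(feat) for task, dec in decoders.items()}
```

Because the encoder runs once per input and only the lightweight decoders differ per task, adding or removing a task at training time amounts to adding or removing an entry in the decoder dictionary, which is what keeps the model compact at inference time.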
Understanding the Dynamic Visual World: From Motion to Semantics
We live in a dynamic world that is continuously in motion. Perceiving and interpreting these dynamic surroundings is an essential capability for an intelligent agent. Human beings have the remarkable ability to learn from limited data, with partial or little annotation, in sharp contrast to computational perception models that rely on large-scale, manually labeled data. Reliance on strongly supervised models with manually labeled data inherently prohibits us from modeling the dynamic visual world, as manual annotations are tedious, expensive, and not scalable, especially if we would like to solve multiple scene understanding tasks at the same time. Even worse, in some cases manual annotation is completely infeasible, such as labeling the motion vector of each pixel (i.e., optical flow), which humans cannot reliably produce. In fact, as we move around in a dynamic world, the motion arising from the moving camera, independently moving objects, and scene geometry carries abundant information, revealing the structure and complexity of our dynamic visual world. As the famous psychologist James J. Gibson suggested, "we must perceive in order to move, but we also must move in order to perceive". In this thesis, we investigate how to use the motion information contained in unlabeled or partially labeled videos to better understand and synthesize the dynamic visual world.
This thesis consists of three parts. In the first part, we focus on the "move to perceive" aspect. When moving through the world, it is natural for an intelligent agent to associate image patterns with the magnitude of their displacement over time: as the agent moves, far away mountains don't move much, while nearby trees move a lot. This natural relationship between the appearance of objects and their apparent motion is a rich source of information about the relationship between the distance of objects and their appearance in images. We present a pretext task of estimating the relative depth of elements of a scene (i.e., ordering the pixels in an image according to their distance from the viewer), recovered from the motion field of unlabeled videos. The goal of this pretext task is to induce useful feature representations in deep Convolutional Neural Networks (CNNs). These induced representations, learned from 1.1 million video frames crawled from YouTube within one hour and without any manual labeling, provide a valuable starting point for training neural networks on downstream tasks. Because almost all of our training data comes for free, this approach is a promising path to matching or even surpassing what ImageNet pre-training, which requires a huge amount of manual labeling, gives us today on tasks such as semantic image segmentation.
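As a rough illustration of how apparent motion can supervise relative depth ordering, the sketch below turns flow magnitude into pairwise ordinal pseudo-labels and trains a predicted depth map with a margin ranking loss. The sampling scheme, margin, and loss are illustrative assumptions and do not reproduce the thesis' exact formulation.

```python
# Toy sketch of the "relative depth from motion" pretext idea: of two pixels,
# the one with smaller apparent motion is treated as farther away, and the
# network's depth prediction is trained with a pairwise ranking loss on these
# pseudo-labels. (Illustrative assumptions, not the thesis' exact loss.)
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(pred_depth, flow_mag, num_pairs=1024):
    """pred_depth, flow_mag: (B, H, W) tensors on the same device."""
    b, h, w = pred_depth.shape
    idx_a = torch.randint(0, h * w, (b, num_pairs), device=pred_depth.device)
    idx_b = torch.randint(0, h * w, (b, num_pairs), device=pred_depth.device)

    d = pred_depth.view(b, -1)
    m = flow_mag.view(b, -1)

    d_a, d_b = d.gather(1, idx_a), d.gather(1, idx_b)
    m_a, m_b = m.gather(1, idx_a), m.gather(1, idx_b)

    # Pseudo ordinal label: +1 if pixel a moves less (assumed farther), else -1.
    target = torch.where(m_a < m_b, torch.ones_like(d_a), -torch.ones_like(d_a))

    # Margin ranking loss encourages the predicted depths to respect the order.
    return F.margin_ranking_loss(d_a, d_b, target, margin=0.1)
```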
In the second part, we study the "perceive to move" aspect. As we humans look around, we do not solve one vision task at a time. Instead, we perceive our surroundings in a holistic manner, using all visual cues jointly. By solving multiple tasks simultaneously, one task can inform another. Specifically, we propose a neural network architecture, called SENSE, which shares common feature representations among four closely related tasks: optical flow estimation, disparity estimation from stereo, occlusion detection, and semantic segmentation. The key insight is that sharing features makes the network more compact and induces better feature representations. For real-world data, however, annotations for all four tasks are rarely available at the same time. To this end, we design loss functions that exploit the interactions among the tasks and require no manual annotations, handling partially labeled data in a semi-supervised manner and leading to superior understanding of the dynamic visual world.
Understanding the motion contained in a video also enables us to perceive the dynamic visual world in a novel manner. In the third part, we present an approach, called SuperSloMo, which synthesizes slow-motion videos from a standard frame-rate video. Converting a plain video into a slow-motion version lets us see memorable moments in our lives that are otherwise hard to see clearly with the naked eye: a difficult skateboard trick, a dog catching a ball, etc. Such a technique also has wide applications, such as generating smooth view transitions on head-mounted virtual reality (VR) devices, compressing videos, and synthesizing videos with motion blur.
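A heavily simplified sketch of flow-based intermediate-frame synthesis is shown below: both input frames are backward-warped toward an intermediate time t and linearly blended. The real SuperSloMo model additionally refines the intermediate flows and predicts visibility maps; the flow approximations and the blend here are illustrative assumptions.

```python
# Simplified sketch of flow-based intermediate-frame synthesis: warp both
# input frames toward time t and blend them. This only illustrates the
# warping/blending step, not the full SuperSloMo pipeline.
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    """Warp img (B, C, H, W) with per-pixel flow (B, 2, H, W) via bilinear sampling."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device),
        torch.arange(w, device=img.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    grid_x = 2.0 * grid[:, 0] / (w - 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack((grid_x, grid_y), dim=-1), align_corners=True)

def interpolate_frame(frame0, frame1, flow_0to1, flow_1to0, t=0.5):
    """Synthesize a frame at time t in (0, 1) between frame0 and frame1."""
    # Rough linear approximation of the intermediate flows.
    flow_t0 = -t * flow_0to1          # flow from time t back to frame0 (approx.)
    flow_t1 = (1.0 - t) * flow_1to0   # flow from time t back to frame1 (approx.)
    warped0 = backward_warp(frame0, flow_t0)
    warped1 = backward_warp(frame1, flow_t1)
    # Simple blend weighted by temporal distance (no visibility reasoning here).
    return (1.0 - t) * warped0 + t * warped1
```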
Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation
We address the unsupervised learning of several interconnected problems in
low-level vision: single view depth prediction, camera motion estimation,
optical flow, and segmentation of a video into the static scene and moving
regions. Our key insight is that these four fundamental vision problems are
coupled through geometric constraints. Consequently, learning to solve them
together simplifies the problem because the solutions can reinforce each other.
We go beyond previous work by exploiting geometry more explicitly and
segmenting the scene into static and moving regions. To that end, we introduce
Competitive Collaboration, a framework that facilitates the coordinated
training of multiple specialized neural networks to solve complex problems.
Competitive Collaboration works much like expectation-maximization, but with
neural networks that act as both competitors to explain pixels that correspond
to static or moving regions, and as collaborators through a moderator that
assigns pixels to be either static or independently moving. Our novel method
integrates all these problems in a common framework and simultaneously reasons
about the segmentation of the scene into moving objects and the static
background, the camera motion, depth of the static scene structure, and the
optical flow of moving objects. Our model is trained without any supervision
and achieves state-of-the-art performance among joint unsupervised methods on
all sub-problems. Comment: CVPR 2019.
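The sketch below illustrates, at a very high level, the alternating competition/collaboration phases described above. The network interfaces (e.g., a `photometric_loss` method returning a per-pixel error map) and the optimizer layout are placeholder assumptions, not the paper's implementation.

```python
# High-level sketch of an alternating competition/collaboration training step.
# All interfaces here (photometric_loss, optimizer dict) are hypothetical
# placeholders used only to illustrate the two phases.
import torch

def train_step(frames, depth_cam_net, flow_net, moderator, optimizers):
    # Competition: the moderator's soft mask assigns each pixel to either the
    # static-scene model (depth + camera motion) or the moving-region model
    # (optical flow); each competitor trains on the pixels it is assigned.
    with torch.no_grad():
        mask = moderator(frames)                              # (B, 1, H, W) in [0, 1]
    loss_comp = (mask * depth_cam_net.photometric_loss(frames)
                 + (1.0 - mask) * flow_net.photometric_loss(frames)).mean()
    optimizers["competitors"].zero_grad()
    loss_comp.backward()
    optimizers["competitors"].step()

    # Collaboration: competitors are frozen and the moderator learns to assign
    # each pixel to whichever competitor explains it better.
    with torch.no_grad():
        static_err = depth_cam_net.photometric_loss(frames)
        moving_err = flow_net.photometric_loss(frames)
    mask = moderator(frames)
    loss_collab = (mask * static_err + (1.0 - mask) * moving_err).mean()
    optimizers["moderator"].zero_grad()
    loss_collab.backward()
    optimizers["moderator"].step()
```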
Motion Segmentation for Autonomous Robots Using 3D Point Cloud Data
Achieving robot autonomy is an extremely challenging task, and it starts with developing algorithms that help the robot understand the environment around it the way humans do. Once the robot can make sense of its environment, it becomes much easier to make efficient decisions about safe movement. Tasks that come naturally to humans, such as understanding signboards, classifying traffic lights, and planning a path around dynamic obstacles, remain hard for robots. In this work, we take up one such challenge: motion segmentation using Light Detection and Ranging (LiDAR) point clouds. Motion segmentation is the task of classifying each point as either moving or static. As the ego-vehicle moves along the road, it needs to detect moving cars with very high certainty, as they are the areas of interest that provide cues for the ego-vehicle to plan its motion. Motion segmentation algorithms separate moving cars from static cars so that more importance can be given to dynamic obstacles. In contrast to the usual LiDAR scan representations, such as range images and regular grids, this work uses a modern representation of LiDAR scans based on permutohedral lattices, which makes it easy to represent unstructured LiDAR points in an efficient lattice structure. We propose a machine learning approach to perform motion segmentation. The network architecture takes in two sequential point clouds and performs convolutions on them to estimate whether 3D points from the first point cloud are moving or static. Using two temporal point clouds helps the network learn what features constitute motion. We have trained and tested our learning algorithm on the FlyingThings3D dataset and a modified KITTI dataset with simulated motion.
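To illustrate the input/output structure of the task (two sequential scans in, per-point moving/static labels out), here is a toy PyTorch sketch. It substitutes a simple per-point MLP over nearest-neighbor features for the permutohedral-lattice convolutions used in the actual work, so it shows only the shape of the problem, not the method.

```python
# Toy sketch of motion segmentation from two sequential LiDAR scans: predict a
# moving/static logit per point in the first scan. The per-point MLP over
# nearest-neighbor features is a stand-in for the lattice convolutions.
import torch
import torch.nn as nn

class MotionSegHead(nn.Module):
    def __init__(self, in_dim=6, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),  # logit: moving (1) vs. static (0)
        )

    def forward(self, pts_t0, pts_t1):
        # For each point at time t0, attach its nearest neighbor at time t1.
        dists = torch.cdist(pts_t0, pts_t1)                  # (N0, N1)
        nn_idx = dists.argmin(dim=1)                         # (N0,)
        feats = torch.cat([pts_t0, pts_t1[nn_idx]], dim=1)   # (N0, 6)
        return self.mlp(feats).squeeze(-1)                   # (N0,) per-point logits

pts_t0 = torch.randn(2048, 3)  # scan at time t
pts_t1 = torch.randn(2048, 3)  # scan at time t + 1
moving = torch.sigmoid(MotionSegHead()(pts_t0, pts_t1)) > 0.5
```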
Devon: Deformable Volume Network for Learning Optical Flow
State-of-the-art neural network models estimate large-displacement optical
flow in a multi-resolution fashion and use warping to propagate the estimation
between resolutions. Despite their impressive results, this approach is known
to have two problems. First, the multi-resolution estimation of optical flow
fails when small objects move fast. Second, warping creates artifacts where
occlusion or dis-occlusion happens. In this paper, we propose a new neural
network module, the Deformable Cost Volume, which alleviates both problems.
Based on this module, we design the Deformable Volume Network (Devon), which
estimates multi-scale optical flow at a single high resolution. Experiments
show that Devon is better suited to handling small, fast-moving objects and
achieves results comparable to state-of-the-art methods on public benchmarks.
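As a rough sketch of what a cost volume with learned sampling offsets might look like, the snippet below computes a standard correlation cost volume but first resamples the second feature map at per-pixel offset positions. This is an illustration of the general idea, not Devon's actual Deformable Cost Volume.

```python
# Sketch of a correlation cost volume with an optional per-pixel offset field
# that shifts where the second feature map is sampled. Illustrative only.
import torch
import torch.nn.functional as F

def cost_volume(feat1, feat2, max_disp=4, offset=None):
    """feat1, feat2: (B, C, H, W); offset: optional (B, 2, H, W) shift field."""
    if offset is not None:
        # Resample feat2 at positions displaced by the (learned) offsets.
        b, _, h, w = feat2.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, device=feat2.device),
            torch.arange(w, device=feat2.device),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + offset
        grid = torch.stack(
            (2 * grid[:, 0] / (w - 1) - 1, 2 * grid[:, 1] / (h - 1) - 1), dim=-1
        )
        feat2 = F.grid_sample(feat2, grid, align_corners=True)

    # Correlate feat1 with shifted copies of feat2 over a displacement window.
    # torch.roll wraps around image borders, a simplification for brevity.
    costs = []
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            shifted = torch.roll(feat2, shifts=(dy, dx), dims=(2, 3))
            costs.append((feat1 * shifted).mean(dim=1, keepdim=True))
    return torch.cat(costs, dim=1)  # (B, (2 * max_disp + 1) ** 2, H, W)
```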