Hallucinating dense optical flow from sparse lidar for autonomous vehicles
In this paper we propose a novel approach to estimate dense optical flow from sparse lidar data acquired on an autonomous vehicle. It is intended as a drop-in replacement for any image-based optical flow system when images are unreliable, e.g. in adverse weather conditions or at night. In order to infer high-resolution 2D flows from discrete range data, we devise a three-block architecture of multiscale filters that combines multiple intermediate objectives in both the lidar and image domains. To train this network we introduce a dataset of approximately 20K lidar samples from the Kitti dataset, augmented with pseudo ground-truth image-based optical flow computed using FlowNet2. We demonstrate the effectiveness of our approach on Kitti and show that, despite using the low-resolution and sparse measurements of the lidar, we can regress dense optical flow maps on par with those estimated by image-based methods.
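A minimal sketch of the general idea, assuming a PyTorch-style encoder-decoder; the block names, layer sizes, and fusion scheme below are illustrative assumptions, not the paper's actual three-block architecture.

# Illustrative sketch only: a small multiscale encoder-decoder that
# regresses a dense 2-channel flow map from a sparse lidar range image.
# Layer sizes and the intermediate depth head are assumptions.
import torch
import torch.nn as nn

class LidarToFlow(nn.Module):
    def __init__(self):
        super().__init__()
        # Block 1: encode the sparse range input at multiple scales.
        self.encode = nn.Sequential(
            nn.Conv2d(1, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
        )
        # Block 2: intermediate lidar-domain objective (densified depth).
        self.depth_head = nn.Conv2d(64, 1, 3, padding=1)
        # Block 3: upsample and regress the 2D flow in the image domain.
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),
        )

    def forward(self, sparse_range):
        feats = self.encode(sparse_range)
        return self.depth_head(feats), self.decode(feats)

# Training would supervise the flow output against FlowNet2 pseudo
# ground truth and the depth head against the lidar measurements.
flow_net = LidarToFlow()
depth, flow = flow_net(torch.randn(1, 1, 64, 256))  # toy-sized input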
SegFlow: Joint Learning for Video Object Segmentation and Optical Flow
This paper proposes an end-to-end trainable network, SegFlow, for simultaneously predicting pixel-wise object segmentation and optical flow in videos. The proposed SegFlow has two branches in which useful information for object segmentation and optical flow is propagated bidirectionally within a unified framework. The segmentation branch is based on a fully convolutional network, which has proven effective for image segmentation, and the optical flow branch takes advantage of the FlowNet model. The unified framework is trained iteratively offline to learn a generic notion and fine-tuned online for specific objects. Extensive experiments on both video object segmentation and optical flow datasets demonstrate that introducing optical flow improves segmentation performance and vice versa, against state-of-the-art algorithms.
Comment: Accepted in ICCV'17. Code is available at https://sites.google.com/site/yihsuantsai/research/iccv17-segflo
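A minimal sketch of the two-branch idea: features from a segmentation branch and a flow branch are exchanged before each prediction head. The encoders, channel counts, and concatenation-based fusion here are stand-in assumptions, not SegFlow's actual layers.

# Toy two-branch network: each head sees both branches' features,
# giving the bidirectional propagation described in the abstract.
import torch
import torch.nn as nn

class TwoBranch(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        self.seg_enc = nn.Conv2d(3, c, 3, padding=1)   # stand-in for an FCN encoder
        self.flow_enc = nn.Conv2d(6, c, 3, padding=1)  # stand-in for a FlowNet encoder
        self.seg_head = nn.Conv2d(2 * c, 1, 1)         # object mask logits
        self.flow_head = nn.Conv2d(2 * c, 2, 1)        # 2D flow

    def forward(self, frame, frame_pair):
        s = torch.relu(self.seg_enc(frame))
        f = torch.relu(self.flow_enc(frame_pair))
        # Fuse the branches so segmentation informs flow and vice versa.
        fused = torch.cat([s, f], dim=1)
        return self.seg_head(fused), self.flow_head(fused)

net = TwoBranch()
mask, flow = net(torch.randn(1, 3, 64, 64), torch.randn(1, 6, 64, 64))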
An improved 2D optical flow sensor for motion segmentation
A functional focal-plane implementation of a 2D optical flow system is presented that detects and preserves motion discontinuities. The system is composed of two different network layers of analog computational units arranged in retinotopic order. The units in the first layer (the optical flow network) estimate the local optical flow field in two visual dimensions, where the strength of their nearest-neighbor connections determines the amount of motion integration. Whereas in an earlier implementation (Stocker & Douglas, 1999) the connection strength was set constant over the complete image space, it is now dynamically and locally controlled by the second network layer (the motion discontinuities network), which is recurrently connected to the optical flow network. The connection strengths in the optical flow network are modulated such that visual motion integration is ideally facilitated only within image areas that are likely to represent common motion sources. Results from an experimental aVLSI chip illustrate the potential of the approach and its functionality under real-world conditions.
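A digital toy analogue of the two coupled layers, not the aVLSI circuit itself: flow units relax toward their neighbors with a coupling strength that a second "discontinuity" map weakens locally, so integration stops at motion boundaries. The update rule and parameter values are assumptions for illustration.

# Nearest-neighbor motion integration with locally modulated coupling.
import numpy as np

def relax_flow(u, v, discontinuity, iters=100, base_coupling=0.5):
    # discontinuity in [0, 1]: 1 = likely motion boundary, cut integration.
    w = base_coupling * (1.0 - discontinuity)
    for _ in range(iters):
        for a in (u, v):
            nbr = 0.25 * (np.roll(a, 1, 0) + np.roll(a, -1, 0)
                          + np.roll(a, 1, 1) + np.roll(a, -1, 1))
            a += w * (nbr - a)  # integrate motion only where w is large
    return u, v

u = np.random.randn(32, 32); v = np.random.randn(32, 32)
disc = np.zeros((32, 32)); disc[:, 16] = 1.0  # a vertical motion boundary
u, v = relax_flow(u, v, disc)  # flow smooths within, not across, the boundary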
Optical flow versus retinal flow as sources of information for flight guidance
The appropriate description of visual information for flight guidance is considered: optical flow versus retinal flow. Most descriptions in the psychological literature are based on optical flow. However, human eyes move, and this movement complicates the issues at stake, particularly when movement of the observer is involved. The question addressed is whether an observer, whose eyes register only retinal flow, can use information in optical flow. It is suggested that the observer cannot and does not reconstruct the optical flow; instead the observer uses retinal flow. The retinal array is defined as the projection of a three-space environment onto a point and beyond to a movable, nearly hemispheric sensing device such as the retina. The optical array is defined as the projection of a three-space environment onto a point within that space. Flow is defined as global motion as a field of vectors, best placed on a spherical projection surface. Specifically, flow is the mapping onto a point of the field of changes in position of corresponding points on objects in three-space, where that point itself has moved in position.
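An illustrative decomposition, drawn from standard motion-field models rather than from this paper: on a spherical projection, the retinal flow of a viewing direction is the optical (translational) flow plus a field induced by eye rotation.

# Retinal flow = translational (optical) term + eye-rotation term.
import numpy as np

def retinal_flow(d, T, omega, depth):
    d = d / np.linalg.norm(d)                         # viewing direction (unit vector)
    translational = -(T - np.dot(T, d) * d) / depth   # optical-flow term
    rotational = -np.cross(omega, d)                  # eye-movement term
    return translational + rotational

print(retinal_flow(np.array([0., 0., 1.]),   # looking straight ahead
                   np.array([0., 0., 1.]),   # translating forward at 1 m/s
                   np.array([0., 0.1, 0.]),  # eye pans at 0.1 rad/s
                   depth=10.0))              # translational term vanishes at the heading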
Optical network technologies for future digital cinema
Digital technology has transformed the information flow and support infrastructure for numerous application domains, such as cellular communications. Cinematography, traditionally a film-based medium, has embraced digital technology, leading to innovative transformations in its workflow. Digital cinema supports transmission of high-resolution content enabled by the latest advancements in optical communications and video compression. In this paper we provide a survey of the optical network technologies for supporting this bandwidth-intensive traffic class. We also highlight the significance and benefits of the state of the art in optical technologies that support the digital cinema workflow.
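A back-of-envelope calculation, assuming DCI-style 4K parameters, of why digital cinema is called a bandwidth-intensive traffic class:

# Rough arithmetic for an uncompressed 4K digital cinema stream.
width, height = 4096, 2160          # 4K container resolution
bits_per_pixel = 3 * 12             # 12-bit X'Y'Z' color components
fps = 24

uncompressed_bps = width * height * bits_per_pixel * fps
print(f"uncompressed: {uncompressed_bps / 1e9:.1f} Gbit/s")  # ~7.6 Gbit/s
print("DCI caps compressed JPEG 2000 streams at 250 Mbit/s")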
Sparse optical flow regularisation for real-time visual tracking
Optical flow can greatly improve the robustness of visual tracking algorithms. While dense optical flow algorithms have various applications, they cannot be used for real-time solutions without resorting to GPU calculations. Furthermore, most optical flow algorithms fail in challenging lighting environments due to violation of the brightness constraint. We propose a simple but effective iterative regularisation scheme for real-time, sparse optical flow algorithms that is shown to be robust to sudden illumination changes and can handle large displacements. The algorithm outperforms well-known techniques on real-life video sequences, while being much faster to compute. Our solution increases the robustness of a real-time particle-filter-based tracking application while consuming only a fraction of the available CPU power. Furthermore, a new and realistic optical flow dataset with annotated ground truth is created and made freely available for research purposes.
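A sketch of the general idea, assumed rather than taken from the paper: track sparse points with OpenCV's pyramidal Lucas-Kanade, then iteratively pull each flow vector toward its neighborhood mean to suppress outliers. The single global neighborhood and blending weight are toy simplifications.

# Sparse optical flow with a toy iterative regularisation step.
import cv2
import numpy as np

prev = np.random.randint(0, 255, (120, 160), np.uint8)
curr = np.roll(prev, 2, axis=1)                       # synthetic 2 px shift
pts = np.array([[[x, y]] for x in range(20, 160, 40)
                         for y in range(20, 120, 40)], np.float32)

new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)
flow = (new_pts - pts).reshape(-1, 2)

for _ in range(5):                                    # iterative regularisation
    flow = 0.5 * flow + 0.5 * flow.mean(axis=0)       # pull toward neighborhood mean

print(flow.round(1))                                  # vectors converge near (2, 0)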
