    An Epipolar Line from a Single Pixel

    Computing the epipolar geometry from feature points between cameras with very different viewpoints is often error-prone, as an object's appearance can vary greatly between images. For such cases, it has been shown that using motion extracted from video can achieve much better results than using a static image. This paper extends these earlier works, which are based on the scene dynamics. We propose a new method to compute the epipolar geometry from a video stream, exploiting the following observation: for a pixel p in Image A, all pixels corresponding to p in Image B lie on the same epipolar line. Equivalently, the image of the line going through camera A's center and p is an epipolar line in B. Therefore, when cameras A and B are synchronized, the momentary images of two objects projecting to the same pixel p in camera A at times t1 and t2 lie on an epipolar line in camera B. Based on this observation we achieve fast and precise computation of epipolar lines. Calibrating cameras based on our method of finding epipolar lines is much faster and more robust than previous methods. Comment: WACV 201
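
    The core observation lends itself to a compact illustration: if one collects, over time, the points in Image B where moving objects appear whenever they cross one fixed pixel p in Image A, those points should all lie on a single line, namely p's epipolar line in B. The following is a minimal sketch of only that line-fitting step (not the paper's full calibration pipeline); fit_epipolar_line and the synthetic test points are illustrative assumptions:

        import numpy as np

        def fit_epipolar_line(points_b):
            # Fit a 2D line a*x + b*y + c = 0 through points observed in image B
            # whenever a moving object crossed one fixed pixel p in image A.
            # Per the observation above, these points should lie on p's epipolar
            # line in B. Homogeneous total least squares via SVD.
            pts = np.asarray(points_b, dtype=float)           # shape (N, 2)
            X = np.hstack([pts, np.ones((len(pts), 1))])      # homogeneous coords (N, 3)
            _, _, vt = np.linalg.svd(X)                       # smallest singular vector = line
            a, b, c = vt[-1]
            norm = np.hypot(a, b)                             # make (a, b) a unit normal
            return a / norm, b / norm, c / norm

        # Synthetic check: noisy points near the line y = 0.5 * x + 100
        rng = np.random.default_rng(0)
        xs = rng.uniform(0, 640, 50)
        ys = 0.5 * xs + 100 + rng.normal(0, 1.0, 50)
        print(fit_epipolar_line(np.column_stack([xs, ys])))

    In the paper's setting, such fitted lines accumulated over many pixels p are what drive the epipolar geometry estimation; a least-squares line fit like the one above is only the simplest way to realize the per-pixel step.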

    Joint Optical Flow and Temporally Consistent Semantic Segmentation

    The importance and demands of visual scene understanding have been steadily increasing along with the active development of autonomous systems. Consequently, there has been a large amount of research dedicated to semantic segmentation and dense motion estimation. In this paper, we propose a method for jointly estimating optical flow and temporally consistent semantic segmentation, which closely connects these two problem domains and lets each leverage the other. Semantic segmentation provides information on the plausible physical motion of its associated pixels, and accurate pixel-level temporal correspondences enhance the accuracy of semantic segmentation in the temporal domain. We demonstrate the benefits of our approach on the KITTI benchmark, where we observe performance gains for both flow and segmentation. We achieve state-of-the-art optical flow results, and outperform all published algorithms by a large margin on challenging, but crucial, dynamic objects. Comment: 14 pages, accepted for the CVRSUAD workshop at ECCV 201
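
    As a rough illustration of how pixel-level temporal correspondences can support temporally consistent segmentation, one can warp the previous frame's class probabilities into the current frame using the estimated flow and blend them with the current prediction. This is a schematic sketch of that general idea, not the paper's joint formulation; warp_with_flow, fuse, and the blending weight alpha are assumptions made for the example:

        import numpy as np

        def warp_with_flow(prob_prev, flow):
            # Warp per-pixel class probabilities (H, W, C) from frame t-1 into
            # frame t using a backward flow field: flow[y, x] points from a pixel
            # in frame t to its correspondence in frame t-1 (in pixels).
            # Nearest-neighbor sampling keeps the sketch dependency-free.
            h, w, _ = prob_prev.shape
            ys, xs = np.mgrid[0:h, 0:w]
            src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
            src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
            return prob_prev[src_y, src_x]

        def fuse(prob_curr, prob_warped, alpha=0.5):
            # Blend the current prediction with the flow-warped previous one and
            # renormalize, encouraging temporally consistent labels.
            fused = alpha * prob_curr + (1.0 - alpha) * prob_warped
            return fused / fused.sum(axis=-1, keepdims=True)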

    Solar stereoscopy - where are we and what developments do we require to progress?

    Observations from the two STEREO spacecraft give us, for the first time, the possibility of using stereoscopic methods to reconstruct the 3D solar corona. Classical stereoscopy works best for solid objects with clear edges. Consequently, applying classical stereoscopic methods to the faint structures visible in the optically thin coronal plasma is by no means straightforward, and several problems have to be treated adequately: (1) First, there is the problem of identifying one-dimensional structures, e.g., active-region coronal loops or polar plumes, in the two individual EUV images observed with STEREO/EUVI. (2) Next comes the association problem of finding corresponding structures in both images. (3) In the reconstruction problem, stereoscopic methods are used to compute the 3D geometry of the identified structures; without any prior assumptions, e.g., regarding the footpoints of coronal loops, the reconstruction problem does not have a unique solution. (4) One has to estimate the reconstruction error, or accuracy, of the reconstructed 3D structure, which depends on the accuracy of the identified 2D structures and the separation angle between the spacecraft, but also on location: for east-west directed coronal loops, for example, the reconstruction error is highest close to the loop top. (5) Finally, we are interested not only in the 3D geometry of loops or plumes, but also in physical parameters such as density, temperature, plasma flow, magnetic field strength, etc. Coronal magnetic field models extrapolated from photospheric measurements are helpful for treating some of these problems, because observed EUV loops outline the magnetic field. This feature has been used for a new method dubbed 'magnetic stereoscopy'. As examples, we show recent applications to active-region loops. Comment: 12 pages, 9 figures, a review article
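
    The geometric core of step (3) is ordinary two-view triangulation: once the same loop point has been identified and associated in both EUVI images, its 3D position can be recovered from the two viewing rays. The sketch below is a generic linear triangulation from known spacecraft positions and ray directions, not the 'magnetic stereoscopy' method itself; triangulate_rays and its inputs are illustrative:

        import numpy as np

        def triangulate_rays(c1, d1, c2, d2):
            # Triangulate a 3D point from two viewing rays, one per spacecraft:
            # ray i passes through position ci (3-vector) with direction di.
            # Returns the midpoint of the segment of closest approach; as noted
            # above, the error grows where the rays become nearly parallel.
            d1 = d1 / np.linalg.norm(d1)
            d2 = d2 / np.linalg.norm(d2)
            # Solve for t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|
            A = np.column_stack([d1, -d2])
            t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
            p1 = c1 + t[0] * d1
            p2 = c2 + t[1] * d2
            return 0.5 * (p1 + p2)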

    A Variational Stereo Method for the Three-Dimensional Reconstruction of Ocean Waves

    We develop a novel remote sensing technique for the observation of waves on the ocean surface. Our method infers the 3-D waveform and radiance of oceanic sea states via a variational stereo imagery formulation. In this setting, the shape and radiance of the wave surface are given by minimizers of a composite energy functional that combines a photometric matching term with regularization terms involving the smoothness of the unknowns. The desired ocean surface shape and radiance are the solution of a system of coupled partial differential equations derived from the optimality conditions of the energy functional. The proposed method is naturally extended to study the spatiotemporal dynamics of ocean waves and is applied to three sets of stereo video data. Statistical and spectral analyses are carried out. Our results provide evidence that the observed omnidirectional wavenumber spectrum S(k) decays as k^(-2.5), in agreement with Zakharov's theory (1999). Furthermore, the 3-D spectrum of the reconstructed wave surface is exploited to estimate wave dispersion and currents.
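
    Schematically, the kind of composite energy functional described above can be written as follows, where Z is the surface elevation, R the surface radiance, I_i the observed camera images, \pi_i the projection of a surface point into camera i, and \alpha, \beta the smoothness weights; this is only an illustrative form, and the paper's exact data term and regularizers may differ:

        E(Z, R) = \int_\Omega \sum_i \big( I_i(\pi_i(x; Z)) - R(x) \big)^2 \, dx
                  + \alpha \int_\Omega |\nabla Z(x)|^2 \, dx
                  + \beta  \int_\Omega |\nabla R(x)|^2 \, dx

    The coupled partial differential equations mentioned in the abstract are then the Euler-Lagrange optimality conditions of such a functional with respect to Z and R.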

    Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

    A variety of techniques such as light field, structured illumination, and time-of-flight (TOF) imaging are commonly used for depth acquisition in consumer imaging, robotics, and many other applications. Unfortunately, each technique suffers from its own limitations, preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages, such as synthetic aperture refocusing, with TOF imaging advantages, such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single-frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function. Comment: 9 pages, 8 figures, accepted to 3DV 201
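
    The synthetic aperture refocusing advantage inherited from light fields is, in its simplest form, a shift-and-add over sub-aperture views; in a depth field the same operation can be applied to TOF phase or depth maps as well as to intensity images. The sketch below is that standard shift-and-add operation under this reading, not the paper's on-chip pipeline; refocus, views, and slope are illustrative names:

        import numpy as np

        def refocus(views, slope):
            # Shift-and-add synthetic-aperture refocusing over a (nu, nv) grid of
            # sub-aperture views. views[u, v] is an (H, W) image (intensity, or a
            # TOF phase/depth map in a depth field); slope sets the refocus depth
            # as the per-view shift in pixels per unit of angular offset.
            nu, nv, h, w = views.shape
            cu, cv = (nu - 1) / 2.0, (nv - 1) / 2.0
            acc = np.zeros((h, w))
            for u in range(nu):
                for v in range(nv):
                    dy = int(round(slope * (u - cu)))
                    dx = int(round(slope * (v - cv)))
                    acc += np.roll(views[u, v], shift=(dy, dx), axis=(0, 1))
            return acc / (nu * nv)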

    Depth Estimation Through a Generative Model of Light Field Synthesis

    Light field photography captures rich structural information that may facilitate a number of traditional image processing and computer vision tasks. A crucial ingredient in such endeavors is accurate depth recovery. We present a novel framework that allows the recovery of a high-quality continuous depth map from light field data. To this end we propose a generative model of a light field that is fully parametrized by its corresponding depth map. The model allows for the integration of powerful regularization techniques such as a non-local means prior, facilitating accurate depth map estimation. Comment: German Conference on Pattern Recognition (GCPR) 201
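
    The idea of a light field fully parametrized by a depth map can be illustrated with a toy forward model: given a central view and a per-pixel disparity map (disparity being proportional to inverse depth), each sub-aperture view is approximately a depth-dependent warp of the central view, and such a rendered view can be compared against observed views to estimate depth. The sketch below is a crude nearest-neighbor splat under a Lambertian assumption, ignoring occlusions and the regularization used in the paper; render_view, du, and dv are illustrative:

        import numpy as np

        def render_view(center, disparity, du, dv):
            # Forward-render one sub-aperture view at angular offset (du, dv)
            # from the central (grayscale) view and a per-pixel disparity map.
            # Each pixel is splatted to (x + du * d, y + dv * d) by rounding to
            # the nearest pixel; overlapping splats simply overwrite each other.
            h, w = disparity.shape
            out = np.zeros_like(center)
            ys, xs = np.mgrid[0:h, 0:w]
            tx = np.clip(np.round(xs + du * disparity).astype(int), 0, w - 1)
            ty = np.clip(np.round(ys + dv * disparity).astype(int), 0, h - 1)
            out[ty, tx] = center
            return out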