Fast Disparity Estimation from a Single Compressed Light Field Measurement
The abundant spatial and angular information in light fields has enabled
multiple disparity estimation approaches. However, acquiring light fields
incurs high storage and processing costs, limiting the use of this technology
in practical applications. To overcome these drawbacks, compressive sensing
(CS) theory has enabled optical architectures that acquire a single coded
light field measurement. This measurement is decoded using an optimization
algorithm or a deep neural network, both of which are computationally
expensive. The traditional approach to disparity estimation from compressed
light fields first recovers the entire light field and then applies a
post-processing step, which takes a long time. In contrast, this work proposes
fast disparity estimation from a single compressed measurement, omitting the
recovery step required by traditional approaches. Specifically, we propose to
jointly optimize an optical architecture for acquiring a single coded light
field snapshot and a convolutional neural network (CNN) for estimating the
disparity maps. Experimentally, the proposed method estimates disparity maps
comparable to those obtained from light fields reconstructed using deep
learning approaches. Furthermore, the proposed method is 20 times faster in
training and inference than the best method that estimates disparity from
reconstructed light fields.
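The acquisition model behind this approach can be sketched numerically: a coded snapshot is the sensor-side sum of the angular views, each modulated by a coding mask. The NumPy sketch below uses illustrative dimensions and random binary masks in place of the jointly optimized optical code; the CNN that maps the snapshot directly to a disparity map is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy light field: A angular views of an H x W scene (illustrative sizes).
A, H, W = 9, 16, 16
light_field = rng.random((A, H, W))

# One binary coding mask per angular view; random masks stand in here for
# the jointly optimized optical code.
masks = rng.integers(0, 2, size=(A, H, W)).astype(float)

# Single coded snapshot: the sensor integrates all masked views at once.
snapshot = (masks * light_field).sum(axis=0)  # shape (H, W)
```

A decoder network would consume `snapshot` directly, which is what removes the light-field recovery step from the pipeline.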
Depth Super-Resolution with Hybrid Camera System
An important field of research in computer vision is the 3D analysis and reconstruction of objects and scenes. Among current techniques for 3D acquisition, stereo vision systems are the most common. More recently, Time-of-Flight (ToF) range cameras have been introduced. The focus of this thesis is to combine the information from the ToF camera with one or two standard cameras in order to obtain a high-resolution depth image. (Embargo for reasons of confidentiality and/or ownership of results and information belonging to external entities or private companies that participated in the research work related to the thesis.)
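As a rough illustration of the hybrid-camera idea, joint bilateral upsampling is one common way to lift a low-resolution ToF depth map to the resolution of a standard camera image, using the intensity image as an edge-aware guide. This particular filter is an assumption for illustration, not necessarily the thesis's own method, and the brute-force loop is written for clarity rather than speed.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-res depth map using a high-res guidance image."""
    Hh, Wh = guide_hr.shape
    out = np.zeros((Hh, Wh))
    r = 2  # half window, in low-res pixels
    for y in range(Hh):
        for x in range(Wh):
            yl, xl = y / scale, x / scale  # position in low-res coordinates
            num = den = 0.0
            for j in range(int(yl) - r, int(yl) + r + 1):
                for i in range(int(xl) - r, int(xl) + r + 1):
                    if 0 <= j < depth_lr.shape[0] and 0 <= i < depth_lr.shape[1]:
                        # spatial weight, measured in low-res coordinates
                        ws = np.exp(-((j - yl) ** 2 + (i - xl) ** 2)
                                    / (2 * sigma_s ** 2))
                        # range weight from the high-res guide (edge-aware)
                        gj = min(int(j * scale), Hh - 1)
                        gi = min(int(i * scale), Wh - 1)
                        wr = np.exp(-(guide_hr[y, x] - guide_hr[gj, gi]) ** 2
                                    / (2 * sigma_r ** 2))
                        num += ws * wr * depth_lr[j, i]
                        den += ws * wr
            out[y, x] = num / den if den > 0 else 0.0
    return out

rng = np.random.default_rng(0)
depth_lr = np.ones((4, 4))      # toy 4x4 ToF depth map
guide_hr = rng.random((8, 8))   # toy 8x8 intensity image
depth_hr = joint_bilateral_upsample(depth_lr, guide_hr, scale=2)
```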
Computational Imaging Systems for High-speed, Adaptive Sensing Applications
Driven by advances in signal processing and the ubiquitous availability of high-speed, low-cost computing resources over the past decade, computational imaging has seen growing interest. Improvements in spatial, temporal, and spectral resolution have been made with novel designs of imaging systems and optimization methods. However, computational imaging has two limitations. 1) It requires full knowledge and representation of the imaging system, called the forward model, to reconstruct the object of interest; this limits applications in systems with a parameterized, unknown forward model, such as range imaging systems. 2) The regularization in the optimization process incorporates strong assumptions that may not accurately reflect the a priori distribution of the object. To overcome these limitations, we propose 1) novel optimization frameworks for applying computational imaging to active and passive range imaging systems, achieving a 5-10 fold improvement in temporal resolution across various range imaging systems; and 2) a data-driven method for estimating the distribution of high-dimensional objects and a framework of adaptive sensing for maximum information gain. The adaptive strategy with our proposed method consistently outperforms a Gaussian-process-based method. This work could benefit high-speed 3D imaging applications such as autonomous driving and adaptive sensing applications such as low-dose adaptive computed tomography (CT).
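The first limitation can be made concrete with a toy example: when the forward model A is fully known, reconstruction reduces to regularized inversion of a linear system. The sketch below uses Tikhonov regularization and plain gradient descent on a random system; all dimensions, the step size, and the regularization weight are illustrative assumptions, not the dissertation's actual frameworks.

```python
import numpy as np

rng = np.random.default_rng(1)

# A known forward model A and a ground-truth object x (illustrative sizes).
m, n = 40, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = A @ x_true  # noiseless measurements, for simplicity

# Tikhonov-regularized reconstruction via gradient descent:
#   minimize ||A x - y||^2 + lam * ||x||^2
lam, step = 1e-3, 0.1
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y) + lam * x
    x -= step * grad

residual = np.linalg.norm(A @ x - y)  # small once converged
```

When the forward model itself carries unknown parameters, this clean formulation breaks down, which is the gap the proposed frameworks address.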
Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light
One solution for depth imaging of a moving scene is to project a static
pattern on the object and use just a single image for reconstruction. However,
if the motion of the object is too fast with respect to the exposure time of
the image sensor, patterns on the captured image are blurred and reconstruction
fails. In this paper, we impose multiple projection patterns into each single
captured image to realize temporal super-resolution of the depth image
sequences. With our method, multiple patterns are projected onto the object
at a higher fps than the camera can capture. In this case, the observed pattern
varies depending on the depth and motion of the object, so we can extract
temporal information of the scene from each single image. The decoding process
is realized using a learning-based approach where no geometric calibration is
needed. Experiments confirm the effectiveness of our method where sequential
shapes are reconstructed from a single image. Both quantitative evaluations and
comparisons with recent techniques were also conducted.
Comment: 9 pages, published at the International Conference on Computer Vision
(ICCV 2017).
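The intra-frame encoding can be sketched as a simple integration model: within one camera exposure, K high-fps projector patterns each modulate the scene state at a different instant, and the sensor sums the results, so a single image carries K time slices. The pattern count, image size, and random "scene states" below are illustrative assumptions; the paper's learning-based decoder is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)

# K high-fps projector patterns shown within one camera exposure
# (K, H, W are illustrative).
K, H, W = 4, 8, 8
patterns = rng.random((K, H, W))

# Scene reflectance at each projection instant; motion makes these differ.
scene = rng.random((K, H, W))

# The camera integrates over the exposure: the captured image is the sum
# of pattern-modulated scene states, so one image encodes K time slices.
captured = (patterns * scene).sum(axis=0)
```

Because the observed mixture depends on the object's depth and motion, a learned decoder can recover the K underlying shapes from this single image.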