
    Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

    A variety of techniques, such as light field, structured illumination, and time-of-flight (TOF) imaging, are commonly used for depth acquisition in consumer imaging, robotics, and many other applications. Unfortunately, each technique suffers from its own limitations, preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages, such as synthetic aperture refocusing, with TOF imaging advantages, such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single-frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function. (Comment: 9 pages, 8 figures, accepted to 3DV 201)
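
    To make the combination concrete, the sketch below pairs single-frequency ToF phase-to-depth conversion with a light-field-style synthetic aperture average over sub-aperture views. It is a minimal illustration of the idea, not the authors' implementation; the array layout, the shift-based stand-in for refocusing, and all names are assumptions.

        import numpy as np

        C = 3e8  # speed of light, m/s

        def tof_phase_to_depth(phase, mod_freq_hz):
            # Continuous-wave ToF: depth = c * phase / (4 * pi * f_mod).
            # Depth wraps beyond c / (2 * f_mod); the single-frequency phase
            # unwrapping mentioned above targets exactly this ambiguity.
            return C * phase / (4.0 * np.pi * mod_freq_hz)

        def synthetic_aperture_depth(phase_views, mod_freq_hz, shifts):
            # phase_views: (N, H, W) ToF phase images from N viewpoints.
            # shifts: N (dy, dx) integer shifts aligning each view to a chosen
            # focal plane; a crude stand-in for synthetic aperture refocusing.
            depths = [np.roll(tof_phase_to_depth(p, mod_freq_hz), s, axis=(0, 1))
                      for p, s in zip(phase_views, shifts)]
            # Averaging the aligned views blurs out partial occluders, which is
            # what enables depth mapping "through" them.
            return np.mean(depths, axis=0)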

    Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs

    The human visual system relies on both binocular stereo cues and monocular focusness cues to gain effective 3D perception. In computer vision, the two problems are traditionally solved in separate tracks. In this paper, we present a unified learning-based technique that simultaneously uses both types of cues for depth inference. Specifically, we use a pair of focal stacks as input to emulate human perception. We first construct a comprehensive focal stack training dataset synthesized by depth-guided light field rendering. We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching. We show how to integrate them into a unified BDfF-Net to obtain high-quality depth maps. Comprehensive experiments show that our approach outperforms the state of the art in both accuracy and speed and effectively emulates the human visual system.
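
    For contrast with the learned Focus-Net described above, the sketch below shows the classic hand-crafted depth-from-focus baseline: score each slice of the focal stack with a focus measure and take, per pixel, the depth of the sharpest slice. This is a generic baseline under assumed array shapes, not BDfF-Net itself.

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def depth_from_focal_stack(stack, slice_depths, window=9):
            # stack: (S, H, W) grayscale focal stack; slice_depths: (S,) focus
            # distances of the slices. Focus measure: locally averaged squared
            # Laplacian response (high where a slice is in focus).
            focus = np.stack([uniform_filter(laplace(s.astype(float)) ** 2, window)
                              for s in stack])
            best = np.argmax(focus, axis=0)        # sharpest slice per pixel
            return np.asarray(slice_depths)[best]  # (H, W) coarse depth map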

    Understanding camera trade-offs through a Bayesian analysis of light field projections

    Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the trade-offs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a new prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the trade-offs of each camera type and analyze their limitations.
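
    The core model is easy to state in code: each sensor reading is an inner product (one row of a projection matrix) applied to the light field, and reconstruction is MAP inference under a Gaussian prior. The toy below uses random placeholders for the projection and an identity prior covariance; the paper uses camera-specific projections and a prior fit to light field signals.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 64, 32                # light field dimension, sensor element count
        A = rng.normal(size=(m, n))  # rows = per-sensor projection vectors
        Sigma = np.eye(n)            # stand-in prior covariance on light fields
        sigma2 = 0.01                # sensor noise variance

        x_true = rng.multivariate_normal(np.zeros(n), Sigma)
        y = A @ x_true + rng.normal(scale=np.sqrt(sigma2), size=m)

        # MAP estimate for the linear-Gaussian model:
        #   x_hat = (A^T A / sigma^2 + Sigma^{-1})^{-1} (A^T y / sigma^2)
        x_hat = np.linalg.solve(A.T @ A / sigma2 + np.linalg.inv(Sigma),
                                A.T @ y / sigma2)
        print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))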

    Understanding camera trade-offs through a Bayesian analysis of light field projections - A revision

    Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the trade-offs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a new prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the trade-offs of each camera type and analyze their limitations.

    Capturing and Reconstructing the Appearance of Complex 3D Scenes

    In this thesis, we present our research on new acquisition methods for the reflectance properties of real-world objects. Specifically, we first show a method for acquiring spatially varying densities in volumes of translucent, gaseous material with just a single image. This makes the method applicable to constantly changing phenomena like smoke without the use of high-speed camera equipment. Furthermore, we investigated how two well-known techniques, synthetic aperture confocal imaging and algorithmic descattering, can be combined to help see through a translucent medium like fog or murky water. We show that the depth at which we can still see an object embedded in the scattering medium is increased. In a related publication, we show how polarization and descattering based on phase shifting can be combined for efficient 3D scanning of translucent objects. Normally, subsurface scattering hinders range estimation by offsetting the peak intensity beneath the surface, away from the point of incidence. With our method, subsurface scattering is reduced to a minimum, and therefore reliable 3D scanning is made possible. Finally, we present a system which recovers the surface geometry, the reflectance properties of opaque objects, and the prevailing lighting conditions at the time of image capture from just a small number of input photographs. While previous approaches to recover reflectance properties exist, our system is the first to work on images taken under almost arbitrary, changing lighting conditions. This enables us to use images we took from a community photo collection website.

    Light field image processing: an overview

    Light field imaging has emerged as a technology that captures richer visual information about our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, material classification, etc. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
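
    Post-capture refocusing, one of the capabilities listed above, reduces to a shift-and-sum over the angular dimensions of the 4D light field L(u, v, s, t). The sketch below assumes a (U, V, H, W) array layout and integer pixel shifts; practical implementations interpolate sub-pixel shifts.

        import numpy as np

        def refocus(lf, alpha):
            # lf: (U, V, H, W) light field; alpha: refocus slope selecting the
            # virtual focal plane. Each angular sample is shifted in proportion
            # to its offset from the central view, then all views are averaged.
            U, V, H, W = lf.shape
            cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
            out = np.zeros((H, W))
            for u in range(U):
                for v in range(V):
                    du = int(round(alpha * (u - cu)))
                    dv = int(round(alpha * (v - cv)))
                    out += np.roll(lf[u, v], (du, dv), axis=(0, 1))
            # Scene points on the chosen focal plane align across views and
            # stay sharp; everything else is blurred by the synthetic aperture.
            return out / (U * V)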

    Efficient and Accurate Disparity Estimation from MLA-Based Plenoptic Cameras

    This manuscript focuses on the processing of images from microlens-array-based plenoptic cameras. These cameras enable capturing the light field in a single shot, recording a greater amount of information than conventional cameras and opening up a whole new set of applications. However, the enhanced information introduces additional challenges and results in higher computational effort. For one, the image is composed of thousands of microlens images, making it an unusual case for standard image processing algorithms. Secondly, the disparity information has to be estimated from those micro-images to create a conventional image and a three-dimensional representation. Therefore, the work in this thesis is devoted to analysing and proposing methodologies to deal with plenoptic images. A full framework for plenoptic cameras has been built around the contributions described in this thesis: a blur-aware calibration method to model a plenoptic camera, an optimization method to accurately select the best microlens combination, and an overview of the different types of plenoptic cameras and their representations. Datasets consisting of both real and synthetic images have been used to create a benchmark for different disparity estimation algorithms and to inspect the behaviour of disparity under different compression rates. Finally, a robust depth estimation approach has been developed for light field microscopy and images of biological samples.
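
    As a point of reference for the disparity-estimation step discussed above, the sketch below runs sum-of-absolute-differences (SAD) block matching between two rectified neighbouring views extracted from the microlens data. Real plenoptic pipelines match across many micro-images jointly; the window size, search range, and view extraction are assumptions here.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def sad_disparity(left, right, max_disp=8, window=7):
            # left, right: (H, W) rectified neighbouring views.
            H, W = left.shape
            costs = np.full((max_disp + 1, H, W), np.inf)
            for d in range(max_disp + 1):
                # Shift the right view by d pixels and compare overlapping columns.
                diff = np.abs(left[:, d:].astype(float) - right[:, :W - d])
                costs[d, :, d:] = uniform_filter(diff, window)  # local SAD cost
            return np.argmin(costs, axis=0)  # winner-take-all disparity per pixel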

    Quantifying the Influence of Surface Texture and Shape on Structure from Motion 3D Reconstructions

    In general, optical methods for geometrical measurements are influenced by the surface properties of the examined object. In Structure from Motion (SfM), local variations in surface color or topography are necessary for detecting feature points for point-cloud triangulation. Thus, the level of contrast or texture is important for an accurate reconstruction. However, quantitative studies of the influence of surface texture on geometrical reconstruction are largely missing. This study addresses that gap by investigating the influence of object texture levels on reconstruction accuracy using a set of reference artifacts. The artifacts are designed with well-defined surface geometries, and quantitative metrics are introduced to evaluate the lateral resolution, vertical geometric variation, and spatial–frequency information of the reconstructions. The influence of texture level is compared to variations in capturing range. For the SfM measurements, the ContextCapture software solution and a 50 Mpx DSLR camera are used. The findings are compared to results using calibrated optical microscopes. The results show that the proposed pipeline can be used for investigating the influence of texture on SfM reconstructions. The introduced metrics allow for a quantitative comparison of the reconstructions at varying texture levels and ranges. Both range and texture level are seen to affect the reconstructed geometries, although in different ways. While an increase in range at a fixed focal length reduces the spatial resolution, an insufficient texture level causes an increased noise level and may introduce errors in the reconstruction. The artifacts are designed to be easily replicable, and by providing a step-by-step procedure of our testing and comparison methodology, we hope that other researchers will make use of the proposed testing pipeline.
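
    A texture-level score of the kind this study varies can be proxied by local image contrast, for example the local variance of the Laplacian response, which is high exactly where SfM feature detectors find support. The function below is a generic contrast proxy under assumed parameters, not the paper's metric.

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def texture_level(img, window=15):
            # img: (H, W) grayscale image. Returns the mean local variance of
            # the Laplacian response; higher values mean more texture for
            # feature detection and thus, typically, more reliable SfM.
            resp = laplace(img.astype(float))
            local_mean = uniform_filter(resp, window)
            local_var = uniform_filter(resp ** 2, window) - local_mean ** 2
            return float(local_var.mean())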

    Modeling and Simulation in Engineering

    This book provides an open platform to establish and share knowledge developed by scholars, scientists, and engineers from all over the world about various applications of modeling and simulation in the product design process across engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are united by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to enable real-time simulation) without altering the precision of the results.