    Light field image processing: an overview

    Light field imaging has emerged as a technology that captures richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings new challenges in data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We cover all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
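    The post-capture refocusing mentioned in this abstract is commonly illustrated with shift-and-sum rendering over the 4D light field L(u, v, s, t). The sketch below is not from the paper; it assumes the light field is already available as a NumPy array indexed by angular coordinates (u, v) and spatial coordinates (s, t), and the refocus slope alpha is a hypothetical parameter name.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(lf, alpha):
    """Synthesize a refocused image from a 4D light field.

    lf    -- array of shape (U, V, S, T): angular (u, v) x spatial (s, t)
    alpha -- refocus slope; 0 keeps the captured focal plane, other values
             move the synthetic focus plane forward or backward
    """
    U, V, S, T = lf.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((S, T), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # shift each sub-aperture view proportionally to its angular
            # offset from the center, then accumulate (shift-and-sum)
            du, dv = alpha * (u - uc), alpha * (v - vc)
            out += shift(lf[u, v], (du, dv), order=1, mode='nearest')
    return out / (U * V)
```

    Each sub-aperture view is translated in proportion to its angular offset before averaging, which is what moves the synthetic focal plane; alpha = 0 simply averages the views and reproduces the plane of focus at capture time.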

    A simulation framework for the design and evaluation of computational cameras

    In the emerging field of computational imaging, rapid prototyping of new camera concepts becomes increasingly difficult since the signal processing is intertwined with the physical design of a camera. As novel computational cameras capture information other than the traditional two-dimensional information, ground truth data, which can be used to thoroughly benchmark a new system design, is also hard to acquire. We propose to bridge this gap by using simulation. In this article, we present a raytracing framework tailored for the design and evaluation of computational imaging systems. We show that, depending on the application, the image formation on a sensor and phenomena like image noise have to be simulated accurately in order to achieve meaningful results, while other aspects, such as photorealistic scene modeling, can be omitted. Therefore, we focus on accurately simulating the mandatory components of computational cameras, namely apertures, lenses, spectral filters, and sensors. Besides the simulation of the imaging process, the framework is capable of generating various ground truth data, which can be used to evaluate and optimize the performance of a particular imaging system. Due to its modularity, it is easy to further extend the framework to the needs of other fields of application. We make the source code of our simulation framework publicly available and encourage other researchers to use it to design and evaluate their own cameras.
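    As a rough illustration of the kind of lens and aperture simulation such a raytracing framework performs, the following sketch (not taken from the article; all function and parameter names are placeholders) generates rays for a single sensor pixel under a thin-lens model, sampling the aperture so that defocus blur emerges naturally when the rays are traced into a scene.

```python
import numpy as np

def thin_lens_rays(pixel_xy, focal_length, aperture_radius, focus_dist, n_samples=16):
    """Generate rays for one sensor pixel under a thin-lens camera model.

    pixel_xy        -- (x, y) position of the pixel on the sensor plane [m]
    focal_length    -- lens focal length [m]
    aperture_radius -- radius of the lens aperture [m]; 0 gives a pinhole
    focus_dist      -- distance of the plane in perfect focus [m]
    Returns (origins, directions): rays from sampled points on the lens
    toward the scene point that this pixel images in focus.
    """
    # image distance from the thin-lens equation 1/f = 1/d_o + 1/d_i
    d_i = 1.0 / (1.0 / focal_length - 1.0 / focus_dist)
    # scene point imaged by this pixel (magnification m = -d_i / d_o)
    px, py = pixel_xy
    focus_pt = np.array([-px * focus_dist / d_i, -py * focus_dist / d_i, focus_dist])
    # uniform samples on the circular aperture (lens plane at z = 0)
    r = aperture_radius * np.sqrt(np.random.rand(n_samples))
    theta = 2.0 * np.pi * np.random.rand(n_samples)
    origins = np.stack([r * np.cos(theta), r * np.sin(theta), np.zeros(n_samples)], axis=1)
    directions = focus_pt - origins
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    return origins, directions
```

    Tracing these rays into a scene and averaging the returned radiance per pixel approximates image formation with a finite aperture; setting aperture_radius to zero degenerates to a pinhole camera.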

    Computational Imaging Systems for High-speed, Adaptive Sensing Applications

    Driven by advances in signal processing and the ubiquitous availability of high-speed, low-cost computing resources over the past decade, computational imaging has seen growing interest. Improvements in spatial, temporal, and spectral resolution have been made with novel imaging system designs and optimization methods. However, computational imaging has two limitations. First, it requires full knowledge and a representation of the imaging system, called the forward model, to reconstruct the object of interest. This limits its application in systems with a parameterized, unknown forward model, such as range imaging systems. Second, the regularization used in the optimization process incorporates strong assumptions that may not accurately reflect the a priori distribution of the object. To overcome these limitations, we propose (1) novel optimization frameworks for applying computational imaging to active and passive range imaging systems, achieving a 5-10 fold improvement in temporal resolution across various range imaging systems, and (2) a data-driven method for estimating the distribution of high-dimensional objects together with an adaptive sensing framework that maximizes information gain. The adaptive strategy built on our proposed method consistently outperforms a Gaussian process-based method. This work would potentially benefit high-speed 3D imaging applications such as autonomous driving, and adaptive sensing applications such as low-dose adaptive computed tomography (CT).
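    To make the notions of a forward model and regularization concrete, the minimal sketch below (not the authors' method; the matrix A, the measurements y, and the sparsity prior are illustrative assumptions) reconstructs an object x from linear measurements y = A x + noise with iterative shrinkage-thresholding, solving min_x 0.5*||A x - y||^2 + lam*||x||_1.

```python
import numpy as np

def ista(A, y, lam, step, n_iter=200):
    """Iterative shrinkage-thresholding for a regularized linear inverse problem.

    A      -- forward model of the imaging system (measurement matrix)
    y      -- observed measurements
    lam    -- regularization weight encoding the sparsity prior on x
    step   -- gradient step size; step <= 1 / ||A||_2^2 ensures convergence
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the data-fidelity term ||A x - y||^2
        x = x - step * A.T @ (A @ x - y)
        # proximal step: soft-thresholding enforces the l1 (sparsity) prior
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x
```

    The data-fidelity term encodes the forward model A, while the l1 penalty is the kind of hand-crafted prior that, as the abstract argues, may not reflect the true object distribution and can instead be replaced by a learned, data-driven one.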