39 research outputs found

    Joint Blind Motion Deblurring and Depth Estimation of Light Field

    Removing camera motion blur from a single light field is a challenging task, since it is a highly ill-posed inverse problem. The problem becomes even worse when the blur kernel varies spatially due to scene depth variation and high-order camera motion. In this paper, we propose a novel algorithm to estimate all blur model variables jointly, including the latent sub-aperture image, the camera motion, and the scene depth, from the blurred 4D light field. Exploiting the multi-view nature of a light field relieves the ill-posedness of the optimization by providing strong depth cues and multi-view blur observations. The proposed joint estimation achieves high-quality light field deblurring and depth estimation simultaneously under arbitrary 6-DOF camera motion and unconstrained scene depth. Extensive experiments on real and synthetic blurred light fields confirm that the proposed algorithm outperforms state-of-the-art light field deblurring and depth estimation methods.
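
    The core of such a formulation is a re-rendering objective: the latent image, depth map, and camera trajectory are scored together by how well they reproduce the observed blur. Below is a minimal toy sketch of that data term in Python, assuming a purely horizontal 1-D camera translation and a per-pixel depth map; the forward model and all names (synthesize_blur, data_term, focal) are illustrative assumptions, not the paper's actual formulation.

        import numpy as np

        def synthesize_blur(latent, depth, shifts, focal=100.0):
            """Toy forward model: average the latent image resampled under
            each camera position sampled over the exposure; the per-pixel
            displacement (parallax) is inversely proportional to depth."""
            h, w = latent.shape
            xs = np.arange(w)
            blurred = np.zeros((h, w))
            for t in shifts:
                disp = focal * t / depth                # (h, w) displacement
                src = np.clip(np.rint(xs[None, :] - disp).astype(int), 0, w - 1)
                blurred += latent[np.arange(h)[:, None], src]
            return blurred / len(shifts)

        def data_term(latent, depth, shifts, observed):
            """Squared residual between the observed blurred view and its
            re-rendering; a joint solver minimizes this over all unknowns."""
            return np.sum((synthesize_blur(latent, depth, shifts) - observed) ** 2)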

    VommaNet: an End-to-End Network for Disparity Estimation from Reflective and Texture-less Light Field Images

    The precise combination of an image sensor and a micro-lens array enables lenslet light field cameras to record both the angular and the spatial information of incoming light, so disparity and depth can be calculated from light field images. In turn, 3D models of the recorded objects can be recovered, which is a great advantage over other imaging systems. However, reflective and texture-less areas in light field images present complicated conditions that make it hard for existing algorithms to calculate disparity correctly. To tackle this problem, we introduce VommaNet, a novel end-to-end network that retrieves multi-scale features from reflective and texture-less regions for accurate disparity estimation. Meanwhile, our network achieves similar or better performance than state-of-the-art algorithms in other regions, on both synthetic light field images and real-world data. Currently, we achieve the best mean squared error (MSE) score on the HCI 4D Light Field Benchmark.
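
    Multi-scale features are commonly gathered by running convolutions with several receptive-field sizes in parallel, so that texture-less or reflective pixels can borrow context from farther away. The PyTorch block below is an illustrative stand-in for such a module, built from parallel dilated convolutions; it is an assumption for exposition, not VommaNet's published architecture.

        import torch
        import torch.nn as nn

        class MultiScaleBlock(nn.Module):
            """Parallel 3x3 convolutions with increasing dilation, fused by
            a 1x1 convolution, to mix small- and large-context features."""
            def __init__(self, in_ch, out_ch, dilations=(1, 2, 4, 8)):
                super().__init__()
                self.branches = nn.ModuleList(
                    nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d)
                    for d in dilations)
                self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

            def forward(self, x):
                feats = torch.cat([b(x) for b in self.branches], dim=1)
                return torch.relu(self.fuse(feats))

        # e.g. MultiScaleBlock(3, 32)(torch.randn(1, 3, 64, 64)) -> (1, 32, 64, 64)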

    Light Field Reconstruction using a Generic Imaging Model


    Geometric calibration of focused light field camera for 3-D flame temperature measurement

    A focused light field camera can be used to measure the three-dimensional (3-D) temperature field of a flame because of its ability to record the intensity and direction of each ray from the flame simultaneously. This work aims to develop a suitable geometric calibration method for focused light field cameras in 3-D flame temperature measurement. A modified method based on Zhang's camera calibration is developed to calibrate the camera and the measurement system. A single focused light field camera is used to capture images of a bespoke calibration board. Geometric parameters of the focused light field camera, both intrinsic (the camera parameters) and extrinsic (relating the camera to the calibration board), are calibrated so that the ray projecting onto each pixel of the CCD (charge-coupled device) sensor can be traced. Instead of line features, corner point features are used directly for the calibration. Characteristics specific to the focused light field camera, namely that one 3-D point corresponds to several image points and that the main lens and microlens f-numbers are matched, are exploited in the calibration. Results with a focused light field camera are presented and discussed. A preliminary 3-D temperature distribution of a flame is also investigated and presented.
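
    For reference, a conventional Zhang-style calibration from corner point features looks like the OpenCV sketch below; the board geometry, square size, and file names are placeholder assumptions, and the paper's light-field-specific modifications (several image points per 3-D point, matched main lens and microlens f-numbers) are not shown.

        import glob
        import cv2
        import numpy as np

        # Zhang-style calibration from corner point features.
        pattern = (9, 6)                # inner corners per row/column (assumed)
        square = 0.025                  # square size in metres (assumed)
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

        obj_pts, img_pts = [], []
        for path in sorted(glob.glob("board_*.png")):   # calibration images
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)

        # Intrinsics K, distortion, and per-view extrinsics (rvecs, tvecs);
        # the extrinsics let each pixel's ray be traced back into the scene.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, gray.shape[::-1], None, None)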