
    Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs

    The human visual system relies on both binocular stereo cues and monocular focusness cues to gain effective 3D perception. In computer vision, the two problems are traditionally solved in separate tracks. In this paper, we present a unified learning-based technique that simultaneously uses both types of cues for depth inference. Specifically, we use a pair of focal stacks as input to emulate human perception. We first construct a comprehensive focal stack training dataset synthesized by depth-guided light field rendering. We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching. We show how to integrate them into a unified BDfF-Net to obtain high-quality depth maps. Comprehensive experiments show that our approach outperforms the state of the art in both accuracy and speed and effectively emulates the human visual system.
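
    The abstract names the three sub-networks but not their internals, so the following is only a minimal sketch of how Focus-Net, EDoF-Net, and Stereo-Net might be wired into a unified BDfF-Net. All layer shapes, channel counts, and the final fusion layer are assumptions for illustration, not the paper's published design.

```python
# Hypothetical wiring of the BDfF-Net composition described above.
# Architectures and the fusion step are assumptions, not the paper's.
import torch
import torch.nn as nn

class FocusNet(nn.Module):
    """Estimates a depth map from a single focal stack of N RGB slices."""
    def __init__(self, n_slices: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_slices * 3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),  # 1-channel depth map
        )
    def forward(self, stack):                # stack: (B, N*3, H, W)
        return self.net(stack)

class EDoFNet(nn.Module):
    """Fuses a focal stack into a single all-in-focus (EDoF) image."""
    def __init__(self, n_slices: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_slices * 3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),  # RGB EDoF image
        )
    def forward(self, stack):
        return self.net(stack)

class StereoNet(nn.Module):
    """Matches left/right EDoF images to produce a disparity map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )
    def forward(self, left, right):
        return self.net(torch.cat([left, right], dim=1))

class BDfFNet(nn.Module):
    """Fuses per-eye defocus depth with stereo disparity into one map."""
    def __init__(self):
        super().__init__()
        self.focus, self.edof, self.stereo = FocusNet(), EDoFNet(), StereoNet()
        self.fuse = nn.Conv2d(3, 1, 3, padding=1)  # assumed fusion layer
    def forward(self, left_stack, right_stack):
        d_left = self.focus(left_stack)
        d_right = self.focus(right_stack)
        disparity = self.stereo(self.edof(left_stack), self.edof(right_stack))
        return self.fuse(torch.cat([d_left, d_right, disparity], dim=1))
```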

    Three dimensional moving pictures with a single imager and microfluidic lens

    Three-dimensional movie acquisition and the corresponding depth data are commonly generated from multiple cameras and multiple views. This technology's high cost and large size are limitations for medical devices, military surveillance, and current consumer products such as small camcorders and cell-phone movie cameras. This research shows that a single imager, equipped with a fast-focus microfluidic lens, produces a highly accurate depth map. On test material, the depth estimate achieves an average Root Mean Squared Error (RMSE) of 3.543 gray-level steps (1.38%) compared to ranging data. The depth is inferred using a new Extended Depth from Defocus (EDfD) method, and defocus is achieved at movie speeds with a microfluidic lens. Camera non-uniformities from both the lens and the sensor pipeline are analysed. Some lens effects can be compensated for, but noise has a detrimental effect. In addition, early indications show that real-time HDTV 3D movie frame rates are feasible.
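
    To make the quoted accuracy figure concrete, here is a small sketch of the evaluation metric, assuming the RMSE is computed between 8-bit gray-level depth maps and co-registered ranging data; the function and variable names are illustrative, not from the paper.

```python
# Sketch of the reported metric: RMSE between an EDfD depth map and
# ground-truth ranging data, both quantized to 8-bit gray levels.
import numpy as np

def depth_rmse(predicted: np.ndarray, ranging_truth: np.ndarray):
    """Return RMSE in gray-level steps and as a percentage of the
    8-bit range (256 levels), matching the paper's reporting style."""
    err = predicted.astype(np.float64) - ranging_truth.astype(np.float64)
    rmse = np.sqrt(np.mean(err ** 2))
    return rmse, 100.0 * rmse / 256.0

# Example: an RMSE of 3.543 gray levels is 3.543 / 256, or about 1.38%.
```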

    Depth Estimation and Image Restoration by Deep Learning from Defocused Images

    Monocular depth estimation and image deblurring are two fundamental tasks in computer vision, given their crucial role in understanding 3D scenes. Performing either of them from a single image is an ill-posed problem. Recent advances in Deep Convolutional Neural Networks (DNNs) have revolutionized many tasks in computer vision, including depth estimation and image deblurring. When working with defocused images, depth estimation and recovery of the All-in-Focus (AiF) image become related problems due to defocus physics. Despite this, most existing models treat them separately. There are, however, recent models that solve these problems simultaneously by concatenating two networks in sequence, first estimating the depth or defocus map and then reconstructing the focused image from it. We propose a DNN that solves depth estimation and image deblurring in parallel. Our Two-headed Depth Estimation and Deblurring Network (2HDED:NET) extends a conventional Depth from Defocus (DFD) network with a deblurring branch that shares the same encoder as the depth branch. The proposed method has been successfully tested on two benchmarks, one for indoor and the other for outdoor scenes: NYU-v2 and Make3D. Extensive experiments with 2HDED:NET on these benchmarks demonstrate performance superior or close to that of state-of-the-art models for depth estimation and image deblurring.
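
    As a rough illustration of the parallel two-headed layout described above (a shared encoder feeding a depth branch and a deblurring branch), the following is a minimal PyTorch sketch; the layer sizes and encoder/decoder depths are assumptions, not 2HDED:NET's published architecture.

```python
# Hypothetical sketch of a shared-encoder, two-headed DFD network in
# the spirit of 2HDED:NET. All layer sizes are assumptions.
import torch
import torch.nn as nn

class TwoHeadedDFD(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder over the single defocused RGB input.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Depth head: decodes shared features to a 1-channel depth map.
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
        )
        # Deblurring head: decodes the same features to an AiF image.
        self.aif_head = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )

    def forward(self, defocused):            # defocused: (B, 3, H, W)
        feats = self.encoder(defocused)      # both heads run in parallel
        return self.depth_head(feats), self.aif_head(feats)
```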