    Deep Depth From Focus

    Depth from focus (DFF) is one of the classical ill-posed inverse problems in computer vision. Most approaches recover the depth at each pixel from the focal setting that exhibits maximal sharpness. Yet it is not obvious how to reliably estimate the sharpness level, particularly in low-textured areas. In this paper, we propose 'Deep Depth From Focus (DDFF)' as the first end-to-end learning approach to this problem. One of the main challenges we face is the data hunger of deep neural networks. To obtain a significant number of focal stacks with corresponding ground-truth depth, we propose to leverage a light-field camera with a co-calibrated RGB-D sensor, which allows us to digitally create focal stacks of varying sizes. Compared to existing benchmarks, our dataset is 25 times larger, enabling the use of machine learning for this inverse problem. We compare our results with state-of-the-art DFF methods and also analyze the effect of several key deep architectural components. These experiments show that our proposed method 'DDFFNet' achieves state-of-the-art performance in all scenes, reducing depth error by more than 75% compared to classical DFF methods. Comment: accepted to the Asian Conference on Computer Vision (ACCV) 2018.
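
    For context, a minimal sketch of the classical per-pixel baseline described above: score each focal slice with a local sharpness measure and take, per pixel, the focal setting with the maximal score. The function and variable names are illustrative, not the paper's implementation.

    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def classical_dff(focal_stack, focus_distances, window=9):
        """focal_stack: (S, H, W) grayscale slices, one per focal setting.
        focus_distances: (S,) depth at which each slice is in focus."""
        sharpness = np.empty(focal_stack.shape, dtype=np.float64)
        for s, img in enumerate(focal_stack):
            # Laplacian energy, averaged over a local window, is a common
            # focus measure; it is unreliable in low-textured areas, which
            # is exactly the failure mode the learned approach targets.
            sharpness[s] = uniform_filter(laplace(img.astype(np.float64)) ** 2, window)
        best = np.argmax(sharpness, axis=0)       # (H, W) index of sharpest slice
        return np.asarray(focus_distances)[best]  # per-pixel depth estimate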

    Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs

    The human visual system relies on both binocular stereo cues and monocular focusness cues to gain effective 3D perception. In computer vision, the two problems are traditionally solved in separate tracks. In this paper, we present a unified learning-based technique that simultaneously uses both types of cues for depth inference. Specifically, we use a pair of focal stacks as input to emulate human perception. We first construct a comprehensive focal-stack training dataset synthesized by depth-guided light-field rendering. We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth-of-field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching. We show how to integrate them into a unified BDfF-Net to obtain high-quality depth maps. Comprehensive experiments show that our approach outperforms the state of the art in both accuracy and speed and effectively emulates the human visual system.
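
    A minimal sketch of how the three sub-networks might be chained into BDfF-Net, following the abstract's description; the callables and the final fusion step are placeholder assumptions, not the authors' code.

    def bdff_net(left_stack, right_stack, focus_net, edof_net, stereo_net, fuse):
        """left_stack, right_stack: focal stacks from the two views."""
        # Monocular focusness cue: depth from each focal stack independently.
        depth_l = focus_net(left_stack)
        depth_r = focus_net(right_stack)
        # Recover all-in-focus (EDoF) images so stereo matching is not
        # confused by defocus blur.
        edof_l, edof_r = edof_net(left_stack), edof_net(right_stack)
        # Binocular cue: disparity between the two EDoF images.
        disparity = stereo_net(edof_l, edof_r)
        # Fuse monocular and binocular estimates into one depth map.
        return fuse(depth_l, depth_r, disparity)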

    Learning Depth from Focus in the Wild

    For better photography, most recent commercial cameras, including smartphones, have either adopted large-aperture lenses to collect more light or used a burst mode to take multiple images within a short time. These features lead us to examine depth from focus/defocus. In this work, we present convolutional neural network-based depth estimation from a single focal stack. Our method differs from relevant state-of-the-art works in three unique features. First, it allows depth maps to be inferred in an end-to-end manner even with image alignment. Second, we propose a sharp-region detection module to reduce blur ambiguities in subtle focus changes and weakly textured regions. Third, we design an effective downsampling module to ease the flow of focal information during feature extraction. In addition, to improve the generalization of the proposed network, we develop a simulator that realistically reproduces the characteristics of commercial cameras, such as changes in field of view, focal length, and principal point. By effectively incorporating these three unique features, our network achieves the top rank in the DDFF 12-Scene benchmark on most metrics. We also demonstrate the effectiveness of the proposed method through various quantitative evaluations and on real-world images taken with various off-the-shelf cameras, compared with state-of-the-art methods. Our source code is publicly available at https://github.com/wcy199705/DfFintheWild.
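
    Such a simulator usually rests on the thin-lens defocus model; a minimal sketch, with illustrative parameter names, of the standard circle-of-confusion formula it might use:

    def coc_diameter(d, d_f, focal_length, f_number):
        """Circle-of-confusion diameter on the sensor for a point at depth d
        when the lens is focused at d_f (all distances in metres)."""
        aperture = focal_length / f_number
        return aperture * focal_length * abs(d - d_f) / (d * (d_f - focal_length))

    # e.g. a 50 mm f/2 lens focused at 1 m blurs a point at 2 m to about
    # 0.66 mm on the sensor: coc_diameter(2.0, 1.0, 0.05, 2.0) ~= 6.6e-4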

    Oncoming Vehicle Detection with Variable-Focus Liquid Lens

    Computer vision plays an important role in the autonomous vehicle, robotics, and manufacturing fields. Depth perception in computer vision typically requires stereo vision, or fusing a single camera with other depth sensors such as radar and lidar. Depth from focus using an adjustable lens has not previously been applied to autonomous vehicles. The goal of this paper is to investigate the application of depth from focus to oncoming vehicle detection. A liquid lens is used to adjust the optical power while acquiring images with the camera. The distance of the oncoming vehicle can be estimated by measuring the vehicle's sharpness in images taken with known lens settings. The results show the system detecting an oncoming vehicle at ±2 meters and ±4 meters using the depth-from-focus technique. Distances of oncoming vehicles beyond 4 meters can be estimated by analysing the relative size of the detected vehicle.
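
    A minimal sketch of the distance-estimation step under these assumptions: sweep the liquid lens's optical power, score the vehicle region's sharpness at each setting, and convert the sharpest setting to an object distance with the thin-lens equation. The names below are illustrative, not the paper's code.

    import numpy as np

    def estimate_distance(optical_powers_dpt, roi_images, image_distance_m):
        """optical_powers_dpt: lens powers P = 1/f (dioptres), one per capture.
        roi_images: grayscale crops of the detected vehicle, one per setting.
        image_distance_m: fixed lens-to-sensor distance v."""
        sharpness = []
        for img in roi_images:
            gy, gx = np.gradient(img.astype(float))
            sharpness.append(np.mean(gx ** 2 + gy ** 2))  # gradient-energy focus measure
        p = optical_powers_dpt[int(np.argmax(sharpness))]  # sharpest capture
        # Thin lens: 1/f = 1/u + 1/v  =>  object distance u = 1/(P - 1/v).
        return 1.0 / (p - 1.0 / image_distance_m)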

    A real-time three dimensional profiling depth from focus method

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (p. 93-94). By James Wiley Fleming.

    Utilizing sensor fusion in markerless mobile augmented reality

    One of the key challenges of markerless Augmented Reality (AR) systems, where no a priori information about the environment is available, is map and scale initialization. In such systems, the scale is unknown, as it is impossible to determine it from a sequence of images alone. Establishing scale is vital for ensuring that augmented objects are contextually sensitive to the environment they are projected upon. In this paper, we demonstrate a sensor and vision fusion approach for robust and user-friendly initialization of map and scale. The map is initialized using the inbuilt accelerometers, whilst the scale is initialized using the camera's auto-focusing capability. The latter is made possible by applying the Depth From Focus (DFF) method, which was until now limited to high-precision camera systems. The demonstration illustrates the benefits of such a system running on a commercially available mobile phone, the Nokia N900.
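
    A minimal sketch of the scale-initialization idea: the autofocus routine yields a metric DFF distance to a focused scene point, and its ratio to the same point's unitless depth in the vision-only map fixes the global scale. The names are illustrative assumptions, not the system's API.

    import numpy as np

    def map_scale(focus_distances_m, map_depths):
        """focus_distances_m: DFF distances to tracked points, in metres.
        map_depths: the same points' depths in arbitrary map units."""
        # Median ratio over several measurements for robustness.
        return float(np.median(np.asarray(focus_distances_m) /
                               np.asarray(map_depths)))

    # e.g. autofocus reports 0.8 m where the map says 2.5 units: ~0.32 m/unit.
    scale = map_scale([0.8, 1.2], [2.5, 3.8])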

    The Application of Preconditioned Alternating Direction Method of Multipliers in Depth from Focal Stack

    A post-capture refocusing effect in smartphone cameras is achievable by using focal stacks. However, the accuracy of this effect depends entirely on the combination of the depth layers in the stack. The accuracy of the extended depth-of-field effect in this application can be improved significantly by computing an accurate depth map, which has been an open issue for decades. To tackle this issue, in this paper a framework is proposed based on the Preconditioned Alternating Direction Method of Multipliers (PADMM) for depth from focal stack and synthetic defocus applications. In addition to providing high structural accuracy and occlusion handling, the optimization of the proposed method can converge faster and better than state-of-the-art methods. The evaluation has been done on 21 sets of focal stacks, and the optimization has been compared against 5 other methods. Preliminary results indicate that the proposed method performs better in terms of structural accuracy and optimization than the current state-of-the-art methods. Comment: 15 pages, 8 figures.
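
    To illustrate the splitting that a preconditioned variant would accelerate, here is a minimal sketch of vanilla ADMM on a TV-regularised depth-fitting problem, min_d 0.5*||d - d0||^2 + lam*||grad d||_1, where d0 would come from a per-pixel focus measure. This is a generic baseline under stated assumptions, not the paper's PADMM.

    import numpy as np

    def grad(d):   # periodic forward differences, returns (2, H, W)
        return np.stack([np.roll(d, -1, 0) - d, np.roll(d, -1, 1) - d])

    def div(p):    # negative adjoint of grad
        return (p[0] - np.roll(p[0], 1, 0)) + (p[1] - np.roll(p[1], 1, 1))

    def admm_tv(d0, lam=0.1, rho=1.0, iters=100):
        d = d0.astype(float).copy()
        z = np.zeros((2,) + d0.shape)
        u = np.zeros_like(z)
        for _ in range(iters):
            # d-update: a few gradient steps on the smooth subproblem; a good
            # preconditioner here is exactly what PADMM-style methods improve.
            for _ in range(5):
                d -= 0.1 * ((d - d0) - rho * div(grad(d) - z + u))
            w = grad(d) + u
            z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft-threshold
            u += grad(d) - z                                         # dual update
        return d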

    Sublabel-Accurate Relaxation of Nonconvex Energies

    We propose a novel spatially continuous framework for convex relaxations based on functional lifting. Our method can be interpreted as a sublabel-accurate solution to multilabel problems. We show that previously proposed functional lifting methods optimize an energy which is linear between two labels and hence require (often infinitely) many labels for a faithful approximation. In contrast, the proposed formulation is based on a piecewise convex approximation and therefore needs far fewer labels. In comparison to recent MRF-based approaches, our method is formulated in a spatially continuous setting and exhibits less grid bias. Moreover, in a local sense, our formulation is the tightest possible convex relaxation. It is easy to implement and allows efficient primal-dual optimization on GPUs. We show the effectiveness of our approach on several computer vision problems.
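
    For concreteness, the variational problem that functional lifting addresses can be written as follows (a generic formulation in standard notation, not copied from the paper):

    \min_{u\colon \Omega \to \Gamma} \; \int_\Omega \rho\bigl(x, u(x)\bigr)\,\mathrm{d}x \;+\; \lambda\,\mathrm{TV}(u),

    with a pointwise, possibly nonconvex data term \rho on a label space \Gamma = [\gamma_1, \gamma_k]. Classical lifting approximates \rho linearly between neighbouring labels \gamma_i, so a faithful approximation requires many labels; a piecewise convex approximation on each interval [\gamma_i, \gamma_{i+1}] keeps the relaxation tight with far fewer labels.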