
    3D Reconstruction Using Light Field Camera

    Get PDF
    3D reconstruction is important as a method for representing the real-world environment with a 3D digital model. The emergence of the Lytro light field camera on the market has opened up new possibilities for researchers to explore 3D reconstruction with an easily obtained off-the-shelf product. By using the depth information captured by the camera, 3D reconstruction of real-life objects becomes possible. However, despite its huge potential, 3D reconstruction based on light field technology remains insufficiently explored. In this work, a depth map is obtained by computing two different responses of the image, namely the defocus response and the correspondence response, and combining both to produce a clearer and better depth map. At the beginning of the research, one image with a fixed point of focus was selected as the object of study and exported in multiple file formats; some formats contain all the light field information of the Lytro image, while others contain selective information such as depth data or a 2D image representation. At the initial stage, a preliminary depth map was obtained, but the depth representation was not clear. In the end, a 3D depth map capturing the outline and shape of the real object studied was generated. It was later found that the defocus analysis can be improved by reducing the defocus analysis radius. All in all, a 3D depth map can be successfully obtained from a light field picture through computations in MATLAB code.
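The core step this abstract describes — fusing a defocus response and a correspondence response into one depth map — can be sketched as a per-pixel winner-takes-all over candidate depth labels. This is a minimal illustration, not the authors' MATLAB implementation; the response stacks, the weighting scheme, and the `combine_depth_responses` name are assumptions for the example.

```python
import numpy as np

def combine_depth_responses(defocus, correspondence, w=0.5):
    """Fuse two per-pixel response stacks of shape (H, W, D), one value
    per candidate depth label, into a single (H, W) depth-label map.
    Each cue is normalized to [0, 1] first so neither dominates purely
    because of its scale; higher response = more likely depth."""
    def normalize(r):
        lo, hi = r.min(), r.max()
        return (r - lo) / (hi - lo + 1e-12)

    combined = w * normalize(defocus) + (1 - w) * normalize(correspondence)
    return combined.argmax(axis=2)  # best label per pixel

# Toy example: 2x2 image, 3 candidate depth labels, both cues agree on label 1.
defocus = np.zeros((2, 2, 3))
corresp = np.zeros((2, 2, 3))
defocus[..., 1] = 1.0
corresp[..., 1] = 1.0
depth = combine_depth_responses(defocus, corresp)
```

In practice the two cues disagree at textureless or occluded pixels, which is why real pipelines add a confidence weighting or a smoothness prior on top of this pointwise fusion.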


    Reconstruction of 3D Surfaces with Complex Material Composure Using a Light Field Camera

    Full text link
    University of Technology Sydney, Faculty of Engineering and Information Technology. Representing real-world objects on a digital screen is a significant and challenging topic in computer vision and augmented reality. This work addresses the challenge of reconstructing 3D surfaces with a complicated material appearance using a light field camera. Most recent research uses single images to address this problem but, without a light field camera, encounters difficulties and limitations. We show that, by using a light field camera, reconstruction of a 3D model with high accuracy is possible without user interaction or any requirement for object planarity or symmetry. A light field camera, also known as a plenoptic camera, can capture rich information about the spatial and angular distribution, as well as the intensity and colour, of light in a single shot. For the reconstruction of 3D models, creating a 3D point cloud is essential, and it is often obtained from a depth map. Accordingly, we first developed a robust method to estimate an accurate depth map based on the combination of sub-aperture image matching and defocus cues in the 4D light field format. The depth map is refined using a fast weighted median filter, providing robustness to noise. In the second part, we proposed a novel strategy for creating a 3D point cloud from the depth map of a single 4D light field image, based on the transformation of point-plane correspondences. Given the estimated depth map from the previous part, we applied histogram equalization and histogram stretching to enhance the separation between depth planes. In the third step, we improved our method to obtain a denser and more accurate three-dimensional (3D) point cloud by applying intelligent edge detection, using feature matching and fuzzy logic, to the central sub-aperture light field image and the depth map.
The results showed that our new method mitigates noise more reliably than other existing methods. Finally, having obtained the 3D point cloud, we addressed the problem of reflectance in complex material appearances. We developed a new strategy to recover reflectance information based on colour analysis as well as brightness analysis of a light field image. Experimental results demonstrate the effectiveness of our method on both synthetic and real-world images compared to other state-of-the-art methods. Overall, 3D reconstruction remains a challenging topic whose solution benefits many problems in computer vision.
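Two of the steps mentioned above — stretching the depth histogram to separate depth planes, and turning the depth map into a point cloud — can be sketched generically. This is not the thesis's actual pipeline: the percentile-based stretch and the pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) here are placeholder assumptions standing in for the described histogram stretching and point-plane transformation.

```python
import numpy as np

def stretch_depth(depth, low_pct=2, high_pct=98):
    """Contrast-stretch a (H, W) depth map so neighbouring depth planes
    become easier to separate before slicing into a point cloud.
    Percentile clipping keeps outlier depths from compressing the range."""
    lo, hi = np.percentile(depth, [low_pct, high_pct])
    return np.clip((depth - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def depth_to_points(depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0):
    """Back-project a (H, W) depth map to an (H*W, 3) point cloud with a
    hypothetical pinhole model; fx, fy, cx, cy are placeholder intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy 2x2 depth map: stretch to [0, 1], then back-project.
depth = np.array([[0.0, 1.0], [2.0, 10.0]])
planes = stretch_depth(depth, low_pct=0, high_pct=100)
points = depth_to_points(depth)
```

The stretch only rescales values; the separation between planes comes from the subsequent quantization or clustering of the stretched map, which the thesis handles via its point-plane correspondence strategy.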

    Fast and accurate flow measurement through dual-camera light field particle image velocimetry and ordered-subset algorithm

    Get PDF
    Light field particle image velocimetry (LF-PIV) can measure the three-dimensional (3D) flow field from a single perspective and is hence very attractive for applications with limited optical access. However, flow velocity measurement via single-camera LF-PIV shows poor accuracy in the depth direction due to the particle reconstruction elongation effect. This study proposes a solution based on a dual-camera LF-PIV system along with an ordered-subset simultaneous algebraic reconstruction technique (OS-SART). The proposed system improves the spatial resolution in the depth direction and reduces the reconstruction elongation, while OS-SART reduces the additional computational time introduced by the second camera. Numerical reconstructions of particle fields and a Gaussian ring vortex field are first performed to evaluate the reconstruction accuracy and efficiency of the proposed system. Experiments on a circular jet flow are conducted to further validate the velocity measurement accuracy. Results indicate that the particle reconstruction elongation is reduced by more than a factor of 10 compared to single-camera LF-PIV, and that the reconstruction efficiency is improved at least twofold compared to conventional SART. Accuracy is improved significantly for the ring vortex and 3D jet flow fields compared to the single-camera system. It is therefore demonstrated that the proposed system is capable of measuring 3D flow fields quickly and accurately.
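The ordered-subset idea behind OS-SART is that sweeping the ray equations in blocks updates the volume several times per pass, which is where the speed-up over plain SART comes from. The sketch below is a toy dense version under that general formulation, not the paper's implementation; real LF-PIV solvers use sparse projection matrices and typically enforce non-negativity on particle intensity.

```python
import numpy as np

def os_sart(A, b, n_subsets=4, n_iters=5, relax=1.0):
    """Ordered-subset SART sketch for A x = b, where A (rays x voxels)
    is the projection matrix and b the recorded ray intensities.
    Each iteration processes the rays in n_subsets blocks; the volume x
    is updated after every block rather than once per full sweep."""
    m, n = A.shape
    x = np.zeros(n)
    subsets = np.array_split(np.arange(m), n_subsets)
    for _ in range(n_iters):
        for idx in subsets:
            As, bs = A[idx], b[idx]
            row_sum = As.sum(axis=1)  # per-ray normalization
            col_sum = As.sum(axis=0)  # per-voxel normalization
            resid = (bs - As @ x) / np.where(row_sum != 0, row_sum, 1.0)
            x += relax * (As.T @ resid) / np.where(col_sum != 0, col_sum, 1.0)
    return x

# Sanity check on a trivially invertible system.
A = np.eye(4)
b = np.array([1.0, 2.0, 3.0, 4.0])
x = os_sart(A, b, n_subsets=2)
```

With more subsets each update uses less data, so convergence per pass speeds up but noise sensitivity grows; the subset count is the knob the paper's efficiency comparison against conventional SART effectively tunes.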

    Enhanced processing methods for light field imaging

    Full text link
    The light field camera provides rich textural and geometric information, but it remains challenging to use this information efficiently and accurately to solve computer vision problems. Light field image processing can be divided into multiple levels. First, low-level processing mainly covers the acquisition of light field images and their preprocessing. Second, mid-level processing consists of depth estimation, light field encoding, and the extraction of cues from the light field. Third, high-level processing involves 3D reconstruction, target recognition, visual odometry, image reconstruction, and other advanced applications. We propose a series of improved algorithms for each of these levels. The light field signal contains rich angular information; by contrast, traditional computer vision methods designed for 2D images often cannot make full use of the high-frequency part of this angular information. We propose a fast pre-estimation algorithm that enhances light field features, improving speed and accuracy while making full use of the angular information. Light field filtering and refocusing are essential operations in light field signal processing. Modern frequency-domain filtering and wavelet techniques have effectively improved light field filtering accuracy but may fail at object edges; we adapted sub-window filtering to the light field to improve the reconstruction of object edges. Light field images can be used to analyse the effects of scattering and refraction, yet there are still insufficient metrics to evaluate the results. We therefore propose a physically rendered light field dataset that simulates light field images distorted by a transparent medium, such as atmospheric turbulence or a water surface. Neural networks are an essential tool for processing complex light field data, and we propose an efficient 3D convolutional autoencoder network for the light field structure.
This network overcomes the severe distortion caused by high-intensity turbulence with limited angular resolution and solves the difficulty of pixel matching between distorted images. This work emphasizes the application and usefulness of light field imaging in computer vision while improving light field image processing speed and accuracy through signal processing, computer graphics, computer vision, and artificial neural networks.
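Refocusing, one of the essential operations this abstract names, is classically done by shift-and-sum: each sub-aperture image is translated in proportion to its angular offset and the stack is averaged. The sketch below shows the integer-shift version of that standard technique, not any of the thesis's improved algorithms; the `refocus` name and the `lf[u, v, y, x]` layout are assumptions for the example.

```python
import numpy as np

def refocus(lf, s):
    """Shift-and-sum refocusing of a 4D light field lf[u, v, y, x]
    (angular indices u, v; spatial indices y, x). Each sub-aperture
    image is shifted by s pixels per unit of angular offset from the
    central view, then all views are averaged; varying s sweeps the
    synthetic focal plane (s = 0 keeps the original focus)."""
    U, V, H, W = lf.shape
    uc, vc = U // 2, V // 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            out += np.roll(lf[u, v], (s * (u - uc), s * (v - vc)), axis=(0, 1))
    return out / (U * V)

# A constant light field refocuses to the same constant at any s.
lf = np.ones((3, 3, 4, 4))
img = refocus(lf, 1)
```

Objects at the depth matching `s` stay aligned across views and remain sharp, while everything else is averaged into blur — which is also why edge handling (here the wrap-around of `np.roll`) matters in practice.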

    Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

    Full text link
    A variety of techniques, such as light field, structured illumination, and time-of-flight (TOF), are commonly used for depth acquisition in consumer imaging, robotics, and many other applications. Unfortunately, each technique suffers from individual limitations that prevent robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages, such as synthetic aperture refocusing, with TOF imaging advantages, such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single-frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function.
    Comment: 9 pages, 8 figures, Accepted to 3DV 201
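The single-frequency phase-unwrapping application mentioned above exists because continuous-wave TOF measures phase modulo 2π. The standard conversion, sketched below, makes the ambiguity concrete; the function names are assumptions, and only the textbook relations d = cφ/(4πf) and unambiguous range c/(2f) are used, not the paper's depth-field unwrapping method.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase, f_mod):
    """Depth implied by a wrapped TOF phase (radians, in [0, 2*pi)) at
    modulation frequency f_mod. Light travels to the scene and back,
    hence the factor of 2 folded into the 4*pi denominator."""
    return C * phase / (4 * np.pi * f_mod)

def unambiguous_range(f_mod):
    """Largest depth distinguishable before the phase wraps: c / (2*f_mod).
    Targets beyond it alias back into [0, unambiguous_range)."""
    return C / (2 * f_mod)

# At 30 MHz the phase wraps every ~5 m; a full 2*pi of phase spans that range.
f_mod = 30e6
r = unambiguous_range(f_mod)
d = tof_depth(2 * np.pi, f_mod)
```

Raising `f_mod` improves depth resolution but shrinks the unambiguous range, which is the trade-off that the paper's light-field-assisted unwrapping sidesteps without needing a second modulation frequency.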