
    Improved Depth Map Estimation from Stereo Images based on Hybrid Method

    In this paper, a stereo matching algorithm based on image segments is presented. We propose a hybrid segmentation algorithm that combines the Belief Propagation and Mean Shift algorithms with the aim of refining the disparity and depth map obtained from a stereo pair of images. The algorithm uses image filtering and a modified SAD (Sum of Absolute Differences) stereo matching method. First, a color-based segmentation method is applied to the left image of the input stereo pair (the reference image) to divide it into regions. The aim of the segmentation is to simplify the representation of the image into a form that is easier to analyze and that allows objects in the image to be located. Second, the segmentation results are used as input to a local window-based matching method that determines the disparity estimate of each image pixel. The experimental results demonstrate that the final depth map can be obtained by applying the segment disparities to the original images. Experiments on standard stereo test images show that the proposed hybrid algorithm, HSAD, performs well.
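    As a point of reference for the window-based matching step, the following is a minimal sketch of a SAD disparity search over rectified grayscale images. The window size, disparity range, and array names are illustrative assumptions, not values from the paper; the paper's modified SAD additionally operates on the color-segmented regions.

    import numpy as np

    def sad_disparity(left, right, max_disp=64, window=5):
        """Brute-force window-based SAD disparity (left image as reference)."""
        h, w = left.shape
        half = window // 2
        disparity = np.zeros((h, w), dtype=np.int32)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
                best_cost, best_d = np.inf, 0
                for d in range(max_disp):
                    # Candidate window in the right image, shifted by disparity d.
                    cand = right[y - half:y + half + 1,
                                 x - d - half:x - d + half + 1].astype(np.int32)
                    cost = np.abs(patch - cand).sum()  # sum of absolute differences
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disparity[y, x] = best_d
        return disparity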

    Local Stereo Matching Using Adaptive Local Segmentation

    We propose a new dense local stereo matching framework for gray-level images based on adaptive local segmentation with a dynamic threshold. We define a new validity domain of the fronto-parallel assumption based on the local intensity variations in the 4-neighborhood of the matching pixel. The preprocessing step smooths low-textured areas and sharpens texture edges, whereas the postprocessing step detects and recovers occluded and unreliable disparities. The algorithm achieves high stereo reconstruction quality in regions with uniform intensities as well as in textured regions. It is robust against local radiometric differences and successfully recovers disparities around object edges, disparities of thin objects, and disparities in occluded regions. Moreover, our algorithm intrinsically prevents errors caused by occlusion from propagating into non-occluded regions, and it has only a small number of parameters. Its performance is evaluated on the Middlebury stereo test bed, where it ranks highly, outperforming many local and global stereo algorithms that use color images. Among the local algorithms relying on the fronto-parallel assumption, ours is the best-ranked algorithm. We also demonstrate that the algorithm works well in practical settings, such as disparity estimation of a tomato seedling and 3D reconstruction of a face.
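    A hedged sketch of the adaptive local segmentation idea: a local segment is grown around the matching pixel, accepting 4-connected neighbors whose intensity stays within a threshold derived from the local intensity variation. The threshold rule, radius, and names below are assumptions for illustration, not the paper's exact definitions.

    import numpy as np
    from collections import deque

    def local_segment(img, y, x, radius=7, tau_scale=1.0):
        """Grow a local segment around (y, x) using a dynamic intensity threshold."""
        h, w = img.shape
        # Dynamic threshold from the intensity variation in the 4-neighborhood
        # of the matching pixel (the +1 keeps tau positive in flat regions).
        nbrs = [img[max(y - 1, 0), x], img[min(y + 1, h - 1), x],
                img[y, max(x - 1, 0)], img[y, min(x + 1, w - 1)]]
        tau = tau_scale * (np.abs(np.array(nbrs, dtype=float) - float(img[y, x])).mean() + 1.0)
        seg = {(y, x)}
        queue = deque([(y, x)])
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (abs(ny - y) <= radius and abs(nx - x) <= radius
                        and 0 <= ny < h and 0 <= nx < w
                        and (ny, nx) not in seg
                        and abs(float(img[ny, nx]) - float(img[y, x])) < tau):
                    seg.add((ny, nx))
                    queue.append((ny, nx))
        return seg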

    In-Band Disparity Compensation for Multiview Image Compression and View Synthesis

    Extended depth-of-field imaging and ranging in a snapshot

    Traditional approaches to imaging require that an increase in depth of field be accompanied by a reduction in numerical aperture, and hence by a reduction in resolution and optical throughput. In their seminal work, Dowski and Cathey reported how the asymmetric point-spread function generated by a cubic-phase aberration encodes the detected image such that digital recovery can yield images with an extended depth of field without sacrificing resolution [Appl. Opt. 34, 1859 (1995)]. Unfortunately, the recovered images are generally visibly degraded by artifacts arising from subtle variations of the point-spread function with defocus. We report a technique that determines the spatially variant translation of image components that accompanies defocus, and from it the spatially variant defocus itself. This in turn enables the recovery of artifact-free, extended depth-of-field images together with a two-dimensional defocus and range map of the imaged scene. We demonstrate the technique for high-quality macroscopic and microscopic imaging of scenes presenting an extended defocus of up to two waves, and for the generation of defocus maps with an uncertainty of 0.036 waves.
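    For reference, the cubic phase function introduced by Dowski and Cathey has the separable form (in normalized pupil coordinates, with the strength parameter \alpha controlling the degree of depth-of-field extension):

    \phi(x, y) = \alpha \, (x^{3} + y^{3}), \qquad |x| \le 1, \; |y| \le 1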

    Structured Light-Based 3D Reconstruction System for Plants.

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends of recent years. Many systems have been built to model different real-world subjects, but a completely robust system for plants is still lacking. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). The paper demonstrates the ability to produce 3D models of whole plants from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size, and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection, and less than a 13-mm error for plant size, leaf size, and internode distance.
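    Each stereo pair contributes a partial point cloud before registration. A minimal sketch of the standard disparity-to-depth triangulation is shown below; the focal length, baseline, and principal point are placeholder calibration values, not those of the paper's rig.

    import numpy as np

    def disparity_to_points(disp, f=1000.0, B=0.06, cx=320.0, cy=240.0):
        """Convert a disparity map into an N x 3 point cloud (pinhole model)."""
        ys, xs = np.nonzero(disp > 0)          # skip invalid (zero) disparities
        d = disp[ys, xs].astype(float)
        Z = f * B / d                          # depth from disparity: Z = f * B / d
        X = (xs - cx) * Z / f
        Y = (ys - cy) * Z / f
        return np.stack([X, Y, Z], axis=1)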

    Confidence driven TGV fusion

    We introduce a novel model for spatially varying variational data fusion, driven by point-wise confidence values. The proposed model allows for the joint estimation of the data and the confidence values based on the spatial coherence of the data. We discuss the main properties of the introduced model, suitable algorithms for solving the corresponding biconvex minimization problem, and their convergence. The performance of the proposed model is evaluated on the problem of depth image fusion using both synthetic and real data from publicly available datasets.
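    The abstract does not reproduce the functional, so the following is only an illustrative sketch of what a confidence-driven fusion energy of this kind can look like: input depth maps f_i are fused into u under pointwise confidences c_i, with a TGV prior on u and a regularizer R(c) coupling the confidences to the spatial coherence of the data. Joint minimization over (u, c) is biconvex: convex in u for fixed c and vice versa.

    \min_{u,\,c} \; \sum_{i} \int_{\Omega} c_i \,(u - f_i)^2 \,\mathrm{d}x \;+\; \lambda\,\mathrm{TGV}^{2}(u) \;+\; \mu\, R(c)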

    Label-based Optimization of Dense Disparity Estimation for Robotic Single Incision Abdominal Surgery

    Minimally invasive surgical techniques have led to novel approaches such as Single Incision Laparoscopic Surgery (SILS), which reduces post-operative infections and patient recovery time, improving surgical outcomes. However, the new techniques also pose new challenges to surgeons: during SILS, visualization of the surgical field is limited by the endoscope's field of view, and access to the target area is constrained by the fact that all instruments have to be inserted through a single port. In this context, intra-operative navigation and augmented reality based on pre-operative images have the potential to enhance SILS procedures by providing the information necessary to increase the accuracy and safety of the intervention. Problems arise when structures of interest change their pose or deform with respect to the pre-operative planning, as usually happens in soft-tissue abdominal surgery. This requires online estimation of the deformations to correct the pre-operative plan, which can be done, for example, through depth estimation from stereo endoscopic images (3D reconstruction). The denser the reconstruction, the more accurately the deformation can be identified. This work presents an algorithm for 3D reconstruction of soft tissue, focusing on the refinement of the disparity map in order to obtain an accurate and dense point map. The algorithm is part of an assistive system for intra-operative guidance and safety supervision in robotic abdominal SILS. Results show that, compared with state-of-the-art CPU implementations, the percentage of valid pixels obtained with our method is 24% higher while the accuracy is comparable. Future research will focus on a real-time implementation of the proposed algorithm, potentially based on a hybrid CPU-GPU processing framework.
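    A common way to flag the invalid disparities that the valid-pixel percentage measures is a left-right consistency check; the sketch below is a generic version of that test, not the paper's refinement procedure. The 1-pixel tolerance and array names are assumptions.

    import numpy as np

    def lr_consistency(disp_l, disp_r, tol=1.0):
        """Mark pixels whose left and right disparity estimates agree."""
        h, w = disp_l.shape
        xs = np.arange(w)[None, :].repeat(h, axis=0)
        # Position of each left-image pixel in the right image.
        xr = np.clip(np.round(xs - disp_l).astype(int), 0, w - 1)
        back = disp_r[np.arange(h)[:, None], xr]
        # A pixel is valid if the round trip agrees within the tolerance.
        return np.abs(disp_l - back) <= tol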