39 research outputs found

    Exploiting depth information for fast multi-view video coding

    Get PDF
    This research work is partially funded by the Strategic Educational Pathways Scholarship Scheme (STEPS-Malta). This scholarship is partly financed by the European Union – European Social Fund (ESF 1.25).

    Multi-view video coding exploits inter-view redundancies to compress the video streams and their associated depth information. These schemes rely on disparity estimation to obtain disparity vectors (DVs) across different views; however, disparity estimation accounts for the majority of the computational power needed for multi-view video encoding. This paper proposes a solution for fast disparity estimation based on multi-view geometry and depth information. A DV predictor is first calculated, followed by an iterative or a fast search estimation process that finds the optimal DV within the search area dictated by the predictor. Simulation results demonstrate that this predictor is reliable enough to locate the area of the optimal DVs, allowing a smaller search range. Furthermore, results show that the proposed approach achieves a speed-up of 2.5 while preserving the original rate-distortion performance.

    Peer-reviewed
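    The depth-to-disparity relationship behind such a predictor can be sketched as follows. This is a hypothetical illustration, assuming rectified parallel cameras where disparity d = f · B / Z; the paper's exact predictor formulation and search strategy are not reproduced here.

```python
# Hypothetical sketch: deriving a disparity-vector (DV) predictor from depth,
# assuming rectified parallel cameras where disparity d = f * B / Z.
def dv_predictor(depth, focal_length, baseline):
    """Predict the horizontal disparity (in pixels) for a block at the given depth."""
    if depth <= 0:
        raise ValueError("depth must be positive")
    return focal_length * baseline / depth

def search_range_around(predictor, margin):
    """Reduced search window centred on the predicted DV."""
    return (predictor - margin, predictor + margin)

dv = dv_predictor(depth=2.0, focal_length=1000.0, baseline=0.1)  # 50.0 pixels
lo, hi = search_range_around(dv, margin=4)  # search only (46.0, 54.0)
```

    A reliable predictor lets the encoder shrink the search window from the full range to a few pixels around the prediction, which is where the reported speed-up comes from.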

    Exploiting depth information for fast motion and disparity estimation in multi-view video coding

    Get PDF
    This research work is partially funded by the Strategic Educational Pathways Scholarship Scheme (STEPS-Malta). This scholarship is partly financed by the European Union – European Social Fund (ESF 1.25).

    Multi-view Video Coding (MVC) employs both motion and disparity estimation within the encoding process. These provide a significant increase in coding efficiency at the expense of a substantial increase in computational requirements. This paper presents a fast motion and disparity estimation technique that utilizes the multi-view geometry, together with the depth information and the corresponding encoded motion vectors from the reference view, to produce more reliable motion and disparity vector predictors for the current view. This allows for a smaller search area, which reduces the computational cost of the multi-view encoding system. Experimental results confirm that the proposed techniques can provide a speed-up gain of up to 4.2 times, with a negligible loss in the rate-distortion performance for both the color and the depth MVC.

    Peer-reviewed
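    The reuse of reference-view motion vectors can be sketched as below. This is a hypothetical simplification, assuming rectified views where the geometric correspondence reduces to a horizontal disparity shift; the block positions, helper names, and zero-MV fallback are illustrative, not the paper's method.

```python
# Hypothetical sketch: predicting a motion vector (MV) in the current view by
# reusing the MV of the geometrically corresponding block in the reference view.
def corresponding_block(x, y, disparity):
    """Shift a block position from the current view into the reference view
    (horizontal shift only, assuming rectified views)."""
    return (x - disparity, y)

def mv_predictor(ref_mvs, x, y, disparity):
    """Look up the reference-view MV at the corresponding location."""
    cx, cy = corresponding_block(x, y, disparity)
    return ref_mvs.get((cx, cy), (0, 0))  # fall back to the zero MV

ref_mvs = {(16, 8): (3, -1)}          # encoded MVs of the reference view
mv = mv_predictor(ref_mvs, 24, 8, 8)  # predictor for a block in the current view
```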

    Pedestrian detection and tracking using stereo vision techniques

    Get PDF
    Automated pedestrian detection, counting and tracking has received significant attention from the computer vision community of late. Many of the person detection techniques described so far in the literature work well in controlled environments, such as laboratory settings with a small number of people. This allows various assumptions to be made that simplify this complex problem. The performance of these techniques, however, tends to deteriorate when presented with unconstrained environments, where pedestrian appearances, numbers, orientations, movements, occlusions and lighting conditions violate these convenient assumptions. Recently, 3D stereo information has been proposed as a technique to overcome some of these issues and to guide pedestrian detection. This thesis presents such an approach, whereby, after obtaining robust 3D information via a novel disparity estimation technique, pedestrian detection is performed via a 3D point clustering process within a region-growing framework. This clustering process avoids hard thresholds by using biometrically inspired constraints and a number of plan-view statistics. This pedestrian detection technique requires no external training and is able to robustly handle challenging real-world unconstrained environments from various camera positions and orientations. In addition, this thesis presents a continuous detect-and-track approach, with additional kinematic constraints and explicit occlusion analysis, to obtain robust temporal tracking of pedestrians over time. These approaches are experimentally validated using challenging datasets consisting of both synthetic data and real-world sequences gathered from a number of environments. In each case, the techniques are evaluated using both 2D and 3D ground-truth methodologies.
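    The region-growing clustering idea can be sketched over a plan-view (top-down) occupancy grid. This is a hypothetical toy version: the cell representation, 4-connectivity, and the size bounds standing in for the biometric constraints are all assumptions, not the thesis's actual algorithm.

```python
# Hypothetical sketch: region-growing over plan-view occupancy cells, merging
# neighbouring occupied cells into pedestrian candidates and rejecting clusters
# outside rough human-footprint bounds (stand-ins for biometric constraints).
def grow_regions(occupied, min_cells=2, max_cells=6):
    seen, clusters = set(), []
    for seed in occupied:
        if seed in seen:
            continue
        stack, cluster = [seed], set()
        while stack:  # flood-fill across 4-connected neighbours
            cx, cy = stack.pop()
            if (cx, cy) in cluster or (cx, cy) not in occupied:
                continue
            cluster.add((cx, cy))
            stack += [(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)]
        seen |= cluster
        if min_cells <= len(cluster) <= max_cells:  # size plausibility check
            clusters.append(cluster)
    return clusters

cells = {(0, 0), (0, 1), (1, 0), (5, 5)}  # one 3-cell blob, one isolated cell
clusters = grow_regions(cells)            # isolated cell is rejected as noise
```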

    Pedestrian detection using stereo and biometric information

    Get PDF
    A method for pedestrian detection from real-world outdoor scenes is presented in this paper. The technique uses disparity information, ground plane estimation and biometric information based on the golden ratio. It can detect pedestrians even in the presence of severe occlusion or a lack of reliable disparity data. It also makes reliable choices in ambiguous areas, since the pedestrian regions are initiated using the disparity of head regions. These are usually highly textured and unoccluded, and therefore more reliable in a disparity image than homogeneous or occluded regions.
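    A golden-ratio plausibility check of the kind described could look like the following. This is a hedged illustration only: the specific relation between head height and body height via a power of the golden ratio, and the tolerance value, are assumptions; the paper's exact biometric model is not given in this abstract.

```python
# Hypothetical sketch: estimating expected body height from a detected head
# region via the golden ratio, as a plausibility check on candidate regions.
GOLDEN_RATIO = (1 + 5 ** 0.5) / 2  # phi, approx. 1.618

def expected_body_height(head_height, power=4):
    """Assumed relation: body height ~ head height * phi**power."""
    return head_height * GOLDEN_RATIO ** power

def plausible_pedestrian(head_height, region_height, tolerance=0.25):
    """Accept a region whose height is near the golden-ratio prediction."""
    expected = expected_body_height(head_height)
    return abs(region_height - expected) / expected <= tolerance

plausible_pedestrian(0.25, 1.7)  # 0.25 m head, 1.7 m region: accepted
plausible_pedestrian(0.25, 3.0)  # 3.0 m region: rejected as implausible
```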

    Regularising disparity estimation via multi task learning with structured light reconstruction

    Full text link
    3D reconstruction is a useful tool for surgical planning and guidance. However, the lack of available medical data stunts research and development in this field, as supervised deep learning methods for accurate disparity estimation rely heavily on large datasets containing ground truth information. Alternative approaches to supervision have been explored, such as self-supervision, which can reduce or entirely remove the need for ground truth. However, no proposed alternatives have demonstrated performance capabilities close to what would be expected from a supervised setup. This work aims to alleviate this issue. In this paper, we investigate the learning of structured light projections to enhance the development of direct disparity estimation networks. We show for the first time that it is possible to accurately learn the projection of structured light on a scene, implicitly learning disparity. Secondly, we explore the use of a multi-task learning (MTL) framework for the joint training of structured light and disparity. We present results which show that MTL with structured light improves disparity training without increasing the number of model parameters. Our MTL setup outperformed the single-task learning (STL) network in every validation test. Notably, in the medical generalisation test, the STL error was 1.4 times worse than that of the best MTL performance. The benefit of using MTL is emphasised when the training data is limited. A dataset containing stereoscopic images, disparity maps and structured light projections on medical phantoms and ex vivo tissue was created for evaluation, together with virtual scenes. This dataset will be made publicly available in the future.
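    The joint training objective behind such an MTL setup can be sketched as a weighted sum of two per-task losses from a shared network. This is a minimal hypothetical sketch: the L1 form of each term and the weighting factor are assumptions, as the abstract does not state the paper's actual loss.

```python
# Hypothetical sketch of a multi-task loss: one shared network emits both a
# disparity map and a structured-light projection, and the two L1 terms are
# combined with an assumed weighting (the paper's exact loss is not given).
def l1_loss(pred, target):
    """Mean absolute error over a flattened prediction."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def mtl_loss(disp_pred, disp_gt, light_pred, light_gt, w_light=0.5):
    """Joint objective: disparity term plus weighted structured-light term."""
    return l1_loss(disp_pred, disp_gt) + w_light * l1_loss(light_pred, light_gt)

loss = mtl_loss([1, 2], [1, 2], [0, 0], [1, 1])  # disparity exact, light off by 1
```

    Because both tasks share one set of parameters, the structured-light term regularises the disparity head without adding model capacity, which matches the abstract's claim of improvement "without increasing the number of model parameters".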

    Efficient Depth Estimation Using Sparse Stereo-Vision with Other Perception Techniques

    Get PDF
    The stereo vision system is one of the popular computer vision techniques. The idea here is to use the parallax error to our advantage. A single scene is recorded from two different viewing angles, and depth is estimated from the measure of parallax error. This technique is more than a century old and has proven useful in many applications. The field has led many researchers and mathematicians to devise novel algorithms for accurate output from stereo systems. The system is particularly useful in the field of robotics, as it provides a 3D understanding of the scene by giving estimated object depths. This chapter, along with a complete overview of the stereo system, discusses the efficient estimation of object depth. It stresses the fact that, if coupled with other perception techniques, stereo depth estimation can be made much more efficient than current techniques. The idea revolves around the fact that stereo depth estimation is not necessary for all the pixels of the image. This opens the door to more complex and accurate depth estimation techniques for the fewer regions of interest in the image scene. Further details about this idea are discussed in the subtopics that follow.
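    The sparse-depth idea can be sketched with the standard triangulation relation Z = f · B / d, evaluated only at points of interest rather than densely. The point coordinates, disparities, and camera parameters below are illustrative values, not taken from the chapter.

```python
# Hypothetical sketch: triangulating depth (Z = f * B / d) only at sparse
# points of interest rather than for every pixel of the image.
def depth_from_disparity(disparity, focal_length, baseline):
    """Classic stereo triangulation for one matched point."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return focal_length * baseline / disparity

def sparse_depths(roi_disparities, focal_length, baseline):
    """Depth only at regions of interest, e.g. matched feature points."""
    return {pt: depth_from_disparity(d, focal_length, baseline)
            for pt, d in roi_disparities.items()}

pts = {(120, 80): 25.0, (300, 210): 50.0}  # pixel -> disparity (illustrative)
depths = sparse_depths(pts, focal_length=500.0, baseline=0.12)
```

    Restricting the computation to a handful of matched points is what frees the budget for more accurate matching on those points, which is the chapter's central argument.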

    Summative Stereoscopic Image Compression using Arithmetic Coding

    Get PDF
    Image compression aims at reducing the number of bits required for image representation, to save storage space and speed up transmission over the network. The reduction in size makes it possible to store more images on disk and lowers transfer time over the data network. A stereoscopic image refers to a three-dimensional (3D) image that the human brain perceives by fusing two images, captured with distinct phases, presented to the left and right eyes. However, storing these images takes twice the space of a single image; hence the motivation for this novel approach, called Summative Stereoscopic Image Compression using Arithmetic Coding (S2ICAC), in which the difference and average of the stereo pair images are calculated, quantized in the lossy approach and left unquantized in the lossless approach, and arithmetic coding is applied. The experimental result analysis indicates that the proposed method achieves a high compression ratio and a high PSNR value. The proposed method is also compared with the JPEG 2000 Position Based Coding Scheme (JPEG 2000 PBCS) and Stereoscopic Image Compression using Huffman Coding (SICHC). From the experimental analysis, it is observed that S2ICAC outperforms both JPEG 2000 PBCS and SICHC.
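    The average/difference transform at the heart of the approach can be sketched as below. This is a hypothetical minimal version operating on flat pixel lists; the quantization and arithmetic-coding stages that follow in S2ICAC are omitted.

```python
# Hypothetical sketch of the summative transform behind S2ICAC: the stereo
# pair is recoded as an average and a difference signal. The difference is
# typically sparse (the views are similar), so it compresses well; the
# arithmetic-coding stage applied afterwards is omitted here.
def forward_transform(left, right):
    avg = [(l + r) / 2 for l, r in zip(left, right)]
    diff = [l - r for l, r in zip(left, right)]
    return avg, diff

def inverse_transform(avg, diff):
    left = [a + d / 2 for a, d in zip(avg, diff)]
    right = [a - d / 2 for a, d in zip(avg, diff)]
    return left, right

left, right = [100, 102, 98], [101, 100, 98]
avg, diff = forward_transform(left, right)          # diff = [-1, 2, 0]
rec_left, rec_right = inverse_transform(avg, diff)  # lossless round trip
```

    Without quantization the round trip is exact (the lossless mode); quantizing `avg` and `diff` before entropy coding gives the lossy mode described in the abstract.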

    V-FUSE: Volumetric Depth Map Fusion with Long-Range Constraints

    Full text link
    We introduce a learning-based depth map fusion framework that accepts a set of depth and confidence maps generated by a Multi-View Stereo (MVS) algorithm as input and improves them. This is accomplished by integrating volumetric visibility constraints that encode long-range surface relationships across different views into an end-to-end trainable architecture. We also introduce a depth search window estimation sub-network, trained jointly with the larger fusion sub-network, to reduce the depth hypothesis search space along each ray. Our method learns to model depth consensus and violations of visibility constraints directly from the data, effectively removing the need to fine-tune fusion parameters. Extensive experiments on MVS datasets show substantial improvements in the accuracy of the output fused depth and confidence maps.

    Comment: ICCV 202
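    As a point of reference for what the learned fusion replaces, a classical per-pixel baseline is confidence-weighted averaging of the per-view depth hypotheses. The sketch below is this simple baseline, not V-FUSE's volumetric architecture; the zero-confidence fallback is an assumption.

```python
# Hypothetical baseline: fusing per-view depth hypotheses for one pixel by
# confidence-weighted averaging. V-FUSE learns this consensus (plus visibility
# constraints) end to end instead of using a fixed rule like this one.
def fuse(depths, confidences):
    """Confidence-weighted average of depth hypotheses from several views."""
    total_w = sum(confidences)
    if total_w == 0:
        return None  # no reliable hypothesis for this pixel
    return sum(d * c for d, c in zip(depths, confidences)) / total_w

# Two confident, consistent views dominate one low-confidence outlier.
fused = fuse([2.0, 2.2, 5.0], [0.9, 0.8, 0.1])
```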