
    New Stereo Vision Algorithm Composition Using Weighted Adaptive Histogram Equalization and Gamma Correction

    This work presents a new algorithm for a stereo vision system to acquire accurate depth measurements from stereo correspondence. Stereo correspondence produced by matching is commonly degraded by image noise arising from illumination variation, blurry boundaries, and radiometric differences. The proposed algorithm introduces a pre-processing step that combines Contrast Limited Adaptive Histogram Equalization (CLAHE) and Adaptive Gamma Correction Weighted Distribution (AGCWD) with a guided filter (GF). The matching cost is then computed on the pre-processed images using the census transform (CT), followed by aggregation with a fixed window and the GF technique. A winner-takes-all (WTA) approach selects the minimum-cost disparity value, and final refinement applies left-right consistency checking (LR) together with a weighted median filter (WMF) to remove outliers. The algorithm improved accuracy by 31.65% for all-pixel errors and by 23.35% for errors in non-occluded regions compared with several established algorithms on the Middlebury dataset.
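    As a rough illustration of this pre-processing idea, the Python sketch below chains CLAHE, a simplified AGCWD lookup table, and a guided filter using OpenCV. All parameter values (clipLimit, the AGCWD alpha, the filter radius and eps) are illustrative assumptions rather than the paper's settings, and cv2.ximgproc is only available with the opencv-contrib-python package.

```python
import cv2
import numpy as np

def agcwd(gray, alpha=0.5):
    """Simplified Adaptive Gamma Correction with Weighted Distribution."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    pdf = hist / hist.sum()
    # Weight the intensity distribution to moderate over-represented levels.
    pdf_w = pdf.max() * ((pdf - pdf.min()) / (pdf.max() - pdf.min() + 1e-12)) ** alpha
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()
    # Per-intensity gamma derived from the weighted CDF.
    levels = np.arange(256) / 255.0
    lut = np.clip(255.0 * levels ** (1.0 - cdf_w), 0, 255).astype(np.uint8)
    return cv2.LUT(gray, lut)

def preprocess(gray):
    # Contrast Limited Adaptive Histogram Equalization, then AGCWD.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = agcwd(clahe.apply(gray))
    # Edge-preserving smoothing with a (self-guided) guided filter.
    return cv2.ximgproc.guidedFilter(enhanced, enhanced, 8, (0.01 * 255) ** 2)
```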

    Improved stereo matching algorithm based on census transform and dynamic histogram cost computation

    Stereo matching is a significant subject in stereo vision algorithms. Traditional algorithm compositions following the standard taxonomy suffer from several issues in the stereo correspondence process, such as radiometric distortion, depth discontinuity, and low accuracy in low-texture regions. This work improves the local stereo matching method by using dynamic cost computation for disparity map measurement. The method utilises a modified dynamic cost computation in the matching cost stage: a modified census transform with a dynamic histogram provides the cost volume. Adaptive bilateral filtering is applied in the cost aggregation stage to retain image depth and edge information. Winner-Takes-All (WTA) optimisation is applied for disparity selection, and a left-right check with adaptive bilateral median filtering is employed for final refinement. On the standard Middlebury dataset, the method achieves better accuracy and outperforms several other state-of-the-art algorithms.
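    A minimal NumPy sketch of the census-transform matching cost this abstract builds on is shown below. It implements only the classic census transform and a Hamming-distance cost volume, not the paper's dynamic-histogram modification; the window size and maximum disparity are assumed values.

```python
import numpy as np

def census_transform(gray, window=5):
    """Encode each pixel as a bit string: 1 where a neighbour is darker than the centre."""
    r = window // 2
    h, w = gray.shape
    padded = np.pad(gray, r, mode="edge")
    code = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            code = (code << np.uint64(1)) | (neighbour < gray).astype(np.uint64)
    return code

def census_cost_volume(left, right, max_disp=64):
    """Hamming distance between census codes for each candidate disparity."""
    cl, cr = census_transform(left), census_transform(right)
    h, w = left.shape
    volume = np.full((max_disp, h, w), 255, dtype=np.uint8)  # high cost where invalid
    for d in range(max_disp):
        xor = cl[:, d:] ^ cr[:, :w - d]
        # Population count of the XOR = number of differing census bits.
        bits = np.unpackbits(xor.view(np.uint8), axis=-1)
        volume[d, :, d:] = bits.reshape(h, w - d, 64).sum(-1)
    return volume

# WTA disparity selection: the lowest matching cost wins.
# disparity = census_cost_volume(left_gray, right_gray).argmin(axis=0)
```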

    A Mean-Shift-Based Feature Descriptor for Wide Baseline Stereo Matching

    We propose a novel Mean-Shift-based descriptor-building approach for wide-baseline stereo matching. Initially, the scale-invariant feature transform (SIFT) is used to extract relatively stable feature points. Each matched SIFT feature point needs a reasonable neighborhood range from which to choose its feature point set. Subsequently, to select repeatable and highly robust feature points, Mean-Shift controls the corresponding feature scale. Finally, our approach is applied to depth image acquisition in wide-baseline settings, and a Graph Cut algorithm optimizes the disparity information. Compared with existing methods such as SIFT, speeded-up robust features (SURF), and normalized cross-correlation (NCC), the presented approach achieves higher robustness and accuracy. Experimental results on low-resolution images with weak feature descriptions in wide-baseline settings confirm the validity of our approach.
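    For context, here is a small Python sketch of the SIFT extraction and ratio-test matching stage described above, using OpenCV, with scikit-learn's MeanShift clustering the matched keypoint positions as a crude stand-in for the paper's neighbourhood selection. The file names and bandwidth value are hypothetical, and the paper's actual Mean-Shift scale control and Graph Cut optimization are not reproduced.

```python
import cv2
import numpy as np
from sklearn.cluster import MeanShift

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Extract relatively stable SIFT feature points in both views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(left, None)
kp2, des2 = sift.detectAndCompute(right, None)

# Brute-force matching with Lowe's ratio test to keep reliable matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# Mean-Shift over matched keypoint locations groups them into spatial
# neighbourhoods (a stand-in for the paper's scale-selection step).
pts = np.float32([kp1[m.queryIdx].pt for m in good])
labels = MeanShift(bandwidth=40).fit_predict(pts)
```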

    Accurate and Fast Stereo Vision

    Stereo vision from short-baseline image pairs is one of the most active research fields in computer vision. The estimation of dense disparity maps from stereo image pairs is still a challenging task, and there is further room for improving accuracy, minimizing computational cost, and handling more efficiently outliers, low-textured areas, repeated textures, disparity discontinuities, and light variations. This PhD thesis presents two novel methodologies for stereo vision from short-baseline image pairs.

    I. The first methodology combines three different cost metrics, defined using colour, the CENSUS transform, and SIFT (Scale-Invariant Feature Transform) coefficients. The selected cost metrics are aggregated based on an adaptive-weights approach, in order to calculate their corresponding cost volumes. The resulting cost volumes are merged into a combined one following a novel two-phase strategy, which is further refined by exploiting semi-global optimization. A mean-shift segmentation-driven approach is exploited to deal with outliers in the disparity maps. Additionally, low-textured areas are handled using disparity histogram analysis, which allows for reliable disparity plane fitting in these areas.

    II. The second methodology relies on content-based guided image filtering and weighted semi-global optimization. Initially, the approach uses a pixel-based cost term that combines gradient, Gabor-feature, and colour information. The pixel-based matching costs are filtered by applying guided image filtering, which relies on support windows of two different sizes, so that two filtered costs are estimated for each pixel. Which of the two filtered costs is finally assigned to a pixel depends on the local image content around that pixel. The filtered cost volume is further refined by exploiting weighted semi-global optimization, which improves the disparity accuracy. The handling of occluded areas is enhanced by incorporating a straightforward and time-efficient scheme.

    The evaluation results show that both methodologies are very accurate, since they handle low-textured/occluded areas and disparity discontinuities efficiently. Additionally, the second approach has very low computational complexity. In addition to these two methodologies, which take short-baseline image pairs as input, the thesis also presents a novel methodology for generating accurate 3D point clouds from wide-baseline stereo pairs.
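    A minimal Python sketch of the second methodology's two-window filtering idea follows, under assumed parameters, with local variance standing in for the thesis's content criterion; the weighted semi-global optimization and occlusion-handling steps are omitted.

```python
import cv2
import numpy as np

def aggregate_two_scales(cost_volume, guide, r_small=4, r_large=16, eps=1e-3, var_win=9):
    """Filter each cost slice at two radii; choose per pixel by local image content.

    cost_volume: (D, H, W) float32 pixel-based matching costs.
    guide: (H, W) reference image used to steer the guided filter.
    """
    guide = guide.astype(np.float32) / 255.0
    # Local variance of the guide decides the support window per pixel:
    # textured/edge pixels get the small window, flat areas the large one.
    mean = cv2.blur(guide, (var_win, var_win))
    var = cv2.blur(guide * guide, (var_win, var_win)) - mean * mean
    use_small = var > var.mean()

    filtered = np.empty_like(cost_volume)
    for d, cost in enumerate(cost_volume):
        f_small = cv2.ximgproc.guidedFilter(guide, cost, r_small, eps)
        f_large = cv2.ximgproc.guidedFilter(guide, cost, r_large, eps)
        filtered[d] = np.where(use_small, f_small, f_large)
    return filtered

# WTA over the filtered volume yields the raw disparity map:
# disparity = aggregate_two_scales(volume, left_gray).argmin(axis=0)
```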