
    Adaptive Methods for Point Cloud and Mesh Processing

    Point clouds and 3D meshes are widely used in applications ranging from games to virtual reality to autonomous vehicles. This dissertation proposes several approaches for removing noise from and calibrating noisy point cloud data, together with 3D mesh sharpening methods. Order statistic filters have proven very successful in image processing and other domains; several variations originally proposed for image processing are extended here to point cloud filtering, and a new adaptive vector median filter is proposed for removing noise and outliers from point cloud data. The major contributions of this research lie in four aspects: 1) four order statistic algorithms are extended and one adaptive filtering method is proposed for noisy point clouds, with improved results such as the preservation of significant features; these methods are applied to standard models, synthetic models, and real scenes; 2) the proposed method is accelerated on multicore processors using the Microsoft Parallel Patterns Library; 3) a new method for aerial LIDAR data filtering is proposed, with the objective of automatically extracting ground points from aerial LIDAR data with minimal human intervention; and 4) a novel method for mesh color sharpening using the discrete Laplace-Beltrami operator is proposed. Median and order statistics-based filters are widely used in signal and image processing because they remove outlier noise while preserving important features. This dissertation demonstrates a wide range of results with the median, vector median, fuzzy vector median, adaptive mean, adaptive median, and adaptive vector median filters on point cloud data. The experiments show that large-scale noise is removed while important features of the point cloud are preserved, at reasonable computation time. Quantitative criteria (e.g., complexity, Hausdorff distance, and root mean squared error (RMSE)) as well as qualitative criteria (e.g., the perceived visual quality of the processed point cloud) are employed to assess the performance of the filters under different noise models. The adaptive vector median is further optimized for denoising and ground filtering of aerial LIDAR point cloud data, and is accelerated on multi-core CPUs using the Microsoft Parallel Patterns Library. In addition, this dissertation presents a new method for mesh color sharpening using the discrete Laplace-Beltrami operator, an approximation of second-order derivatives on irregular 3D meshes, computed over the one-ring neighborhood of each vertex. The color of each vertex is updated by adding the Laplace-Beltrami operator of the vertex color, weighted by a factor, to its original value. Several discretizations of the Laplace-Beltrami operator, originally proposed for geometric processing of 3D meshes, are used for sharpening 3D mesh colors and their performance is compared. Experimental results demonstrate the effectiveness of the proposed algorithms.
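
    As a concrete illustration of the vector median idea described above, the sketch below applies a plain (non-adaptive) vector median filter to a point cloud: each point is replaced by the neighbor that minimizes the summed Euclidean distance to all other neighbors in its window. The function name, the fixed k-nearest-neighbor window, and the synthetic example are illustrative assumptions; the dissertation's adaptive variant additionally tunes the neighborhood per point. The mesh color sharpening step corresponds to the per-vertex update c_i <- c_i + lambda * L(c_i), where L is the discrete Laplace-Beltrami operator over the one-ring neighborhood and lambda is the sharpening weight.

        # Plain (non-adaptive) vector median filter sketch for point clouds.
        # Assumes NumPy/SciPy; the k nearest neighbors define the filter window.
        import numpy as np
        from scipy.spatial import cKDTree

        def vector_median_filter(points, k=8):
            """Replace each point with the vector median of its k nearest neighbors,
            i.e. the neighbor minimizing the sum of distances to the others."""
            tree = cKDTree(points)
            filtered = np.empty_like(points)
            for i, p in enumerate(points):
                _, idx = tree.query(p, k=k)          # indices of the k nearest neighbors
                nbrs = points[np.atleast_1d(idx)]    # (k, 3) neighborhood
                # pairwise distance sums; the minimizer is the vector median
                d = np.linalg.norm(nbrs[:, None, :] - nbrs[None, :, :], axis=-1).sum(axis=1)
                filtered[i] = nbrs[np.argmin(d)]
            return filtered

        # Example: denoise a roughly planar synthetic cloud with injected outliers
        pts = np.random.rand(500, 3) * [10.0, 10.0, 0.05]
        pts[::50] += np.random.randn(10, 3) * 2.0
        cleaned = vector_median_filter(pts, k=8)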

    Evaluation of performance for different filtering methods in CT brain images

    This paper presents the comparison of filtering methods for a contrast enhancement of computed tomography (CT) brain images. Each method consists of three filter consecutively which is a combination of the low order linear filter such as Gaussian filter, disk filter, average filter and median filter with an adaptive filter method and unsharp filter. The process starts with filtering the CT brain image using low order linear filter, then proceeds with adaptive averaging filter and ends with unsharp filter. In this paper, there are two criteria, peak signal to noise ratio and mean square error, that were adopted for performance assessment. Our preliminary results showed that the combination of Gaussian filter with adaptive filter and unsharp filter gives the good result in removing the noise and edge detection. This method improved the CT brain image and the gyri and sulci can be easily identified
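
    A minimal sketch of the three-stage chain described above (low-order linear filter, then adaptive filter, then unsharp filter), scored with MSE and PSNR. The Wiener filter is used here as a stand-in for the paper's adaptive averaging step, and the parameter values are illustrative assumptions rather than the authors' settings.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from scipy.signal import wiener

        def unsharp(img, sigma=1.0, amount=1.5):
            """Unsharp masking: add back the high-frequency residual."""
            return img + amount * (img - gaussian_filter(img, sigma))

        def mse(a, b):
            return float(np.mean((a - b) ** 2))

        def psnr(a, b, peak=255.0):
            return 10.0 * np.log10(peak ** 2 / max(mse(a, b), 1e-12))

        def enhance(ct_slice):
            x = ct_slice.astype(np.float64)
            x = gaussian_filter(x, sigma=1.0)       # stage 1: low-order linear (Gaussian) filter
            x = wiener(x, mysize=5)                 # stage 2: adaptive (locally varying) filter
            x = unsharp(x, sigma=1.0, amount=1.5)   # stage 3: unsharp filter
            return np.clip(x, 0, 255)

        # Toy example with a synthetic noisy slice
        truth = np.tile(np.linspace(0, 255, 256), (256, 1))
        noisy = truth + np.random.normal(0, 10, truth.shape)
        out = enhance(noisy)
        print(f"PSNR={psnr(truth, out):.2f} dB, MSE={mse(truth, out):.2f}")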

    Adaptive two-pass rank order filter to remove impulse noise in highly corrupted images

    In this paper, we present an adaptive two-pass rank order filter to remove impulse noise in highly corrupted images. When the noise ratio is high, rank order filters such as the median filter can produce unsatisfactory results. Better results can be obtained by applying the filter twice, which we call two-pass filtering. To further improve the performance, we develop an adaptive two-pass rank order filter. Between the passes of filtering, an adaptive process is used to detect irregularities in the spatial distribution of the estimated impulse noise. The adaptive process then selectively replaces some pixels changed by the first pass of filtering with their original observed pixel values. These pixels are then kept unchanged during the second filtering. In combination, the adaptive process and the second filter eliminate more impulse noise and restore some pixels that are mistakenly altered by the first filtering. As a final result, the reconstructed image maintains a higher degree of fidelity and has a smaller amount of noise. The idea of adaptive two-pass processing can be applied to many rank order filters, such as the center-weighted median filter (CWMF), adaptive CWMF, lower-upper-middle filter, and soft-decision rank-order-mean filter. Results from computer simulations are used to demonstrate the performance of this type of adaptation using a number of basic rank order filters. This work was supported in part by CenSSIS, the Center for Subsurface Sensing and Imaging Systems, under the Engineering Research Centers Program of the National Science Foundation (NSF), Award EEC-9986821; by an ARO MURI on Demining under Grant DAAG55-97-1-0013; and by the NSF under Award 0208548. © 2004 IEEE.
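
    The adaptive two-pass idea can be sketched as follows: run a rank order filter, use a simple spatial rule to restore pixels that the first pass probably altered by mistake, and run the filter again while holding the restored pixels fixed. The isolation heuristic and the threshold values below are illustrative assumptions, not the paper's exact adaptive rule.

        import numpy as np
        from scipy.ndimage import median_filter, uniform_filter

        def adaptive_two_pass_median(img, size=3, impulse_thresh=40, density_thresh=0.15):
            pass1 = median_filter(img, size=size)
            # pixels changed substantially by the first pass = estimated impulse noise
            changed = np.abs(img.astype(float) - pass1.astype(float)) > impulse_thresh
            # local density of estimated impulses around each pixel
            density = uniform_filter(changed.astype(float), size=2 * size + 1)
            # isolated detections are more likely image detail than impulse noise
            restore = changed & (density < density_thresh)
            restored = np.where(restore, img, pass1)
            pass2 = median_filter(restored, size=size)
            # keep the restored pixels unchanged through the second pass
            return np.where(restore, img, pass2)

    The same two-pass wrapper could be placed around a center-weighted median or another rank order filter by swapping the median_filter calls, in line with the paper's remark that the adaptation applies to many rank order filters.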

    New Stereo Vision Algorithm Composition Using Weighted Adaptive Histogram Equalization and Gamma Correction

    This work presents the composition of a new algorithm for a stereo vision system to acquire accurate depth measurements from stereo correspondence. Stereo correspondence produced by matching is commonly affected by image noise such as illumination variation, blurry boundaries, and radiometric differences. The proposed algorithm introduces a pre-processing step based on the combination of Contrast Limited Adaptive Histogram Equalization (CLAHE) and Adaptive Gamma Correction Weighted Distribution (AGCWD) with a guided filter (GF). The cost value from the pre-processing step is determined in the matching cost step using the census transform (CT), followed by aggregation using the fixed-window and GF technique. A winner-takes-all (WTA) approach is employed to select the minimum disparity value, and final refinement uses left-right consistency checking (LR) along with a weighted median filter (WMF) to remove outliers. The algorithm improves accuracy by 31.65% for all-pixel errors and 23.35% for non-occluded-pixel errors compared with several established algorithms on the Middlebury dataset.
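
    An abbreviated sketch of the pipeline above: CLAHE and gamma pre-processing, census-transform matching cost, fixed-window aggregation, and winner-takes-all disparity selection. A plain gamma correction stands in for AGCWD, and the guided-filter steps, left-right consistency check, and weighted median refinement are omitted; all parameter values are assumptions.

        import cv2
        import numpy as np

        def census(img, win=5):
            """Census transform: bit pattern of neighbor-versus-center comparisons."""
            r = win // 2
            code = np.zeros(img.shape, dtype=np.uint64)
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    shifted = np.roll(img, (-dy, -dx), axis=(0, 1))  # wraps at borders
                    code = (code << np.uint64(1)) | (shifted < img).astype(np.uint64)
            return code

        def hamming(a, b):
            """Per-pixel Hamming distance between two census code maps."""
            x = np.ascontiguousarray(a ^ b)
            return np.unpackbits(x.view(np.uint8)).reshape(x.shape + (64,)).sum(-1)

        def disparity(left_gray, right_gray, max_disp=64, agg=9):
            """Expects an 8-bit grayscale rectified stereo pair; returns a disparity map."""
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            gamma = lambda im: (255.0 * (im / 255.0) ** 0.8).astype(np.uint8)  # stands in for AGCWD
            l = census(gamma(clahe.apply(left_gray)))
            r = census(gamma(clahe.apply(right_gray)))
            h, w = left_gray.shape
            cost = np.full((max_disp, h, w), np.float32(1e9))
            for d in range(max_disp):
                c = hamming(l[:, d:], r[:, :w - d]).astype(np.float32)
                cost[d, :, d:] = cv2.blur(c, (agg, agg))      # fixed-window aggregation
            return np.argmin(cost, axis=0).astype(np.uint8)   # winner-takes-all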

    Robust Adaptive Median Binary Pattern for noisy texture classification and retrieval

    Texture is an important cue for different computer vision tasks and applications. The Local Binary Pattern (LBP) is considered one of the best and most efficient texture descriptors. However, LBP has some notable limitations, most notably its sensitivity to noise. In this paper, we address these limitations by introducing a novel texture descriptor, the Robust Adaptive Median Binary Pattern (RAMBP). RAMBP is based on the classification of noisy pixels, an adaptive analysis window, scale analysis, and median comparison of image regions. The proposed method handles images with highly noisy textures and increases the discriminative power by capturing both microstructure and macrostructure texture information. The proposed method has been evaluated on popular texture datasets for classification and retrieval tasks under different high-noise conditions. Without any training or prior knowledge of the noise type, RAMBP achieved the best classification results compared to state-of-the-art techniques: more than 90% under 50% impulse noise density, more than 95% on Gaussian-noised textures with standard deviation σ = 5, and more than 99% on Gaussian-blurred textures with standard deviation σ = 1.25. The proposed method also yielded competitive results as one of the best descriptors for noise-free texture classification. Furthermore, RAMBP showed high performance on noisy texture retrieval, providing high recall and precision for textures with high levels of noise.
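
    A minimal sketch of the median-thresholding idea at the core of RAMBP: each 3x3 neighborhood is binarized against its local median rather than the center pixel, which is what buys robustness to impulse noise. The full descriptor additionally classifies noisy pixels and adapts the window and scale; the fixed window, histogram size, and function names here are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import median_filter

        def median_binary_pattern(img):
            """Binarize each pixel's 8 neighbors against the local 3x3 median."""
            img = img.astype(np.float64)
            med = median_filter(img, size=3)                  # local median per pixel
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]      # 8-neighborhood, clockwise
            code = np.zeros(img.shape, dtype=np.uint8)
            for bit, (dy, dx) in enumerate(offsets):
                nbr = np.roll(img, (-dy, -dx), axis=(0, 1))
                code |= (nbr >= med).astype(np.uint8) << bit
            return code

        def mbp_histogram(img, bins=256):
            """Texture descriptor: normalized histogram of the pattern codes."""
            hist, _ = np.histogram(median_binary_pattern(img), bins=bins, range=(0, bins))
            return hist / hist.sum()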

    Adaptive Nonlocal Filtering: A Fast Alternative to Anisotropic Diffusion for Image Enhancement

    The goal of many early visual filtering processes is to remove noise while at the same time sharpening contrast. A historical succession of approaches to this problem, starting with the use of simple derivative and smoothing operators and the subsequent realization of the relationship between scale-space and the isotropic diffusion equation, has recently resulted in the development of "geometry-driven" diffusion. Nonlinear and anisotropic diffusion methods, as well as image-driven nonlinear filtering, have provided improved performance relative to the older isotropic and linear diffusion techniques. These techniques, which either explicitly or implicitly make use of kernels whose shape and center are functions of local image structure, are too computationally expensive for use in real-time vision applications. In this paper, we show that results largely equivalent to those obtained from geometry-driven diffusion can be achieved by a process that is conceptually separated into two very different functions. The first involves the construction of a vector field of "offsets", defined on a subset of the original image, at which to apply a filter. The offsets are used to displace filters away from boundaries to prevent edge blurring and destruction. The second is the (straightforward) application of the filter itself. The former function is a kind of generalized image skeletonization; the latter is conventional image filtering. This formulation leads to results that are qualitatively similar to contemporary nonlinear diffusion methods, but at computation times that are roughly two orders of magnitude faster, allowing application of this technique to real-time imaging. An additional advantage of this formulation is that it allows existing filter hardware and software implementations to be applied with no modification, since the offset step reduces to an image pixel permutation, or look-up table operation, applied separately from the filter itself.
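
    A rough sketch of the offset-then-filter idea: build an offset field that nudges each sampling position away from strong boundaries (here, downhill on a smoothed gradient-magnitude map), permute the image through that field, and then apply an ordinary filter unchanged. The downhill step rule is an illustrative assumption; the paper derives its offsets from a skeletonization-like process on local image structure.

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates, sobel

        def offset_then_filter(img, step=2.0, sigma=1.5):
            img = img.astype(np.float64)
            # boundary strength and its gradient (the gradient points toward edges)
            edge = gaussian_filter(np.hypot(sobel(img, axis=0), sobel(img, axis=1)), 2.0)
            gy, gx = np.gradient(edge)
            norm = np.maximum(np.hypot(gy, gx), 1e-9)
            yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(np.float64)
            # sample each pixel a small step away from the nearest boundary
            coords = [yy - step * gy / norm, xx - step * gx / norm]
            permuted = map_coordinates(img, coords, order=1, mode='nearest')
            # conventional filtering applied unchanged after the offset/permutation step
            return gaussian_filter(permuted, sigma)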