
    Wavelet and FFT Based Image Denoising Using Non-linear Filters

    We propose a stationary and discrete wavelet-based image denoising scheme and an FFT-based image denoising scheme to remove Gaussian noise. In the first approach, the high-frequency subbands are added together and soft thresholding is then performed; the sum of the low-frequency subbands is filtered with either a piecewise linear (PWL), Lagrange, or spline-interpolated PWL filter. In the second approach, the FFT is applied to the noisy image and the low-frequency and high-frequency coefficients are separated at a specified cutoff frequency. The inverse transform of the low-frequency components is then filtered with one of the PWL filters, and the inverse transform of the high-frequency components is filtered with soft thresholding. The experimental results are compared with Liu and Liu's tensor-based diffusion model (TDM) approach.
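    A minimal sketch of the FFT-based split described above, assuming a radial cutoff in the centered spectrum and plain NumPy; the cutoff and threshold values are illustrative, and the PWL filtering of the low-pass image is only indicated by a comment, so this is not the authors' implementation.

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink values toward zero by t (standard soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fft_denoise(noisy, cutoff=0.15, threshold=10.0):
    """Denoise a 2-D grayscale image by splitting its spectrum at a cutoff."""
    F = np.fft.fftshift(np.fft.fft2(noisy))
    rows, cols = noisy.shape
    y, x = np.ogrid[:rows, :cols]
    # Normalized radial distance from the center of the shifted spectrum.
    r = np.hypot(y - rows / 2, x - cols / 2) / (0.5 * min(rows, cols))
    low_mask = r <= cutoff

    low = np.fft.ifft2(np.fft.ifftshift(F * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(F * ~low_mask)).real
    # The paper filters the low-pass image with a PWL filter; here it is
    # passed through unchanged as a placeholder.
    return low + soft_threshold(high, threshold)
```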

    Retinal Fundus Image Enhancement Using the Normalized Convolution and Noise Removing

    Retinal fundus images play an important role in the diagnosis of retinal diseases. Detailed structures in the retinal fundus image, such as small vessels, microaneurysms, and exudates, may appear in low contrast, so retinal image enhancement usually helps in analyzing diseases related to the retinal fundus image. Current image enhancement methods may lead to artificial boundaries, abrupt changes in color levels, and the loss of image detail. In order to avoid these side effects, a new retinal fundus image enhancement method is proposed. First, the original retinal fundus image is processed by the normalized convolution algorithm with a domain transform to obtain an image containing the basic information of the background. Then, this background image is fused with the original retinal fundus image to obtain an enhanced fundus image. Lastly, the fused image is denoised by a two-stage denoising method consisting of fourth-order PDEs and a relaxed median filter. Retinal image databases, including the DRIVE, STARE, and DIARETDB1 databases, were used to evaluate the enhancement effect. The results show that the method enhances the retinal fundus image prominently. Moreover, unlike some other fundus image enhancement methods, the proposed method can directly enhance color images.
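    The relaxed median filter used in the second denoising stage can be sketched as follows, under the common definition in which a pixel is replaced by the window median only when its value falls outside an inner band of order statistics; the window size and the band limits l and u are illustrative assumptions, and the fourth-order PDE stage is not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter, rank_filter

def relaxed_median(img, size=3, l=2, u=6):
    """Apply a relaxed median filter with a size x size window."""
    med = median_filter(img, size=size)
    lower = rank_filter(img, rank=l, size=size)   # l-th order statistic
    upper = rank_filter(img, rank=u, size=size)   # u-th order statistic
    keep = (img >= lower) & (img <= upper)        # pixel lies inside the band
    return np.where(keep, img, med)
```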

    Contrast and color balance enhancement for non-uniform illumination retinal images

    Color retinal images play an important role in supporting medical diagnosis. However, some retinal images are unsuitable for diagnosis due to non-uniform illumination. To solve this problem, we propose a method for correcting non-uniform illumination that enhances the quality of a color fundus photograph to a level suitable for reliable visual diagnosis. First, hidden anatomical structure in dark regions of the retinal image is revealed by improving the image luminosity with gamma correction. Second, multi-scale tone manipulation is used to adjust the image contrast in the lightness channel of the L*a*b* color space. Finally, the color balance is adjusted by specifying the image brightness according to Hubbard's specification. The performance of the proposed method has been evaluated on data from the DIARETDB1 dataset. The results show that the proposed algorithm performs well in correcting the non-uniform illumination of color retinal images.
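    A hedged sketch of the first two steps described above (gamma correction of the luminosity followed by a contrast adjustment of the L* channel), assuming scikit-image; the gamma value is illustrative, a simple contrast stretch stands in for the multi-scale tone manipulation, and the Hubbard-based color balance is not reproduced.

```python
import numpy as np
from skimage import color, exposure

def enhance_retina(rgb, gamma=0.8):
    """Lift dark regions with gamma correction, then stretch L* contrast."""
    img = exposure.adjust_gamma(rgb, gamma)           # reveal dark structures
    lab = color.rgb2lab(img)
    L = lab[..., 0] / 100.0                           # L* lies in [0, 100]
    # Simple min-max stretch as a stand-in for multi-scale tone manipulation.
    lab[..., 0] = exposure.rescale_intensity(L, out_range=(0, 100))
    return np.clip(color.lab2rgb(lab), 0, 1)
```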

    Does median filtering truly preserve edges better than linear filtering?

    Image processing researchers commonly assert that "median filtering is better than linear filtering for removing noise in the presence of edges." Using a straightforward large-n decision-theory framework, this folk theorem is seen to be false in general. We show that median filtering and linear filtering have similar asymptotic worst-case mean-squared error (MSE) when the signal-to-noise ratio (SNR) is of order 1, which corresponds to the case of a constant per-pixel noise level in a digital signal. To see dramatic benefits of median smoothing in an asymptotic setting, the per-pixel noise level should tend to zero (i.e., the SNR should grow very large). We show that a two-stage median filter using two very different window widths can dramatically outperform traditional linear and median filtering in settings where the underlying object has edges. In this two-stage procedure, the first pass, at a fine scale, aims at increasing the SNR; the second pass, at a coarser scale, correctly exploits the nonlinearity of the median. Image processing methods based on nonlinear partial differential equations (PDEs) are often said to improve on linear filtering in the presence of edges. Such methods seem difficult to analyze rigorously in a decision-theoretic framework. A popular example is mean curvature motion (MCM), which is formally a kind of iterated median filtering. Our results on iterated median filtering suggest that some PDE-based methods are candidates to rigorously outperform linear filtering in an asymptotic framework. Comment: Published at http://dx.doi.org/10.1214/08-AOS604 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
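    A minimal sketch of the two-stage median smoothing discussed above: a first median pass with a small window raises the effective SNR, and a second pass with a much larger window exploits the median's nonlinearity near edges. The window widths are illustrative, not the paper's analyzed values.

```python
from scipy.ndimage import median_filter

def two_stage_median(signal, fine=3, coarse=15):
    """Apply fine-scale then coarse-scale median filtering."""
    first = median_filter(signal, size=fine)     # stage 1: increase SNR
    return median_filter(first, size=coarse)     # stage 2: exploit nonlinearity
```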

    Selected Algorithms of Quantitative Image Analysis for Measurements of Properties Characterizing Interfacial Interactions at High Temperatures.

    In every quantitative image analysis system, a very important issue is improving the quality of the images to be analyzed, in other words, their pre-processing. As a result of pre-processing, a significant part of the redundant information and disturbances (which may originate from imperfect vision system components) should be removed from the image. Another particularly important problem to be solved is the right choice of image segmentation procedures. The essence of segmentation is to divide an image into disjoint subsets that meet certain criteria of homogeneity (e.g., color, brightness, or texture). The result of segmentation should allow the most precise determination of the geometrical features of objects present in a scene with a minimum of computing effort. The measurement of the geometric properties of objects present in the scene is the subject of image analysis.
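    A minimal illustration of the segmentation-then-measurement workflow the abstract describes, assuming scikit-image: a pre-processed grayscale image is thresholded into homogeneous regions, the disjoint objects are labeled, and simple geometric properties are measured. Otsu thresholding is an assumed choice of homogeneity criterion, not necessarily the one used in the work.

```python
from skimage import filters, measure

def measure_objects(gray):
    """Segment a grayscale image and return per-object geometric features."""
    mask = gray > filters.threshold_otsu(gray)   # homogeneity by brightness
    labels = measure.label(mask)                 # disjoint connected regions
    return [(r.label, r.area, r.perimeter, r.eccentricity)
            for r in measure.regionprops(labels)]
```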

    Maximum Energy Subsampling: A General Scheme For Multi-resolution Image Representation And Analysis

    Image descriptors play an important role in image representation and analysis. Multi-resolution image descriptors can effectively characterize complex images and extract their hidden information. Wavelet descriptors have been widely used in multi-resolution image analysis. However, making the wavelet transform shift- and rotation-invariant introduces redundancy and requires complex matching processes. Other multi-resolution descriptors usually depend on additional theories or information, such as filtering functions or prior domain knowledge, which not only increases the computational complexity but also introduces errors. We propose a novel multi-resolution scheme that is capable of transforming any kind of image descriptor into its multi-resolution structure with high computational accuracy and efficiency. Our multi-resolution scheme is based on sub-sampling an image into an odd-even image tree. By applying image descriptors to the odd-even image tree, we obtain the corresponding multi-resolution image descriptors. Multi-resolution analysis is based on downsampling expansion with maximum energy extraction followed by upsampling reconstruction. Since the maximum energy is usually retained in the lowest-frequency coefficients, we perform maximum energy extraction by keeping the lowest coefficients from each resolution level. Our multi-resolution scheme can analyze images recursively and effectively without introducing artifacts or changes to the original images, produce multi-resolution representations, obtain higher-resolution images using only information from lower resolutions, compress data, filter noise, extract effective image features, and be implemented in parallel processing.
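    A hedged sketch of one level of the odd-even sub-sampling that builds the image tree described above: the image is split into four children by crossing even/odd rows with even/odd columns, and the child with the largest energy is selected. The abstract gives the energy-based selection only at a high level, so the simple sum-of-squares criterion below is an assumption.

```python
import numpy as np

def odd_even_split(img):
    """Return the four odd/even row-column sub-images of one tree level."""
    return {
        "ee": img[0::2, 0::2],   # even rows, even columns
        "eo": img[0::2, 1::2],   # even rows, odd columns
        "oe": img[1::2, 0::2],   # odd rows, even columns
        "oo": img[1::2, 1::2],   # odd rows, odd columns
    }

def max_energy_child(children):
    """Pick the sub-image with the largest energy (sum of squared values)."""
    return max(children, key=lambda k: np.sum(children[k].astype(float) ** 2))
```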

    Data mining based learning algorithms for semi-supervised object identification and tracking

    Sensor exploitation (SE) is a crucial step in surveillance applications such as airport security and search and rescue operations. It allows localization and identification of movement in urban settings and can significantly boost knowledge gathering, interpretation, and action. Data mining techniques offer the promise of precise and accurate knowledge acquisition in high-dimensional data domains (diminishing the "curse of dimensionality" prevalent in such datasets), coupled with algorithmic design in feature extraction, discriminative ranking, feature fusion, and supervised learning (classification). Consequently, data mining techniques and algorithms can be used to refine and process captured data and to detect, recognize, classify, and track objects with predictably high degrees of specificity and sensitivity. Automatic object detection and tracking algorithms face several obstacles, such as large and incomplete datasets, ill-defined regions of interest (ROIs), variable scalability, lack of compactness, angular regions, partial occlusions, environmental variables, and unknown potential object classes, which work against their ability to achieve accurate real-time results. Methods must produce fast and accurate results by streamlining image processing, data compression and reduction, feature extraction, classification, and tracking algorithms. Data mining techniques can address these challenges by implementing efficient and accurate dimensionality reduction with feature extraction to refine an incomplete (ill-partitioned) data space and by addressing challenges related to object classification, intra-class variability, and inter-class dependencies. A series of methods has been developed to combat many of these challenges for the purpose of creating a sensor exploitation and tracking framework for real-time image sensor inputs. The framework is broken down into a series of sub-routines, which work both in series and in parallel to accomplish tasks such as image pre-processing, data reduction, segmentation, object detection, tracking, and classification. These methods can be implemented either independently or together to form a synergistic solution to object detection and tracking. The main contributions to the SE field include novel feature extraction methods for highly discriminative object detection, classification, and tracking. In addition, a new supervised classification scheme is presented for detecting objects in urban environments; this scheme incorporates both novel features and non-maximal suppression to reduce false alarms, which can be abundant in cluttered environments such as cities. Lastly, a performance evaluation of Graphics Processing Unit (GPU) implementations of the subtask algorithms is presented, which provides insight into speed-up gains throughout the SE framework and informs the design of real-time applications. The overall framework provides a comprehensive SE system that can be tailored for integration into a layered sensing scheme to provide the warfighter with automated assistance and support. As sensor technology and integration continue to advance, this SE framework can provide faster and more accurate decision support for both intelligence and civilian applications.
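    A minimal sketch of the non-maximal suppression step mentioned above, which discards detections that overlap a higher-scoring detection too strongly; boxes are assumed to be (x1, y1, x2, y2) arrays and the IoU threshold is an illustrative value, not a detail taken from the work itself.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Return indices of boxes kept after non-maximal suppression."""
    order = np.argsort(scores)[::-1]          # highest-scoring detections first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with every remaining box.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Drop boxes that overlap the kept box more than the threshold.
        order = order[1:][iou <= iou_thresh]
    return keep
```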