
    A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images

    Speckle is a granular disturbance, usually modeled as multiplicative noise, that affects synthetic aperture radar (SAR) images, as well as all coherent images. Over the last three decades, several methods have been proposed for the reduction of speckle, or despeckling, in SAR images. The goal of this paper is to provide a comprehensive review of despeckling methods since their birth, over thirty years ago, highlighting trends and changing approaches over the years. The concept of fully developed speckle is explained. Drawbacks of homomorphic filtering are pointed out. Assets of multiresolution despeckling, as opposed to spatial-domain despeckling, are highlighted. Advantages of undecimated, or stationary, wavelet transforms over decimated ones are also discussed. Bayesian estimators and probability density function (pdf) models in both spatial and multiresolution domains are reviewed. Scale-space varying pdf models, as opposed to scale-varying models, are promoted. Promising methods following non-Bayesian approaches, such as nonlocal (NL) filtering and total variation (TV) regularization, are reviewed and compared to spatial- and wavelet-domain Bayesian filters. Both established and new trends for the assessment of despeckling are presented. A few experiments on simulated data and real COSMO-SkyMed SAR images highlight, on the one hand, the cost-performance tradeoff of the different methods and, on the other hand, the effectiveness of solutions purposely designed for SAR heterogeneity and not fully developed speckle. Eventually, upcoming methods based on new concepts of signal processing, such as compressive sensing, are foreseen as a new generation of despeckling, after spatial-domain and multiresolution-domain methods.
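As a hedged illustration of the multiplicative model and the simplest spatial-domain approach covered by such reviews, the sketch below simulates L-look fully developed speckle (gamma-distributed, unit mean) and applies a basic Lee-style local linear MMSE filter. The window size and look number are illustrative assumptions, not values from the paper.

```python
import numpy as np

def simulate_speckle(image, looks=4, seed=0):
    """Multiplicative speckle model y = x * n: for an L-look intensity
    image, fully developed speckle n is gamma-distributed with unit mean
    and variance 1/L."""
    rng = np.random.default_rng(seed)
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
    return image * noise

def lee_filter(noisy, looks=4, win=3):
    """Basic spatial-domain Lee filter: a local linear MMSE estimator
    derived from the multiplicative noise model."""
    sigma_n2 = 1.0 / looks                       # speckle variance (unit mean)
    pad = win // 2
    padded = np.pad(noisy, pad, mode="reflect")
    out = np.empty_like(noisy, dtype=float)
    for i in range(noisy.shape[0]):
        for j in range(noisy.shape[1]):
            w = padded[i:i + win, j:j + win]
            mean, var = w.mean(), w.var()
            # Estimated variance of the underlying reflectivity in the window.
            var_x = max((var - sigma_n2 * mean ** 2) / (1.0 + sigma_n2), 0.0)
            gain = var_x / (var + 1e-12)
            out[i, j] = mean + gain * (noisy[i, j] - mean)
    return out
```

On a homogeneous area the gain tends to zero and the filter returns the local mean; near strong structures the gain grows toward one and detail is preserved.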

    Contourlet Domain Image Modeling and its Applications in Watermarking and Denoising

    Statistical image modeling in the sparse domain has recently attracted a great deal of research interest. The contourlet transform, a two-dimensional transform with multiscale and multidirectional properties, is known to effectively capture the smooth contours and geometrical structures in images. The objective of this thesis is to study the statistical properties of the contourlet coefficients of images and develop statistically based image denoising and watermarking schemes. Through an experimental investigation, it is first established that the distributions of the contourlet subband coefficients of natural images are significantly non-Gaussian, with heavy tails, and that they are best described by heavy-tailed statistical distributions such as the alpha-stable family. It is shown that the univariate members of this family are capable of accurately fitting the marginal distributions of the empirical data and that the bivariate members can accurately characterize the inter-scale dependencies of the contourlet coefficients of an image. Based on the modeling results, a new method for image denoising in the contourlet domain is proposed. Bayesian maximum a posteriori and minimum mean absolute error estimators are developed to determine the noise-free contourlet coefficients of grayscale and color images. Extensive experiments are conducted using a wide variety of images from a number of databases to evaluate the performance of the proposed image denoising scheme and to compare it with that of other existing schemes. It is shown that the proposed denoising scheme based on the alpha-stable distributions outperforms these other methods in terms of the peak signal-to-noise ratio and mean structural similarity index, as well as in terms of the visual quality of the denoised images. The alpha-stable model is also used in developing new multiplicative watermark schemes for grayscale and color images. Closed-form expressions are derived for the log-likelihood-based multiplicative watermark detection algorithm for grayscale images using the univariate and bivariate Cauchy members of the alpha-stable family. A multiplicative multichannel watermark detector is also designed for color images using the multivariate Cauchy distribution. Simulation results demonstrate not only the effectiveness of the proposed image watermarking schemes in terms of the invisibility of the watermark, but also the superiority of the watermark detectors in providing detection rates higher than those of state-of-the-art schemes, even for watermarked images that have undergone various kinds of attacks.
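A minimal sketch of the kind of model fitting this thesis builds on, using synthetic Cauchy data (the alpha-stable member with alpha = 1) as a stand-in for contourlet subband coefficients, since a contourlet transform is not available in standard scientific Python; the location and scale parameters below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Synthetic heavy-tailed "subband coefficients": the Cauchy distribution is
# the alpha-stable family member with alpha = 1 (real data would come from
# a contourlet decomposition of a natural image).
coeffs = stats.cauchy.rvs(loc=0.0, scale=2.0, size=5000,
                          random_state=np.random.default_rng(1))

# Maximum-likelihood fits of a heavy-tailed (Cauchy) and a Gaussian model.
loc_c, scale_c = stats.cauchy.fit(coeffs)
mu_g, sigma_g = stats.norm.fit(coeffs)

# The heavier-tailed model should win on log-likelihood for such data.
ll_cauchy = stats.cauchy.logpdf(coeffs, loc_c, scale_c).sum()
ll_gauss = stats.norm.logpdf(coeffs, mu_g, sigma_g).sum()
```

The same log-likelihood comparison is the basic ingredient of the log-likelihood-ratio watermark detection mentioned above.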

    Ultrasound image processing in the evaluation of labor induction failure risk

    Labor induction is defined as the artificial stimulation of uterine contractions for the purpose of vaginal birth. Induction is prescribed for medical and elective reasons. Success in labor induction is measured by achievement of vaginal delivery; cesarean section is one of the potential risks of labor induction, occurring in about 20% of inductions. A ripe cervix (soft and distensible) is needed for a successful labor. During ripening, cervical tissues undergo microstructural changes: collagen becomes disorganized and water content increases. These changes affect the interaction between cervical tissues and sound waves during transvaginal ultrasound scanning and are perceived as gray-level intensity variations in the echographic image. Texture analysis can be used to analyze these variations and provides a means to evaluate cervical ripening in a non-invasive way.
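As an illustrative sketch of the texture-analysis idea (not this work's actual feature set), the following computes a gray-level co-occurrence matrix and the classic Haralick contrast feature, which responds to exactly the kind of gray-level intensity variation described above; the quantization level count and pixel displacement are assumptions.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    normalized to a joint probability table over quantized gray levels."""
    h, w = image.shape
    q = (image * levels / (image.max() + 1e-12)).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum over (i, j) of p(i, j) * (i - j)^2.
    Zero for perfectly uniform texture, large for rough texture."""
    i, j = np.indices(p.shape)
    return (p * (i - j) ** 2).sum()
```

A texture-based ripening score would compute several such features over a region of interest in the echographic image and feed them to a classifier.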

    Thickness estimation, automated classification and novelty detection in ultrasound images of the plantar fascia tissues

    The plantar fascia (PF) tissue plays an important role in the movement and stability of the foot during walking and running; overuse and the associated medical problems can therefore cause injuries and some severe, common disorders. Ultrasound (US) imaging offers significant potential in the diagnosis of PF injuries and the monitoring of treatments. Despite the advantages of US, the generated PF images are difficult to interpret during medical assessment, partly because of the size and position of the PF in relation to the adjacent tissues. This limits the use of US in clinical practice and therefore impacts on patient services for what is a common problem and a major cause of foot pain and discomfort. It is therefore necessary to devise an automated system that allows better and easier interpretation of PF US images during diagnosis. This study is concerned with developing a computer-based system using a combination of medical image processing techniques whereby different PF US images can be visually improved, segmented, analysed and classified as normal or abnormal, so as to provide more information to doctors and clinical treatment departments for early diagnosis and detection of PF-associated medical problems. More specifically, this study investigates a proposed model for localizing the PF and estimating its thickness across three different sections (rearfoot, midfoot and forefoot) using a supervised artificial neural network (ANN) segmentation technique. The segmentation method uses a radial basis function (RBF) ANN module to classify small overlapping patches into PF and non-PF tissue. A feature selection technique was applied after feature extraction to reduce the number of extracted features, and the trained RBF-ANN was then used to segment the desired PF region. The PF thickness was calculated using two different methods: distance transformation and a proposed area-length calculation algorithm. Additionally, different machine learning approaches were investigated and applied to the segmented PF region in order to distinguish between symptomatic and asymptomatic PF subjects using the best normalized and selected feature set. This aims to facilitate the characterization and classification of the PF area for the identification of patients with inferior heel pain at risk of plantar fasciitis. Finally, a novelty detection framework for detecting symptomatic PF samples (with plantar fasciitis disorder) using only asymptomatic samples is proposed. This framework comprises feature analysis, building a normality model by training a one-class support vector data description (SVDD) classifier on asymptomatic PF training data only, and computing novelty scores for the asymptomatic training and testing sets and the symptomatic testing set using the trained classifier. The performance evaluation results showed that the approaches proposed in this study obtained favourable results compared to other methods reported in the literature.
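The distance-transform variant of the thickness measurement can be sketched as follows; the mask is synthetic and the thesis's area-length algorithm is not reproduced here, so this is only a hedged baseline.

```python
import numpy as np
from scipy import ndimage

def thickness_px(mask):
    """Thickness of a roughly band-shaped binary region via the Euclidean
    distance transform: twice the deepest interior distance, corrected by
    one pixel so an odd-thickness band measures exactly its row count."""
    dist = ndimage.distance_transform_edt(mask)
    return 2.0 * dist.max() - 1.0
```

Multiplying the result by the physical pixel spacing of the ultrasound scan converts the estimate to millimetres.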

    Probabilistic modeling of wavelet coefficients for processing of image and video signals

    Statistical estimation and detection techniques are widely used in signal processing, including wavelet-based image and video processing. The probability density function (PDF) of the wavelet coefficients of image and video signals plays a key role in the development of techniques for such processing. Due to their fixed number of parameters, the conventional PDFs used in estimators and detectors usually ignore higher-order moments; consequently, estimators and detectors designed using such PDFs do not provide a satisfactory performance. This thesis is concerned with first developing a probabilistic model capable of incorporating an appropriate number of parameters that depend on higher-order moments of the wavelet coefficients. This model is then used as the prior in estimation and detection techniques proposed for denoising and watermarking of image and video signals. Towards developing the probabilistic model, the Gauss-Hermite series expansion is chosen, since the wavelet coefficients have non-compact support and their empirical density function shows a resemblance to the standard Gaussian function. A modification is introduced in the series expansion so that only a finite number of terms can be used for modeling the wavelet coefficients without rendering the resulting PDF negative. The parameters of the resulting PDF, called the modified Gauss-Hermite (MGH) PDF, are evaluated in terms of the higher-order sample moments. It is shown that the MGH PDF fits the empirical density function better than the existing PDFs that use a limited number of parameters do. The proposed MGH PDF is used as the prior of image and video signals in designing maximum a posteriori and minimum mean squared error-based estimators for denoising of image and video signals, and a log-likelihood-ratio-based detector for watermarking of image signals. The performance of the estimation and detection techniques is then evaluated in terms of commonly used metrics. It is shown through extensive experiments that the estimation and detection techniques developed using the proposed MGH PDF perform substantially better than those that utilize the conventional PDFs. These results confirm that the superior fit of the MGH PDF to the empirical density function, which results from the flexibility of the MGH PDF in choosing the number of parameters (functions of higher-order moments of the data), leads to the better performance. Thus, the proposed MGH PDF should play a significant role in wavelet-based image and video signal processing.
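A hedged sketch of the series idea (not the thesis's exact modification): expand the density as phi(z) times a Hermite series whose coefficients come from higher-order sample moments, and clip negative excursions as a crude stand-in for the non-negativity fix; the truncation order is an assumption.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

def series_pdf(x, samples, order=6):
    """Gauss-Hermite series density: phi(z) * sum_k c_k He_k(z), with
    c_k = E[He_k(Z)] / k! estimated from standardized sample moments.
    Clipping at zero is a crude substitute for the thesis's modification
    that keeps the truncated PDF non-negative."""
    mu, sigma = samples.mean(), samples.std()
    z_samp = (samples - mu) / sigma
    z = (x - mu) / sigma
    coefs = np.zeros(order + 1)
    for k in range(order + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0                        # selects the He_k polynomial
        coefs[k] = hermeval(z_samp, basis).mean() / factorial(k)
    dens = np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi) * hermeval(z, coefs) / sigma
    return np.clip(dens, 0.0, None)
```

For Gaussian data the higher-order coefficients vanish and the series collapses to the normal density; for wavelet coefficients the extra terms capture the heavy tails.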

    Novel Video Completion Approaches and Their Applications

    Video completion refers to automatically restoring damaged or removed objects in a video sequence, with applications ranging from the removal of undesired static or dynamic objects to the correction of missing or corrupted video frames in old movies and the synthesis of new video frames to add, modify, or generate a new visual story. The video completion problem can be solved using texture synthesis and/or data interpolation to fill in the holes of the sequence inward. This thesis makes a distinction between still image completion and video completion: the latter requires visually pleasing consistency that takes the temporal information into account. Based on their underlying concepts, video completion techniques are categorized as inpainting-based or texture-synthesis-based, and we present a bandlet transform-based technique for each category. The proposed inpainting-based technique is a 3D volume regularization scheme that takes advantage of bandlet bases to exploit anisotropic regularities in reconstructing a damaged video. The proposed exemplar-based approach, on the other hand, performs video completion using a precise patch fusion in the bandlet domain instead of patch replacement. The video completion task is extended to two important applications in video restoration. First, we develop an automatic video text detection and removal system that benefits from the proposed inpainting scheme and a novel video text detector. Second, we propose a novel video super-resolution technique that employs the inpainting algorithm spatially in conjunction with an effective structure tensor generated using bandlet geometry. The experimental results show a good performance of the proposed video inpainting method and demonstrate the effectiveness of bandlets in video completion tasks. The proposed video text detector and the video super-resolution scheme also perform well in comparison with existing methods.
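As a minimal, hedged baseline for the interpolation side of video completion (far simpler than the bandlet-domain regularization proposed in the thesis), the sketch below diffuses surrounding intensities into a hole by repeated neighbor averaging:

```python
import numpy as np

def diffusion_inpaint(frame, hole, iters=500):
    """Toy inpainting on a single frame: repeatedly replace hole pixels by
    the mean of their four neighbors, i.e. Jacobi iterations that solve the
    Laplace equation with the hole boundary as Dirichlet data."""
    img = frame.astype(float).copy()
    img[hole] = img[~hole].mean()            # rough initial fill
    for _ in range(iters):
        avg = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img[hole] = avg[hole]                # known pixels stay fixed
    return img
```

A video version would treat time as a third axis and average over six neighbors, which is the crudest way to exploit the temporal information mentioned above.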

    Wavelet-based noise reduction of cDNA microarray images

    The advent of microarray imaging technology has led to enormous progress in the life sciences by allowing scientists to analyze the expression of thousands of genes at a time. For complementary DNA (cDNA) microarray experiments, the raw data are a pair of red and green channel images corresponding to the treatment and control samples. These images are contaminated by a high level of noise due to the numerous noise sources affecting the image formation. A major challenge of microarray image analysis is the extraction of accurate gene expression measurements from the noisy microarray images. A crucial step in this process is denoising, which consists of reducing the noise in the observed microarray images while preserving the signal information as much as possible. This thesis deals with the problem of developing novel methods for reducing noise in cDNA microarray images for accurate estimation of the gene expression levels. Denoising methods based on the wavelet transform have shown significant success when applied to natural images, but they are not very efficient for cDNA microarray images. An important reason is that existing methods can only process the red and green channel images separately; in doing so, they ignore the signal correlation as well as the noise correlation that exists between the wavelet coefficients of the two channels. The primary objective of this research is to design efficient wavelet-based noise reduction algorithms for cDNA microarray images that take these inter-channel dependencies into account by 'jointly' estimating the noise-free coefficients in both channels. Denoising algorithms are developed using two types of wavelet transforms, namely the frequently used discrete wavelet transform (DWT) and the complex wavelet transform (CWT). The main advantage of using the DWT for denoising is that this transform is computationally very efficient. To obtain a better denoising performance for microarray images, however, the CWT is preferred to the DWT, because the former has good directional selectivity properties that are necessary for better representation of the circular edges of spots. The linear minimum mean squared error and maximum a posteriori estimation techniques are used to develop bivariate estimators for the noise-free coefficients of the two images. These estimators are derived by utilizing appropriate joint probability density functions for the image coefficients as well as the noise coefficients of the two channels. Extensive experiments are carried out on a large set of cDNA microarray images to evaluate the performance of the proposed denoising methods as compared to the existing ones. Comparisons are made using standard metrics, such as the peak signal-to-noise ratio (PSNR) for measuring the amount of noise removed from the pixels of the images, and the mean absolute error for measuring the accuracy of the estimated log-intensity ratios obtained from the denoised version of the images. Results indicate that the proposed denoising methods, developed specifically for microarray images, do indeed lead to more accurate estimation of gene expression levels. Thus, it is expected that the proposed methods will play a significant role in improving the reliability of results obtained from practical microarray experiments.
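The 'joint' estimation idea can be sketched as a bivariate linear MMSE shrinkage applied to synthetic correlated coefficient pairs; the covariance matrices are illustrative assumptions, and real inputs would be the wavelet coefficients of the red and green channels.

```python
import numpy as np

def joint_lmmse(y, cov_x, cov_n):
    """Bivariate LMMSE estimate of the noise-free pair x from y = x + n:
    x_hat = C_x (C_x + C_n)^{-1} y, applied to every coefficient pair."""
    gain = cov_x @ np.linalg.inv(cov_x + cov_n)
    return y @ gain.T

rng = np.random.default_rng(2)
cov_x = np.array([[1.0, 0.8], [0.8, 1.0]])   # correlated channel signals
cov_n = np.array([[0.5, 0.2], [0.2, 0.5]])   # correlated channel noise
x = rng.multivariate_normal([0, 0], cov_x, size=20000)
n = rng.multivariate_normal([0, 0], cov_n, size=20000)
y = x + n
x_hat = joint_lmmse(y, cov_x, cov_n)
# Per-channel scalar Wiener shrinkage ignores the inter-channel dependence.
x_sep = y * (1.0 / (1.0 + 0.5))
```

Because the joint estimator exploits the off-diagonal covariance terms, it attains a lower mean squared error than the channel-by-channel Wiener gain, which mirrors the thesis's argument for joint over separate channel processing.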