
    Effective SAR image despeckling based on bandlet and SRAD

    Despeckling a SAR image without losing its features is a challenging task, since SAR imagery is intrinsically affected by multiplicative noise called speckle. This thesis proposes a novel technique to efficiently despeckle SAR images. Using an SRAD filter, a bandlet-transform-based filter and a guided filter, the speckle noise in SAR images is removed without losing image features. The input SAR image is fed in parallel to both the SRAD and the bandlet-transform-based filters. The SRAD filter despeckles the SAR image, and its despeckled output is used as the reference image for the guided filter. In the bandlet-based despeckling scheme, the input SAR image is first decomposed using the bandlet transform. The resulting coefficients are then thresholded with a soft-thresholding rule; all coefficients other than the low-frequency ones are adjusted in this way. The generalized cross-validation (GCV) technique is employed to find the most favorable threshold for each subband. The bandlet transform is able to extract edges and fine features in the image because it finds the direction in which the function gives maximum value and builds extended orthogonal vectors in that direction. Simple soft thresholding with the optimal threshold despeckles the input SAR image, and the guided filter, with the help of the reference image, removes the speckle remaining in the bandlet output. In terms of numerical and visual quality, the proposed filtering scheme surpasses the available despeckling schemes.
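As a rough illustration of the thresholding step described above, the soft-thresholding rule and a GCV-based threshold search could be sketched as follows (the function names, the candidate grid and the simplified GCV score are assumptions; the actual bandlet decomposition is not shown):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-thresholding rule: shrink coefficients toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def gcv_score(coeffs, t):
    """Simplified generalized cross-validation score for threshold t:
    GCV(t) = (1/N) * ||y - y_t||^2 / (N0/N)^2, where N0 is the number
    of coefficients zeroed by the threshold."""
    n = coeffs.size
    shrunk = soft_threshold(coeffs, t)
    n_zero = np.count_nonzero(shrunk == 0.0)
    if n_zero == 0:
        return np.inf
    return (np.sum((coeffs - shrunk) ** 2) / n) / (n_zero / n) ** 2

def gcv_threshold(coeffs, candidates):
    """Pick the candidate threshold minimising the GCV score,
    as would be done per subband."""
    scores = [gcv_score(coeffs, t) for t in candidates]
    return candidates[int(np.argmin(scores))]
```

In the scheme above, this search would be repeated independently for each bandlet subband except the low-frequency one.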

    SONAR Images Denoising


    Image Denoising Based on Artificial Bee Colony and BP Neural Network

    Images are often subject to noise during collection, acquisition and transmission. Noise is a major factor affecting image quality and greatly impedes the extraction of information from images. The purpose of image denoising is to restore the original noise-free image from its noisy observation while preserving as much of the image's detail as possible. By combining the artificial bee colony algorithm with a BP neural network, this paper proposes an image denoising method based on the two (ABC-BPNN). ABC-BPNN adopts a “double circulation” structure during training: after the expected convergence speed and precision are specified, it adjusts its rules accordingly and automatically tunes the number of neurons, while the neuron weights and related parameters are determined through bee colony optimization. Simulation results show that the proposed algorithm preserves image edges and other important features while removing noise, yielding a better denoising effect.
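The bee-colony weight search described above might be sketched, in heavily simplified form, as follows (the tiny network, the colony parameters and the single-loop structure are illustrative assumptions; the paper's full “double circulation” scheme also adapts the neuron count):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(w, x):
    """Tiny 1-input, 4-hidden-unit network (a stand-in for the BPNN).
    w packs input weights, hidden biases, output weights and an output bias."""
    w1, b1, w2, b2 = w[:4], w[4:8], w[8:12], w[12]
    h = np.tanh(np.outer(x, w1) + b1)
    return h @ w2 + b2

def mse(w, x, y):
    return np.mean((forward(w, x) - y) ** 2)

def abc_optimize(x, y, n_bees=20, dim=13, iters=200, limit=20):
    """Minimal artificial-bee-colony loop: employed bees perturb food
    sources (weight vectors), and stale sources are abandoned by scouts."""
    foods = rng.uniform(-1, 1, (n_bees, dim))
    fits = np.array([mse(w, x, y) for w in foods])
    trials = np.zeros(n_bees)
    for _ in range(iters):
        for i in range(n_bees):
            j = rng.integers(dim)          # dimension to perturb
            k = rng.integers(n_bees)       # random neighbour source
            cand = foods[i].copy()
            cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
            f = mse(cand, x, y)
            if f < fits[i]:                # greedy selection
                foods[i], fits[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
            if trials[i] > limit:          # scout phase: restart stale source
                foods[i] = rng.uniform(-1, 1, dim)
                fits[i] = mse(foods[i], x, y)
                trials[i] = 0
    best = int(np.argmin(fits))
    return foods[best], fits[best]
```

In the paper, the fitness would be the denoising error of the network rather than this toy regression loss.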

    Multiresolution image models and estimation techniques


    Bidirectional Recurrent Neural Network based Early Prediction of Cardiovascular Diseases using Electrocardiogram Signals for Type 2 Diabetic Patients

    Introduction: The electrocardiogram (ECG) signal is important for early diagnosis of heart abnormalities. The ECG signals of type 2 diabetic individuals provide pertinent data about the heart and are one of the most important diagnostic tools doctors use to identify Cardiovascular Disease (CVD). A Bidirectional Recurrent Neural Network (RNN) classifies the features linked to normal and abnormal ECG signals. Aim: To analyse the ECG signals of type 2 diabetic patients for early prediction of CVD using feature extraction and bidirectional RNN-based classification. Materials and Methods: This was a secondary data-based modelling study at Shri Ramasamy Memorial University, Sikkim, India, from December 2020 to January 2022. Different noises were removed by a hybrid preprocessing filter made up of a Median filter and a Savitzky-Golay filter. The Undecimated Dual Tree Complex Wavelet Transform (UDTCWT), along with Detrended Fluctuation Analysis (DFA) and Empirical Orthogonal Function (EOF) analysis, was then used to extract features, which were classified with the bidirectional RNN. Results: The proposed method was tested on the MIT-BIH, Physionet and DICARDIA databases, and the findings showed that it achieves an average accuracy of 97.6% when compared to the conventional techniques. Conclusion: The proposed method proves to be an effective way of detecting anomalies in ECG signals in both early and pathological stages, and supports early intervention for cardiovascular symptoms.
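The hybrid preprocessing stage (Median plus Savitzky-Golay filtering) can be sketched with standard SciPy routines; the kernel and window sizes below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import medfilt, savgol_filter

def hybrid_preprocess(ecg, med_kernel=5, sg_window=15, sg_order=3):
    """Hedged sketch of a hybrid Median + Savitzky-Golay denoising stage:
    the median filter suppresses impulsive spikes, then the
    Savitzky-Golay filter smooths residual noise while preserving
    waveform morphology (all parameter values are illustrative)."""
    despiked = medfilt(ecg, kernel_size=med_kernel)
    return savgol_filter(despiked, window_length=sg_window, polyorder=sg_order)
```

A typical call would pass a 1D ECG lead sampled at a fixed rate; the window lengths would need tuning to that rate.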

    Wavelet Shrinkage Based Image Denoising using Soft Computing

    Noise reduction is an open problem that has received considerable attention in the literature for several decades. Over the last two decades, wavelet-based methods have been applied to the problem and have been shown to outperform the traditional Wiener filter, Median filter and modified Lee filter in terms of mean squared error (MSE), peak signal-to-noise ratio (PSNR) and other evaluation measures. In this research, two approaches for the development of high-performance de-noising algorithms are proposed, both based on soft computing tools such as fuzzy logic, neural networks and genetic algorithms. First, an improved additive noise reduction method for digital greyscale natural images is proposed, which uses an interval type-2 fuzzy logic system to shrink wavelet coefficients. This method extends a recently published approach to additive noise reduction that uses a type-1 fuzzy logic system for wavelet shrinkage. Unlike the type-1 method, the proposed approach employs a thresholding filter that adjusts the wavelet coefficients according to the linguistic uncertainty in neighborhood values, inter-scale dependencies and intra-scale correlations of wavelet coefficients at different resolutions, exploiting interval type-2 fuzzy set theory. Experimental results show that the proposed approach can efficiently and rapidly remove additive noise from digital greyscale images. Objective analysis and visual observation show that it outperforms current fuzzy non-wavelet and fuzzy wavelet-based methods, and is comparable with some recent but more complex wavelet methods, such as the Hidden Markov Model based additive de-noising method. The main differences between the proposed approach and other wavelet shrinkage based approaches, and the main improvements it offers, are also illustrated in this thesis.
Second, another improved additive noise reduction method is proposed, based on fusing the results of different filters with a Fuzzy Neural Network (FNN). The method combines the advantages of these filters and smooths out additive noise while effectively preserving image details (e.g. edges and lines). A Genetic Algorithm (GA) is applied to choose the optimal parameters of the FNN. Experimental results show that the method is powerful for removing noise from natural images: its MSE is lower, and its PSNR higher, than those of any of the individual filters used for fusion. Finally, the two proposed approaches are compared with each other from different points of view: objective analysis in terms of mean squared error (MSE), peak signal-to-noise ratio (PSNR), the image quality index (IQI) based on quality assessment of distorted images and an Information Theoretic Criterion (ITC) based on a human vision model, as well as computational cost, universality and human observation. The results show that the proposed GA-optimized FNN-based algorithm performs best among all tested approaches. Important considerations for these approaches and future work are discussed.
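As a loose illustration of neighborhood-driven wavelet shrinkage of the kind discussed above, the following sketch scales each coefficient by a smooth, membership-like gain computed from local energy (the window size, the gain formula and the parameter gamma are assumptions; the thesis's interval type-2 fuzzy system is considerably richer):

```python
import numpy as np

def neighborhood_shrink(coeffs, sigma_n, win=3, gamma=2.0):
    """Scale each wavelet coefficient by a gain in [0, 1] driven by the
    mean energy of its win x win neighbourhood, loosely in the spirit of
    fuzzy-membership-based shrinkage (parameters are illustrative)."""
    pad = win // 2
    padded = np.pad(coeffs, pad, mode="reflect")
    # local mean energy over the neighbourhood
    energy = np.zeros_like(coeffs, dtype=float)
    for dy in range(win):
        for dx in range(win):
            energy += padded[dy:dy + coeffs.shape[0],
                             dx:dx + coeffs.shape[1]] ** 2
    energy /= win * win
    # estimated local signal-to-noise ratio, clipped at zero
    snr = np.maximum(energy - sigma_n ** 2, 0.0) / (sigma_n ** 2)
    gain = snr / (snr + gamma)   # membership-like gain: 0 = noise, 1 = signal
    return coeffs * gain
```

An isolated strong coefficient is largely kept, while coefficients in flat, noise-dominated neighbourhoods are pushed toward zero.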

    Optical Coherence Tomography Noise Reduction Using Anisotropic Local Bivariate Gaussian Mixture Prior in 3D Complex Wavelet Domain

    In this paper, an MMSE estimator is employed to recover noise-free 3D OCT data in the 3D complex wavelet domain. Since the distribution assumed for the noise-free data plays a key role in the performance of the MMSE estimator, a prior distribution for the pdf of the noise-free 3D complex wavelet coefficients is proposed that models their main statistical properties. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters, which captures the heavy-tailed property and the inter- and intrascale dependencies of the coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation, which improves visual quality. On this basis, several OCT despeckling algorithms are obtained using a Gaussian or two-sided Rayleigh noise distribution and a homomorphic or nonhomomorphic model. To evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator using the local bivariate mixture prior is the one for the nonhomomorphic model in the presence of Gaussian noise, which yields an improvement of 7.8 ± 1.7 in CNR.
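As a simplified stand-in for the locally adaptive MMSE estimation described above, a Wiener-type estimator with a single local Gaussian prior can be sketched as follows (the square window and the moment-based parameter estimates are assumptions; the paper uses a bivariate Gaussian mixture prior and anisotropic windows):

```python
import numpy as np

def local_mmse(noisy, sigma_n, win=5):
    """Locally adaptive MMSE (Wiener-type) shrinkage for zero-mean
    wavelet coefficients: x_hat = sig_x^2 / (sig_x^2 + sig_n^2) * y,
    with the signal variance sig_x^2 estimated in a local window
    (a simplified stand-in for the mixture-prior estimator)."""
    pad = win // 2
    padded = np.pad(noisy, pad, mode="reflect")
    est = np.empty_like(noisy, dtype=float)
    for i in range(noisy.shape[0]):
        for j in range(noisy.shape[1]):
            patch = padded[i:i + win, j:j + win]
            # method-of-moments estimate of the local signal variance
            sig_x2 = max(patch.var() - sigma_n ** 2, 0.0)
            est[i, j] = sig_x2 / (sig_x2 + sigma_n ** 2) * noisy[i, j]
    return est
```

In noise-only regions the estimated signal variance collapses to zero and the coefficients are suppressed, while high-variance structures pass through largely unchanged.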

    Contourlet Domain Image Modeling and its Applications in Watermarking and Denoising

    Statistical image modeling in the sparse domain has recently attracted a great deal of research interest. The contourlet transform, a two-dimensional transform with multiscale and multidirectional properties, is known to effectively capture the smooth contours and geometrical structures in images. The objective of this thesis is to study the statistical properties of the contourlet coefficients of images and to develop statistically based image denoising and watermarking schemes. Through an experimental investigation, it is first established that the distributions of the contourlet subband coefficients of natural images are significantly non-Gaussian, with heavy tails, and that they are best described by heavy-tailed statistical distributions such as the alpha-stable family. It is shown that the univariate members of this family are capable of accurately fitting the marginal distributions of the empirical data and that the bivariate members can accurately characterize the inter-scale dependencies of the contourlet coefficients of an image. Based on these modeling results, a new image denoising method in the contourlet domain is proposed. Bayesian maximum a posteriori and minimum mean absolute error estimators are developed to determine the noise-free contourlet coefficients of grayscale and color images. Extensive experiments are conducted using a wide variety of images from a number of databases to evaluate the performance of the proposed image denoising scheme and to compare it with that of other existing schemes. It is shown that the proposed denoising scheme based on the alpha-stable distributions outperforms these other methods in terms of the peak signal-to-noise ratio and mean structural similarity index, as well as in terms of visual quality of the denoised images. The alpha-stable model is also used in developing new multiplicative watermark schemes for grayscale and color images.
Closed-form expressions are derived for the log-likelihood-based multiplicative watermark detection algorithm for grayscale images using the univariate and bivariate Cauchy members of the alpha-stable family. A multiplicative multichannel watermark detector is also designed for color images using the multivariate Cauchy distribution. Simulation results demonstrate not only the effectiveness of the proposed watermarking schemes in terms of watermark invisibility, but also the superiority of the watermark detectors, which provide detection rates higher than those of state-of-the-art schemes even for watermarked images that have undergone various kinds of attacks.
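The univariate Cauchy log-likelihood-ratio detector for a multiplicative watermark can be sketched as follows (the embedding model y = x * (1 + alpha * w) and the fixed dispersion gamma are assumptions for illustration; the thesis derives closed-form bivariate and multivariate detectors as well):

```python
import numpy as np

def cauchy_logpdf(x, gamma):
    """Zero-location Cauchy log-density with dispersion gamma."""
    return np.log(gamma / np.pi) - np.log(gamma ** 2 + x ** 2)

def llr_detect(y, w, alpha, gamma):
    """Log-likelihood ratio for the multiplicative watermark
    y = x * (1 + alpha * w), assuming the host coefficients x follow
    a univariate Cauchy prior. Under H1, x = y / (1 + alpha * w);
    the log|s| term is the Jacobian of the change of variables.
    A large positive value suggests the watermark w is present."""
    s = 1.0 + alpha * w
    return np.sum(cauchy_logpdf(y / s, gamma) - np.log(np.abs(s))
                  - cauchy_logpdf(y, gamma))
```

In practice the statistic would be compared against a threshold chosen for a target false-alarm rate.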