20 research outputs found

    Textured Rényi Entropy for Image Thresholding

    This paper introduces Textured Rényi Entropy for image thresholding, based on a novel combination mechanism.

    Optimized Shannon and Fuzzy Entropy based Machine Learning Model for Brain MRI Image Segmentation

    Pre-processing procedures for medical image segmentation are a crucial task in MRI image analysis. Medical image thresholding approaches are well suited to bi-level thresholding owing to their simplicity, robustness, fast convergence, and accuracy. Efficiency can be maintained through an extensive search for the best thresholds, and swarm-intelligence-based learning algorithms are well suited to finding them. In this paper, we focus on a thresholding algorithm for MRI brain image segmentation that maximizes fuzzy entropy and Shannon entropy using machine learning and recent evolutionary techniques. We apply the Whale Optimization Algorithm (WOA) to find the best outcome and compare the results with Shannon entropy- or fuzzy entropy-based objectives optimized by Differential Evolution (DE), Particle Swarm Optimization (PSO), and the Social Group Optimization (SGO) algorithm. We find that overall performance can be improved by exploiting features captured through the picture similarity matrix together with the entropy values. The proposed whale optimization model optimizes the Shannon and fuzzy entropies better than the other swarm intelligence algorithms. The newer Social Group Optimization (SGO) algorithm also outperforms the other two algorithms, DE and PSO, and performs very close to WOA, while requiring slightly less CPU time.

    Edge Detection Based on Rényi Entropy with Split/Merge

    Most classical edge-detection methods are based on the first- and second-order derivatives of the gray levels of the pixels of the original image. These processes lead to an exponential increase in computation time, especially for large images. This paper presents a new algorithm based on both the Rényi entropy and the Shannon entropy for edge detection using a split-and-merge technique. The objective is to find the best edge representation while reducing computation time. A set of experiments in edge detection is presented. The system yields edge-detection performance comparable to classic methods such as Canny, LoG, and Sobel. The experimental results show that this method performs better than the LoG and Sobel methods, and it is faster than the other three methods in CPU time. Another benefit is the method's easy implementation. Keywords: Rényi entropy, information content, edge detection, thresholding
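For reference, the Rényi entropy of order α on which the split/merge criterion builds can be sketched as follows; this is the generic definition, not the paper's implementation.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha for a probability vector p.
    Reduces to the Shannon entropy in the limit alpha -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                            # 0 * log 0 is taken as 0
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))       # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)
```

For a uniform distribution over k outcomes, every order α gives the same value, log k, which is a convenient sanity check.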

    An Examination of Some Significant Approaches to Statistical Deconvolution

    We examine statistical approaches to two significant areas of deconvolution: Blind Deconvolution (BD) and Robust Deconvolution (RD) for stochastic stationary signals. For BD, we review some major classical and new methods in a unified framework of non-Gaussian signals. The first class of algorithms we consider falls into the class of Minimum Entropy Deconvolution (MED) algorithms. We discuss the similarities between them despite their different origins and motivations. We give new theoretical results concerning the behaviour and generality of these algorithms and present scenarios in which they may fail. In some cases, we present new modifications to the algorithms to overcome these shortfalls. Following our discussion of the MED algorithms, we examine a recently proposed BD algorithm based on the correntropy function, a function defined as a combination of the autocorrelation and the entropy functions, and compare its BD performance with that of the MED algorithms. We find that BD carried out via correntropy-matching cannot be straightforwardly interpreted as simultaneous moment-matching, owing to the breakdown of the correntropy expansion in terms of moments. Other issues, such as the maximum/minimum-phase ambiguity and computational complexity, suggest that careful attention is required before establishing the correntropy algorithm as a superior alternative to existing BD techniques. For the problem of RD, we categorise the different kinds of uncertainty encountered in estimation and discuss the techniques required to solve each case. Primarily, we tackle the overlooked cases of robustifying deconvolution filters based on an estimated blurring response or an estimated signal spectrum, utilising existing methods derived from criteria such as minimax MSE with imposed uncertainty bands and penalised MSE. In particular, we revisit the Modified Wiener Filter (MWF), which offers simplicity and flexibility in providing improved RD relative to the standard plug-in Wiener Filter (WF).
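As background, the standard plug-in Wiener filter that the MWF refines can be sketched in the frequency domain. This is a minimal illustration assuming a known blur kernel and a single scalar noise-to-signal ratio; the function and the impulse test signal are illustrative, not from the thesis.

```python
import numpy as np

def wiener_deconvolve(y, h, nsr=0.01):
    """Plug-in Wiener filter in the frequency domain:
    Xhat(w) = conj(H(w)) * Y(w) / (|H(w)|^2 + NSR),
    where NSR stands in for the noise-to-signal power ratio."""
    n = len(y)
    H = np.fft.fft(h, n)                      # zero-padded kernel spectrum
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener gain
    return np.real(np.fft.ifft(G * np.fft.fft(y)))

# noiseless demo: circularly blur an impulse, then deconvolve it
x = np.zeros(64); x[10] = 1.0
h = np.array([0.5, 0.3, 0.2])
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 64)))
x_hat = wiener_deconvolve(y, h, nsr=1e-6)
```

In practice H and the NSR must themselves be estimated, which is exactly the uncertainty that the robust variants discussed above are designed to tolerate.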

    A Diagnosis Feature Space for Condition Monitoring and Fault Diagnosis of Ball Bearings

    The problem of fault diagnosis and condition monitoring of ball bearings is a multidisciplinary subject, drawing on mechanical engineering, electrical engineering, and in particular signal processing. The first step is to identify the correct method of investigation. Methods for condition monitoring of ball bearings include acoustic emission measurement, temperature monitoring, electrical current monitoring, debris analysis, and vibration signal analysis; in this thesis, vibration signal analysis is employed. Once the method of analysis is selected, features sensitive to faults are calculated from the signal. While some features may be useful for condition monitoring, others may be redundant and unhelpful, so a feature-reduction module should be employed. Initially, six features are selected as candidates for the diagnosis feature space; after analyzing their trends, three of the features were found to be unsuitable for fault diagnosis. In this thesis, two problems are investigated: first, identifying the effects of fault size on the vibration signal; second, testing the performance of the feature space in distinguishing healthy ball bearings from defective vibration signals.
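The thesis does not list its six candidate features here, but typical time-domain condition indicators for bearing vibration records can be sketched as below; the feature set and the pure-tone test signal are illustrative assumptions.

```python
import numpy as np

def bearing_features(x):
    """Common time-domain condition indicators for a vibration record:
    RMS (energy), peak, crest factor (impulsiveness relative to energy),
    and kurtosis (sensitivity to impulsive fault signatures)."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    kurt = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    return {"rms": rms, "peak": peak,
            "crest_factor": peak / rms, "kurtosis": kurt}

# a pure tone sampled over whole periods has crest factor sqrt(2)
# and kurtosis 1.5; impulsive fault signals push both values up
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
feats = bearing_features(np.sin(2 * np.pi * 5 * t))
```

Kurtosis in particular is widely used for bearings because localized defects produce repetitive impacts that inflate it well above the Gaussian baseline of 3.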

    WAVELET TRANSFORMS FOR EEG SIGNAL DENOISING AND DECOMPOSITION

    EEG signal analysis is difficult because of the many unwanted impulses from non-cerebral sources. Current methods for eliminating noise through selective frequency filtering suffer a notable loss of EEG information. Therefore, even as noise is reduced, the signal's character should be preserved, and decomposition of the signal should be more accurate for feature extraction, in order to facilitate the classification of diseases and speed up diagnosis. In this study, three types of wavelet transform were applied: the Discrete Wavelet Transform (DWT), the Wavelet Packet Transform (WPT), and the Stationary Wavelet Transform (SWT), with three mother wavelets: Haar, Symlet2, and Coiflet2. Three parameters were used to evaluate performance: Signal-to-Noise Ratio (SNR), Mean Square Error (MSE), and Peak Signal-to-Noise Ratio (PSNR). The best SNR and PSNR values were 27.3189 and 40.019, respectively, and the lowest MSE was 5.0853, obtained with Symlet2-SWT at level four. To decompose the signal, we used the best filter from the denoising stage and applied four methods: DWT, Maximal Overlap DWT (MODWT), Empirical Mode Decomposition (EMD), and Variational Mode Decomposition (VMD). The four methods were compared on three metrics: energy, correlation coefficient, and the distance between Power Spectral Densities (PSD); the highest energy was 5.09E+08 and the lowest PSD distance was -1.2596, obtained with EMD.
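The three evaluation metrics used above can be sketched as follows. These are the generic definitions; the study's exact conventions (for instance, the peak value used in PSNR) may differ.

```python
import numpy as np

def mse(clean, estimate):
    """Mean square error between a reference and its estimate."""
    return float(np.mean((np.asarray(clean, dtype=float)
                          - np.asarray(estimate, dtype=float)) ** 2))

def snr_db(clean, estimate):
    """Signal-to-noise ratio in dB, treating (clean - estimate) as noise."""
    clean = np.asarray(clean, dtype=float)
    noise = clean - np.asarray(estimate, dtype=float)
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def psnr_db(clean, estimate, peak=None):
    """Peak signal-to-noise ratio in dB; peak defaults to max |clean|."""
    clean = np.asarray(clean, dtype=float)
    peak = np.max(np.abs(clean)) if peak is None else peak
    return 10.0 * np.log10(peak ** 2 / mse(clean, estimate))
```

Higher SNR and PSNR and lower MSE all indicate that a denoised signal is closer to the clean reference, which is how the wavelet variants above were ranked.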

    Automatic Precipitation Measurement Based on Raindrop Imaging and Artificial Intelligence

    Rainfall measurement is subject to various uncertainties arising from the complexity of measurement techniques and of the atmospheric characteristics associated with weather type. This article therefore presents a video-based disdrometer that analyzes raindrop images using artificial intelligence to estimate the rainfall rate. First, a high-speed CMOS camera is combined with a planar LED backlight so that falling raindrops at different positions can be captured properly; the illuminated raindrops are then used for further image analysis. Algorithms developed for raindrop detection and trajectory identification are employed. In a field test, a continuous 42-hour rainfall event was measured by the proposed disdrometer and validated against a commercial PARSIVEL² disdrometer and a tipping-bucket rain gauge at the same site. In the evaluation on 5-min rainfall images, trajectory identification achieved a precision of 87.8%, a recall of 98.4%, and an F1 score of 92.8%. Furthermore, the rainfall rate and raindrop size distribution (RSD) obtained by the proposed disdrometer are remarkably consistent with those of the PARSIVEL² disdrometer. The results suggest that the proposed disdrometer, based on the continuous movements of falling raindrops, can achieve accurate measurements and effectively eliminate potential errors in real-time rainfall monitoring.
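For reference, the reported F1 score is consistent with the stated precision and recall: F1 is their harmonic mean, which can be sketched as below.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall, on a 0-1 scale."""
    return 2.0 * precision * recall / (precision + recall)

# precision 87.8% and recall 98.4% give an F1 of about 92.8%,
# matching the figures reported for the trajectory identification
f1 = f1_score(0.878, 0.984)
```

Because the harmonic mean is dominated by the smaller operand, the F1 here sits closer to the 87.8% precision than to the 98.4% recall.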