
    Content-based Medical Image Retrieval: use of Generalized Gaussian Density to model BEMD's IMF.

    In this paper, we address the problem of medical diagnosis aid through content-based image retrieval methods. We propose to characterize images without extracting local features, using global information extracted from the image's Bidimensional Empirical Mode Decomposition (BEMD). This method decomposes an image into a set of functions named Intrinsic Mode Functions (IMF) and a residue. The generalized Gaussian density (GGD) is used to represent the coefficients derived from each IMF, and the Kullback–Leibler Distance (KLD) computes the similarity between GGDs. Retrieval efficiency is reported for different databases, including a diabetic retinopathy database and a face database. Results are promising: the retrieval efficiency is higher than 85% in some cases.
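    The GGD/KLD pairing described in this abstract has a well-known closed form (Do & Vetterli, 2002). The following is a minimal sketch, not the authors' code: it fits a zero-mean GGD to the coefficients of one IMF with SciPy's gennorm and compares two fitted GGDs with the closed-form KLD; the IMF arrays named in the trailing comment are hypothetical.

    ```python
    # Minimal sketch (not the authors' implementation): fit a generalized
    # Gaussian density (GGD) to IMF coefficients and compare two IMFs with
    # the closed-form Kullback-Leibler distance between zero-mean GGDs.
    import numpy as np
    from scipy.stats import gennorm
    from scipy.special import gammaln

    def fit_ggd(coeffs):
        """Fit a zero-mean GGD; returns (alpha, beta) = (scale, shape)."""
        beta, _, alpha = gennorm.fit(coeffs, floc=0.0)
        return alpha, beta

    def ggd_kld(a1, b1, a2, b2):
        """Closed-form KLD between two zero-mean GGDs (Do & Vetterli, 2002)."""
        return (np.log((b1 * a2) / (b2 * a1))
                + gammaln(1.0 / b2) - gammaln(1.0 / b1)
                + (a1 / a2) ** b2 * np.exp(gammaln((b2 + 1.0) / b1) - gammaln(1.0 / b1))
                - 1.0 / b1)

    # Symmetrised distance between two hypothetical IMFs `imf_query` and `imf_db`:
    # d = ggd_kld(*fit_ggd(imf_query.ravel()), *fit_ggd(imf_db.ravel())) + \
    #     ggd_kld(*fit_ggd(imf_db.ravel()), *fit_ggd(imf_query.ravel()))
    ```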

    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    The intriguing study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn even more attention not just from computer science but also from a variety of other scientific fields. However, challenges persist in formulating a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., of failing to mark true edges), accuracy, and a consistent response to a single edge. Moreover, most work in feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In the image processing subfield, where the needs constantly change, we must equally change the way we think. In a digital world where the use of images, for a variety of purposes, continues to increase, researchers who are serious about addressing the aforementioned limitations must be able to think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, digital image feature detection methodology using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a signal into its two-dimensional (2D) intrinsic mode functions, known as bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is therefore dubbed BEMDEC, indicating its ability to detect edges, corners and curves. In addition to the application of BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria and boundary adjustment made the realization of this multi-feature detector possible. Further application of two morphological operators, binarization and thinning, adds to the quality of the operator.
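    As a rough illustration of the final post-processing stage mentioned above, the sketch below applies the two morphological operators, binarization and thinning, to a finest-scale BIMF. The BEMD step itself is abstracted behind a hypothetical bemd_first_imf function, and Otsu thresholding is only an assumed choice of binarizer, not necessarily the one used in the dissertation.

    ```python
    # Sketch of the binarization + thinning stage; the BEMD decomposition
    # that produces the BIMF is assumed to come from elsewhere.
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.morphology import thin

    def edges_from_bimf(bimf):
        """Binarize the BIMF magnitude, then thin it to one-pixel-wide edges."""
        mag = np.abs(bimf)
        binary = mag > threshold_otsu(mag)  # binarization operator (assumed: Otsu)
        return thin(binary)                 # thinning operator

    # edges = edges_from_bimf(bemd_first_imf(image))  # `bemd_first_imf` is hypothetical
    ```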

    Detection of pathologies in retina digital images: an empirical mode decomposition approach

    Accurate automatic detection of pathologies in retina digital images offers a promising approach in clinical applications. This thesis employs the discrete wavelet transform (DWT) and empirical mode decomposition (EMD) to extract six statistical textural features from retina digital images: the mean, standard deviation, smoothness, third moment, uniformity, and entropy. The purpose is to classify normal and abnormal images. Five different pathologies are considered: artery sheath (Coats' disease), blot hemorrhage, retinal degeneration (circinates), age-related macular degeneration (drusens), and diabetic retinopathy (microaneurysms and exudates). Four classifiers are employed: support vector machines (SVM), quadratic discriminant analysis (QDA), the k-nearest neighbor algorithm (k-NN), and probabilistic neural networks (PNN). For each experiment, ten random folds are generated to perform cross-validation tests. To assess the performance of the classifiers, the average and standard deviation of the correct recognition rate, sensitivity and specificity are computed for each simulation. The experimental results highlight two main conclusions. First, they show the outstanding performance of EMD over DWT with all classifiers. Second, they demonstrate the superiority of the SVM classifier over QDA, k-NN, and PNN. Finally, principal component analysis (PCA) was employed to reduce the number of features in the hope of improving the accuracy of the classifiers. We find, however, that there is no general and significant improvement in performance. In sum, the EMD-SVM system provides a promising approach for the detection of pathologies in digital retina images.
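    The six descriptors listed above are the standard histogram-based texture statistics; a minimal sketch of how they could be computed for one DWT or EMD sub-band is given below (the thesis' exact normalization and bin count are assumptions).

    ```python
    # Standard histogram-based texture statistics for one sub-band.
    import numpy as np

    def texture_features(band, bins=256):
        """Return mean, std, smoothness, third moment, uniformity, entropy."""
        hist, edges = np.histogram(band.ravel(), bins=bins)
        p = hist / hist.sum()                   # normalized histogram
        z = (edges[:-1] + edges[1:]) / 2.0      # bin centres (intensity levels)
        mean = np.sum(z * p)
        var = np.sum((z - mean) ** 2 * p)
        smoothness = 1.0 - 1.0 / (1.0 + var)    # R = 1 - 1 / (1 + sigma^2)
        third_moment = np.sum((z - mean) ** 3 * p)
        uniformity = np.sum(p ** 2)
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return np.array([mean, np.sqrt(var), smoothness,
                         third_moment, uniformity, entropy])
    ```

    The feature vectors from the sub-bands can then be concatenated and passed to an SVM (or QDA, k-NN, PNN) under ten-fold cross-validation, as described in the abstract.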

    Convergence and stability assessment of Newton-Kantorovich reconstruction algorithms for microwave tomography

    For newly developed iterative Newton-Kantorovich reconstruction techniques, the quality of the final image depends on both experimental and model noise. Experimental noise is inherent to any experimental acquisition scheme, while model noise refers to the accuracy with which the numerical model used in the reconstruction process reproduces the experimental setup. This paper provides a systematic assessment of the major sources of experimental and model noise on the quality of the final image. The assessment is conducted on experimental data obtained with a microwave circular scanner operating at 2.33 GHz. Targets to be imaged include realistic biological structures, such as a human forearm, as well as calibrated samples for the sake of accuracy evaluation. The results provide a quantitative estimation of the effect of experimental factors such as the temperature of the immersion medium, frequency, signal-to-noise ratio, and various numerical parameters.
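    Schematically, a Newton-Kantorovich reconstruction is a regularized Gauss-Newton iteration on the complex contrast. The sketch below only illustrates that structure; the forward model, its Jacobian and the Tikhonov regularization are placeholders, not the paper's numerical model of the 2.33 GHz scanner.

    ```python
    # Schematic Newton-Kantorovich (regularized Gauss-Newton) loop.
    # `forward` and `jacobian` are assumed user-supplied callables.
    import numpy as np

    def newton_kantorovich(forward, jacobian, measured, contrast0,
                           n_iter=20, reg=1e-2):
        """Iteratively update the complex contrast to fit the measured fields."""
        contrast = contrast0.astype(complex)
        for _ in range(n_iter):
            residual = measured - forward(contrast)   # data misfit
            J = jacobian(contrast)                    # Frechet derivative
            # Tikhonov-regularized normal equations: (J^H J + reg I) d = J^H r
            JhJ = J.conj().T @ J
            rhs = J.conj().T @ residual
            delta = np.linalg.solve(JhJ + reg * np.eye(JhJ.shape[0]), rhs)
            contrast = contrast + delta
        return contrast
    ```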

    Histogram equalization and mode decomposition methods for the enhancement of biomedical images

    This thesis describes methods for enhancing medical images based on histogram equalization and on the Bidimensional Empirical Mode Decomposition (BEMD). It covers the traditional equalization algorithm (HE), an improved version of it (BHE), and two algorithms recently proposed in the literature: the first pre-processes the histogram, while the second combines Histogram Equalization with the BEMD. The pros and cons of each of these methods are studied.
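    For reference, the classic HE baseline mentioned in the abstract remaps grey levels through the normalized cumulative histogram; a minimal sketch for an 8-bit image is shown below (BHE and the BEMD-based variants are not reproduced here).

    ```python
    # Classic histogram equalization for an 8-bit greyscale image.
    import numpy as np

    def histogram_equalize(img):
        """Map grey levels through the normalized cumulative histogram."""
        hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
        cdf = hist.cumsum().astype(float)
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
        lut = np.round(255.0 * cdf).astype(np.uint8)       # grey-level mapping
        return lut[img]                                    # img assumed uint8
    ```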

    Robust watermarking for magnetic resonance images with automatic region of interest detection

    Medical image watermarking requires special considerations compared to ordinary watermarking methods. The first issue is the detection of an important area of the image, called the Region of Interest (ROI), prior to starting the watermarking process. Most existing ROI detection procedures are manual, while in automated methods the robustness against intentional or unintentional attacks has not been considered extensively. The second issue is the robustness of the embedded watermark against different attacks. A common drawback of existing watermarking methods is their weakness against salt and pepper noise. The research carried out in this thesis addresses these issues by providing automatic ROI detection for magnetic resonance images that is robust against attacks, particularly salt and pepper noise, and by designing a new watermarking method that can withstand high-density salt and pepper noise. In the ROI detection part, a combination of several algorithms, such as morphological reconstruction, adaptive thresholding and labelling, is utilized. A noise-filtering algorithm and a window-size correction block are then introduced for further enhancement. The performance of the proposed ROI detection is evaluated by computing the Comparative Accuracy (CA). In the watermarking part, a combination of a spatial method, channel coding and noise-filtering schemes is used to increase the robustness against salt and pepper noise. The quality of the watermarked image is evaluated using the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM), and the accuracy of the extracted watermark is assessed in terms of the Bit Error Rate (BER). Based on experiments, the CA under eight different attacks (speckle noise, average filter, median filter, Wiener filter, Gaussian filter, sharpening filter, motion, and salt and pepper noise) is between 97.8% and 100%. The CA under different densities of salt and pepper noise (10%-90%) is in the range of 75.13% to 98.99%. In the watermarking part, the performance of the proposed method under different densities of salt and pepper noise, measured by total PSNR, ROI PSNR, total SSIM and ROI SSIM, improved from the ranges of 3.48-23.03 dB, 3.5-23.05 dB, 0-0.4620 and 0-0.5335 to 21.75-42.08 dB, 20.55-40.83 dB, 0.5775-0.8874 and 0.4104-0.9742 respectively. In addition, the BER is reduced to the range of 0.02% to 41.7%. To conclude, the proposed method significantly improves on the performance of existing medical image watermarking methods.
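    The PSNR and BER figures quoted above follow their standard definitions; the small sketch below shows one way to compute them (SSIM is available as skimage.metrics.structural_similarity). This is an illustration, not the thesis code.

    ```python
    # Standard evaluation metrics for a watermarked image.
    import numpy as np

    def psnr(original, watermarked, peak=255.0):
        """Peak Signal-to-Noise Ratio in dB between two images."""
        mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    def bit_error_rate(embedded_bits, extracted_bits):
        """Fraction of watermark bits flipped after extraction."""
        embedded_bits = np.asarray(embedded_bits)
        extracted_bits = np.asarray(extracted_bits)
        return float(np.mean(embedded_bits != extracted_bits))
    ```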

    Image Analysis and Image Mining Techniques: A Review

    This paper presents an analysis of the existing literature relevant to image mining and the mechanisms associated with weighted substructures. Although the literature contains a great many research contributions, here we have analysed around thirty-five research and review papers. The existing approaches are categorized based on the basic concepts involved in their mechanisms. The emphasis is on the concept used by the respective authors, the database used for experimentation, and the performance evaluation parameters; their claims are also highlighted. Our findings from this exhaustive literature review are presented along with the problems identified. This paper is useful for a comparative study of the various approaches, which is a prerequisite for solving the image mining problem.