
    Breast Mass Segmentation Using a Semi-automatic Procedure Based on Fuzzy C-means Clustering

    Mammography is the primary modality for the early detection and diagnosis of breast diseases in women. Extracting masses from mammograms remains a challenging task for radiologists because of problems such as fuzzy or spiculated borders, low contrast, and intensity inhomogeneities. To help radiologists diagnose breast cancer, many approaches have been proposed to segment masses in mammograms automatically. Towards this aim, this paper presents a new approach for extracting tumors from a region of interest (ROI) using the Fuzzy C-Means (FCM) algorithm with two clusters for semi-automated segmentation. The proposed method selects as input data the set of pixels that carries the information needed to segment the masses with high accuracy. This is accomplished by excluding uninformative pixels, which would otherwise hinder the segmentation, from the input data using an optimal threshold obtained by monitoring the rate of change of the clusters while the threshold is decremented. The proposed methodology successfully segments the masses, with an average sensitivity of 82.02% and specificity of 98.23%.
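The abstract does not give the authors' implementation or their threshold-monitoring step; purely as an illustration, two-cluster FCM on the ROI pixel intensities can be sketched as follows (all function and parameter names are our own):

```python
import numpy as np

def fcm_two_clusters(pixels, m=2.0, n_iter=50, tol=1e-5, seed=0):
    """Fuzzy C-Means with two clusters on a 1-D array of pixel intensities.

    Returns the membership matrix (N x 2) and the two cluster centres.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(pixels, dtype=float).ravel()
    u = rng.random((x.size, 2))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per pixel
    centers = np.zeros(2)
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)    # fuzzily weighted cluster centres
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # standard FCM membership update: u_ik = d_ik^(-2/(m-1)) / sum_j d_ij^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1))
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return u, centers
```

Thresholding the higher-intensity membership map would then yield the mass candidate, with the paper's decrementing-threshold selection applied on top.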

    A New Approach to the Detection of Mammogram Boundary

    Mammography is a method used for the detection of breast cancer. Computer-aided diagnosis (CAD) systems help the radiologist detect and interpret masses in breast mammography. A mass's contour and shape are important because they provide valuable information about its abnormality, and the accuracy with which the shape of a mass is recognized depends on the accuracy of the detected mass contours. In this work we propose a new approach for detecting lesion boundaries in mammography images based on a region growing algorithm that requires no threshold. The method starts from an initial rectangle surrounding the lesion, selected manually by the radiologist (the region of interest); the region growing algorithm is applied to the line segments joining each pixel of this rectangle to the seed point, with the two ends (seeds) of each segment growing towards one another. The proposed approach is evaluated on a set of 20 masses from the MIAS database whose contours were annotated manually by expert radiologists. Performance is evaluated in terms of specificity, sensitivity, accuracy, and overlap, and all findings and details of the approach are presented.
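The bidirectional growth along one rectangle-to-seed segment can be sketched as below, assuming a grayscale image and a simple intensity tolerance in place of the paper's unspecified homogeneity test (names and the stopping rule are illustrative, not the authors'):

```python
import numpy as np

def grow_along_segment(image, seed, border_pt, tol=0.15):
    """Grow from both ends of the segment joining `seed` to `border_pt`.

    Each front accepts the next pixel while its intensity stays within
    `tol` of the intensity at that front's starting point; growth stops
    when both fronts are blocked. Returns the pixel where the inner
    (seed-side) front stopped -- a candidate boundary point.
    """
    n = int(max(abs(border_pt[0] - seed[0]), abs(border_pt[1] - seed[1]))) + 1
    rows = np.linspace(seed[0], border_pt[0], n).round().astype(int)
    cols = np.linspace(seed[1], border_pt[1], n).round().astype(int)
    vals = image[rows, cols]
    lo, hi = 0, n - 1                     # fronts at the seed and border ends
    while lo < hi:
        if abs(vals[lo + 1] - vals[0]) <= tol:
            lo += 1                       # inner front advances toward border
        elif abs(vals[hi - 1] - vals[-1]) <= tol:
            hi -= 1                       # outer front advances toward seed
        else:
            break                         # both fronts blocked: boundary found
    return rows[lo], cols[lo]
```

Repeating this for every pixel of the user-drawn rectangle traces out a closed candidate contour of the lesion.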

    A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images

    Speckle is a granular disturbance, usually modeled as multiplicative noise, that affects synthetic aperture radar (SAR) images, as well as all coherent images. Over the last three decades, several methods have been proposed for the reduction of speckle, or despeckling, in SAR images. The goal of this paper is to provide a comprehensive review of despeckling methods since their birth, over thirty years ago, highlighting trends and changing approaches over the years. The concept of fully developed speckle is explained. Drawbacks of homomorphic filtering are pointed out. Assets of multiresolution despeckling, as opposed to spatial-domain despeckling, are highlighted, and the advantages of undecimated, or stationary, wavelet transforms over decimated ones are discussed. Bayesian estimators and probability density function (pdf) models in both the spatial and multiresolution domains are reviewed. Scale-space-varying pdf models, as opposed to scale-varying models, are promoted. Promising methods following non-Bayesian approaches, such as nonlocal (NL) filtering and total variation (TV) regularization, are reviewed and compared to spatial- and wavelet-domain Bayesian filters. Both established and new trends for the assessment of despeckling are presented. A few experiments on simulated data and real COSMO-SkyMed SAR images highlight, on one side, the cost-performance tradeoff of the different methods and, on the other, the effectiveness of solutions purposely designed for SAR heterogeneity and not-fully-developed speckle. Finally, upcoming methods based on new signal processing concepts, such as compressive sensing, are foreseen as a new generation of despeckling, after spatial-domain and multiresolution-domain methods.
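The multiplicative model y = x * n underlying most of the reviewed filters can be illustrated with the classic spatial-domain Lee filter, one of the earliest despeckling methods (a minimal numpy sketch, not tied to any particular method in the survey):

```python
import numpy as np

def _local_mean(a, win):
    """Mean over a win x win neighbourhood (edge-padded)."""
    pad = win // 2
    ap = np.pad(a, pad, mode='edge')
    v = np.lib.stride_tricks.sliding_window_view(ap, (win, win))
    return v.mean(axis=(2, 3))

def lee_filter(img, win=7, looks=1):
    """Basic Lee filter for multiplicative speckle y = x * n.

    Blends each pixel with its local mean: the gain k is near 0 in
    homogeneous areas (strong smoothing) and near 1 where the local
    variation exceeds the expected speckle variation (edges preserved).
    """
    img = np.asarray(img, dtype=float)
    mean = _local_mean(img, win)
    var = np.maximum(_local_mean(img ** 2, win) - mean ** 2, 0.0)
    cn2 = 1.0 / looks                             # squared noise variation coeff.
    cy2 = var / np.maximum(mean ** 2, 1e-12)      # squared image variation coeff.
    k = np.clip(1.0 - cn2 / np.maximum(cy2, 1e-12), 0.0, 1.0)
    return mean + k * (img - mean)
```

For fully developed single-look intensity speckle the noise coefficient of variation is 1, which is why the filter reduces to local averaging over homogeneous regions.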

    A Decision Support System (DSS) for Breast Cancer Detection Based on Invariant Feature Extraction, Classification, and Retrieval of Masses of Mammographic Images

    This paper presents an integrated system for breast cancer detection from mammograms based on automated mass detection, classification, and retrieval, with the goal of supporting decision-making by retrieving and displaying relevant past cases as well as predicting whether images are benign or malignant. It is hypothesized that the proposed diagnostic aid would refresh the radiologist's memory and guide them to a precise diagnosis with concrete visualizations, instead of only suggesting a second diagnosis like many other CAD systems. Towards this goal, a Graph-Based Visual Saliency (GBVS) method is used for automatic mass detection; invariant features are extracted using the Non-Subsampled Contourlet Transform (NSCT) and the eigenvalues of the Hessian matrix in a histogram of oriented gradients (HOG); and classification and retrieval are performed using Support Vector Machines (SVM), Extreme Learning Machines (ELM), and a linear combination-based similarity fusion approach. Image retrieval and classification performance is evaluated and compared on the benchmark Digital Database for Screening Mammography (DDSM) of 2604 cases using both precision-recall and classification accuracy. Experimental results demonstrate the effectiveness of the proposed system and show the viability of a real-time clinical application.
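The linear combination-based similarity fusion step can be sketched as a weighted sum of per-feature similarities; the abstract does not give the actual weights, feature dimensions, or similarity measure, so cosine similarity and the names below are our assumptions:

```python
import numpy as np

def fused_similarity(query_feats, db_feats, weights):
    """Linear combination-based similarity fusion across feature types.

    `query_feats` maps a feature name (e.g. 'nsct', 'hog') to a query
    vector; `db_feats` maps the same names to (num_images x dim) matrices.
    Per-feature cosine similarities are combined with linear weights,
    giving one fused score per database image (higher = more similar).
    """
    total = None
    for name, w in weights.items():
        q = np.asarray(query_feats[name], dtype=float)
        d = np.asarray(db_feats[name], dtype=float)
        sim = d @ q / (np.linalg.norm(d, axis=1) * np.linalg.norm(q) + 1e-12)
        total = w * sim if total is None else total + w * sim
    return total
```

Ranking the database by this fused score implements the retrieval side; the class of the top-ranked cases can then complement the SVM/ELM prediction.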

    Edge detection algorithm based on quantum superposition principle and photons arrival probability

    The detection of object edges in images is a crucial step in a vast range of computer vision applications, for which many different algorithms have been developed over the last decades. This paper proposes a new edge detection method based on quantum information, carried out in two main steps: (i) an image enhancement stage that employs the quantum superposition principle and (ii) an edge detection stage based on the probability of photon arrival at the camera sensor. The proposed method has been tested on synthetic and real images from agricultural applications, with the Fram & Deutsch criterion adopted to evaluate its performance. The results show that the proposed method gives better results, in terms of detection quality and computation time, than classical edge detection algorithms such as Sobel, Kayyali, and Canny, as well as a more recent algorithm based on Shannon entropy.
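The abstract gives no formal details of the quantum model; purely as an illustration of the photon-arrival idea, a toy detector that normalises intensities into per-pixel arrival probabilities and thresholds their local change could look like this (this is our simplification, not the paper's algorithm):

```python
import numpy as np

def photon_probability_edges(img, thresh=0.1):
    """Toy probability-based edge detector.

    Normalises the image into a per-pixel photon-arrival probability,
    rescales to [0, 1], and marks pixels where the probability changes
    sharply with respect to the neighbour above or to the left.
    """
    img = np.asarray(img, dtype=float)
    p = img / max(img.sum(), 1e-12)           # arrival probability per pixel
    p = p / max(p.max(), 1e-12)               # rescale to [0, 1] for thresholding
    gy = np.abs(np.diff(p, axis=0, prepend=p[:1]))       # vertical change
    gx = np.abs(np.diff(p, axis=1, prepend=p[:, :1]))    # horizontal change
    return np.maximum(gx, gy) > thresh
```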

    Fusion of Images and Videos using Multi-scale Transforms

    This thesis deals with methods for the fusion of images as well as videos using multi-scale transforms. First, a novel image fusion algorithm based primarily on an improved multi-scale coefficient decomposition framework is proposed. The proposed framework uses a combination of non-subsampled contourlet and wavelet transforms for the initial multi-scale decompositions. The decomposed multi-scale coefficients are then fused twice using various local activity measures. Experimental results show that the proposed approach performs better than or on par with existing state-of-the-art image fusion algorithms in terms of quantitative and qualitative performance. In addition, the proposed image fusion algorithm can produce high-quality fused images even with a computationally inexpensive two-scale decomposition. Finally, we extend the proposed framework to formulate a novel video fusion algorithm for camouflaged target detection from infrared and visible sensor inputs. The proposed framework includes a novel target identification method based on the conventional thresholding techniques of Otsu and Kapur et al., which are further extended to formulate novel region-based fusion rules using local statistical measures. The proposed video fusion algorithm, when used in target highlighting mode, can further enhance the hidden target, making it much easier to localize. Experimental results show that the proposed video fusion algorithm performs much better than its counterparts in terms of quantitative and qualitative results as well as time complexity. The relatively low complexity of the proposed video fusion algorithm makes it an ideal candidate for real-time video surveillance applications.
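A minimal version of an inexpensive two-scale decomposition fused with a local activity measure can be sketched as follows; the thesis's actual activity measures and fusion rules are not specified in the abstract, so the local absolute-mean used here is an assumption:

```python
import numpy as np

def two_scale_fuse(a, b, win=5):
    """Two-scale fusion sketch.

    Splits each image into a base layer (local mean) and a detail layer,
    averages the bases, and picks each detail coefficient from whichever
    image has the larger local activity (local mean of |detail|).
    """
    def box(x):
        pad = win // 2
        xp = np.pad(x, pad, mode='edge')
        v = np.lib.stride_tricks.sliding_window_view(xp, (win, win))
        return v.mean(axis=(2, 3))

    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    base_a, base_b = box(a), box(b)
    det_a, det_b = a - base_a, b - base_b
    act_a, act_b = box(np.abs(det_a)), box(np.abs(det_b))   # local activity
    detail = np.where(act_a >= act_b, det_a, det_b)          # winner-take-all
    return 0.5 * (base_a + base_b) + detail
```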

    A novel multispectral and 2.5D/3D image fusion camera system for enhanced face recognition

    The fusion of images from the visible and long-wave infrared (thermal) portions of the spectrum produces images with improved face recognition performance under varying lighting conditions. This is because long-wave infrared images are the result of emitted, rather than reflected, light and are therefore less sensitive to changes in ambient light. Similarly, 3D and 2.5D images have also improved face recognition under varying pose and lighting. The opacity of glass to long-wave infrared light, however, means that the presence of eyeglasses in a face image reduces recognition performance. This thesis presents the design and performance evaluation of a novel camera system capable of capturing spatially registered visible, near-infrared, long-wave infrared, and 2.5D depth video images via a common optical path, requiring no spatial registration between sensors beyond scaling for differences in sensor sizes. Experiments using a range of established face recognition methods and multi-class SVM classifiers show that the fused output from our camera system not only outperforms the single-modality images for face recognition, but that the adaptive fusion methods used produce consistent increases in recognition accuracy under varying pose and lighting and in the presence of eyeglasses.

    Remote Sensing of the Oceans

    This book covers different topics in the framework of remote sensing of the oceans. The latest research advancements and brand-new studies are presented that address the exploitation of remote sensing instruments and simulation tools to improve the understanding of ocean processes and enable cutting-edge applications, with the aim of preserving the ocean environment and supporting the blue economy. Hence, this book provides a reference framework for state-of-the-art remote sensing methods that deal with the generation of added-value products and geophysical information retrieval in related fields, including: oil spill detection and discrimination; analysis of tropical cyclones and sea echoes; shoreline and aquaculture area extraction; monitoring of coastal marine litter and moving vessels; and processing of SAR, HF radar, and UAV measurements.

    Multiscale Medical Image Fusion in Wavelet Domain

    Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in the wavelet domain. Fusion of medical images is performed at multiple scales, varying from the minimum to the maximum decomposition level, using the maximum selection rule, which provides more flexibility in choosing the most relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. Fusion results have been evaluated subjectively and objectively against existing state-of-the-art fusion methods, including several pyramid- and wavelet-transform-based fusion methods and the principal component analysis (PCA) fusion method. The comparative analysis of the fusion results uses the edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales demonstrate the effectiveness of the proposed approach.
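The maximum selection rule on wavelet detail coefficients can be sketched with a single-level Haar transform, used here as a simplified stand-in for the multiscale, multi-level transform of the paper (even image dimensions assumed):

```python
import numpy as np

def haar_fuse(a, b):
    """One-level Haar wavelet fusion with the maximum selection rule.

    Detail coefficients with the larger magnitude are kept, approximation
    coefficients are averaged, and the fused image is reconstructed.
    """
    def dwt(x):
        # 2x2 Haar analysis: approximation + horiz./vert./diag. details
        s, t = x[0::2, 0::2], x[0::2, 1::2]
        u, v = x[1::2, 0::2], x[1::2, 1::2]
        return ((s + t + u + v) / 4, (s - t + u - v) / 4,
                (s + t - u - v) / 4, (s - t - u + v) / 4)

    def idwt(ca, ch, cv, cd, shape):
        # exact inverse of the analysis step above
        x = np.empty(shape)
        x[0::2, 0::2] = ca + ch + cv + cd
        x[0::2, 1::2] = ca - ch + cv - cd
        x[1::2, 0::2] = ca + ch - cv - cd
        x[1::2, 1::2] = ca - ch - cv + cd
        return x

    A, B = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    ca1, *d1 = dwt(A)
    ca2, *d2 = dwt(B)
    fused = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(d1, d2)]
    return idwt(0.5 * (ca1 + ca2), *fused, A.shape)
```

Applying the same selection rule at every level of a deeper decomposition gives the multi-level variant evaluated in the paper.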