    A Comparative Study on the Methods Used for the Detection of Breast Cancer

    Breast cancer has become a leading cause of death among women worldwide. At an early stage, a tumor in the breast is hard to detect, and manual attempts have proven time-consuming and inefficient in many cases. Hence there is a need for efficient methods that diagnose cancerous cells with high accuracy and without human involvement. Mammography is a specialized X-ray imaging technique that uses high-resolution film, enabling it to detect tumors in the breast well. This paper describes a comparative study of various data mining methods for the detection of breast cancer using image processing techniques.
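    The abstract does not name the data mining methods compared, so as a hedged illustration only, the sketch below contrasts three common classifiers on scikit-learn's built-in Wisconsin breast cancer dataset; the choice of classifiers and dataset is an assumption, not the paper's setup.

```python
# A minimal, illustrative sketch (not from the paper) of a comparative study
# of classifiers, using scikit-learn's built-in Wisconsin breast cancer
# dataset. The three classifiers below are assumed examples.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

classifiers = {
    "SVM (RBF kernel)": SVC(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in classifiers.items():
    # Standardize features so distance- and margin-based methods behave well.
    pipeline = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipeline, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```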

    Comparison of denoising methods for digital mammographic image

    We compared the effects of denoising methods on digital mammographic images. The methods studied were an adaptive Wiener filter and a low-pass Gaussian filter, applied as image preprocessing techniques before enhancement. The performance of the denoising methods was evaluated using Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) values.
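    As a concrete illustration of this comparison, the sketch below applies SciPy's adaptive Wiener filter and a low-pass Gaussian filter to a noisy placeholder image and scores both with MSE and PSNR; the image, noise level, window size, and sigma are assumed, since the abstract gives no parameters.

```python
# A minimal sketch of the comparison described above: an adaptive Wiener
# filter and a low-pass Gaussian filter applied before enhancement, scored
# with MSE and PSNR. The ramp image, noise level, 5x5 window, and sigma are
# assumed placeholders; a real digital mammogram would be loaded instead.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import wiener
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))      # placeholder image
noisy = np.clip(clean + rng.normal(0.0, 0.05, clean.shape), 0.0, 1.0)

# Adaptive Wiener filter: local mean and variance estimated in a 5x5 window.
wiener_out = wiener(noisy, mysize=5)
# Low-pass Gaussian filter: sigma controls how aggressively noise is smoothed.
gauss_out = gaussian_filter(noisy, sigma=1.0)

for name, out in [("Wiener", wiener_out), ("Gaussian", gauss_out)]:
    mse = mean_squared_error(clean, out)
    psnr = peak_signal_noise_ratio(clean, out, data_range=1.0)
    print(f"{name}: MSE = {mse:.5f}, PSNR = {psnr:.2f} dB")
```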

    Consistent performance measurement of a system to detect masses in mammograms based on blind feature extraction

    BACKGROUND: Breast cancer continues to be a leading cause of cancer deaths among women, especially in Western countries. In the last two decades, many methods have been proposed to achieve a robust mammography-based computer-aided detection (CAD) system. A CAD system should provide high performance over time and in different clinical situations; that is, it should be adaptable to different clinical situations while providing consistent performance.

    METHODS: We tested our system seeking a measure of the guarantee of its consistent performance. The method is based on blind feature extraction by independent component analysis (ICA) and classification by neural network (NN) or SVM classifiers. The test mammograms were from the Digital Database for Screening Mammography (DDSM), a database constructed collaboratively by four institutions over more than 10 years. We took advantage of this by training our system on the mammograms from each institution separately and then testing it on the remaining mammograms. We performed another experiment to compare the results and thus obtain the measure sought: the learning sets were formed from all available prototypes regardless of the institution in which they were generated, thereby obtaining overall results.

    RESULTS: Comparing the results of the testing set in each experiment (training on one institution's mammograms and testing on the remainder) with the overall results, and considering the success rate at an intermediate decision-maker threshold, the smallest variation was roughly 5% and the largest roughly 17%. Considering the area under the ROC curve instead, the smallest variation was close to 4% and the largest about 6%.

    CONCLUSIONS: Considering the heterogeneity of the datasets used to train and test our system in each case, we consider the variation in performance relative to the overall results acceptable in both cases, for the NN and SVM classifiers. The present method is therefore very general, in that it is able to adapt to different clinical situations and provide consistent performance.
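    The following sketch illustrates, under stated assumptions, the two stages the abstract names: blind ICA feature extraction followed by an SVM classifier evaluated by the area under the ROC curve. Random patches stand in for DDSM ROIs, and the NN variant and decision thresholds are omitted.

```python
# A minimal sketch (not the authors' pipeline) of the two stages named in the
# abstract: blind feature extraction with ICA, then SVM classification, scored
# by the area under the ROC curve. Random patches stand in for DDSM ROIs, and
# the patch size and number of components are assumed values.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
patches = rng.normal(size=(400, 16 * 16))      # placeholder ROI patches
labels = rng.integers(0, 2, size=400)          # placeholder mass/normal labels

X_tr, X_te, y_tr, y_te = train_test_split(patches, labels, random_state=0)

# Stage 1: learn independent components from training patches only, then
# project every patch onto them to obtain a compact feature vector.
ica = FastICA(n_components=20, max_iter=500, random_state=0)
F_tr = ica.fit_transform(X_tr)
F_te = ica.transform(X_te)

# Stage 2: train the classifier and report the area under the ROC curve.
clf = SVC(probability=True).fit(F_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(F_te)[:, 1])
print(f"AUC on held-out patches: {auc:.2f}")
```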

    Image processing and machine learning techniques used in computer-aided detection system for mammogram screening - a review

    This paper reviews previously developed computer-aided detection (CAD) systems for mammogram screening, because the increasing death rate among women due to breast cancer is a global medical issue that can be controlled only by early detection through regular screening. To date, mammography remains the most widely used breast imaging modality. CAD systems have been adopted by radiologists to increase the accuracy of breast cancer diagnosis by avoiding human error and experience-related issues. This study reveals that despite the high accuracy obtained by earlier proposed CAD systems for breast cancer diagnosis, they are not fully automated. Moreover, false-positive mammogram screening cases are high in number, and over-diagnosis of breast cancer exposes patients to harmful overtreatment on which a huge amount of money is wasted. In addition, it is reported that mammogram screening results with and without CAD systems show no noticeable difference, whereas the number of cancer cases left undetected by CAD systems is increasing. Thus, future research is required to improve the performance of CAD systems for mammogram screening and to make them completely automated.

    False-positive reduction in mammography using multiscale spatial Weber law descriptor and support vector machines

    In a CAD system for the detection of masses, segmentation of mammograms yields regions of interest (ROIs) that include not only true masses but also suspicious normal tissues, which result in false positives. In this paper we introduce a new method for false-positive reduction. The key idea of our approach is to exploit the textural properties of mammograms and, for texture description, to use the Weber law descriptor (WLD), which outperforms state-of-the-art texture descriptors. The basic WLD is a holistic descriptor by construction, because it integrates the local information content into a single histogram that does not take into account the spatial locality of micropatterns. We extend it into a multiscale spatial WLD (MSWLD) that better characterizes the texture microstructures of masses by incorporating the spatial locality and scale of microstructures. Because the dimension of the feature space generated by the MSWLD becomes high, it is reduced by selecting features based on their significance. Finally, support vector machines are employed to classify ROIs as true masses or normal parenchyma. The proposed approach is evaluated using 1024 ROIs taken from the Digital Database for Screening Mammography (DDSM), obtaining an accuracy of Az = 0.99 ± 0.003 (area under the receiver operating characteristic curve). A comparison reveals that the proposed method offers a significant improvement over the state-of-the-art methods for the false-positive reduction problem.
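    The sketch below illustrates only the basic ingredient of the WLD, the differential excitation histogram, as a hedged approximation; the multiscale spatial extension (MSWLD), significance-based feature selection, and SVM training described in the abstract are omitted, and the constant alpha is assumed.

```python
# A minimal sketch of the core WLD ingredient, differential excitation,
# histogrammed into a texture feature for one ROI. This is only the basic
# single-scale descriptor; the paper's MSWLD adds spatial locality, multiple
# scales, and feature selection, which are omitted here. alpha is an assumed
# smoothing constant.
import numpy as np
from scipy.ndimage import convolve

def differential_excitation(roi, alpha=1.0, eps=1e-6):
    """Per-pixel arctan(alpha * sum(neighbors - center) / center)."""
    kernel = np.array([[1.0, 1.0, 1.0],
                       [1.0, -8.0, 1.0],
                       [1.0, 1.0, 1.0]])    # sums (x_i - x_c) over 8 neighbors
    diff_sum = convolve(roi.astype(float), kernel, mode="reflect")
    return np.arctan(alpha * diff_sum / (roi + eps))

def wld_histogram(roi, bins=32):
    """Normalized histogram of differential excitation over (-pi/2, pi/2)."""
    xi = differential_excitation(roi)
    hist, _ = np.histogram(xi, bins=bins, range=(-np.pi / 2, np.pi / 2))
    return hist / hist.sum()

rng = np.random.default_rng(0)
roi = rng.random((64, 64))          # placeholder for a segmented ROI
feature = wld_histogram(roi)        # feature vector that would feed the SVM
print(feature[:8])
```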

    Independent component analysis (ICA) applied to ultrasound image processing and tissue characterization

    As a complicated, ubiquitous phenomenon encountered in ultrasound imaging, speckle can be treated either as annoying noise to be reduced or as a source from which diagnostic information can be extracted to reveal the underlying properties of tissue. In this study, the application of Independent Component Analysis (ICA), a relatively new statistical signal processing tool, to both speckle texture analysis and despeckling of B-mode ultrasound images was investigated. It is believed that higher-order statistics may provide extra information about speckle texture beyond that provided by first- and second-order statistics alone. However, the higher-order statistics of speckle texture are still not clearly understood and are very difficult to model analytically, and any direct treatment of higher-order statistics is computationally prohibitive. Many conventional ultrasound speckle texture analysis algorithms use only first- or second-order statistics, while many multichannel filtering approaches use pre-defined analytical filters that are not adaptive to the data. In this study, an ICA-based multichannel filtering texture analysis algorithm, which considers both higher-order statistics and data adaptation, was proposed and tested on numerically simulated homogeneous speckle textures. The ICA filters were learned directly from the training images. Histogram regularization was conducted to make the speckle images wide-sense quasi-stationary and thus amenable to an ICA algorithm. Both Principal Component Analysis (PCA) and a greedy algorithm were used to reduce the dimension of the feature space. Finally, Support Vector Machines (SVM) with a Radial Basis Function (RBF) kernel were chosen as the classifier to achieve the best classification accuracy. Several representative conventional methods, based on both low- and high-order statistics and including both filtering and non-filtering approaches, were chosen for comparison. The numerical experiments showed that the proposed ICA-based algorithm outperforms the comparison algorithms in many cases. Two-component texture segmentation experiments were conducted, and the proposed algorithm showed a strong capability for segmenting two visually very similar yet different texture regions with rather fuzzy boundaries and almost identical mean and variance. By simulating speckle whose first-order statistics approach the Rayleigh model gradually from different non-Rayleigh models, the experiments reveal to some extent how the behavior of higher-order statistics changes with the underlying property of the tissue. It was demonstrated that when the speckle approaches the Rayleigh model, both second- and higher-order statistics lose their texture differentiation capability; however, when the speckle tends toward some non-Rayleigh models, methods based on higher-order statistics show a strong advantage over those based solely on first- or second-order statistics. The proposed algorithm may potentially find clinical application in the early detection of soft tissue disease, and may also help in better understanding the ultrasound speckle phenomenon from the perspective of higher-order statistics.

    For the despeckling problem, an algorithm was proposed that adapts the ICA Sparse Code Shrinkage (ICA-SCS) method to B-mode ultrasound image despeckling by applying an appropriate preprocessing step proposed by other researchers. The preprocessing step makes the speckle noise much closer to true white Gaussian noise (WGN) and hence more amenable to a denoising algorithm such as ICA-SCS, which is strictly designed for additive WGN. A discussion is given on how to obtain noise-free training image samples in various ways. The experimental results showed that the proposed method outperforms several classical methods chosen for comparison, including first- or second-order statistics based methods (such as the Wiener filter) and multichannel filtering methods (such as wavelet shrinkage), in both speckle reduction and edge preservation.
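    The following is a simplified, assumption-laden sketch of that pipeline: ICA filters are learned from clean patches, the sparse components of the preprocessed noisy image are shrunk, and the image is reconstructed. A log-transform stands in for the thesis' preprocessing step, and a plain soft threshold replaces the SCS shrinkage nonlinearity.

```python
# A simplified sketch of the sparse-code-shrinkage idea described above: ICA
# filters learned from clean training patches, shrinkage applied to the sparse
# components of noisy patches, then reconstruction. A log-transform is assumed
# as the preprocessing that makes speckle approximately additive, and a plain
# soft threshold stands in for the SCS shrinkage nonlinearity.
import numpy as np
from sklearn.decomposition import FastICA

def to_patches(img, p=8):
    h, w = img.shape
    return (img[:h - h % p, :w - w % p]
            .reshape(h // p, p, w // p, p).swapaxes(1, 2).reshape(-1, p * p))

def from_patches(patches, shape, p=8):
    h, w = shape
    return (patches.reshape(h // p, w // p, p, p)
            .swapaxes(1, 2).reshape(h - h % p, w - w % p))

rng = np.random.default_rng(0)
clean = rng.random((128, 128))                        # placeholder training image
speckled = clean * rng.gamma(10.0, 0.1, clean.shape)  # multiplicative speckle

noisy_log = np.log1p(speckled)       # multiplicative noise becomes ~additive

ica = FastICA(n_components=32, max_iter=500, random_state=0)
ica.fit(to_patches(np.log1p(clean)))           # learn filters on clean data

codes = ica.transform(to_patches(noisy_log))
codes = np.sign(codes) * np.maximum(np.abs(codes) - 0.05, 0.0)  # soft shrinkage
denoised = np.expm1(from_patches(ica.inverse_transform(codes), noisy_log.shape))
```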

    Visibility Recovery on Images Acquired in Attenuating Media. Application to Underwater, Fog, and Mammographic Imaging

    When acquired in attenuating media, digital images often suffer from a particularly complex degradation that reduces their visual quality, hindering their suitability for further computational applications, or simply decreasing the visual pleasantness for the user. In these cases, mathematical image processing reveals itself as an ideal tool to recover some of the information lost during the degradation process. In this dissertation, we deal with three such practical scenarios in which this problem is especially relevant, namely, underwater image enhancement, fog removal, and mammographic image processing. In the case of digital mammograms, X-ray beams traverse human tissue, and electronic detectors capture them as they reach the other side. However, the superposition of three-dimensional structures onto a two-dimensional image produces low-contrast images in which structures of interest suffer from diminished visibility, obstructing diagnosis tasks. Regarding fog removal, the loss of contrast is produced by the atmospheric conditions, and white colour takes over the scene uniformly as distance increases, also reducing visibility. For underwater images, there is an added difficulty, since colour is not lost uniformly; instead, red colours decay the fastest, and green and blue colours typically dominate the acquired images. To address all these challenges, in this dissertation we develop new methodologies that rely on a) physical models of the observed degradation, and b) the calculus of variations. Equipped with this powerful machinery, we design novel theoretical and computational tools, including image-dependent functional energies that capture the particularities of each degradation model. These energies are composed of different integral terms that are simultaneously minimized by means of efficient numerical schemes, producing a clean, visually pleasant, and useful output image, with better contrast and increased visibility. In every considered application, we provide comprehensive qualitative (visual) and quantitative experimental results to validate our methods, confirming that the developed techniques outperform other existing approaches in the literature.
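    As a toy illustration of this variational recipe (not the dissertation's actual energies), the sketch below minimizes a simple image-dependent quadratic energy, one data-attachment term plus one regularity term, by explicit gradient descent.

```python
# A toy sketch of the variational recipe outlined above: an image-dependent
# energy with a data-attachment term and a regularity term is minimized by
# gradient descent. The dissertation's actual energies are far richer; the
# quadratic energy, lam, tau, and iteration count here are assumed for
# illustration only.
import numpy as np

def laplacian(u):
    # 5-point discrete Laplacian with periodic boundaries (np.roll wraps).
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def minimize_energy(u0, lam=0.2, tau=0.1, n_iter=200):
    """Gradient descent on E(u) = 1/2||u - u0||^2 + lam/2||grad u||^2.

    The gradient is (u - u0) - lam * laplacian(u), so each step pulls u
    toward the observation while smoothing it.
    """
    u = u0.copy()
    for _ in range(n_iter):
        u -= tau * ((u - u0) - lam * laplacian(u))
    return u

rng = np.random.default_rng(0)
degraded = rng.random((64, 64))     # placeholder degraded observation
restored = minimize_energy(degraded)
```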