
    A Decision Support System (DSS) for Breast Cancer Detection Based on Invariant Feature Extraction, Classification, and Retrieval of Masses of Mammographic Images

    This paper presents an integrated system for breast cancer detection in mammograms based on automated mass detection, classification, and retrieval, with the goal of supporting decision-making by retrieving and displaying relevant past cases as well as classifying images as benign or malignant. It is hypothesized that the proposed diagnostic aid would refresh the radiologist’s memory and guide them toward a precise diagnosis with concrete visualizations, instead of only suggesting a second opinion as many other CAD systems do. Towards this goal, a Graph-Based Visual Saliency (GBVS) method is used for automatic mass detection; invariant features are extracted using the Non-Subsampled Contourlet Transform (NSCT) and eigenvalues of the Hessian matrix within a histogram of oriented gradients (HOG); and classification and retrieval are performed using Support Vector Machines (SVM), Extreme Learning Machines (ELM), and a linear combination-based similarity fusion approach. Retrieval and classification performance are evaluated and compared on the benchmark Digital Database for Screening Mammography (DDSM) of 2,604 cases using both precision-recall and classification accuracy. Experimental results demonstrate the effectiveness of the proposed system and show the viability of a real-time clinical application.
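    As a rough illustration of the retrieval side of such a system, the sketch below ranks database masses by a weighted linear combination of per-feature cosine similarities. It is a minimal sketch only: the feature names, dimensions, weights, and random data are hypothetical, and the paper's actual NSCT/HOG features and learned SVM/ELM scores are not reproduced here.

```python
import numpy as np

def fused_similarity(query_feats, db_feats, weights):
    """Rank database images by a weighted linear combination of per-feature
    cosine similarities (one simple form of linear similarity fusion)."""
    n_db = len(next(iter(db_feats.values())))
    scores = np.zeros(n_db)
    for name, w in weights.items():
        q = query_feats[name] / (np.linalg.norm(query_feats[name]) + 1e-12)
        D = db_feats[name] / (np.linalg.norm(db_feats[name], axis=1, keepdims=True) + 1e-12)
        scores += w * (D @ q)            # cosine similarity per database image
    return np.argsort(-scores)           # indices of most similar cases first

# Hypothetical feature matrices for 100 database masses (random stand-ins)
rng = np.random.default_rng(0)
db = {"nsct": rng.normal(size=(100, 64)), "hog": rng.normal(size=(100, 36))}
query = {"nsct": rng.normal(size=64), "hog": rng.normal(size=36)}
ranking = fused_similarity(query, db, weights={"nsct": 0.6, "hog": 0.4})
print(ranking[:5])   # top-5 most similar past cases
```

    In practice the combination weights would be tuned on a validation set rather than fixed by hand.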

    Shape description and matching using integral invariants on eccentricity transformed images

    Matching occluded and noisy shapes is a problem frequently encountered in medical image analysis and, more generally, in computer vision. To keep track of changes inside the breast, for example, it is important for a computer aided detection system to establish correspondences between regions of interest. Shape transformations, computed both with integral invariants (II) and with the geodesic distance, yield signatures that are invariant to isometric deformations such as bending and articulations. Integral invariants describe the boundaries of planar shapes, but they provide no information about where a particular feature lies on the boundary with regard to the overall shape structure. Conversely, eccentricity transforms (Ecc) can match shapes by signatures of geodesic distance histograms based on information from inside the shape, but they ignore the boundary information. We describe a method that combines the boundary signature of a shape obtained from II with structural information from the Ecc to yield results that improve on either used separately.
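    A minimal sketch of the integral invariant idea, assuming the common local area invariant: for each boundary point, the fraction of a disc centred there that lies inside the shape, computed from a binary mask. The function name, toy shape, and radius below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def integral_area_invariant(mask, boundary_pts, radius):
    """For each boundary point, return the fraction of a disc of the given
    radius that falls inside the shape (the local integral area invariant)."""
    ys, xs = np.nonzero(mask)                       # interior pixel coordinates
    interior = np.stack([ys, xs], axis=1).astype(float)
    disc_area = np.pi * radius ** 2
    signature = []
    for p in boundary_pts:
        d2 = np.sum((interior - p) ** 2, axis=1)
        signature.append(np.count_nonzero(d2 <= radius ** 2) / disc_area)
    return np.array(signature)

# Toy example: a filled square; boundary points along its top edge only
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
boundary = np.array([[16, x] for x in range(16, 48)], dtype=float)
sig = integral_area_invariant(mask, boundary, radius=5.0)
print(sig.round(2))   # ~0.5 along a straight edge, lower near the corners
```

    Convex boundary features give values below 0.5 and concave ones above it, which is what makes the signature useful for matching.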

    Characterization of Abnormal Patterns in Mammograms (Caracterización de Patrones Anormales en Mamografías)

    Computer-guided image interpretation is an extensive research area whose main purpose is to provide tools to support decision-making, for which a large number of automatic techniques have been proposed, including feature extraction, pattern recognition, image processing, and machine learning. In breast cancer, results in this area have led to the development of diagnostic support systems, some of which have even been approved by the FDA (Food and Drug Administration). However, these systems are not widely used in clinical settings, mainly because their performance is unstable and poorly reproducible, owing to the high variability of the abnormal patterns associated with this neoplasia. This thesis addresses the characterization and interpretation of breast masses and architectural distortion, mammographic findings directly related to the presence of breast cancer that show high variability in shape, size, and location. It introduces the design, implementation, and evaluation of strategies to characterize abnormal patterns and to improve mammographic interpretation during the diagnostic process. The proposed strategies characterize the visual patterns of these lesions and the relationships between them in order to infer their clinical significance according to BI-RADS (Breast Imaging Reporting and Data System), the radiological standard for mammographic evaluation and reporting. The obtained results outperform methods reported in the literature on both the classification and the interpretation of masses and architectural distortion, demonstrating the effectiveness and versatility of the proposed strategies.

    Detecting microcalcification clusters in digital mammograms: Study for inclusion into computer aided diagnostic prompting system

    Among the signs of breast cancer encountered in digital mammograms, radiologists point to microcalcification clusters (MCCs). Their detection is a challenging problem from both the medical and the image processing points of view. This work presents two concurrent methods for MCC detection and studies their possible inclusion in a computer aided diagnostic prompting system. The first uses a Wavelet Domain Hidden Markov Tree (WHMT) to model microcalcification edges; the model differentiates between MC and non-MC edges based on weighted maximum likelihood (WML) values, and objects are then classified using spatial filters. The second method employs the SUSAN edge detector in the spatial domain for mammogram segmentation, with objects classified as calcifications using another set of spatial filters and a Feedforward Neural Network (NN). The same distance filter is employed in both methods to find true clusters. The two methods are analysed on 54 image regions from mammograms selected randomly from the DDSM database, including benign and cancerous cases as well as cases that are hard from both the radiologist's and the computer's perspective. WHMT/WML detects 98.15% true positive (TP) MCCs at 1.85% false positives (FP), whereas the SUSAN/NN method achieves 94.44% TP at the same 1.85% FP. The comparison of the two methods favours WHMT/WML for computer aided diagnostic prompting. It also confirms the low false positive rates of both methods, meaning fewer biopsy tests per patient.
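    One plausible reading of the shared distance filter is a proximity rule: candidate microcalcifications closer together than a chosen gap are linked, and only groups with a minimum number of members are kept as clusters. The sketch below expresses that rule with scikit-learn's DBSCAN; the coordinates, pixel spacing, and thresholds are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def distance_filter(candidate_xy, max_gap_mm, min_count, pixel_spacing_mm):
    """Group candidate microcalcifications into clusters: candidates closer
    than max_gap_mm are linked, and only groups with at least min_count
    members are kept as true clusters."""
    pts_mm = np.asarray(candidate_xy, dtype=float) * pixel_spacing_mm
    labels = DBSCAN(eps=max_gap_mm, min_samples=min_count).fit_predict(pts_mm)
    return {int(c): np.where(labels == c)[0] for c in set(labels) if c != -1}

# Hypothetical candidate coordinates (pixels) from an edge-based detector
cands = [(102, 98), (105, 101), (99, 104), (400, 380), (500, 40), (103, 95)]
clusters = distance_filter(cands, max_gap_mm=5.0, min_count=3, pixel_spacing_mm=0.1)
print(clusters)   # e.g. {0: array([0, 1, 2, 5])} -- one cluster of four candidates
```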

    Pixel N-grams for Mammographic Image Classification

    X-ray screening for breast cancer is an important public health initiative in the management of a leading cause of death for women. However, screening is expensive if mammograms must be assessed manually by radiologists, and manual screening is subject to perception and interpretation errors. Computer aided detection/diagnosis (CAD) systems can help radiologists, as computer algorithms are good at performing image analysis consistently and repetitively. However, image features that enhance CAD classification accuracy are necessary before CAD systems can be deployed. Many CAD systems have been developed, but their specificity and sensitivity are not high, in part because of challenges inherent in identifying effective features to extract from raw images. Existing feature extraction techniques can be grouped under three main approaches: statistical, spectral, and structural. Statistical and spectral techniques provide global image features but often fail to distinguish between local pattern variations within an image. The structural approach, on the other hand, has given rise to the Bag-of-Visual-Words (BoVW) model, which captures local variations in an image but typically does not consider spatial relationships between the visual “words”. Moreover, statistical features and features based on BoVW models are computationally very expensive, and structural feature computation methods other than BoVW are also expensive and strongly dependent upon algorithms that can segment an image to localize a region of interest likely to contain the tumour. Thus, classification algorithms using structural features require high-resource computers. For a radiologist to classify lesions on low-resource devices such as iPads, tablets, and mobile phones in a remote location, computationally inexpensive classification algorithms are needed. The overarching aim of this research is therefore to discover a feature extraction/image representation model that can classify mammographic lesions with high accuracy, sensitivity, and specificity at low computational cost. For this purpose, a novel feature extraction technique called ‘Pixel N-grams’ is proposed, inspired by the character N-gram concept in text categorization. Here, N consecutive pixel intensities along a particular direction form one N-gram, and an image is represented by a histogram of the occurrences of its Pixel N-grams (see the sketch after this entry). Shape and texture of mammographic lesions play an important role in determining malignancy, and it was hypothesized that Pixel N-grams would be able to distinguish between various textures and shapes. Experiments on benchmark texture databases and a binary basic-shapes database confirmed the hypothesis; moreover, Pixel N-grams distinguished between various shapes irrespective of the size and location of the shape in an image. The efficacy of the technique was tested on a database of primary digital mammograms sourced from a radiological facility in Australia (LakeImaging Pty Ltd) and on secondary digital mammograms (the benchmark miniMIAS database). A senior radiologist from LakeImaging provided de-identified high-resolution mammogram images with annotated regions of interest (used as ground truth) and valuable radiological diagnostic knowledge.
    Two classification tasks were addressed on these two datasets: normal/abnormal classification, useful for automated screening, and circumscribed/spiculated/normal classification, useful for automated diagnosis of breast cancer. The classification results on both mammography datasets using Pixel N-grams were promising. Classification performance (F-score, sensitivity, and specificity) with the Pixel N-gram technique was significantly better than with existing techniques such as intensity histograms and co-occurrence matrix based features, and comparable with BoVW features. Further, Pixel N-gram features are computationally less complex than co-occurrence matrix based and BoVW features, paving the way for mammogram classification on low-resource computers. Although the Pixel N-gram technique was designed for mammographic classification, it could be applied to other image classification problems such as diabetic retinopathy, histopathological image classification, lung tumour detection in CT images, brain tumour detection in MRI images, wound image classification, and tooth decay classification in dental x-ray images. Further, texture and shape classification is also useful for classifying real-world images outside the medical domain, so the Pixel N-gram technique could be extended to applications such as classification of satellite imagery and other object detection tasks.
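    A minimal sketch of the Pixel N-gram histogram described above, assuming horizontal scanning and a coarse intensity quantisation; both choices, and the toy patch, are assumptions, and the thesis may use other directions and settings.

```python
import numpy as np
from collections import Counter

def pixel_ngram_histogram(image, n=3, levels=8):
    """Histogram of horizontal Pixel N-grams: intensities are quantised to
    `levels` bins, and every run of n consecutive pixels along a row counts
    as one 'word'. Returns a feature vector of fixed length levels**n."""
    img = np.asarray(image, dtype=float)
    q = np.clip((img / max(img.max(), 1e-12) * levels).astype(int), 0, levels - 1)
    counts = Counter()
    for row in q:
        for i in range(row.size - n + 1):
            # encode the n-gram of quantised intensities as a single integer
            idx = 0
            for v in row[i:i + n]:
                idx = idx * levels + int(v)
            counts[idx] += 1
    hist = np.zeros(levels ** n)
    for idx, c in counts.items():
        hist[idx] = c
    return hist / hist.sum()              # normalise for image-size invariance

# Toy example on a synthetic 32x32 patch
rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(32, 32))
feat = pixel_ngram_histogram(patch, n=2, levels=4)
print(feat.shape, feat.sum())             # (16,) 1.0
```

    The resulting fixed-length vector can be fed to an off-the-shelf classifier without any segmentation step, which is what keeps this representation computationally cheap.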

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point checklist, the Menzies method, and pattern analysis. They are used regularly by doctors, although they are not considered very efficient for obtaining better performance. Moreover, considering all types of audience, the basic evaluation criteria are also discussed: the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Since the previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed, and artificial intelligence is gaining attention as a way to define better diagnostic tools. In particular, deep neural networks can be used successfully for intelligent image analysis. The basic framework of how such machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBM), stacked autoencoders (SAE), convolutional autoencoders (CAE), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles the successfully applied deep learning models for different types of cancer; considering the length of the manuscript, we restrict ourselves to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis with from-scratch knowledge of the state-of-the-art achievements.
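    The Python code referred to above belongs to the reviewed paper itself. Purely as a generic illustration of the kind of CNN the review surveys, a minimal Keras sketch for benign/malignant patch classification might look as follows; the input size, depth, and hyper-parameters are assumptions, not the architecture of any surveyed model.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Minimal CNN sketch for binary benign/malignant patch classification.
# All shapes and hyper-parameters below are illustrative assumptions.
model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 1)),           # grayscale image patch
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),       # probability of malignancy
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
# model.fit(train_patches, train_labels, validation_split=0.2, epochs=20)
```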