
    Characterization of Abnormal Patterns in Mammograms (Caracterización de Patrones Anormales en Mamografías)

    Abstract. Computer-guided image interpretation is an extensive research area whose main purpose is to provide tools that support decision-making, and for which a large number of automatic techniques have been proposed, including feature extraction, pattern recognition, image processing and machine learning. In breast cancer, the results obtained in this area have led to the development of diagnostic support systems, some of which have been approved by the FDA (Food and Drug Administration). However, these systems are not widely used in clinical settings, mainly because their performance is unstable and poorly reproducible, a consequence of the high variability of the abnormal patterns associated with this neoplasia. This thesis addresses the main problem in the characterization and interpretation of breast masses and architectural distortion, the mammographic findings directly related to the presence of breast cancer that show the greatest variability in shape, size and location. The document introduces the design, implementation and evaluation of strategies to characterize abnormal patterns and to improve mammographic interpretation during the diagnostic process. The proposed strategies characterize the visual patterns of these lesions and the relationships between them in order to infer their clinical significance according to BI-RADS (Breast Imaging Reporting and Data System), the radiological tool used for mammographic evaluation and reporting.
The results obtained outperform those reported in the literature for both the classification and the interpretation of masses and architectural distortion, demonstrating the effectiveness and versatility of the proposed strategies.
    Doctorate
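The abstract above describes, at a high level, a pipeline of visual feature extraction followed by supervised classification into BI-RADS categories. The sketch below illustrates only that generic pipeline, not the thesis's actual strategies; the descriptor names, the synthetic data and the random-forest classifier are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-lesion descriptors (margin sharpness, texture energy,
# spiculation score, local contrast) -- synthetic values for illustration only.
X = rng.random((200, 4))
# Hypothetical BI-RADS assessment categories (2-5) used as class labels.
y = rng.integers(2, 6, size=200)

# Train a generic classifier that maps lesion descriptors to BI-RADS categories
# and estimate its accuracy with cross-validation.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```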

    IMAGE UNDERSTANDING OF MOLAR PREGNANCY BASED ON ANOMALIES DETECTION

    Cancer occurs when normal cells grow and multiply without normal control. As the cells multiply, they form an area of abnormal cells known as a tumour. Many tumours exhibit abnormal chromosomal segregation at cell division, and these anomalies play an important role in detecting molar pregnancy. Molar pregnancy, also known as hydatidiform mole, can be categorised into partial (PHM) and complete (CHM) mole, persistent gestational trophoblastic disease and choriocarcinoma. Hydatidiform moles are most commonly found in women under the age of 17 or over the age of 35. Hydatidiform moles can be detected by morphological and histopathological examination, yet even experienced pathologists cannot easily distinguish between complete and partial hydatidiform moles. However, this distinction is important in order to recommend the appropriate treatment, so research into molar pregnancy image analysis and understanding is critical. The hypothesis of this research project is that an anomaly detection approach to analysing molar pregnancy images can improve image analysis and the classification of normal, PHM and CHM villi. The primary aim is to develop a novel method, based on anomaly detection, to identify and classify anomalous villi in stained molar pregnancy images. The method is designed to simulate expert pathologists' approach to diagnosing anomalous villi: the knowledge and heuristics elicited from two expert pathologists are combined with the morphological domain knowledge of molar pregnancy to build a heuristic multi-neural-network architecture that classifies the villi into their appropriate anomalous types. The study confirmed that a single feature does not give enough discriminative power for villi classification. Whereas expert pathologists consider size and shape before textural features, this thesis demonstrated that textural features have higher discriminative power than size and shape. The first heuristic-based multi-neural network, based on 15 elicited features, achieved an improved average accuracy of 81.2% compared to a traditional multi-layer perceptron (80.5%); however, the recall of the CHM villi class was still low (64.3%). Two further textural features, elicited and added to the second heuristic-based multi-neural network, improved the average accuracy from 81.2% to 86.1% and the recall of the CHM villi class from 64.3% to 73.5%. The precision of multi-neural network II also increased from 82.7% to 89.5% for the normal villi class, from 81.3% to 84.7% for the PHM villi class and from 80.8% to 86% for the CHM villi class. To support pathologists in visualising the segmentation results, a software tool, the Hydatidiform Mole Analysis Tool (HYMAT), was developed, compiling the morphological and pathological data for each villus analysed.
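As a rough illustration of the multi-neural-network arrangement described above, the sketch below trains one small network on shape/size features and another on textural features, then averages their class probabilities to label each villus as normal, PHM or CHM. The feature groupings and synthetic values are assumptions; they are not the elicited features used in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 300
shape_feats = rng.random((n, 5))     # assumed size/shape descriptors (e.g. area, circularity)
texture_feats = rng.random((n, 6))   # assumed textural descriptors (e.g. contrast, entropy)
labels = rng.integers(0, 3, size=n)  # 0 = normal, 1 = PHM, 2 = CHM (synthetic labels)

# One sub-network per feature group.
shape_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=1)
texture_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=1)
shape_net.fit(shape_feats, labels)
texture_net.fit(texture_feats, labels)

# Heuristic fusion: average the two networks' class probabilities and take the argmax.
proba = (shape_net.predict_proba(shape_feats) + texture_net.predict_proba(texture_feats)) / 2
print("training agreement:", (proba.argmax(axis=1) == labels).mean())
```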

    Reinforced Segmentation of Images Containing One Object of Interest

    In many image-processing applications, a single object of interest must be segmented. The techniques used for segmentation vary depending on the particular situation and the specifications of the problem at hand. In methods that rely on a learning process, the lack of a sufficient number of training samples is usually an obstacle, especially when the samples must be prepared manually by an expert. The performance of other methods may suffer from the frequent user interaction needed to determine the critical segmentation parameters, and none of the existing approaches uses online (permanent) feedback from the user to evaluate the generated results. Considering these factors, a new multi-stage image segmentation system based on Reinforcement Learning (RL) is introduced as the main contribution of this research. In this system, the RL agent takes specific actions, such as changing a task's parameters, to modify the quality of the segmented image. The approach starts with a limited number of training samples and improves its performance over time, continuously incorporating expert knowledge to increase the segmentation capabilities of the method. Learning occurs first through interactions with an offline simulation environment and later online through interactions with the user. The offline mode uses a limited number of manually segmented samples to provide the segmentation agent with basic information about the application domain; after this mode, the agent can choose appropriate parameter values for the different processing tasks based on its accumulated knowledge. The online mode then guarantees that the system keeps training and increases its accuracy the more the user works with it. During this mode, the agent captures the user's preferences and learns how it must change the segmentation parameters so that the best result is achieved. By combining these two learning modes, the RL agent allows the decisive parameters of the entire segmentation process to be recognised optimally.
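The core loop described above, an agent that adjusts a segmentation parameter and is rewarded by the quality of the resulting mask, can be sketched with simple Q-learning over a single threshold. This is an assumed toy formulation (synthetic image, Jaccard overlap as reward), not the system built in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic image with one bright object of interest, plus its reference mask.
image = np.clip(rng.normal(0.3, 0.1, (64, 64)), 0, 1)
image[20:44, 20:44] += 0.4
truth = np.zeros(image.shape, dtype=bool)
truth[20:44, 20:44] = True

thresholds = np.linspace(0.1, 0.9, 17)   # discretised segmentation parameter (states)
q = np.zeros((len(thresholds), 2))       # actions: 0 = lower the threshold, 1 = raise it
alpha, gamma, eps = 0.5, 0.9, 0.2

def reward(state):
    """Jaccard overlap between the thresholded mask and the reference segmentation."""
    mask = image > thresholds[state]
    union = np.logical_or(mask, truth).sum()
    return np.logical_and(mask, truth).sum() / union if union else 0.0

state = len(thresholds) // 2
for _ in range(500):
    action = rng.integers(2) if rng.random() < eps else int(q[state].argmax())
    nxt = int(np.clip(state + (1 if action == 1 else -1), 0, len(thresholds) - 1))
    r = reward(nxt)
    q[state, action] += alpha * (r + gamma * q[nxt].max() - q[state, action])
    state = nxt

print("learned threshold:", thresholds[int(q.max(axis=1).argmax())])
```

In the thesis, the offline mode would play the role of this simulated environment (reward computed against manually segmented samples), while the online mode replaces the computed reward with feedback from the user.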

    Texture and Colour in Image Analysis

    Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. The volume also features benchmarks, comparative evaluations and reviews.

    Pixel N-grams for Mammographic Image Classification

    X-ray screening for breast cancer is an important public health initiative in the management of a leading cause of death for women. However, screening is expensive if mammograms must be assessed manually by radiologists, and manual screening is subject to perception and interpretation errors. Computer-aided detection/diagnosis (CAD) systems can help radiologists because computer algorithms perform image analysis consistently and repetitively. However, image features that enhance CAD classification accuracy are necessary before CAD systems can be deployed. Many CAD systems have been developed, but their specificity and sensitivity are not high, in part because of the challenges inherent in identifying effective features to extract from raw images. Existing feature extraction techniques can be grouped under three main approaches: statistical, spectral and structural. Statistical and spectral techniques provide global image features but often fail to distinguish between local pattern variations within an image. The structural approach, on the other hand, has given rise to the Bag-of-Visual-Words (BoVW) model, which captures local variations in an image but typically does not consider spatial relationships between the visual "words". Moreover, statistical features and features based on BoVW models are computationally very expensive, and structural feature computation methods other than BoVW are also computationally expensive and strongly dependent upon algorithms that can segment an image to localise a region of interest likely to contain the tumour. Classification algorithms using structural features therefore require high-resource computers. For a radiologist to classify lesions on low-resource computers such as iPads, tablets and mobile phones in a remote location, it is necessary to develop computationally inexpensive classification algorithms. The overarching aim of this research is therefore to discover a feature extraction/image representation model that can classify mammographic lesions with high accuracy, sensitivity and specificity at low computational cost. For this purpose a novel feature extraction technique called 'Pixel N-grams' is proposed, inspired by the character N-gram concept in text categorisation. Here, N consecutive pixel intensities are considered in a particular direction, and the image is represented by a histogram of the occurrences of the Pixel N-grams in the image. Shape and texture of mammographic lesions play an important role in determining the malignancy of a lesion, and it was hypothesised that Pixel N-grams would be able to distinguish between various textures and shapes. Experiments carried out on benchmark texture databases and a binary basic-shapes database demonstrated that the hypothesis was correct; moreover, Pixel N-grams were able to distinguish between various shapes irrespective of the size and location of the shape in an image. The efficacy of the Pixel N-gram technique was tested on a mammographic database of primary digital mammograms sourced from a radiological facility in Australia (LakeImaging Pty Ltd) and on secondary digital mammograms (the benchmark miniMIAS database). A senior radiologist from LakeImaging provided real-time, de-identified, high-resolution mammogram images with annotated regions of interest (used as ground truth), together with valuable radiological diagnostic knowledge.
Two types of classification were performed on these two datasets: normal/abnormal classification, useful for automated screening, and circumscribed/spiculated/normal classification, useful for automated diagnosis of breast cancer. The classification results on both mammography datasets using Pixel N-grams were promising. Classification performance (F-score, sensitivity and specificity) using the Pixel N-gram technique was significantly better than existing techniques such as intensity histograms and co-occurrence-matrix-based features, and comparable with BoVW features. Further, Pixel N-gram features are computationally less complex than co-occurrence-matrix-based features and BoVW features, paving the way for mammogram classification on low-resource computers. Although the Pixel N-gram technique was designed for mammographic classification, it could be applied to other image classification applications such as diabetic retinopathy, histopathological image classification, lung tumour detection using CT images, brain tumour detection using MRI images, wound image classification and tooth decay classification using dental x-ray images. Texture and shape classification is also useful for classifying real-world images outside the medical domain, so the Pixel N-gram technique could be extended to applications such as the classification of satellite imagery and other object detection tasks.
    Doctor of Philosophy
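The Pixel N-gram representation as described above (N consecutive pixel intensities in one direction, counted into a histogram) can be sketched as follows. The quantisation level, the value of N and the horizontal direction are illustrative choices, not the settings used in the thesis.

```python
import numpy as np

def pixel_ngram_histogram(image, n=2, levels=8):
    """Histogram of horizontal Pixel N-grams for a greyscale image scaled to [0, 1]."""
    q = np.minimum((image * levels).astype(int), levels - 1)  # quantise intensities
    hist = np.zeros(levels ** n, dtype=int)
    for row in q:
        for i in range(len(row) - n + 1):
            # Encode the n consecutive intensities as a single base-`levels` index.
            idx = 0
            for v in row[i:i + n]:
                idx = idx * levels + int(v)
            hist[idx] += 1
    return hist

rng = np.random.default_rng(3)
img = rng.random((32, 32))          # synthetic greyscale image
print(pixel_ngram_histogram(img, n=2, levels=8)[:10])
```

The resulting histogram would then serve as the feature vector fed to a conventional classifier, as in the screening and diagnosis experiments described above.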