53 research outputs found

    Automatic Annotation, Classification and Retrieval of Traumatic Brain Injury CT Images

    Ph.D. (Doctor of Philosophy)

    Data fusion by using machine learning and computational intelligence techniques for medical image analysis and classification

    Data fusion is the process of integrating information from multiple sources to produce specific, comprehensive, unified data about an entity. Data fusion is categorized as low-level, feature-level, and decision-level. This research focuses on investigating and developing feature- and decision-level data fusion for automated image analysis and classification. The common procedure for solving these problems can be described as: 1) process the image for region-of-interest detection, 2) extract features from the region of interest, and 3) create a learning model based on the feature data. Image processing techniques were performed using edge detection, histogram thresholding, and a color-drop algorithm to determine the region of interest. The extracted features were low-level features, including textural, color, and symmetry features. For image analysis and classification, feature- and decision-level data fusion techniques are investigated for model learning, using and integrating computational intelligence and machine learning techniques. These techniques include artificial neural networks, evolutionary algorithms, particle swarm optimization, decision trees, clustering algorithms, fuzzy logic inference, and voting algorithms. This work presents both the investigation and development of data fusion techniques for the application areas of dermoscopy skin lesion discrimination, content-based image retrieval, and graphic image type classification --Abstract, page v
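    As a minimal sketch of the decision-level fusion the abstract describes, the following combines labels from several base classifiers by majority vote (one of the voting algorithms listed). The classifier outputs and label names are invented for illustration, not taken from the dissertation.

```python
from collections import Counter

def decision_level_fusion(votes):
    """Fuse one sample's labels from several base classifiers by
    majority vote; ties break toward the classifier listed first
    (Counter preserves insertion order for equal counts)."""
    return Counter(votes).most_common(1)[0][0]

# Hypothetical labels from three base classifiers (e.g. a neural
# network, a decision tree, and a fuzzy inference system) for two
# dermoscopy images.
per_image_votes = [
    ["benign", "benign", "malignant"],
    ["malignant", "malignant", "benign"],
]
fused = [decision_level_fusion(v) for v in per_image_votes]
```

    Feature-level fusion would instead concatenate or combine feature vectors before a single classifier is trained; decision-level fusion, as above, defers the combination until each classifier has produced its own label.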

    Discovering a Domain Knowledge Representation for Image Grouping: Multimodal Data Modeling, Fusion, and Interactive Learning

    In visually-oriented specialized medical domains such as dermatology and radiology, physicians explore interesting image cases from medical image repositories for comparative case studies that aid clinical diagnoses, educate medical trainees, and support medical research. However, general image classification and retrieval approaches fail to group medical images from the physicians' viewpoint, because fully automated learning techniques cannot yet bridge the gap between image features and domain-specific content in the absence of expert knowledge. Understanding how experts obtain information from medical images is therefore an important research topic. As a prior study, we conducted data elicitation experiments in which physicians were instructed to inspect each medical image toward a diagnosis while describing the image content to a student seated nearby. Experts' eye movements and their verbal descriptions of the image content were recorded to capture various aspects of expert image understanding. This dissertation aims at an intuitive approach to extracting expert knowledge: finding patterns in expert data elicited from image-based diagnoses. These patterns are useful for understanding both the characteristics of the medical images and the experts' cognitive reasoning processes. The transformation from viewed raw image features to interpretation as domain-specific concepts requires experts' domain knowledge and cognitive reasoning. This dissertation also approximates this transformation using a matrix factorization-based framework, which helps project multiple expert-derived data modalities onto high-level abstractions. To combine additional expert interventions with computational processing capabilities, an interactive machine learning paradigm is developed that treats experts as an integral part of the learning process. Specifically, experts locally refine the medical image groups presented by the learned model, which is then incrementally re-learned globally. This paradigm avoids onerous expert annotations for model training while aligning the learned model with experts' sense-making.
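    The matrix factorization step described above can be sketched with a toy nonnegative matrix factorization: a data matrix whose rows are images and whose columns concatenate expert-derived features is projected onto a small number of latent abstractions. The data here are synthetic, and the Lee-Seung multiplicative update rule is one standard choice, not necessarily the framework the dissertation uses.

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9):
    """Factor a nonnegative matrix V (samples x features) into
    W (samples x k) @ H (k x features) via multiplicative updates;
    each row of W is a k-dimensional abstraction of one sample."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        # Lee-Seung updates keep W and H nonnegative throughout.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy multimodal matrix: rows = images, columns = concatenated
# gaze-derived and text-derived features (values are synthetic).
V = np.random.default_rng(1).random((6, 8))
W, H = nmf(V, k=2)
reconstruction_error = np.linalg.norm(V - W @ H)
```

    In an interactive setting, an expert's regrouping of a few images could be fed back as constraints before re-running the factorization, which is the local-refinement, global-relearning loop the abstract outlines.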

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become the methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st, 2017

    Data fusion techniques for biomedical informatics and clinical decision support

    Data fusion can be used to combine multiple data sources or modalities to facilitate enhanced visualization, analysis, detection, estimation, or classification. Data fusion can be applied at the raw-data, feature, and decision levels. Data fusion applications of different sorts have been developed in areas such as statistics, computer vision, and machine learning, and data fusion has been employed in a variety of realistic scenarios such as medical diagnosis, clinical decision support, and structural health monitoring. This dissertation includes the investigation and development of methods to perform data fusion for cervical intraepithelial neoplasia (CIN) detection and for a clinical decision support system. The general framework for these applications includes image processing followed by feature development and classification of the detected region of interest (ROI). Image processing methods such as k-means clustering based on color information, dilation, erosion, and centroid-locating methods were used for ROI detection. The extracted features include texture, color, nuclei-based, and triangle features. Analysis and classification were performed using feature- and decision-level data fusion techniques such as support vector machines, statistical methods such as logistic regression and linear discriminant analysis, and voting algorithms --Abstract, page iv
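    The k-means-based ROI detection step can be illustrated with a toy sketch: plain k-means clustering of pixel colors separates synthetic stained tissue from background. The pixel values and the naive even-spread initialization are assumptions for illustration, not the dissertation's actual procedure.

```python
import numpy as np

def kmeans_labels(pixels, k=2, iters=20):
    """Plain k-means on an (n, 3) array of RGB pixels.
    Naive init: centers picked evenly across the pixel array.
    Returns one cluster label per pixel."""
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].astype(float).copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Distance of every pixel to every center, via broadcasting.
        dists = np.linalg.norm(
            pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

# Toy "slide": 50 pale background pixels and 50 dark, stained
# (hypothetical ROI) pixels in RGB.
background = np.full((50, 3), 230.0)
stained = np.full((50, 3), 40.0)
pixels = np.vstack([background, stained])
labels = kmeans_labels(pixels, k=2)
```

    On a real image, the pixels of the cluster identified as tissue would then be refined with the morphological steps the abstract mentions (dilation, erosion) before locating centroids and extracting features.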

    Cloud-Based Benchmarking of Medical Image Analysis

    Medical imaging

    Fine Art Pattern Extraction and Recognition

    This is a reprint of articles from the Special Issue published online in the open-access journal Journal of Imaging (ISSN 2313-433X), available at: https://www.mdpi.com/journal/jimaging/special_issues/faper2020

    Visual-Linguistic Semantic Alignment: Fusing Human Gaze and Spoken Narratives for Image Region Annotation

    Advanced image-based application systems such as image retrieval and visual question answering depend heavily on semantic image region annotation. However, improvements in image region annotation are limited by our inability to understand how humans, the end users, process these images and image regions. In this work, we expand a framework for capturing image region annotations in which interpreting an image is influenced by the end user's visual perception skills, conceptual knowledge, and task-oriented goals. Human image understanding is reflected in individuals' visual and linguistic behaviors, but the meaningful computational integration and interpretation of their multimodal representations (e.g., gaze, text) remain a challenge. Our work explores the hypothesis that eye movements can help us understand experts' perceptual processes and that spoken language descriptions can reveal conceptual elements of image inspection tasks. We propose that there exists a meaningful relation between gaze, spoken narratives, and image content. Using unsupervised bitext alignment, we create meaningful mappings between participants' eye movements (which reveal key areas of images) and spoken descriptions of those images. The resulting alignments are then used to annotate image regions with concept labels. Our alignment accuracy exceeds that of baseline alignments obtained using both simultaneous and fixed-delay temporal correspondence. Additionally, a comparison of alignment accuracy between a method that identifies clusters in the images based on eye movements and a method that identifies clusters using image features shows that the two approaches perform well on different types of images and concept labels. This suggests that an image annotation framework could integrate information from more than one technique to handle heterogeneous images. The resulting alignments can be used to create a database of low-level image features and high-level semantic annotations corresponding to perceptually important image regions. We demonstrate the applicability of the proposed framework with two datasets: one consisting of general-domain images and another with images from the domain of medicine. This work is an important contribution toward the highly challenging problem of fusing human-elicited multimodal data sources, a problem that will become increasingly important as low-resource scenarios become more common.
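    One of the baselines mentioned above, fixed-delay temporal correspondence, is simple to sketch: pair each spoken word with the most recent fixation that began at least a fixed delay earlier, on the assumption that the eye leads the voice. The timestamps, region names, and 0.5-second delay below are invented for illustration and are not the study's actual data or parameters.

```python
import bisect

def fixed_delay_align(fixations, words, delay=0.5):
    """Baseline temporal alignment. fixations: time-sorted
    (onset_seconds, region) pairs; words: (onset_seconds, word)
    pairs. Pairs each word with the latest fixation whose onset
    is at or before (word_onset - delay)."""
    onsets = [t for t, _ in fixations]
    pairs = []
    for t, word in words:
        i = bisect.bisect_right(onsets, t - delay) - 1
        if i >= 0:  # skip words spoken before any eligible fixation
            pairs.append((word, fixations[i][1]))
    return pairs

# Hypothetical gaze and narration streams for one image.
fixations = [(0.0, "sky"), (1.2, "tree"), (2.5, "car")]
words = [(1.0, "sky"), (2.0, "tree"), (3.2, "car")]
aligned = fixed_delay_align(fixations, words, delay=0.5)
```

    The unsupervised bitext alignment the dissertation actually uses learns these correspondences from co-occurrence statistics rather than assuming a fixed lag, which is why it can outperform this baseline.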

    Caracterización de Patrones Anormales en Mamografías (Characterization of Abnormal Patterns in Mammograms)

    Abstract. Computer-guided image interpretation is an extensive research area whose main purpose is to provide tools to support decision-making, for which a large number of automatic techniques have been proposed, drawing on feature extraction, pattern recognition, image processing, and machine learning, among others. In breast cancer, results in this area have led to the development of diagnostic support systems, some of which have been approved by the FDA (Food and Drug Administration). However, these systems are not widely used in clinical settings, mainly because their performance is unstable and poorly reproducible, owing to the high variability of the abnormal patterns associated with this neoplasia. This thesis addresses the main problems associated with the characterization and interpretation of breast masses and architectural distortion, the mammographic findings directly related to the presence of breast cancer that show the highest variability in shape, size, and location. This document introduces the design, implementation, and evaluation of strategies to characterize abnormal patterns and to improve mammographic interpretation during the diagnosis process. The proposed strategies characterize the visual patterns of these lesions and the relationships between them in order to infer their clinical significance according to BI-RADS (Breast Imaging Reporting and Data System), a radiologic tool used for mammographic evaluation and reporting. The results obtained outperform those of methods reported in the literature on both classification and interpretation of masses and architectural distortion, demonstrating the effectiveness and versatility of the proposed strategies.