30 research outputs found

    Feature Extraction and Design in Deep Learning Models

    The selection and computation of meaningful features is critical for developing effective deep learning methods. This dissertation demonstrates how focusing on this process can significantly improve the results of learning-based approaches, and presents a series of studies in which feature extraction and design were decisive for obtaining effective results. The first two studies, a content-based image retrieval (CBIR) system and a seagrass quantification study, use deep learning models to extract meaningful high-level features that significantly increase the performance of the approaches. Next, a method for change detection is proposed in which the multispectral channels of satellite images are combined with different feature indices to improve the results. Then, two novel feature operators for mesh convolutional networks are presented that successfully extract invariant features from the faces and vertices of a mesh, respectively. These operators significantly outperform the previous state of the art for mesh classification and segmentation and provide two novel architectures for applying convolutional operations to the faces and vertices of geometric 3D meshes. Finally, a novel approach for the automatic generation of 3D meshes is presented. The generative model efficiently uses the vertex-based feature operators proposed in the previous study and successfully learns to produce shapes with arbitrary topology from a mesh dataset.
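The CBIR study hinges on using deep networks as feature extractors. A minimal sketch of the retrieval step, assuming high-level feature vectors have already been extracted by such a network (the data and dimensions here are purely illustrative):

```python
import numpy as np

def retrieve(query_feat, index_feats, k=3):
    """Rank database images by cosine similarity to the query feature vector."""
    q = query_feat / np.linalg.norm(query_feat)
    db = index_feats / np.linalg.norm(index_feats, axis=1, keepdims=True)
    sims = db @ q                    # cosine similarity to every indexed image
    return np.argsort(-sims)[:k]     # indices of the k most similar images

# Toy index: 4 "images" described by 5-dim deep features (illustrative only).
index = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0, 0.0]])
query = np.array([1.0, 0.05, 0.0, 0.0, 0.0])
print(retrieve(query, index, k=2))   # nearest neighbours first
```

The same ranking step works unchanged whether the features come from a CNN embedding or classical descriptors; only the extractor differs.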

    See Through the Fog: Curriculum Learning with Progressive Occlusion in Medical Imaging

    In recent years, deep learning models have revolutionized medical image interpretation, offering substantial improvements in diagnostic accuracy. However, these models often struggle with challenging images where critical features are partially or fully occluded, which is a common scenario in clinical practice. In this paper, we propose a novel curriculum learning-based approach to train deep learning models to handle occluded medical images effectively. Our method progressively introduces occlusion, starting from clear, unobstructed images and gradually moving to images with increasing occlusion levels. This ordered learning process, akin to human learning, allows the model to first grasp simple, discernible patterns and subsequently build upon this knowledge to understand more complicated, occluded scenarios. Furthermore, we present three novel occlusion synthesis methods, namely Wasserstein Curriculum Learning (WCL), Information Adaptive Learning (IAL), and Geodesic Curriculum Learning (GCL). Our extensive experiments on diverse medical image datasets demonstrate substantial improvements in model robustness and diagnostic accuracy over conventional training methodologies.
    Comment: 20 pages, 3 figures, 1 table
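The progressive-occlusion idea can be sketched with a simple linear schedule. This is an illustrative stand-in for the general curriculum, not an implementation of the paper's WCL, IAL, or GCL methods; the occlusion shape and schedule are assumptions:

```python
import numpy as np

def occlusion_level(epoch, total_epochs, max_frac=0.5):
    """Linear curriculum: the fraction of the image to occlude grows with epoch."""
    return max_frac * epoch / max(total_epochs - 1, 1)

def occlude(image, frac, rng):
    """Zero out a random square covering roughly `frac` of the image area."""
    out = image.copy()
    if frac <= 0:
        return out
    h, w = image.shape
    side = int(round(np.sqrt(frac) * min(h, w)))
    y = rng.integers(0, h - side + 1)
    x = rng.integers(0, w - side + 1)
    out[y:y + side, x:x + side] = 0.0
    return out

rng = np.random.default_rng(0)
img = np.ones((32, 32))
for epoch in range(5):            # clear images first, heavier occlusion later
    frac = occlusion_level(epoch, 5)
    occluded = occlude(img, frac, rng)
    print(epoch, round(frac, 2), int((occluded == 0).sum()))
```

In a real training loop the occluded batch, not the clean one, would be fed to the model, with `frac` advanced once per epoch.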

    Image processing in medicine advances for phenotype characterization, computer-assisted diagnosis and surgical planning

    In this Thesis we present our contributions to the state of the art in medical image processing, articulating our exposition around the three main roles of medical imaging: disease prevention, diagnosis and treatment. Disease prevention can sometimes be achieved by proper characterization of disease phenotypes. Such characterization is often attained through imaging. We present our work on the characterization of emphysema from high-resolution computed tomography images via quantification of local texture. We propose to fill the gap between current clinical practice and sophisticated texture approaches by using local intensity distributions as an adequate descriptor for the degree of tissue destruction in the emphysematous lung. Interesting results are presented from the analysis of several hundred lung CT datasets spanning varying disease severity, suggesting both the correctness of our hypotheses and the pertinence of fine emphysema quantification for the understanding of chronic obstructive pulmonary disease. Medical image processing can also assist in the diagnosis and detection of disease. We introduce our contributions to this field, consisting of segmentation and quantification techniques applied to dermatoscopy images of skin lesions. Segmentation is achieved via a novel active contour algorithm that fully exploits the color content of the images through cross-bin histogram dissimilarity maximization. Texture quantification in the context of melanocytic lesions is performed by modeling the pigmentation patterns with Markov random fields, in an effort to embrace the emerging trend in dermatology: malignancy assessment based on texture irregularity analysis. Experimental results for both proposed techniques are validated on a significant set of dermatoscopy images, suggesting interesting pathways towards the automatic detection and diagnosis of malignant melanoma. Once disease has occurred, image processing can assist in therapeutic planning and image-guided intervention. Therapeutic planning, exemplified by virtual reality surgical planning, is tackled by our work on the segmentation of bone/fat/muscle in CT images for plastic surgery planning.
Using an interactive, incremental approach, our system is able to provide accurate segmentations from a couple of mouse clicks for a wide variety of imaging conditions and abnormal anatomies. We present our methodology, provide profuse experimental validation based on manual segmentations and subjective assessment, and refer the reader to related work reporting on the clinical benefits obtained using the virtual reality platform hosting our algorithm. As a conclusion we present a final dissertation on the significance of our results and the probable lines of future work towards fully benefitting healthcare using medical image processing.
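The two histogram ideas above (local intensity distributions as an emphysema descriptor, and cross-bin dissimilarity for segmentation) can be illustrated together. The bin count and the particular cross-bin measure below (the 1-D earth mover's distance) are assumptions for illustration, not the thesis's exact choices:

```python
import numpy as np

def local_histogram(patch, bins=8, lo=0.0, hi=1.0):
    """Normalized intensity distribution of a local image patch."""
    counts, _ = np.histogram(patch, bins=bins, range=(lo, hi))
    return counts / counts.sum()

def cross_bin_distance(p, q):
    """1-D earth mover's distance: L1 gap between cumulative histograms.
    Unlike bin-by-bin measures, it grows with how far mass must move."""
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

rng = np.random.default_rng(1)
healthy = local_histogram(rng.uniform(0.5, 1.0, 400))    # brighter tissue
emphysema = local_histogram(rng.uniform(0.0, 0.5, 400))  # low-attenuation region
print(cross_bin_distance(healthy, emphysema) > cross_bin_distance(healthy, healthy))
```

Maximizing such a cross-bin discrepancy between the inside and outside of an evolving contour is one way a color-aware active contour can separate lesion from skin.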

    Pattern classification approaches for breast cancer identification via MRI: state‐of‐the‐art and vision for the future

    Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCEMRI) of breast tissue are discussed. The algorithms are based on recent advances in multidimensional signal processing and aim to advance current state‐of‐the‐art computer‐aided detection and analysis of breast tumours when these are observed at various states of development. The topics discussed include image feature extraction, information fusion using radiomics, multi‐parametric computer‐aided classification and diagnosis using information fusion of tensorial datasets as well as Clifford algebra based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi‐supervised deep learning and self‐supervised strategies as well as generative adversarial networks and algorithms using generated confrontational learning approaches. In order to address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high‐dimensional medical imaging analysis platform that is based on multi‐task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks for a generated confrontation learning methodology that can be used for tensorial DCE‐MRI. Since some of the approaches discussed are also based on time‐lapse imaging, conclusions on the rate of proliferation of the disease can be made possible. 
The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis
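Feature-level fusion of multi-parametric, tensorial DCE-MRI data can be sketched in its simplest form: stacking per-sequence radiomics vectors into a tensor before classification. All dimensions and values below are illustrative only:

```python
import numpy as np

def fuse_features(per_sequence_feats):
    """Stack per-sequence radiomics vectors into one tensor, then flatten
    into a single fused descriptor for a downstream classifier."""
    tensor = np.stack(per_sequence_feats)   # shape: (sequences, features)
    return tensor.reshape(-1)               # simple feature-level fusion

# Toy example: 3 DCE-MRI time points, 4 radiomics features each (illustrative).
feats = [np.array([0.1, 0.2, 0.3, 0.4]),
         np.array([0.2, 0.3, 0.4, 0.5]),
         np.array([0.3, 0.4, 0.5, 0.6])]
fused = fuse_features(feats)
print(fused.shape)   # one vector per case, fed to the classifier
```

The tensorial methods the review discusses operate on the stacked array directly rather than flattening it, which preserves the temporal structure that time-lapse analysis relies on.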

    Mathematical Modelling and Machine Learning Methods for Bioinformatics and Data Science Applications

    Mathematical modeling is routinely used in the physical and engineering sciences to help understand complex systems and optimize industrial processes. Mathematical modeling differs from Artificial Intelligence because it does not rely exclusively on collected data to describe an industrial phenomenon or process; instead, it is based on fundamental laws of physics or engineering that lead to systems of equations able to represent all the variables that characterize the process. Conversely, Machine Learning methods require a large amount of data to find solutions, remaining detached from the problem that generated them and trying to infer the behavior of the object, material or process under examination from observed samples. Mathematics allows us to formulate complex models with effectiveness and creativity, describing nature and physics. Together with the potential of Artificial Intelligence and data collection techniques, a new way of dealing with practical problems becomes possible. Inserting the equations derived from the physical world into data-driven models can greatly enrich the information content of the sampled data, allowing very complex phenomena to be simulated with drastically reduced computation times. Combined approaches will constitute a breakthrough in cutting-edge applications, providing precise and reliable tools for the prediction of phenomena in biological macro/microsystems, for biotechnological applications and for medical diagnostics, particularly in the field of precision medicine.
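Inserting physical equations into a data-driven model is commonly realized as a physics-residual term added to the data-misfit loss. A minimal sketch for a toy decay law; the governing equation, model form, and weighting are illustrative assumptions, not a method from the text:

```python
import numpy as np

def total_loss(params, t_data, y_data, t_col, lam=1.0):
    """Data-misfit plus physics-residual loss for the decay law dy/dt = -k*y.
    The candidate model y(t) = a*exp(-b*t) is scored both against measured
    samples and against the governing equation at collocation points."""
    a, b, k = params
    y_model = a * np.exp(-b * t_data)
    data_term = np.mean((y_model - y_data) ** 2)
    y_col = a * np.exp(-b * t_col)
    dy_dt = -b * y_col                           # analytic derivative of the model
    physics_term = np.mean((dy_dt + k * y_col) ** 2)
    return data_term + lam * physics_term

t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-0.5 * t)                       # synthetic measurements, k = 0.5
good = total_loss((2.0, 0.5, 0.5), t, y, t)      # consistent with data and physics
bad = total_loss((2.0, 0.5, 2.0), t, y, t)       # violates the assumed decay rate
print(good < bad)
```

An optimizer minimizing this combined loss is pulled toward solutions that both fit the samples and satisfy the equation, which is what lets the physics compensate for sparse data.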

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and understanding the underlying biological process. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis

    Generative Models for Inverse Imaging Problems


    Deep Learning for Medical Imaging in a Biased Environment

    Deep learning (DL) based applications have successfully solved numerous problems in machine perception. In radiology, DL-based image analysis systems are rapidly evolving and show progress in guiding treatment decisions, diagnosing and localizing disease on medical images, and improving radiologists' workflow. However, many DL-based radiological systems fail to generalize when deployed in new hospital settings, and the causes of these failures are not always clear. Although significant effort continues to be invested in applying DL algorithms to radiological data, many open questions and issues arising from incomplete datasets remain. To bridge the gap, we first review the current state of artificial intelligence applied to radiology data, then juxtapose the use of classical computer vision features (i.e., hand-crafted features) with the recent advances brought by deep learning. However, using DL is not an excuse for a lack of rigorous study design, which we demonstrate by proposing sanity tests that determine when a DL system is right for the wrong reasons. Having established the appropriate way to assess DL systems, we then turn to improving their efficacy and generalizability by leveraging prior information about human physiology and data derived from dual-energy computed tomography scans. In this dissertation, we address these gaps in the radiology literature by introducing new tools, testing strategies, and methods to mitigate the influence of dataset biases.
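One form a "right for the wrong reasons" sanity test can take is blanking the diagnostically relevant region and checking whether accuracy drops. The dissertation's actual tests are not specified in this abstract, so everything below, including the shortcut "model", is an illustrative toy:

```python
import numpy as np

def sanity_check(model, images, labels, mask):
    """If accuracy barely drops when the diagnostically relevant region is
    blanked out, the model is likely right for the wrong reasons."""
    def acc(ims):
        return float(np.mean([model(im) == y for im, y in zip(ims, labels)]))
    blanked = [im * mask for im in images]
    return acc(images), acc(blanked)

# Toy shortcut learner: predicts from a corner "scanner tag", not the lesion.
shortcut_model = lambda im: int(im[0, 0] > 0.5)

rng = np.random.default_rng(0)
images, labels = [], []
for y in (0, 1) * 5:
    im = rng.uniform(0.0, 0.4, (8, 8))
    im[0, 0] = 0.9 if y else 0.1       # spurious tag correlated with the label
    images.append(im)
    labels.append(y)

mask = np.ones((8, 8))
mask[4:, 4:] = 0.0                     # blank only the "lesion" quadrant
full_acc, blanked_acc = sanity_check(shortcut_model, images, labels, mask)
print(full_acc, blanked_acc)           # accuracy survives blanking: shortcut exposed
```

A model that genuinely reads the lesion would lose accuracy on the blanked images, so the pair of numbers separates the two failure modes.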

    Novel Computer-Aided Diagnosis Schemes for Radiological Image Analysis

    The computer-aided diagnosis (CAD) scheme is a powerful tool in assisting clinicians (e.g., radiologists) to interpret medical images more accurately and efficiently. In developing high-performing CAD schemes, classic machine learning (ML) and deep learning (DL) algorithms play an essential role because of their advantages in capturing meaningful patterns that are important for disease (e.g., cancer) diagnosis and prognosis from complex datasets. This dissertation, organized into four studies, investigates the feasibility of developing several novel ML-based and DL-based CAD schemes for different cancer research purposes. The first study aims to develop and test a unique radiomics-based CT image marker that can be used to detect lymph node (LN) metastasis for cervical cancer patients. A total of 1,763 radiomics features were first computed from the segmented primary cervical tumor depicted on one CT image with the maximal tumor region. Next, a principal component analysis algorithm was applied on the initial feature pool to determine an optimal feature cluster. Then, based on this optimal cluster, machine learning models (e.g., support vector machine (SVM)) were trained and optimized to generate an image marker to detect LN metastasis. The SVM based imaging marker achieved an AUC (area under the ROC curve) value of 0.841 ± 0.035. This study initially verifies the feasibility of combining CT images and the radiomics technology to develop a low-cost image marker for LN metastasis detection among cervical cancer patients. In the second study, the purpose is to develop and evaluate a unique global mammographic image feature analysis scheme to identify case malignancy for breast cancer. From the entire breast area depicted on the mammograms, 59 features were initially computed to characterize the breast tissue properties in both the spatial and frequency domain. 
Given that each case consists of two cranio-caudal and two medio-lateral oblique view images of left and right breasts, two feature pools were built, which contain the computed features from either two positive images of one breast or all the four images of two breasts. For each feature pool, a particle swarm optimization (PSO) method was applied to determine the optimal feature cluster followed by training an SVM classifier to generate a final score for predicting likelihood of the case being malignant. The classification performances measured by AUC were 0.79±0.07 and 0.75±0.08 when applying the SVM classifiers trained using image features computed from two-view and four-view images, respectively. This study demonstrates the potential of developing a global mammographic image feature analysis-based scheme to predict case malignancy without including an arduous segmentation of breast lesions. In the third study, given that the performance of DL-based models in the medical imaging field is generally bottlenecked by a lack of sufficient labeled images, we specifically investigate the effectiveness of applying the latest transferring generative adversarial networks (GAN) technology to augment limited data for performance boost in the task of breast mass classification. This transferring GAN model was first pre-trained on a dataset of 25,000 mammogram patches (without labels). Then its generator and the discriminator were fine-tuned on a much smaller dataset containing 1024 labeled breast mass images. A supervised loss was integrated with the discriminator, such that it can be used to directly classify the benign/malignant masses. Our proposed approach improved the classification accuracy by 6.002%, when compared with the classifiers trained without traditional data augmentation. This investigation may provide a new perspective for researchers to effectively train the GAN models on a medical imaging task with only limited datasets. 
Like the third study, our last study also aims to alleviate DL models' reliance on large amounts of annotations, but uses an entirely different approach. We propose employing a semi-supervised method, virtual adversarial training (VAT), to learn and leverage useful information underlying the unlabeled data for better classification of breast masses. Accordingly, our VAT-based models have two types of losses, namely supervised and virtual adversarial losses. The former acts as in supervised classification, while the latter works towards enhancing the model's robustness against virtual adversarial perturbation, thus improving model generalizability. A large CNN and a small CNN were used in this investigation, and both were trained with and without the adversarial loss. When the labeled ratios were 40% and 80%, the VAT-based CNNs delivered the highest classification accuracies of 0.740±0.015 and 0.760±0.015, respectively. The experimental results suggest that the VAT-based CAD scheme can effectively utilize meaningful knowledge from unlabeled data to better classify mammographic breast mass images. In summary, several innovative approaches have been investigated and evaluated in this dissertation to develop ML-based and DL-based CAD schemes for the diagnosis of cervical cancer and breast cancer. The promising results demonstrate the potential of these CAD schemes in assisting radiologists to achieve a more accurate interpretation of radiological images.
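The radiomics pipeline of the first study (feature pool, then PCA, then SVM) can be sketched up to the dimensionality-reduction step. The SVM stage is omitted, and all matrix sizes below are illustrative (the study itself computed 1,763 features per case):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project a radiomics feature matrix (cases x features) onto its
    top principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy stand-in for the radiomics pool: 10 cases x 50 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))
Z = pca_reduce(X, n_components=5)
print(Z.shape)   # compact feature cluster fed to the downstream classifier
```

Training the SVM on `Z` rather than on `X` is what turns the unwieldy feature pool into a usable image marker; the number of retained components is a tuning choice.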