
    Computer-Aided Assessment of Tuberculosis with Radiological Imaging: From rule-based methods to Deep Learning

    Mención Internacional en el título de doctor (International Mention in the doctoral degree).

    Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis (Mtb.) that produces pulmonary damage due to its airborne nature. This facilitates the rapid spread of the disease, which, according to the World Health Organization (WHO), caused 1.2 million deaths and 9.9 million new cases in 2021. Traditionally, TB has been considered a binary disease (latent/active) because of the limited specificity of conventional diagnostic tests. Such a simple model hampers the longitudinal assessment of pulmonary involvement needed to develop novel drugs and to control the spread of the disease. Fortunately, X-ray Computed Tomography (CT) captures specific manifestations of TB that are undetectable with those tests. In conventional workflows, expert radiologists inspect the CT images; however, this procedure cannot scale to the thousands of volumetric images from the different TB animal models and humans required for a suitable (pre-)clinical trial. Automating the different image analysis processes is therefore essential to quantify TB, and it is also advisable to measure the uncertainty associated with this process and to model causal relationships between the specific mechanisms that characterize each animal model and its level of damage. Thus, in this thesis we introduce a set of novel methods based on state-of-the-art Artificial Intelligence (AI) and Computer Vision (CV). First, we present an algorithm for Pathological Lung Segmentation (PLS) employing an unsupervised rule-based model, traditionally considered a necessary step before biomarker extraction. The procedure achieves robust segmentation in an Mtb. infection model (Dice Similarity Coefficient, DSC, 94% ± 4%; Hausdorff Distance, HD, 8.64 mm ± 7.36 mm) of damaged lungs with lesions attached to the parenchyma and affected by respiratory motion artefacts. Next, a Gaussian Mixture Model fitted with the Expectation-Maximization (EM) algorithm is employed to automatically quantify the Mtb. burden using biomarkers extracted from the segmented CT images; this approach achieves a strong correlation (R² ≈ 0.8) between our automatic method and manual extraction. Chapter 3 then introduces a model to automate the identification of TB lesions and the characterization of disease progression. To this end, the method employs the Statistical Region Merging algorithm to detect lesions, which are subsequently characterized by texture features that feed a Random Forest (RF) estimator. The proposed procedure yields a simple but powerful model able to classify abnormal tissue. The latest works base their methodology on Deep Learning (DL). Chapter 4 extends the classification of TB lesions: we introduce a computational model that infers the TB manifestations present in each lung lobe of CT scans by employing the associated radiologist reports as ground truth, instead of the classical manually delineated segmentation masks. The model adapts the three-dimensional V-Net architecture to a multitask classification setting in which the loss function is weighted by homoscedastic uncertainty, and employs Self-Normalizing Neural Networks (SNNs) for regularization.
Our results are promising, with a Root Mean Square Error of 1.14 for the number of nodules and F1-scores above 0.85 for the most prevalent TB lesions (i.e., conglomerations, cavitations, consolidations, tree-in-bud) when considering the whole lung. In Chapter 5, we present a DL model capable of extracting disentangled information from images of different animal models, as well as information about the mechanisms that generate the CT volumes. The method provides segmentation masks for axial slices from three animal models of different species using a single trained architecture; it also infers the level of TB damage and generates counterfactual images. With this methodology, we offer an alternative that promotes generalization and explainable AI models. To sum up, the thesis presents a collection of valuable tools to automate the quantification of pathological lungs and extends the methodology to provide more explainable results, which are vital for drug development purposes. Chapter 6 elaborates on these conclusions.

    Programa de Doctorado en Multimedia y Comunicaciones por la Universidad Carlos III de Madrid y la Universidad Rey Juan Carlos. Thesis committee: Chair, María Jesús Ledesma Carbayo; Secretary, David Expósito Singh; Member, Clarisa Sánchez Gutiérrez.
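    To make the uncertainty-weighted multitask loss mentioned in Chapter 4 concrete, below is a minimal PyTorch sketch of a homoscedastic-uncertainty weighting scheme in the spirit of Kendall et al.; the task names in the usage comment are illustrative assumptions, not the thesis' actual code.

```python
# Minimal sketch: multitask loss weighted by homoscedastic uncertainty via
# learnable per-task log-variances s_i, total = sum_i exp(-s_i) * L_i + s_i.
# Illustrative only; not the thesis' implementation.
import torch
import torch.nn as nn

class HomoscedasticMultitaskLoss(nn.Module):
    def __init__(self, num_tasks: int):
        super().__init__()
        # One learnable log-variance per task, initialised to zero.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = torch.zeros((), device=self.log_vars.device)
        for loss, log_var in zip(task_losses, self.log_vars):
            # Tasks with high estimated uncertainty are down-weighted, while
            # the +log_var term discourages inflating the variance.
            total = total + torch.exp(-log_var) * loss + log_var
        return total

# Usage sketch (hypothetical task losses): a regression loss for the nodule
# count and a classification loss for one lesion type.
# criterion = HomoscedasticMultitaskLoss(num_tasks=2)
# total_loss = criterion([mse_nodule_count, bce_cavitation])
```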

    Machine Learning/Deep Learning in Medical Image Processing

    Many recent studies on medical image processing have involved the use of machine learning (ML) and deep learning (DL). This special issue, “Machine Learning/Deep Learning in Medical Image Processing”, was launched to provide an opportunity for researchers in the area of medical image processing to highlight recent developments in their fields made with ML/DL. Seven excellent papers covering a wide variety of medical and clinical aspects have been selected for this special issue.

    Investigating the role of machine learning and deep learning techniques in medical image segmentation

    This work originates from the growing interest of the medical imaging community in applying machine learning and deep learning techniques to improve the accuracy of cancer screening. The thesis is structured into two tasks. In the first part, magnetic resonance images were analysed to support clinical experts in the treatment of patients with brain tumour metastases (BM). The main topic of this study was to investigate whether BM segmentation can be approached successfully by two supervised ML classifiers, one feature-based and one based on deep learning. An SVM and the V-Net Convolutional Neural Network model were selected from the literature as representative of the two approaches. The second task of the thesis covers the development of a deep learning study aimed at processing and classifying lesions in mammograms with the use of slender neural networks. Mammography has a central role in the screening and diagnosis of breast lesions. Deep Convolutional Neural Networks have shown great potential to address early detection of breast cancer with an acceptable level of accuracy and reproducibility. A traditional convolutional network was compared with a novel one obtained by using much more efficient depthwise separable convolution layers. As a final goal, to integrate the developed systems into clinical practice, all the Medical Imaging and Pattern Recognition algorithmic solutions for both fields studied were integrated into a MATLAB® software package.

    Doctoral programme: Informatica e matematica del calcolo. Author: Gonella, Gloria.
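    To make the comparison above concrete, here is a minimal sketch contrasting a standard convolution with a depthwise separable one (a per-channel depthwise convolution followed by a 1x1 pointwise convolution); it is written in PyTorch purely for illustration, whereas the thesis itself reports a MATLAB® implementation.

```python
# Minimal sketch: standard vs depthwise separable convolution. Illustrative
# only; the thesis' actual networks were implemented in MATLAB.
import torch
import torch.nn as nn

standard = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=1)

depthwise_separable = nn.Sequential(
    # Depthwise: one 3x3 filter per input channel (groups = in_channels).
    nn.Conv2d(32, 32, kernel_size=3, padding=1, groups=32),
    # Pointwise: 1x1 convolution mixes channels up to the desired depth.
    nn.Conv2d(32, 64, kernel_size=1),
)

x = torch.randn(1, 32, 128, 128)
assert standard(x).shape == depthwise_separable(x).shape

def n_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

# Same output shape, far fewer parameters for the separable block.
print(n_params(standard), n_params(depthwise_separable))  # 18496 vs 2432
```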

    ResBCDU-Net: A Deep Learning Framework for Lung CT Image Segmentation

    Lung CT image segmentation is a key process in many applications such as lung cancer detection. It is considered challenging because of the similar image densities of the pulmonary structures and the variety of scanner types and scanning protocols. Most current semi-automatic segmentation methods rely on human input and may therefore suffer from a lack of accuracy; another shortcoming of these methods is their high false-positive rate. In recent years, several approaches based on deep learning frameworks have been applied effectively to medical image segmentation, and among existing deep neural networks the U-Net has been particularly successful in this field. In this paper, we propose a deep neural network architecture for automatic lung CT image segmentation. In the proposed method, several extensive preprocessing techniques are applied to the raw CT images. Then, the ground truths corresponding to these images are extracted via morphological operations and manual refinement. Finally, all the prepared images with their corresponding ground truths are fed into a modified U-Net in which the encoder is replaced with a pre-trained ResNet-34 network (referred to as ResBCDU-Net). In this architecture, we employ BConvLSTM (Bidirectional Convolutional Long Short-Term Memory) as an advanced integration module instead of simple concatenation, merging the feature maps extracted in the corresponding contracting path with the output of the previous up-convolutional layer in the expanding path. Additionally, a densely connected convolutional layer is utilized in the contracting path. The results of our extensive experiments on lung CT images (LIDC-IDRI database) confirm the effectiveness of the proposed method, which achieves a Dice coefficient of 97.31%.
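    For reference, the Dice coefficient used as the evaluation metric above can be computed from two binary masks as in the generic sketch below; this is simply the standard definition of the metric, not code from the paper.

```python
# Generic sketch of the Dice Similarity Coefficient between two binary masks:
# dice = 2 * |A ∩ B| / (|A| + |B|). Not code from the paper.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Example: a mask compared with itself gives a perfect score of 1.0.
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
print(round(dice_coefficient(mask, mask), 4))  # 1.0
```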

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring, as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too large to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA, with approximately 224,210 new cases and 159,260 related deaths in 2014. The process begins with the detection of lung cancer through the diagnosis of lung nodules, a manifestation of the disease. These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these nodules is complex: nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, will have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissue may suffer a decrease in functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases, used to estimate elasticity, ventilation, and texture features, provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissue in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed with three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for potential lung injuries stemming from radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters so that the functionality features of the lung fields can be extracted accurately.
The developed registration framework also supports the evaluation and gated control of radiotherapy through motion estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced, combining the previous two medical image processing and analysis steps with a feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that accurately captures the texture of healthy and injured lung tissue by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are computed from the deformation fields obtained by the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissue using the Jacobian of the deformation field, and the tissue elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect injured parts of the lung at an early stage, enabling earlier intervention.
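    As a rough illustration of the ventilation measure described above, the sketch below estimates the voxel-wise Jacobian determinant of a deformation field with NumPy finite differences; the displacement-field layout and unit voxel spacing are assumptions made for illustration, not the dissertation's implementation.

```python
# Rough sketch: voxel-wise Jacobian determinant of a 3D deformation field,
# a common surrogate for local lung ventilation (det J > 1: expansion,
# det J < 1: compression). Assumes a displacement field `u` of shape
# (3, Z, Y, X) in voxel units; not the dissertation's implementation.
import numpy as np

def jacobian_determinant(u: np.ndarray) -> np.ndarray:
    # Spatial gradients of each displacement component: grads[i][j] = d u_i / d x_j.
    grads = [np.gradient(u[i]) for i in range(3)]
    # Jacobian of the transform phi(x) = x + u(x): J = I + grad(u).
    J = np.empty(u.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# Example: a zero displacement field gives det J = 1 everywhere.
u = np.zeros((3, 8, 8, 8))
print(np.allclose(jacobian_determinant(u), 1.0))  # True
```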

    Advanced Computational Methods for Oncological Image Analysis

    Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this goal, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence), together with clinicians' unique knowledge, can be used to properly handle typical issues in evaluation and quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve the repeatability of results in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased in order to effectively perform image processing operations, such as segmentation, co-registration, classification, and dimensionality reduction, as well as multi-omics data integration.

    Automated Grading of Bladder Cancer using Deep Learning

    PhD thesis in Information Technology.

    Urothelial carcinoma is the most common type of bladder cancer and is among the cancer types with the highest recurrence rate and lifetime treatment cost per patient. Diagnosed patients are stratified into risk groups, mainly based on the histological grade and stage. However, it is well known that correct grading of bladder cancer suffers from intra- and interobserver variability and inconsistent reproducibility between pathologists, potentially leading to under- or overtreatment of the patients. The economic burden, unnecessary patient suffering, and additional load on the health care system illustrate the importance of developing new tools to aid pathologists. With the introduction of digital pathology, large amounts of data have been made available in the form of digital histological whole-slide images (WSIs). However, despite the massive amount of data, annotations for the given data are lacking. Another potential problem is that the tissue samples of urothelial carcinoma contain a mixture of damaged tissue, blood, stroma, muscle, and urothelium, where it is mainly the urothelium tissue that is diagnostically relevant for grading. A method for tissue segmentation is investigated, where the aim is to segment WSIs into six tissue classes: urothelium, stroma, muscle, damaged tissue, blood, and background. Several methods based on convolutional neural networks (CNNs) for tile-wise classification are proposed. Both single-scale and multiscale models were explored to see if including more magnification levels would improve performance. Different techniques, such as unsupervised learning, semi-supervised learning, and domain adaptation, are explored to mitigate the challenge of missing large quantities of annotated data. It is necessary to extract tiles from the WSI since it is intractable to process the entire WSI at full resolution at once. We have proposed a method to parameterize and automate the task of extracting tiles at different scales given a region of interest (ROI) defined at one of the scales; the method is reproducible and easy to describe by reporting the parameters. A pipeline for automated diagnostic grading, called TRIgrade, is proposed. First, the tissue segmentation method is utilized to find the diagnostically relevant urothelium tissue. Then, the parameterized tile extraction method is used to extract tiles from the urothelium regions at three magnification levels from 300 WSIs. The extracted tiles form the training, validation, and test data used to train and test a diagnostic model. The final system outputs a segmented tissue image showing all the tissue regions in the WSI, a WHO grade heatmap indicating low- and high-grade carcinoma regions, and finally a slide-level WHO grade prediction. The proposed TRIgrade pipeline correctly graded 45 of 50 WSIs, achieving an accuracy of 90%.
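    As a rough illustration of the tile-extraction step described above, here is a minimal sketch that reads fixed-size tiles from a WSI at a chosen pyramid level with the OpenSlide library; the file name, tile size, and level are illustrative assumptions, not the parameters of the TRIgrade pipeline.

```python
# Minimal sketch of tile extraction from a whole-slide image (WSI) with
# OpenSlide. The file name, tile size, and pyramid level are illustrative
# assumptions, not the parameters used in the proposed method.
import openslide

TILE = 256   # tile side length in pixels (illustrative)
LEVEL = 1    # pyramid level to read from (0 = highest magnification)

slide = openslide.OpenSlide("example_wsi.svs")
width, height = slide.level_dimensions[LEVEL]
scale = slide.level_downsamples[LEVEL]

tiles = []
for y in range(0, height - TILE, TILE):
    for x in range(0, width - TILE, TILE):
        # read_region expects level-0 coordinates for the top-left corner.
        region = slide.read_region((int(x * scale), int(y * scale)), LEVEL, (TILE, TILE))
        tiles.append(region.convert("RGB"))  # drop the alpha channel

print(f"extracted {len(tiles)} tiles of {TILE}x{TILE} px at level {LEVEL}")
```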

    Caracterización del Edema Macular Diabético mediante análisis automático de Tomografías de Coherencia Óptica (Characterization of Diabetic Macular Edema through automatic analysis of Optical Coherence Tomography scans)

    Programa Oficial de Doctorado en Computación (5009V01).

    Diabetic Macular Edema (DME) is one of the most important complications of diabetes and a leading cause of preventable blindness in developed countries. Among the different imaging modalities, Optical Coherence Tomography (OCT) is a non-invasive, cross-sectional and high-resolution imaging technique that is commonly used for the analysis and interpretation of many retinal structures and ocular disorders. Accordingly, the development of Computer-Aided Diagnosis (CAD) systems has become relevant over recent years, facilitating and simplifying the work of clinical specialists in many relevant diagnostic processes and replacing manual procedures that are tedious and highly time-consuming. This thesis proposes a complete methodology for the identification and characterization of DME using OCT images. To do so, the system combines and exploits different clinical knowledge with image processing and machine learning strategies. This automatic system is able to identify and characterize the main retinal structures and several pathological conditions that are associated with the DME disease, following the clinical classification of reference in the ophthalmological field. Despite the complexity and heterogeneity of this relevant ocular pathology, the proposed system achieved satisfactory results, proving to be robust enough to be used in daily clinical practice and helping clinicians to produce more accurate diagnoses and indicate adequate treatments.