111 research outputs found

    Computer-Aided Assessment of Tuberculosis with Radiological Imaging: From rule-based methods to Deep Learning

    Get PDF
    International Mention in the doctoral degree. Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis (Mtb.) that produces pulmonary damage. Its airborne nature facilitates the fast spread of the disease, which, according to the World Health Organization (WHO), caused 1.2 million deaths and 9.9 million new cases in 2021. Traditionally, TB has been considered a binary disease (latent/active) due to the limited specificity of conventional diagnostic tests. Such a simple model hinders the longitudinal assessment of pulmonary involvement needed for the development of novel drugs and for controlling the spread of the disease. Fortunately, X-ray Computed Tomography (CT) images capture specific manifestations of TB that are undetectable with regular diagnostic tests. In conventional workflows, expert radiologists inspect the CT images. However, this procedure cannot cope with the thousands of volumetric images from the different TB animal models and from humans that a suitable (pre-)clinical trial requires. Automation of the image analysis processes is therefore a must to quantify TB. It is also advisable to measure the uncertainty associated with this process and to model causal relationships between the specific mechanisms that characterize each animal model and its level of damage. Thus, in this thesis we introduce a set of novel methods based on state-of-the-art Artificial Intelligence (AI) and Computer Vision (CV). Initially, we present an algorithm for Pathological Lung Segmentation (PLS), traditionally considered a necessary step before biomarker extraction, employing an unsupervised rule-based model. This procedure allows robust segmentation in an Mtb. infection model (Dice Similarity Coefficient, DSC, 94% ± 4%; Hausdorff Distance, HD, 8.64 mm ± 7.36 mm) of damaged lungs with lesions attached to the parenchyma and affected by respiratory motion artefacts. Next, a Gaussian Mixture Model fitted with an Expectation-Maximization (EM) algorithm is employed to automatically quantify the Mtb. burden using biomarkers extracted from the segmented CT images. This approach achieves a strong correlation (R² ≈ 0.8) between our automatic method and manual extraction. Chapter 3 then introduces a model to automate the identification of TB lesions and the characterization of disease progression. To this aim, the method employs the Statistical Region Merging algorithm to detect lesions, which are subsequently characterized by texture features that feed a Random Forest (RF) estimator. The proposed procedure enables the selection of a simple but powerful model able to classify abnormal tissue. The latest works base their methodology on Deep Learning (DL). Chapter 4 extends the classification of TB lesions: we introduce a computational model to infer the TB manifestations present in each lung lobe of CT scans, employing the associated radiologist reports as ground truth instead of the classical manually delineated segmentation masks. The model adapts the three-dimensional V-Net architecture to a multitask classification context in which the loss function is weighted by homoscedastic uncertainty. In addition, the method employs Self-Normalizing Neural Networks (SNNs) for regularization.
Our results are promising, with a Root Mean Square Error of 1.14 in the number of nodules and F1-scores above 0.85 for the most prevalent TB lesions (i.e., conglomerations, cavitations, consolidations, tree-in-bud) when considering the whole lung. In Chapter 5, we present a DL model capable of extracting disentangled information from images of different animal models, as well as information about the mechanisms that generate the CT volumes. The method provides segmentation masks of axial slices from three animal models of different species using a single trained architecture. It also infers the level of TB damage and generates counterfactual images, thereby offering an alternative that promotes generalization and explainable AI models. To sum up, the thesis presents a collection of valuable tools to automate the quantification of pathological lungs and, moreover, extends the methodology to provide more explainable results, which are vital for drug development. Chapter 6 elaborates on these conclusions.
Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Committee: Chair: María Jesús Ledesma Carbayo; Secretary: David Expósito Singh; Member: Clarisa Sánchez Gutiérre
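The homoscedastic-uncertainty weighting used in the multitask V-Net described in Chapter 4 can be illustrated with a minimal PyTorch sketch; the class name, number of tasks, and usage below are assumptions for illustration, not the thesis implementation:

```python
import torch
import torch.nn as nn

class HomoscedasticMultitaskLoss(nn.Module):
    """Weights several task losses by learned homoscedastic uncertainty
    (in the spirit of Kendall et al.). Hypothetical sketch: the per-task
    losses fed to forward() are placeholders."""

    def __init__(self, n_tasks: int):
        super().__init__()
        # One learnable log-variance per task, optimized with the network.
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, task_losses):
        total = 0.0
        for loss, log_var in zip(task_losses, self.log_vars):
            precision = torch.exp(-log_var)           # 1 / sigma^2
            total = total + precision * loss + log_var
        return total

# Usage sketch: combine, e.g., a nodule-count regression loss and a
# per-lobe lesion classification loss into one training objective.
criterion = HomoscedasticMultitaskLoss(n_tasks=2)
```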

    Computational methods for the analysis of functional 4D-CT chest images.

    Get PDF
    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, far more than radiologists and physicians can exploit manually. Therefore, the design of computer-aided diagnostic (CAD) systems, which can serve as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules, a manifestation of the disease. These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissue may be affected and suffer a decrease in functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases, used to estimate elasticity, ventilation, and texture features, provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies will lead to novel indexes for distinguishing normal/healthy and injured lung tissue in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for potential lung injuries stemming from radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to ensure the co-alignment of intra-patient scans. This crucial step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters so that the functionality features of the lung fields can be extracted accurately.
The developed registration framework also supports the evaluation and gated control of radiotherapy through motion estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced, combining the two previous image processing and analysis steps with feature estimation and classification. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that accurately captures the texture of healthy and injured lung tissue by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are computed from the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissue using the Jacobian of the deformation field, and the tissue elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect injured parts of the lung at an early stage and enable earlier intervention.
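The ventilation and elasticity descriptors mentioned above can be sketched from a dense displacement field as follows; the function name, array layout, and spacing handling are assumptions for illustration, not the dissertation's code:

```python
import numpy as np

def ventilation_and_strain(disp, spacing=(1.0, 1.0, 1.0)):
    """Derive functionality descriptors from a dense displacement field.

    disp    : displacement field u with shape (3, Z, Y, X), in mm
    spacing : voxel spacing (dz, dy, dx) in mm

    Returns the voxel-wise Jacobian determinant of the deformation
    (a surrogate for ventilation / local volume change) and the
    infinitesimal strain tensor (a surrogate for tissue elasticity).
    """
    # Gradient of each displacement component: grads[i, j] = d u_i / d x_j
    grads = np.stack(
        [np.stack(np.gradient(disp[i], *spacing), axis=0) for i in range(3)],
        axis=0,
    )                                                   # (3, 3, Z, Y, X)
    identity = np.eye(3).reshape(3, 3, 1, 1, 1)
    F = identity + grads                                # deformation gradient F = I + grad(u)
    # Jacobian determinant per voxel: det(F) > 1 means local expansion
    jac = np.linalg.det(np.moveaxis(F, (0, 1), (-2, -1)))
    # Small-deformation strain tensor E = 0.5 * (grad(u) + grad(u)^T)
    strain = 0.5 * (grads + np.swapaxes(grads, 0, 1))
    return jac, strain
```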

    Automatic 3D extraction of pleural plaques and diffuse pleural thickening from lung MDCT images

    Full text link
    Pleural plaques (PPs) and diffuse pleural thickening (DPT) are very common asbestos-related pleural diseases (ARPD). They are currently identified non-invasively using medical imaging techniques. A fully automatic algorithm for 3D detection of calcified pleura in the diaphragmatic area and thickened pleura on the costal surfaces from multi-detector computed tomography (MDCT) images has been developed and tested. The algorithm for detecting diaphragmatic pleura includes estimating the diaphragm top surface in 3D and identifying as calcified pleura those voxels that lie within a certain vertical distance of the estimated diaphragm and have intensities close to that of bone. The algorithm for detecting thickened pleura on the costal surfaces includes: estimating the pleural costal surface in 3D; estimating the centrelines of the ribs and costal cartilages and the surfaces they lie on; calculating the mean distance between the two surfaces; and identifying as thickened pleura any space between the two surfaces whose distance exceeds the mean. The accuracy and performance of the proposed algorithm were tested on 20 MDCT datasets from patients diagnosed with existing PPs and/or DPT, and the results were compared against the ground truth provided by an experienced radiologist. Several metrics were employed, and the evaluations indicate high performance for both calcified pleura detection in the diaphragmatic area and thickened pleura detection on the costal surfaces. This work has made significant contributions to both medical image analysis and medicine. For the first time in medical image analysis, the approach uses other stable organs, such as the ribs and costal cartilage, besides the lungs themselves for referencing and landmarking in 3D. It also estimates the fat thickness between the rib surface and the pleura (which is usually very thin) and excludes it from the detected areas when identifying thickened pleura. It also distinguishes calcified pleura attached to the rib(s), separates them in 3D, and detects calcified pleura on the lung diaphragmatic surfaces. The key contribution to medicine is effective detection of pleural thickening of any size and recognition of any changes, however small. This could have a significant impact on managing patient risks.
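A toy sketch of the diaphragmatic step described above (flag voxels that lie within a vertical band of the estimated diaphragm surface and have bone-like intensity); the Hounsfield threshold, band width, and array layout are assumptions, not the authors' values:

```python
import numpy as np

def calcified_pleura_mask(ct_hu, diaphragm_z, max_offset_mm, voxel_dz_mm,
                          bone_hu_min=200):
    """Flag voxels within a vertical band around the estimated diaphragm
    top surface whose intensity is close to that of bone.

    ct_hu       : CT volume in Hounsfield units, shape (Z, Y, X)
    diaphragm_z : estimated diaphragm surface as a z-index per (Y, X) column
    bone_hu_min : assumed bone-like intensity threshold (HU)
    """
    zs = np.arange(ct_hu.shape[0])[:, None, None]                      # (Z, 1, 1)
    within_band = np.abs(zs - diaphragm_z[None, :, :]) * voxel_dz_mm <= max_offset_mm
    return within_band & (ct_hu >= bone_hu_min)                        # boolean mask
```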

    Evaluation of Six Registration Methods for the Human Abdomen on Clinically Acquired CT

    Get PDF
    Objective: This work evaluates current 3-D image registration tools on clinically acquired abdominal computed tomography (CT) scans. Methods: Thirteen abdominal organs were manually labeled on a set of 100 CT images, and the 100 labeled images (i.e., atlases) were pairwise registered based on intensity information with six registration tools (FSL, ANTS-CC, ANTS-QUICK-MI, IRTK, NIFTYREG, and DEEDS). The Dice similarity coefficient (DSC), mean surface distance, and Hausdorff distance were calculated for the registered organs individually. Permutation tests and indifference-zone ranking were performed to examine the statistical and practical significance, respectively. Results: The results suggest that DEEDS yielded the best registration performance. However, due to the overall low DSC values and the substantial portion of low-performing outliers, great care must be taken when image registration is used for local interpretation of abdominal CT. Conclusion: There is substantial room for improvement in image registration for abdominal CT. Significance: All data and source code are available so that innovations in registration can be directly compared with the current generation of tools without excessive duplication of effort.
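Two of the reported metrics, the Dice similarity coefficient and the Hausdorff distance, can be computed for binary organ masks with a short sketch like the one below; the function names and the distance-transform approach are illustrative assumptions, not the study's evaluation tooling:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance (in mm) between two binary masks,
    computed via Euclidean distance transforms of each mask."""
    a, b = a.astype(bool), b.astype(bool)
    dist_to_a = distance_transform_edt(~a, sampling=spacing)   # distance to nearest voxel of a
    dist_to_b = distance_transform_edt(~b, sampling=spacing)   # distance to nearest voxel of b
    return max(dist_to_b[a].max(), dist_to_a[b].max())
```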

    A non-invasive diagnostic system for early assessment of acute renal transplant rejection.

    Get PDF
    Early diagnosis of acute renal transplant rejection (ARTR) is of immense importance for administering appropriate therapeutic treatment. Although the current diagnostic technique is based on renal biopsy, it is not preferred due to its invasiveness, recovery time (1-2 weeks), and potential for complications, e.g., bleeding and/or infection. In this thesis, a computer-aided diagnostic (CAD) system for early detection of ARTR from 4D (3D + b-value) diffusion-weighted (DW) MRI data is developed. The CAD process starts with a 3D B-spline-based data alignment (to handle local deviations due to breathing and heartbeat) and kidney tissue segmentation with an evolving geometric (level-set-based) deformable model. The latter is guided by a voxel-wise stochastic speed function, which follows from a joint kidney-background Markov-Gibbs random field model accounting for an adaptive kidney shape prior and for ongoing visual kidney-background appearances. A cumulative empirical distribution of the apparent diffusion coefficient (ADC) at different b-values of the segmented DW-MRI is used as a discriminatory transplant-status feature. Finally, a classifier based on deep learning of a non-negative constrained stacked auto-encoder is employed to distinguish between rejected and non-rejected renal transplants. In leave-one-subject-out experiments on 53 subjects, 98% of the subjects were correctly classified (namely, 36 out of 37 rejected transplants and 16 out of 16 non-rejected ones). Additionally, a four-fold cross-validation experiment was performed, and an average accuracy of 96% was obtained. These experimental results hold promise for the proposed CAD system as a reliable non-invasive diagnostic tool.
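The ADC-based feature can be illustrated with a minimal sketch, assuming the standard mono-exponential diffusion model and a simple empirical distribution over the segmented kidney; variable names and the bin count are placeholders, not the thesis implementation:

```python
import numpy as np

def adc_map(s0, sb, b_value):
    """Voxel-wise apparent diffusion coefficient from the mono-exponential
    model S_b = S_0 * exp(-b * ADC), i.e. ADC = -ln(S_b / S_0) / b."""
    eps = 1e-6  # avoid division by zero / log of zero
    return -np.log(np.clip(sb, eps, None) / np.clip(s0, eps, None)) / b_value

def cumulative_adc_distribution(adc_values, bins=100):
    """Empirical cumulative distribution of ADC values inside the
    segmented kidney, usable as a transplant-status feature vector."""
    hist, edges = np.histogram(np.ravel(adc_values), bins=bins)
    return np.cumsum(hist) / hist.sum(), edges
```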

    Shape analysis for assessment of progression in spinal deformities

    Get PDF
    Adolescent idiopathic scoliosis (AIS) is a three-dimensional structural spinal deformation and the most common type of scoliosis. It can be visually detected as a lateral curvature in the postero-anterior plane. The condition starts in early puberty and affects 1-4% of the adolescent population between 10 and 18 years old, the majority of them female. In severe cases (0.1% of the population with AIS), the patient requires surgical treatment. To date, the diagnosis of AIS relies on the quantification of the major curvature observed on postero-anterior and sagittal radiographs; standing radiographs are the common imaging modality used in clinical settings to diagnose AIS. The assessment of the deformation is carried out using the Cobb angle method. This angle is calculated in the postero-anterior plane and is formed between a line drawn parallel to the superior endplate of the upper vertebra included in the scoliotic curve and a line drawn parallel to the inferior endplate of the lower vertebra of the same curve. Patients presenting a Cobb angle of more than 10° are diagnosed with AIS. The gold standard for classifying curve deformations is the Lenke classification method, a paradigm widely accepted in the clinical community. It divides spines with scoliosis into six types and provides treatment recommendations depending on the type. This method is limited to the analysis of the spine in 2D space, since it relies on the observation of radiographs and Cobb angle measurements. On the one hand, when clinicians are treating patients with AIS, one of the main concerns is to determine whether the deformation will progress over time. Knowing beforehand how the shape of the spine is going to evolve would help guide treatment strategies. On the other hand, patients at higher risk of progression need to be monitored more frequently, which results in repeated exposure to radiation. Therefore, there is a need for an alternative radiation-free technology to reduce the use of radiographs and alleviate the health risks derived from current imaging modalities. This thesis presents a framework designed to characterize and model the variation of the shape of the spine throughout AIS. The framework includes three contributions: 1) two measurement techniques for computing 3D descriptors of the spine, and a classification method to categorize spine deformations; 2) a method to simulate the variation of the shape of the spine through time; and 3) a protocol to generate a 3D model of the spine from a volume reconstruction produced from ultrasound images. In our first contribution, we introduced two measurement techniques to characterize the shape of the spine in 3D space: leave-n-out and fan leave-n-out angles. In addition, a dynamic ensemble method was presented as an automated alternative for classifying spinal deformations. Our measurement techniques were designed to compute the 3D descriptors and to be easy to use in a clinical setting. The classification method also assists clinicians in identifying patient-specific descriptors, which could help improve the classification of borderline curve deformations and, hence, suggest proper management strategies. In order to observe how the shape of the spine progresses through time, in our second contribution we designed a method to visualize the shape's variation at three-month intervals, from the first visit up to 18 months.
Our method is trained with modes of variation computed using independent component analysis from 3D model reconstructions of the spines of patients with AIS. Each of the modes of variation can be visualized for interpretation. This contribution could aid clinicians in identifying which spine shape patterns might be prone to progression. Finally, our third contribution addresses the need for a radiation-free imaging modality for assessing and monitoring patients with AIS. We proposed a protocol to model a spine by identifying the spinous processes on a volume reconstruction computed from ultrasound images acquired from the external geometry of the subject. Our acquisition protocol documents a setup for image acquisition, as well as some recommendations to take into account depending on the body composition of the subjects to be scanned. We believe that this protocol could contribute to reducing the use of radiographs during the assessment and monitoring of patients with AIS.
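The Cobb angle definition given above reduces to the angle between the two endplate lines in the postero-anterior plane; a minimal geometric sketch follows (landmark extraction is not shown, and the example vectors are hypothetical):

```python
import numpy as np

def cobb_angle_deg(upper_endplate_vec, lower_endplate_vec):
    """Cobb angle between the line parallel to the superior endplate of the
    upper end vertebra and the line parallel to the inferior endplate of the
    lower end vertebra, both in the postero-anterior plane."""
    u = np.asarray(upper_endplate_vec, dtype=float)
    v = np.asarray(lower_endplate_vec, dtype=float)
    cos_angle = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Hypothetical endplate directions; a curve above 10° meets the AIS threshold.
print(cobb_angle_deg((1.0, 0.15), (1.0, -0.25)))  # ≈ 22.6°
```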

    Data and knowledge engineering for medical image and sensor data

    Get PDF

    A 3D computer assisted Orthopedic Surgery Planning approach based on planar radiography

    Get PDF
    Integrated Master's dissertation in Biomedical Engineering (specialization in Medical Informatics). The main goal of this work was to develop a system to perform 3D reconstruction of bone models from radiographic images. This system can then be integrated with commercial software that performs pre-operative planning of orthopedic surgeries. The benefit of performing this 3D reconstruction from planar radiography is that this modality has some advantages over modalities that provide the reconstruction directly, such as CT and MRI. To develop the system, radiographic images of the femur obtained from online medical image databases were used, together with a generic model of the femur available in the online repository BEL. This generic model completes the information missing in the radiographic images. Two methods were developed to perform the 3D reconstruction through deformation of the generic model: one uses triangulation of extracted edge points and the other does not. The first method was not successful; the final model had very low thickness, possibly because the triangulation process was not performed correctly. With the second method, a 3D bone model of the femur was obtained, aligned with the radiographic images of the patient and with the same size as the patient's bone. However, the obtained model still needs some adjustment to coincide fully with reality; to achieve this, the deformation step must be enhanced so that the model takes on the same shape as the patient's bone. The second method is more advantageous because it does not need the parameters of the X-ray imaging system, but its deformation step needs to be improved so that the final model matches the patient's anatomy.
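The step of aligning and scaling the generic femur model to the patient's radiographic contours could, for example, rely on a least-squares similarity transform between corresponding landmark points; the following Umeyama-style sketch is an assumption for illustration, not the method implemented in the dissertation:

```python
import numpy as np

def similarity_align(source_pts, target_pts):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping source landmark points onto target points (Umeyama method).
    Points are given as (n, d) arrays of corresponding landmarks."""
    src = np.asarray(source_pts, dtype=float)
    tgt = np.asarray(target_pts, dtype=float)
    mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
    src_c, tgt_c = src - mu_s, tgt - mu_t
    cov = tgt_c.T @ src_c / len(src)             # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    sgn = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [sgn])
    R = U @ D @ Vt                               # optimal rotation
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_t - scale * R @ mu_s
    return scale, R, t                           # apply as: scale * R @ x + t
```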

    Machine Learning in Tribology

    Get PDF
    Tribology has been and continues to be one of the most relevant fields, being present in almost all aspects of our lives. The understanding of tribology provides us with solutions for future technical challenges. At the root of all advances made so far are multitudes of precise experiments and an increasing number of advanced computer simulations across different scales and multiple physical disciplines. Based upon this sound and data-rich foundation, advanced data handling, analysis and learning methods can be developed and employed to expand existing knowledge. Therefore, modern machine learning (ML) or artificial intelligence (AI) methods provide opportunities to explore the complex processes in tribological systems and to classify or quantify their behavior in an efficient or even real-time way. Thus, their potential also goes beyond purely academic aspects into actual industrial applications. To help pave the way, this article collection aimed to present the latest research on ML or AI approaches for solving tribology-related issues, generating true added value beyond just buzzwords. In this sense, this Special Issue can support researchers in identifying initial selections and best-practice solutions for ML in tribology.