
    Learning Algorithms for Fat Quantification and Tumor Characterization

    Obesity is one of the most prevalent health conditions. About 30% of the world's and over 70% of the United States' adult populations are either overweight or obese, which increases the risk of cardiovascular diseases, diabetes, and certain types of cancer. Among all cancers, lung cancer is the leading cause of death, whereas pancreatic cancer has the poorest prognosis among all major cancers. Early diagnosis of these cancers can save lives. This dissertation contributes towards the development of computer-aided diagnosis tools that aid clinicians in establishing the quantitative relationship between obesity and cancer. With respect to obesity and metabolism, the first part of the dissertation focuses on the segmentation and quantification of white and brown adipose tissue. For cancer diagnosis, we perform analysis on two important cases: lung cancer and Intraductal Papillary Mucinous Neoplasm (IPMN), a precursor to pancreatic cancer. This dissertation proposes an automatic body region detection method trained with only a single example. A new fat quantification approach is then proposed, based on geometric and appearance characteristics. For the segmentation of brown fat, a PET-guided CT co-segmentation method is presented. With different variants of Convolutional Neural Networks (CNN), supervised learning strategies are proposed for the automatic diagnosis of lung nodules and IPMN. To address the unavailability of the large number of labeled examples required for training, unsupervised learning approaches for cancer diagnosis without explicit labeling are proposed. We evaluate the proposed approaches (both supervised and unsupervised) on two different tumor diagnosis challenges: lung and pancreas, with 1018 CT and 171 MRI scans, respectively. The proposed segmentation, quantification, and diagnosis approaches explore the important adiposity-cancer association and help pave the way towards improved diagnostic decision making in routine clinical practice.
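    To make the quantification task concrete, the following minimal sketch estimates adipose tissue volume on CT by simple Hounsfield-unit thresholding. It is a generic baseline, not the geometric/appearance-based method proposed in the dissertation; the file name and the HU window (roughly -190 to -30 HU, a commonly used range for adipose tissue) are illustrative assumptions.

    # Generic HU-thresholding baseline for fat volumetry on CT (illustrative only;
    # not the dissertation's method). Requires nibabel and numpy.
    import nibabel as nib
    import numpy as np

    def adipose_volume_ml(ct_path, hu_low=-190, hu_high=-30):
        img = nib.load(ct_path)                      # CT volume in NIfTI format
        hu = img.get_fdata()                         # voxel intensities in HU
        fat_mask = (hu >= hu_low) & (hu <= hu_high)  # candidate adipose voxels
        dx, dy, dz = img.header.get_zooms()[:3]      # voxel spacing in mm
        voxel_ml = dx * dy * dz / 1000.0             # mm^3 per voxel -> millilitres
        return float(fat_mask.sum()) * voxel_ml

    print(adipose_volume_ml("abdominal_ct.nii.gz"))  # hypothetical file name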

    Alzheimer’s Disease Diagnosis Using Machine Learning: A Survey

    Alzheimer’s disease is a neurodegenerative disorder affecting the central nervous system and cognitive processes, impairing in particular the capacity for detailed mental analysis. As the condition progresses, the affected individual’s ability to process and analyze information gradually deteriorates, resulting in mental decline. In recent years, there has been a notable increase in efforts aimed at identifying Alzheimer’s disease and addressing its progression. Research studies have demonstrated the significant involvement of genetic factors, stress, and nutrition in the development of this condition. Computer-aided analysis models based on machine learning and artificial intelligence have the potential to significantly enhance the exploration of various neuroimaging methods and non-image biomarkers. This study conducts a comparative assessment of more than 80 publications published since 2017. Alzheimer’s disease detection is facilitated by fundamental machine learning architectures such as support vector machines, decision trees, and ensemble models. Furthermore, around 50 papers that utilized a specific architectural or design approach for Alzheimer’s disease were examined. The literature under consideration has been categorized and elucidated along data-related, methodology-related, and medical perspectives to illustrate the underlying challenges. The conclusion of our study discusses prospective avenues for further investigation and furnishes recommendations for future research on the diagnosis of Alzheimer’s disease.
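    As a concrete point of reference for the fundamental architectures surveyed (support vector machines, decision trees, and ensemble models), the sketch below compares them with scikit-learn on placeholder tabular features; real studies would substitute neuroimaging-derived or clinical biomarkers and a rigorous evaluation protocol.

    # Baseline comparison of the classical models named in the survey, run on
    # random placeholder data purely to show the pipeline shape.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))       # placeholder biomarker features
    y = rng.integers(0, 2, size=200)     # placeholder AD / control labels

    models = {
        "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        "decision_tree": DecisionTreeClassifier(max_depth=5),
        "random_forest": RandomForestClassifier(n_estimators=200),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: {acc:.3f}")      # near chance level on random data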

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies: the first two focus on introducing novel methods for tumor segmentation, and the last four aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into segmentation networks to improve segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II addresses a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach based on the concept of image inpainting is proposed to segment lung and head-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account the local intra-nodule heterogeneities and the global contextual information. Study IV compares the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep features-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features into radiomic features for boosting the classification power. Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically. This feature set was employed to quantify the changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of patients two years after the last session of treatment. The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine-learning techniques in the methods developed for the quantitative assessment of tumor characteristics, and contribute to the essential procedures of cancer diagnosis and prognosis.
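    The dual-pathway idea in Study III can be illustrated with a minimal PyTorch sketch: one branch ingests a small intra-nodule patch, a second branch a larger context patch, and their features are fused before a benign/malignant head. Layer widths, depths, and patch sizes below are assumptions made for illustration, not the thesis architecture.

    # Toy dual-pathway 3D CNN: local patch + global context -> fused classifier.
    import torch
    import torch.nn as nn

    def conv_branch(out_dim):
        return nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, out_dim), nn.ReLU(),
        )

    class DualPathwayNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.local_branch = conv_branch(64)    # intra-nodule heterogeneity
            self.global_branch = conv_branch(64)   # surrounding context
            self.head = nn.Linear(128, 2)          # benign vs. malignant logits

        def forward(self, local_patch, global_patch):
            z = torch.cat([self.local_branch(local_patch),
                           self.global_branch(global_patch)], dim=1)
            return self.head(z)

    net = DualPathwayNet()
    logits = net(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 1, 64, 64, 64))
    print(logits.shape)  # torch.Size([2, 2])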

    Precision Monitoring for Disease Progression in Patients with Multiple Sclerosis: A Deep Learning Approach

    Artificial intelligence has tremendous potential in a range of clinical applications. Leveraging recent advances in deep learning, the work in this thesis has generated a range of technologies for patients with Multiple Sclerosis (MS) that facilitate precision monitoring using routine MRI and clinical assessments, and contribute to realising the goal of personalised disease management. MS is a chronic inflammatory demyelinating disease of the central nervous system (CNS), characterised by focal demyelinating plaques in the brain and spinal cord and by progressive neurodegeneration. Despite success in cohort studies and clinical trials, the measurement of disease activity using conventional imaging biomarkers in real-world clinical practice is limited to qualitative assessment of lesion activity, which is time-consuming and prone to human error. Quantitative measures, such as T2 lesion load, volumetric assessment of lesion activity, and brain atrophy, are constrained by challenges associated with handling real-world data variances. In this thesis, DeepBVC was developed for robust brain atrophy assessment through image synthesis, while a lesion segmentation model was developed using a novel federated learning framework, Fed-CoT, to leverage large data collaborations. Together with existing quantitative brain structural analyses, this work has developed an effective deep learning analysis pipeline that delivers a fully automated suite of MS-specific clinical imaging biomarkers to facilitate precision monitoring of patients with MS and of their response to disease-modifying therapy. The framework for individualised MRI-guided management in this thesis is complemented by a disease prognosis model, based on a Large Language Model, providing insights into the risks of clinical worsening over the subsequent 3 years. The value and performance of the MS biomarkers in this thesis are underpinned by extensive validation in real-world, multi-centre data from more than 1030 patients.
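    To indicate the kind of multi-centre training that a framework such as Fed-CoT enables, the following is a generic federated-averaging sketch; the actual Fed-CoT protocol is not described in the abstract and may differ substantially. The toy model, client data, and round count are placeholder assumptions.

    # Generic FedAvg loop: each centre trains locally, the server averages weights.
    import copy
    import torch
    import torch.nn as nn

    def local_update(global_model, batches, epochs=1, lr=1e-3):
        model = copy.deepcopy(global_model)            # client starts from global weights
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.BCEWithLogitsLoss()
        for _ in range(epochs):
            for x, y in batches:                       # one centre's private data
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        return model.state_dict()

    def federated_average(global_model, client_datasets, rounds=3):
        for _ in range(rounds):
            states = [local_update(global_model, d) for d in client_datasets]
            avg = {k: torch.stack([s[k] for s in states]).mean(0)
                   for k in states[0]}                 # parameter-wise average
            global_model.load_state_dict(avg)
        return global_model

    # Toy stand-in for a lesion segmentation model and two "centres".
    model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 1))
    clients = [[(torch.randn(4, 1, 64, 64), torch.rand(4, 1, 64, 64).round())]
               for _ in range(2)]
    federated_average(model, clients)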

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set, LNCS 12962 and 12963, constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These events were held jointly at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021, in September 2021; due to the COVID-19 pandemic, the conference was held virtually. The 91 revised papers presented in these volumes were selected from 151 submissions. This is an open access book.

    Quantitative assessment of the human inner ear: toward endolymphatic hydrops segmentation

    MSSEG-2 challenge proceedings: Multiple sclerosis new lesions segmentation challenge using a data management and processing infrastructure

    This proceedings book gathers methodological papers describing the segmentation methods evaluated at the second MICCAI Challenge on Multiple Sclerosis new lesions segmentation, organized using a data management and processing infrastructure. The challenge took place as part of a joint effort of OFSEP (the French registry on multiple sclerosis, which gathers imaging data, clinical data, and biological samples from the French population of multiple sclerosis subjects for research purposes) and FLI (France Life Imaging, devoted to setting up a national distributed e-infrastructure to manage and process medical imaging data). These joint efforts are directed towards automatic segmentation of MRI scans of MS patients to help clinicians in their daily practice. The challenge took place at the MICCAI 2021 conference, on September 23rd, 2021.

    More precisely, the problem addressed in this challenge is as follows. Conventional MRI is widely used for disease diagnosis, patient follow-up, monitoring of therapies, and, more generally, for understanding the natural history of MS. A growing literature is interested in the delineation of new MS lesions on T2/FLAIR by comparing one time point to another. This marker is even more crucial than the total number and volume of lesions, as the accumulation of new lesions allows clinicians to know whether a given anti-inflammatory DMD (disease-modifying drug) works for the patient. The only indicator of drug efficacy is indeed the absence of new T2 lesions within the central nervous system. Performing this count of new lesions by hand is, however, a very complex and time-consuming task. Automating the detection of these new lesions would therefore be a major advance for evaluating patient disease activity.

    Based on the success of the first MSSEG challenge, we organized a MICCAI-sponsored online challenge, this time on new MS lesion detection. This challenge made it possible to 1) estimate the progress achieved during the 2016-2021 period, 2) extend the number of patients, and 3) focus on new lesions as the crucial clinical marker. We performed the evaluation task on a large database (100 patients, each with two time points) compiled from the OFSEP cohort, with 3D FLAIR images from different centers and scanners. As in our previous challenge, we conducted the evaluation on a dedicated platform (FLI-IAM) to automate the evaluation and remove the potential biases due to challengers seeing the images on which the evaluation is made.
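    To convey what the challenge task involves, the sketch below is a deliberately naive baseline: given two co-registered, intensity-normalised FLAIR volumes of the same patient, it flags voxels that brightened markedly between time points and groups them into candidate new lesions. The threshold and minimum lesion size are arbitrary assumptions, and challenge entrants used far more robust methods.

    # Naive new-lesion candidate detection from two registered FLAIR time points.
    import numpy as np
    from scipy import ndimage

    def candidate_new_lesions(flair_t0, flair_t1, z_thresh=3.0, min_voxels=10):
        diff = flair_t1 - flair_t0                       # signal increase over time
        z = (diff - diff.mean()) / (diff.std() + 1e-8)   # crude standardisation
        mask = z > z_thresh                              # strongly brightened voxels
        labels, n = ndimage.label(mask)                  # connected components
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        keep_ids = np.flatnonzero(sizes >= min_voxels) + 1
        return np.isin(labels, keep_ids), len(keep_ids)  # lesion mask, lesion count

    t0 = np.random.rand(64, 64, 64)                      # placeholder volumes
    t1 = t0.copy()
    t1[30:34, 30:34, 30:34] += 2.0                       # synthetic "new lesion"
    mask, count = candidate_new_lesions(t0, t1)
    print(count)  # expected: 1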