    Learning Algorithms for Fat Quantification and Tumor Characterization

    Obesity is one of the most prevalent health conditions. About 30% of the world's and over 70% of the United States' adult populations are either overweight or obese, increasing the risk of cardiovascular disease, diabetes, and certain types of cancer. Among all cancers, lung cancer is the leading cause of death, whereas pancreatic cancer has the poorest prognosis of all major cancers. Early diagnosis of these cancers can save lives. This dissertation contributes towards the development of computer-aided diagnosis tools to aid clinicians in establishing the quantitative relationship between obesity and cancer. With respect to obesity and metabolism, the first part of the dissertation focuses on the segmentation and quantification of white and brown adipose tissue. For cancer diagnosis, we analyze two important cases: lung cancer and Intraductal Papillary Mucinous Neoplasm (IPMN), a precursor to pancreatic cancer. The dissertation proposes an automatic body-region detection method trained with only a single example. A new fat quantification approach is then proposed, based on geometric and appearance characteristics. For the segmentation of brown fat, a PET-guided CT co-segmentation method is presented. Using different variants of Convolutional Neural Networks (CNNs), supervised learning strategies are proposed for the automatic diagnosis of lung nodules and IPMN. To address the unavailability of the large number of labeled examples required for training, unsupervised learning approaches for cancer diagnosis without explicit labeling are also proposed. We evaluate the proposed approaches (both supervised and unsupervised) on two different tumor diagnosis challenges, lung and pancreas, with 1018 CT and 171 MRI scans respectively. The proposed segmentation, quantification, and diagnosis approaches explore the important adiposity-cancer association and help pave the way towards improved diagnostic decision making in routine clinical practice.
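    As a rough illustration of the supervised CNN-based diagnosis described above, the following is a minimal PyTorch sketch of a small 3D CNN that classifies fixed-size CT patches as benign or malignant. The architecture, patch size, and channel widths are illustrative assumptions, not the dissertation's actual models.

```python
# Minimal sketch of a 3D CNN for binary nodule classification
# (hypothetical architecture; assumes fixed-size CT patches).
import torch
import torch.nn as nn

class NoduleCNN3D(nn.Module):
    """Three conv blocks, global average pooling, and a linear head."""
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> (B, 64, 1, 1, 1)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

# Usage on a hypothetical batch of two 64^3 CT patches:
model = NoduleCNN3D()
logits = model(torch.randn(2, 1, 64, 64, 64))  # (2, 2): benign/malignant logits
```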

    Deep Representation Learning with Limited Data for Biomedical Image Synthesis, Segmentation, and Detection

    Biomedical imaging requires accurate expert annotation and interpretation that can aid medical staff and clinicians in automating differential diagnosis and addressing underlying health conditions. With the advent of deep learning, training with large image datasets has become the standard way to reach expert-level performance in non-invasive biomedical imaging tasks. However, when large publicly available datasets are not at hand, training a deep learning model to learn intrinsic representations becomes harder. Representation learning with limited data has introduced new learning techniques, such as Generative Adversarial Networks, semi-supervised learning, and self-supervised learning, that can be applied to various biomedical applications. For example, ophthalmologists use color funduscopy (CF) and fluorescein angiography (FA) to diagnose retinal degenerative diseases. However, fluorescein angiography requires injecting a dye, which can cause adverse reactions in patients; a non-invasive technique that can synthesize fluorescein angiography from fundus images is therefore needed. Similarly, color funduscopy and optical coherence tomography (OCT) are used to semantically segment the vasculature and fluid build-up in spatial and volumetric retinal imaging, which can help with the future prognosis of diseases. Although many automated techniques have been proposed for medical image segmentation, the main drawback is the models' precision in pixel-wise predictions. Another critical challenge in the biomedical imaging field is accurately segmenting and quantifying the dynamic behavior of calcium signals in cells. Calcium imaging is a widely utilized approach to studying subcellular calcium activity and cell function; however, large datasets have yielded a profound need for fast, accurate, and standardized analyses of calcium signals. For example, image sequences of calcium signals in colonic pacemaker cells (ICC, interstitial cells of Cajal) suffer from motion artifacts and high periodic and sensor noise, making it difficult to accurately segment and quantify calcium signal events. Moreover, it is time-consuming and tedious to annotate such a large volume of calcium image stacks or videos and extract their associated spatiotemporal maps. To address these problems, we propose various deep representation learning architectures that use limited labels and annotations to tackle the critical challenges in these biomedical applications. To this end, we detail our proposed semi-supervised, generative adversarial, and transformer-based architectures for individual learning tasks such as retinal image-to-image translation, vessel and fluid segmentation from fundus and OCT images, breast micro-mass segmentation, and sub-cellular calcium event tracking from videos with spatiotemporal map quantification. We also illustrate two multi-modal, multi-task learning frameworks whose applications can be extended to other domains of biomedical research. The main idea is to incorporate each of these as individual modules in our proposed multi-modal frameworks to solve the existing challenges in 1) fluorescein angiography synthesis, 2) retinal vessel and fluid segmentation, 3) breast micro-mass segmentation, and 4) dynamic quantification of calcium imaging datasets.
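    To make the image-to-image translation idea concrete, below is a hedged sketch of one pix2pix-style training step for fundus-to-FA synthesis: an adversarial loss plus an L1 reconstruction term. The `gen` (U-Net-style generator) and `disc` (patch discriminator) modules and their channel counts are assumptions for illustration, not the thesis architectures.

```python
# One conditional-GAN training step (pix2pix-style), assuming `gen` maps
# fundus -> FA and `disc` scores (fundus, FA) pairs concatenated on channels.
import torch
import torch.nn as nn

adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(gen, disc, g_opt, d_opt, fundus, fa, lambda_l1=100.0):
    # --- discriminator: real (fundus, fa) vs fake (fundus, gen(fundus)) ---
    fake_fa = gen(fundus)
    d_real = disc(torch.cat([fundus, fa], dim=1))
    d_fake = disc(torch.cat([fundus, fake_fa.detach()], dim=1))
    d_loss = adv_loss(d_real, torch.ones_like(d_real)) + \
             adv_loss(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # --- generator: fool the discriminator + stay close to the real FA ---
    d_fake = disc(torch.cat([fundus, fake_fa], dim=1))
    g_loss = adv_loss(d_fake, torch.ones_like(d_fake)) + \
             lambda_l1 * l1_loss(fake_fa, fa)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

    The L1 term keeps the synthesized angiogram close to the ground truth while the adversarial term sharpens it; the weight of 100 follows the common pix2pix default rather than anything specified in the thesis.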

    A Review on Computer Aided Diagnosis of Acute Brain Stroke.

    Among the most common causes of death globally, stroke is one of the top three, affecting over 100 million people worldwide annually. There are two classes of stroke, namely ischemic stroke (due to impairment of blood supply, accounting for ~70% of all strokes) and hemorrhagic stroke (due to bleeding), both of which, if untreated, can result in permanently damaged brain tissue. The discovery that the affected brain tissue (the 'ischemic penumbra') can be salvaged from permanent damage, together with the burgeoning growth of computer-aided diagnosis, has led to major advances in stroke management. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we surveyed a total of 177 research papers published between 2010 and 2021 to highlight the current status of, and challenges faced by, computer-aided diagnosis (CAD), machine learning (ML), and deep learning (DL) techniques for CT and MRI, the prime modalities for stroke detection and lesion-region segmentation. The work concludes by showcasing the current requirements of this domain, the preferred modality, and prospective research areas.

    JOINT CODING OF MULTIMODAL BIOMEDICAL IMAGES USING CONVOLUTIONAL NEURAL NETWORKS

    The massive volume of data generated daily by the acquisition of medical images with different modalities can be difficult to store in medical facilities and to share over communication networks. To alleviate this issue, efficient compression methods must be implemented to reduce the storage and transmission resources required in such applications. However, since the preservation of all image details is highly important in the medical context, the use of lossless image compression algorithms is of utmost importance. This thesis presents research results on a lossless compression scheme designed to jointly encode computerized tomography (CT) and positron emission tomography (PET) images. Different techniques, such as image-to-image translation, intra prediction, and inter prediction, are used, and redundancies between the two image modalities are also investigated. In the image-to-image translation approach, we losslessly compress the original CT data and apply a cross-modality image translation generative adversarial network to obtain an estimate of the corresponding PET. Two approaches were implemented and evaluated to determine a PET residue that is compressed along with the original CT. In the first method, the residue resulting from the differences between the original PET and its estimate is encoded, whereas in the second method, the residue is obtained using the encoder's inter-prediction coding tools. Thus, instead of compressing two independent picture modalities (i.e., both images of the original PET-CT pair), the proposed method independently encodes only the CT, alongside the PET residue. In addition to the proposed pipeline, a post-processing optimization algorithm that modifies the estimated PET image by altering its contrast and rescaling it is implemented to maximize compression efficiency. Four different versions (subsets) of a publicly available PET-CT pair dataset were tested. The first subset was used to demonstrate that the concept developed in this work can surpass traditional compression schemes: the results showed gains of up to 8.9% using HEVC. JPEG 2000, on the other hand, proved not to be suitable, as it failed to obtain good results, reaching a compression gain of -9.1%. For the remaining (more challenging) subsets, the results reveal that the proposed refined post-processing scheme attains, compared to conventional compression methods, up to 6.33% compression gain using HEVC and 7.78% using VVC.
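    The core residue idea can be illustrated with a small numpy sketch, assuming the cross-modality network has already produced a PET estimate: the signed difference between the original PET and its estimate is what gets encoded, so the decoder reconstructs the PET exactly from estimate plus residue. The array shapes, value ranges, and the synthetic "estimate" below are illustrative assumptions, not the thesis pipeline.

```python
# Lossless residue round trip, assuming a PET estimate from a translation
# network. A good estimate yields a low-entropy residue that codes cheaply.
import numpy as np

def pet_residue(pet: np.ndarray, pet_estimate: np.ndarray) -> np.ndarray:
    # widen dtype so the signed difference cannot overflow
    return pet.astype(np.int32) - pet_estimate.astype(np.int32)

def reconstruct_pet(pet_estimate: np.ndarray, residue: np.ndarray) -> np.ndarray:
    return (pet_estimate.astype(np.int32) + residue).astype(np.uint16)

pet = np.random.randint(0, 4096, (128, 128), dtype=np.uint16)  # toy PET slice
estimate = np.clip(pet + np.random.randint(-50, 50, pet.shape),
                   0, 4095).astype(np.uint16)                  # imperfect estimate
res = pet_residue(pet, estimate)          # compact signal encoded alongside the CT
assert np.array_equal(reconstruct_pet(estimate, res), pet)     # lossless round trip
```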

    KNOWLEDGE FUSION IN ALGORITHMS FOR MEDICAL IMAGE ANALYSIS

    Medical imaging is one of the primary modalities used for clinical diagnosis and treatment planning. Building a reliable automatic system to assist clinicians in reading the enormous number of images improves both efficiency and accuracy in general clinical practice. Recently, deep learning techniques have been widely applied to medical images, but for applications in real clinical scenarios, the accuracy, robustness, and interpretability of these algorithms require further validation. In this dissertation, we introduce different strategies of knowledge fusion for improving current approaches to various tasks in medical image analysis. (i) To improve the robustness of segmentation algorithms, we propose to learn a shape prior for organ segmentation and apply it to automatic quality assessment. (ii) To detect pancreatic lesions with patient-level labels only, we propose to extract shape and texture information from CT scans and combine them with a fusion network. (iii) In image registration, semantic information is important yet hard to obtain. We propose two methods for introducing semantic knowledge without the need for segmentation labels: the first designs a joint framework for registration, synthesis, and segmentation to share knowledge between the different tasks; the second introduces unsupervised semantic embedding to improve a regular registration framework. (iv) To reduce false positives in the tumor detection task, we propose a hybrid feature engineering system that extracts features of the tumor candidates from various perspectives and merges them at the decision stage.
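    For the fusion network in point (ii), a minimal sketch of decision-stage (late) fusion is given below: shape and texture feature vectors, assumed to come from two separate branch encoders, are concatenated and passed through a small classification head. Dimensions and layer sizes are illustrative assumptions, not the dissertation's network.

```python
# Late fusion of two feature streams; the branch encoders that produce
# `shape_feat` and `texture_feat` are assumed to exist upstream.
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    def __init__(self, shape_dim: int = 128, texture_dim: int = 128,
                 num_classes: int = 2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(shape_dim + texture_dim, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, num_classes),
        )

    def forward(self, shape_feat, texture_feat):
        # concatenate the two modality-specific descriptors, then classify
        return self.fuse(torch.cat([shape_feat, texture_feat], dim=1))

model = LateFusionNet()
logits = model(torch.randn(4, 128), torch.randn(4, 128))  # 4 lesion candidates
```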

    Towards AI-Assisted Disease Diagnosis: Learning Deep Feature Representations for Medical Image Analysis

    Artificial Intelligence (AI) has impacted our lives in many meaningful ways. In this research, we focus on improving disease diagnosis systems by analyzing medical images using AI, specifically deep learning technologies. Recent advances in deep learning are leading to enhanced performance in medical image analysis and computer-aided disease diagnosis. In this dissertation, we explore a major research area in medical image analysis: image classification, the process of assigning an image a label from a fixed set of categories. We focus on the problem of Alzheimer's Disease (AD) diagnosis from 3D structural Magnetic Resonance Imaging (sMRI) and Positron Emission Tomography (PET) brain scans. Alzheimer's Disease is a severe neurological disorder. We address challenges related to its diagnosis and propose several models for improved diagnosis, analyzing 3D sMRI and PET brain scans to identify the current stage of Alzheimer's Disease: Normal Control (CN), Mild Cognitive Impairment (MCI), or Alzheimer's Disease (AD). The dissertation demonstrates ways to improve the performance of a Convolutional Neural Network (CNN) for Alzheimer's Disease diagnosis. In addition, we present approaches for solving the class-imbalance problem and improving classification performance with limited training data for medical image analysis. To understand the decisions of the CNN, we present methods to visualize the behavior of a CNN model for disease diagnosis; as a case study, we analyzed brain PET scans of AD and CN patients to see how the CNN discriminates among data samples of different classes. Additionally, this dissertation proposes a novel approach to generating synthetic medical images using Generative Adversarial Networks (GANs). Working with limited datasets and small numbers of annotated samples makes it difficult to develop a robust automated disease diagnosis model. Our proposed model addresses this issue and generates brain MRI and PET images for the three stages of Alzheimer's Disease: Normal Control (CN), Mild Cognitive Impairment (MCI), and Alzheimer's Disease (AD). The approach can be generalized to create synthetic data for other medical image analysis problems and help develop better disease diagnosis models.
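    One common remedy for the class-imbalance problem mentioned above is to weight the loss by inverse class frequency, so that rarer stages contribute more per sample. The following is a minimal sketch under assumed per-class counts; it illustrates the general technique, not the dissertation's specific method.

```python
# Inverse-frequency class weighting for a 3-way CN / MCI / AD classifier.
# The per-class counts are hypothetical.
import torch
import torch.nn as nn

counts = torch.tensor([900.0, 450.0, 150.0])     # e.g., CN, MCI, AD sample counts
weights = counts.sum() / (len(counts) * counts)  # rarer class -> larger weight
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3)                       # batch of 8 scans, 3 stages
labels = torch.randint(0, 3, (8,))
loss = criterion(logits, labels)                 # misclassified AD costs more
```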