Role of machine learning in early diagnosis of kidney diseases.
Machine learning (ML) and deep learning (DL) approaches have been used as indispensable tools in modern artificial intelligence-based computer-aided diagnostic (AI-based CAD) systems that can provide non-invasive, early, and accurate diagnosis of a given medical condition. These AI-based CAD systems have proven themselves to be reproducible and have the generalization ability to diagnose new unseen cases with several diseases and medical conditions in different organs (e.g., kidneys, prostate, brain, liver, lung, breast, and bladder). In this dissertation, we will focus on the role of such AI-based CAD systems in the early diagnosis of two kidney diseases, namely: acute rejection (AR) post kidney transplantation and renal cancer (RC). A new renal computer-assisted diagnostic (Renal-CAD) system was developed to precisely diagnose AR post kidney transplantation at an early stage. The developed Renal-CAD system performs the following main steps: (1) auto-segmentation of the renal allograft from surrounding tissues in diffusion-weighted magnetic resonance imaging (DW-MRI) and blood oxygen level-dependent MRI (BOLD-MRI), (2) extraction of image markers, namely: voxel-wise apparent diffusion coefficients (ADCs), calculated from DW-MRI scans at 11 different low and high b-values and represented as cumulative distribution functions (CDFs), and transverse relaxation rate (R2*) values, extracted from the segmented kidneys using BOLD-MRI scans at different echo times, (3) integration of the multimodal image markers with the associated clinical biomarkers, serum creatinine (SCr) and creatinine clearance (CrCl), and (4) diagnosing renal allograft status as non-rejection (NR) or AR by feeding these integrated biomarkers into a deep learning classification model built on stacked auto-encoders (SAEs).
Using a leave-one-subject-out cross-validation approach along with SAEs on a total of 30 patients with transplanted kidneys (AR = 10 and NR = 20), the Renal-CAD system demonstrated 93.3% accuracy, 90.0% sensitivity, and 95.0% specificity in differentiating AR from NR. Robustness of the Renal-CAD system was also confirmed by an area under the curve (AUC) value of 0.92. Using a stratified 10-fold cross-validation approach, the Renal-CAD system demonstrated its reproducibility and robustness with a diagnostic accuracy of 86.7%, sensitivity of 80.0%, specificity of 90.0%, and AUC of 0.88. In addition, a new renal cancer CAD (RC-CAD) system for precise diagnosis of RC at an early stage was developed, which incorporates the following main steps: (1) estimating morphological features by applying a new parametric spherical harmonic technique, (2) extracting appearance-based features, namely: first-order textural features, calculated directly, and second-order textural features, extracted after constructing the gray-level co-occurrence matrix (GLCM), (3) estimating functional features by constructing wash-in/wash-out slopes to quantify the enhancement variations across different contrast-enhanced computed tomography (CE-CT) phases, and (4) integrating all the aforementioned features and modeling a two-stage multilayer perceptron artificial neural network (MLP-ANN) classifier to classify the renal tumor as benign or malignant and identify the malignancy subtype. On a total of 140 RC patients (malignant = 70 patients (ccRCC = 40 and nccRCC = 30) and benign angiomyolipoma tumors = 70), the developed RC-CAD system was validated using a leave-one-subject-out cross-validation approach. The developed RC-CAD system achieved a sensitivity of 95.3% ± 2.0%, a specificity of 99.9% ± 0.4%, and a Dice similarity coefficient of 0.98 ± 0.01 in differentiating malignant from benign renal tumors, as well as an overall accuracy of 89.6% ± 5.0% in the sub-typing of RCC.
The diagnostic abilities of the developed RC-CAD system were further validated using a randomly stratified 10-fold cross-validation approach. The results obtained using the proposed MLP-ANN classification model outperformed other machine learning classifiers (e.g., support vector machines, random forests, and relational functional gradient boosting) as well as other approaches from the literature. In summary, machine and deep learning approaches have shown the potential to be utilized in building AI-based CAD systems. This is evidenced by the promising diagnostic performance obtained by both the Renal-CAD and RC-CAD systems. For the Renal-CAD, the integration of functional markers extracted from multimodal MRIs with clinical biomarkers using an SAE classification model potentially improved the final diagnostic results, evidenced by high accuracy, sensitivity, and specificity. The developed Renal-CAD demonstrated high feasibility and efficacy for early, accurate, and non-invasive identification of AR. For the RC-CAD, integrating morphological, textural, and functional features extracted from CE-CT images using an MLP-ANN classification model enhanced the final results in terms of accuracy, sensitivity, and specificity, making the proposed RC-CAD a reliable non-invasive diagnostic tool for RC. The early and accurate diagnosis of AR or RC will help physicians provide early intervention with the appropriate treatment plan to prolong the life span of the diseased kidney, increase the survival chance of the patient, and thus improve healthcare outcomes in the U.S. and worldwide.
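The ADC marker in step (2) of the Renal-CAD pipeline follows the standard mono-exponential decay model S(b) = S0 · exp(−b · ADC). A minimal sketch of the voxel-wise fit and its CDF representation, assuming noiseless signals and illustrative b-values (not the dissertation's implementation; all names are hypothetical):

```python
import numpy as np

def adc_map(signals, b_values):
    """Voxel-wise ADC via a log-linear least-squares fit of S(b) = S0 * exp(-b * ADC).

    signals: array of shape (n_b, n_voxels); b_values: array of shape (n_b,).
    """
    log_s = np.log(np.maximum(signals, 1e-12))
    # Fit the slope of log-signal vs. b per voxel; ADC is the negated slope.
    slope, _ = np.polyfit(b_values, log_s, 1)
    return -slope

def adc_cdf(adc_values, grid):
    """Cumulative distribution of ADC values over a fixed grid of thresholds."""
    return np.array([(adc_values <= t).mean() for t in grid])

# Synthetic mono-exponential decay at illustrative b-values (s/mm^2)
b = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0])
true_adc = np.array([1.0e-3, 2.0e-3])       # two toy "voxels"
s = np.exp(-np.outer(b, true_adc))          # normalized so S0 = 1
est = adc_map(s, b)                         # recovers true_adc
```

Representing the per-voxel ADCs as a CDF over a fixed grid, as in the abstract, yields a fixed-length feature vector regardless of kidney size, which is what makes it usable as a classifier input.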
Quantitative analysis with machine learning models for multi-parametric brain imaging data
Gliomas are the most common primary malignant brain tumors in adults. With the dramatic increases in computational power and improvements in image analysis algorithms, computer-aided medical image analysis has been introduced into clinical applications. Precise tumor grading and genotyping play an indispensable role in clinical diagnosis, treatment, and prognosis. Glioma diagnostic procedures include histopathological imaging tests, molecular imaging scans, and tumor grading. Pathologic review of tumor morphology in histologic sections is the traditional method for cancer classification and grading, yet manual review has limitations that can result in low reproducibility and poor inter-observer agreement. Compared with histopathological images, magnetic resonance (MR) imaging presents different structural and functional features, which might serve as noninvasive surrogates for tumor genotypes. Therefore, computer-aided image analysis has been adopted in clinical applications; it might partially overcome these shortcomings owing to its capacity to quantitatively and reproducibly measure multilevel features from multi-parametric medical information. Imaging features obtained from a single modality do not fully represent the disease, so quantitative imaging features, including morphological, structural, cellular, and molecular-level features, derived from multi-modality medical images should be integrated into computer-aided medical image analysis. Differences in image quality between modalities are a challenge in the field of computer-aided medical image analysis. In this thesis, we aim to integrate the quantitative imaging data obtained from multiple modalities into mathematical models of tumor prediction response to achieve additional insights into practical predictive value. Our major contributions in this thesis are: 1.
To resolve the imaging quality difference and observer dependence in histological image diagnosis, we proposed an automated machine-learning brain tumor-grading platform to investigate the contributions of multiple parameters from multimodal data, including imaging parameters or features from Whole Slide Images (WSI) and the proliferation marker KI-67. For each WSI, we extract both visual parameters, such as morphology parameters, and sub-visual parameters, including first-order and second-order features. A quantitative interpretable machine learning approach (Local Interpretable Model-Agnostic Explanations, LIME) is then used to measure the contribution of features for a single case. Most grading systems based on machine learning models are considered "black boxes," whereas with this system the clinically trusted reasoning can be revealed. The quantitative analysis and explanation may assist clinicians to better understand the disease and accordingly choose optimal treatments for improving clinical outcomes. 2. Building on the automated brain tumor-grading platform, multimodal Magnetic Resonance Images (MRIs) were introduced into our research. A new imaging-tissue-correlation-based approach called RA-PA-Thomics was proposed to predict the IDH genotype. Inspired by the concept of image fusion, we integrate multimodal MRIs and scans of histopathological images for indirect, fast, and cost-saving IDH genotyping. The proposed model has been verified by multiple evaluation criteria on the integrated data set and compared to results in the prior art. The experimental data set includes public data sets and image information from two hospitals. Experimental results indicate that the proposed model improves the accuracy of glioma grading and genotyping.
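The interpretability method named above, LIME, works by perturbing an instance, querying the black-box model, and fitting a proximity-weighted linear surrogate whose coefficients act as local feature contributions. A minimal numpy sketch of that idea (a hypothetical reimplementation for illustration, not the LIME library or the platform's actual code; all constants are illustrative):

```python
import numpy as np

def local_explanation(predict_fn, x, n_samples=500, scale=0.5, seed=0):
    """LIME-style local explanation: fit a proximity-weighted linear model
    to black-box predictions around instance x; the fitted coefficients
    approximate each feature's local contribution."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise and query the black box
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = predict_fn(Z)
    # Proximity kernel: closer perturbations receive larger weights
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares with an intercept column
    A = np.hstack([Z, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature contributions (intercept dropped)

# Toy black box that depends mostly on feature 0
f = lambda Z: 3.0 * Z[:, 0] + 0.1 * Z[:, 1]
contrib = local_explanation(f, np.array([1.0, 2.0]))
```

For a linear black box the surrogate recovers the true coefficients, so `contrib` attributes most of the prediction to feature 0, which is the kind of per-case reasoning the platform surfaces to clinicians.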
Fast and robust hybrid framework for infant brain classification from structural MRI : a case study for early diagnosis of autism.
The ultimate goal of this work is to develop a computer-aided diagnosis (CAD) system for early autism diagnosis from infant structural magnetic resonance imaging (MRI). The vital step toward this goal is accurate segmentation of the different brain structures: white matter, gray matter, and cerebrospinal fluid, which is the main focus of this thesis. The proposed brain classification approach consists of two major steps. First, the brain is extracted based on the integration of a stochastic model, which serves to learn the visual appearance of the brain texture, and a geometric model, which preserves the brain geometry during the extraction process. Second, the brain tissues are segmented based on shape priors, built using a subset of co-aligned training images, that are adapted during the segmentation process using first- and second-order visual appearance features of infant MRIs. The accuracy of the presented segmentation approach has been tested on 300 infant subjects and evaluated blindly on 15 adult subjects. The experimental results have been evaluated by the MICCAI MR Brain Image Segmentation (MRBrainS13) challenge organizers using three metrics: the Dice coefficient, the 95th-percentile Hausdorff distance, and the absolute volume difference. The proposed method was ranked first in terms of both performance and speed.
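Two of the three MRBrainS13 metrics named above can be stated precisely on binary masks. A minimal sketch of the Dice coefficient and absolute volume difference (illustrative code, not the challenge organizers' reference implementation):

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def abs_volume_difference(seg, gt):
    """Absolute volume difference as a percentage of the ground-truth volume."""
    return abs(int(seg.sum()) - int(gt.sum())) / gt.sum() * 100.0

# Toy 2-D example: ground truth is a 2x2 square, segmentation over-covers it
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True    # 4 voxels
seg = np.zeros((4, 4), dtype=bool); seg[1:3, 1:4] = True  # 6 voxels, 4 overlap
dice = dice_coefficient(seg, gt)       # 2*4 / (6+4) = 0.8
avd = abs_volume_difference(seg, gt)   # |6-4| / 4 * 100 = 50.0
```

The 95th-percentile Hausdorff distance additionally requires surface extraction and distance transforms, which is why it is usually computed with a dedicated library rather than a few lines of numpy.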
A Novel Center-based Deep Contrastive Metric Learning Method for the Detection of Polymicrogyria in Pediatric Brain MRI
Polymicrogyria (PMG) is a disorder of cortical organization mainly seen in
children, which can be associated with seizures, developmental delay and motor
weakness. PMG is typically diagnosed on magnetic resonance imaging (MRI) but
some cases can be challenging to detect even for experienced radiologists. In
this study, we create an open pediatric MRI dataset (PPMR) with PMG and
controls from the Children's Hospital of Eastern Ontario (CHEO), Ottawa,
Canada. The differences between PMG MRIs and control MRIs are subtle and the
true distribution of the features of the disease is unknown. This makes
automatic detection of cases of potential PMG in MRI difficult. We propose an
anomaly detection method based on a novel center-based deep contrastive metric
learning loss function (cDCM) which enables the automatic detection of cases of
potential PMG. Additionally, based on our proposed loss function, we customize
a deep learning model structure that integrates dilated convolution,
squeeze-and-excitation blocks and feature fusion for our PPMR dataset. Despite
working with a small and imbalanced dataset, our method achieves 92.01% recall
at 55.04% precision. This will facilitate a computer-aided tool for
radiologists to select potential PMG MRIs. To the best of our knowledge, this
research is the first to apply machine learning techniques to identify PMG from
MRI only.
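The exact cDCM loss is defined in the paper; as a generic illustration of the center-based contrastive idea it builds on, control embeddings can be pulled toward a center while anomaly embeddings are pushed beyond a margin (a toy sketch under that assumption; all names and constants are illustrative):

```python
import numpy as np

def center_contrastive_loss(embeddings, labels, center, margin=1.0):
    """Center-based contrastive loss (illustrative form): control embeddings
    (label 0) are pulled toward a center, while anomalous embeddings
    (label 1) are pushed at least `margin` away from it."""
    d = np.linalg.norm(embeddings - center, axis=1)
    pos = d[labels == 0] ** 2                            # pull controls in
    neg = np.maximum(0.0, margin - d[labels == 1]) ** 2  # push anomalies out
    return (pos.sum() + neg.sum()) / len(labels)

center = np.zeros(2)
emb = np.array([[0.0, 0.0],   # control exactly at the center: zero cost
                [2.0, 0.0]])  # anomaly beyond the margin: zero cost
labels = np.array([0, 1])
loss = center_contrastive_loss(emb, labels, center)  # 0.0 when separated
```

At inference time, the distance to the learned center itself serves as the anomaly score, which is how such a loss enables detection when the true feature distribution of the disease is unknown.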
Artificial intelligence in cancer imaging: Clinical challenges and applications
Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been rigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.
Hybrid Representation Learning for Cognitive Diagnosis in Late-Life Depression Over 5 Years with Structural MRI
Late-life depression (LLD) is a highly prevalent mood disorder occurring in
older adults and is frequently accompanied by cognitive impairment (CI).
Studies have shown that LLD may increase the risk of Alzheimer's disease (AD).
However, the heterogeneity of presentation of geriatric depression suggests
that multiple biological mechanisms may underlie it. Current biological
research on LLD progression incorporates machine learning that combines
neuroimaging data with clinical observations. There are few studies on incident
cognitive diagnostic outcomes in LLD based on structural MRI (sMRI). In this
paper, we describe the development of a hybrid representation learning (HRL)
framework for predicting cognitive diagnosis over 5 years based on T1-weighted
sMRI data. Specifically, we first extract prediction-oriented MRI features via
a deep neural network, and then integrate them with handcrafted MRI features
via a Transformer encoder for cognitive diagnosis prediction. Two tasks are
investigated in this work, including (1) identifying cognitively normal
subjects with LLD and never-depressed older healthy subjects, and (2)
identifying LLD subjects who developed CI (or even AD) and those who stayed
cognitively normal over five years. To the best of our knowledge, this is among
the first attempts to study the complex heterogeneous progression of LLD based
on task-oriented and handcrafted MRI features. We validate the proposed HRL on
294 subjects with T1-weighted MRIs from two clinically harmonized studies.
Experimental results suggest that the HRL outperforms several classical machine
learning and state-of-the-art deep learning methods in LLD identification and
prediction tasks.
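The fusion step described above, integrating prediction-oriented deep features with handcrafted features via a Transformer encoder, can be caricatured as one scaled dot-product self-attention layer over the concatenated token set (a toy numpy sketch with identity Q/K/V projections, not the HRL implementation; a real encoder adds learned projections, multiple heads, and feed-forward layers):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_tokens(deep_feats, handcrafted_feats):
    """Stack learned and handcrafted feature vectors as a token sequence and
    mix them with a single scaled dot-product self-attention layer, so each
    output token is a convex combination of all input tokens."""
    tokens = np.vstack([deep_feats, handcrafted_feats])  # (n_tokens, d)
    d = tokens.shape[1]
    attn = softmax(tokens @ tokens.T / np.sqrt(d))       # rows sum to 1
    return attn @ tokens                                 # mixed representations

deep = np.array([[1.0, 0.0], [0.0, 1.0]])  # toy deep-network features
hand = np.array([[1.0, 1.0]])              # toy handcrafted features
fused = fuse_tokens(deep, hand)            # every token now sees both sources
```

The point of attention-based fusion over plain concatenation is that each feature token can weight its interaction with every other token, letting the classifier exploit cross-dependencies between learned and handcrafted descriptors.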
A Review on Computer Aided Diagnosis of Acute Brain Stroke.
Amongst the most common causes of death globally, stroke is one of the top three, affecting over 100 million people worldwide annually. There are two classes of stroke, namely ischemic stroke (due to impairment of blood supply, accounting for ~70% of all strokes) and hemorrhagic stroke (due to bleeding), both of which can result, if untreated, in permanently damaged brain tissue. The discovery that the affected brain tissue (i.e., the 'ischemic penumbra') can be salvaged from permanent damage, together with the burgeoning growth in computer-aided diagnosis, has led to major advances in stroke management. Abiding by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we have surveyed a total of 177 research papers published between 2010 and 2021 to highlight the current status of and challenges faced by computer-aided diagnosis (CAD), machine learning (ML), and deep learning (DL) based techniques for CT and MRI as the prime modalities for stroke detection and lesion region segmentation. This work concludes by showcasing the current requirements of this domain, the preferred modalities, and prospective research areas.
Pattern classification approaches for breast cancer identification via MRI: state-of-the-art and vision for the future
Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multidimensional signal processing and aim to advance current state-of-the-art computer-aided detection and analysis of breast tumours when these are observed at various states of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets as well as Clifford algebra based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi-supervised deep learning and self-supervised strategies as well as generative adversarial networks and algorithms using generated confrontational learning approaches. In order to address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high-dimensional medical imaging analysis platform that is based on multi-task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks for a generated confrontational learning methodology that can be used for tensorial DCE-MRI. Since some of the approaches discussed are also based on time-lapse imaging, conclusions on the rate of proliferation of the disease can be made possible. The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis.
AliFuse: Aligning and Fusing Multi-modal Medical Data for Computer-Aided Diagnosis
Medical data collected for making a diagnostic decision are typically
multi-modal and provide complementary perspectives of a subject. A
computer-aided diagnosis system welcomes multi-modal inputs; however, how to
effectively fuse such multi-modal data is a challenging task and attracts a lot
of attention in the medical research field. In this paper, we propose a
transformer-based framework, called Alifuse, for aligning and fusing
multi-modal medical data. Specifically, we convert images and unstructured and
structured texts into vision and language tokens, and use intramodal and
intermodal attention mechanisms to learn holistic representations of all
imaging and non-imaging data for classification. We apply Alifuse to classify
Alzheimer's disease and obtain state-of-the-art performance on five public
datasets, by outperforming eight baselines. The source code will be available
online later.
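The intermodal attention described above, in which tokens from one modality attend to tokens from another, can be sketched in its simplest form (a toy numpy illustration with identity Q/K/V projections, not the Alifuse code; token values are made up):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys_values):
    """Intermodal attention sketch: one modality's tokens (queries) attend to
    another modality's tokens (keys/values). Each output row is a convex
    combination of the other modality's tokens."""
    d = queries.shape[1]
    attn = softmax(queries @ keys_values.T / np.sqrt(d))  # rows sum to 1
    return attn @ keys_values

# Toy vision tokens attend to toy language tokens of the same width
vision = np.array([[1.0, 0.0], [0.0, 1.0]])
language = np.array([[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
fused_vision = cross_attend(vision, language)
```

Intramodal attention is the same operation with queries and keys/values drawn from one modality; stacking both kinds of layers is what lets a framework of this sort learn a holistic representation across imaging and non-imaging inputs.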