9 research outputs found

    Role of deep learning techniques in non-invasive diagnosis of human diseases.

    Machine learning, a sub-discipline of artificial intelligence, concentrates on algorithms that can learn and/or adapt their structure (e.g., parameters) based on a set of observed data, with the adaptation performed by optimizing over a cost function. Machine learning has attracted great attention in the biomedical community because it promises to improve the sensitivity and/or specificity of disease detection and diagnosis. It can also increase the objectivity of decision making and reduce the time and effort required of health care professionals during disease detection and diagnosis. The potential impact of machine learning is greater than ever due to the growing volume of acquired medical data, the development of novel imaging modalities, and the complexity of medical data. In all of these scenarios, machine learning can provide new tools for interpreting the complex datasets that confront clinicians. Much of the excitement about applying machine learning to biomedical research comes from the development of deep learning, which is modeled after computation in the brain. Deep learning can help attain insights that would be impossible to obtain through manual analysis. Deep learning algorithms, and in particular convolutional neural networks, differ from traditional machine learning approaches: they are known for their ability to learn complex representations that enhance pattern recognition from raw data, whereas traditional machine learning requires human engineering and domain expertise to design feature extractors and structure the data. With increasing demands on radiologists, there is a growing need to automate diagnosis, a need that deep learning is well placed to address. In this dissertation, we present four successful applications of deep learning to disease diagnosis; all of the work utilizes medical images. In the first application, we introduce a deep learning-based computer-aided diagnostic system for the early detection of acute renal transplant rejection. The system is based on the fusion of imaging markers (apparent diffusion coefficients derived from diffusion-weighted magnetic resonance imaging) and clinical biomarkers (creatinine clearance and serum plasma creatinine). The fused data are then used as input to train and test a convolutional neural network-based classifier. The proposed system is tested on scans collected from 56 subjects from geographically diverse populations and different scanner types/image collection protocols. The overall accuracy of the proposed system is 92.9%, with 93.3% sensitivity and 92.3% specificity, in distinguishing non-rejected kidney transplants from rejected ones. In the second application, we propose a novel deep learning approach for the automated segmentation and quantification of the left ventricle (LV) from cardiac cine MR images, aiming at lower errors for the estimated heart parameters than previous studies. Using fully convolutional neural networks, we propose novel methods for extracting a region of interest that contains the LV and for segmenting the LV. Following myocardial segmentation, functional and mass parameters of the LV are estimated. The Automated Cardiac Diagnosis Challenge dataset was used to validate our framework, which gave better segmentation, accurate estimation of cardiac parameters, and lower errors than other methods applied to the same dataset. Furthermore, we showed that our segmentation approach generalizes well across different datasets by testing its performance on a locally acquired dataset. In the third application, we propose a novel deep learning approach for the automated quantification of strain from cardiac cine MR images of mice. For strain analysis, we developed a Laplace-based approach to track the LV wall points by solving the Laplace equation between the LV contours of every two successive image frames over the cardiac cycle. Following tracking, strain estimation is performed using a Lagrangian-based approach. This new automated system for strain analysis was validated by comparing its output with tagged MR images from the same mice; there were no significant differences between the strain data obtained by our algorithm from cine images and those from tagged MR imaging. In the fourth application, we demonstrate how a deep learning approach can be utilized for the automated classification of kidney histopathological images. Our approach classifies four classes: fat, parenchyma, clear cell renal cell carcinoma, and a recently described cancer, clear cell papillary renal cell carcinoma. Our framework consists of three convolutional neural networks; the whole-slide kidney images are divided into patches of three different sizes that are input to the networks. Our approach provides both patch-wise and pixel-wise classification. It classified the four classes accurately and surpassed state-of-the-art methods such as ResNet (pixel accuracy: 0.89 ResNet18, 0.93 proposed). In conclusion, the results of our proposed systems demonstrate the potential of deep learning for efficient, reproducible, fast, and affordable disease diagnosis.
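    As a concrete illustration of the Lagrangian strain step described in the third application, a minimal sketch is given below; the function name and its two-point-set interface are illustrative assumptions, not the dissertation's actual code.

```python
import numpy as np

def lagrangian_strain(ref_points, def_points):
    """Segmental Lagrangian strain between two sets of tracked LV wall points.

    ref_points, def_points: (N, 2) arrays of corresponding wall points at the
    reference (e.g., end-diastolic) frame and at a later frame of the cycle.
    For each contour segment, strain = (L - L0) / L0, where L0 and L are the
    segment lengths before and after deformation.
    """
    l0 = np.linalg.norm(np.diff(ref_points, axis=0), axis=1)  # reference segment lengths
    l = np.linalg.norm(np.diff(def_points, axis=0), axis=1)   # deformed segment lengths
    return (l - l0) / l0
```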

    Left ventricle segmentation and quantification using deep learning

    Cardiac MRI is a widely used noninvasive tool that provides an evaluation of cardiac anatomy and function and can also be used for heart diagnosis. Heart diagnosis through the estimation of physiological heart parameters requires careful segmentation of the left ventricle (LV) from cardiac MR images. Therefore, we aim to build a new deep learning method for the automated delineation and quantification of the LV from cine cardiac MRI. Our goal is to achieve lower errors in the calculated heart parameters than previous works by introducing a new deep learning cardiac segmentation method. Our pipeline starts with accurate LV localization, finding the LV cavity center point using a fully convolutional neural network (FCN) model called FCN1. Then, from all heart sections, we extract a region of interest (ROI) that encompasses the LV. The LV cavity and myocardium are then segmented from the extracted ROIs using a second FCN, called FCN2. The FCN2 model has multiple bottleneck layers and a smaller memory footprint than traditional models such as U-net. Furthermore, we introduce a novel loss function, the radial loss, which minimizes the distance between the ground-truth and predicted LV contours. After myocardial segmentation, we estimate the functional and mass parameters of the LV. We used the Automated Cardiac Diagnosis Challenge (ACDC-2017) dataset to validate our pipeline, which provided better segmentation, accurate calculation of heart parameters, and fewer errors than other approaches applied to the same dataset. Additionally, we showed that our segmentation approach generalizes well across different datasets by validating its performance on a locally collected cardiac dataset. To sum up, we propose a novel deep learning framework that can be translated into a clinical tool for cardiac diagnosis.
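    To make the FCN1-to-ROI-to-FCN2 hand-off concrete, here is a minimal sketch of cropping an ROI around the predicted LV center; the 128-pixel crop size and the border-clamping behavior are illustrative assumptions, not the pipeline's actual settings.

```python
import numpy as np

def extract_roi(image, center, size=128):
    """Crop a square ROI around the LV cavity center predicted by the localization network.

    image:  2-D short-axis slice (NumPy array).
    center: (row, col) of the predicted LV center point.
    size:   ROI side length in pixels (illustrative value).
    """
    half = size // 2
    # Clamping keeps the crop inside the image even near the borders.
    r0 = min(max(int(center[0]) - half, 0), image.shape[0] - size)
    c0 = min(max(int(center[1]) - half, 0), image.shape[1] - size)
    return image[r0:r0 + size, c0:c0 + size]
```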

    Predicting the Level of Respiratory Support in COVID-19 Patients Using Machine Learning

    In this paper, a machine learning-based system for the prediction of the required level of respiratory support in COVID-19 patients is proposed. The level of respiratory support is divided into three classes: class 0, which refers to minimal support; class 1, which refers to non-invasive support; and class 2, which refers to invasive support. A two-stage classification system is built: first, classification between class 0 and the other classes is performed; then, classification between class 1 and class 2 is performed. The system is built using a dataset collected retrospectively from 3491 patients admitted to tertiary care hospitals at the University of Louisville Medical Center. The use of a feature selection method based on analysis of variance is demonstrated in the paper, and a dimensionality reduction method, principal component analysis, is used. The XGBoost classifier achieves the best classification accuracy (84%) in the first stage and also performs best in the second stage, with a classification accuracy of 83%.
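    A minimal sketch of such a two-stage pipeline with ANOVA-based feature selection, PCA, and XGBoost is shown below; the numbers of selected features and principal components are illustrative assumptions, as the abstract does not state them.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from xgboost import XGBClassifier

def make_stage():
    # ANOVA-based feature selection, PCA dimensionality reduction, then XGBoost.
    return Pipeline([
        ("anova", SelectKBest(f_classif, k=20)),   # k is an assumed value
        ("pca", PCA(n_components=10)),             # assumed component count
        ("xgb", XGBClassifier(eval_metric="logloss")),
    ])

stage1 = make_stage()  # class 0 (minimal support) vs. classes 1 and 2
stage2 = make_stage()  # class 1 (non-invasive) vs. class 2 (invasive),
                       # trained only on patients needing some support
```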

    Automatic segmentation and functional assessment of the left ventricle using u-net fully convolutional network

    © 2019 IEEE. A new method for the automatic segmentation and quantitative assessment of the left ventricle (LV) is proposed in this paper. The method is composed of two steps. First, a fully convolutional U-net is used to segment the epi- and endocardial boundaries of the LV from cine MR images. This step incorporates a novel loss function that accounts for the class imbalance problem caused by the binary cross-entropy (BCE) loss function: our loss maximizes the segmentation accuracy while penalizing the effect of the class imbalance introduced by BCE. In the second step, ventricular volume curves are constructed, from which the LV functional parameter (i.e., ejection fraction) is estimated. Our method achieved statistically significant improvements in the segmentation of the epi- and endocardial boundaries (Dice scores of 0.94 and 0.96, respectively) compared with the BCE loss (Dice scores of 0.89 and 0.86, respectively). Furthermore, a high positive correlation of 0.97 between the estimated ejection fraction and the gold standard was obtained.
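    For the second step, the ejection fraction follows directly from the ventricular volume curve; a minimal sketch (a generic formulation, not the paper's code) is:

```python
def ejection_fraction(volumes):
    """Ejection fraction (%) from a ventricular volume curve, one value per frame.

    EF = (EDV - ESV) / EDV * 100, where EDV and ESV are the maximum
    (end-diastolic) and minimum (end-systolic) volumes over the cardiac cycle.
    """
    edv, esv = max(volumes), min(volumes)
    return (edv - esv) / edv * 100.0
```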

    A Novel Deep Learning Approach for Left Ventricle Automatic Segmentation in Cardiac Cine MR

    © 2019 IEEE. Cardiac magnetic resonance imaging provides a way to analyze the heart's function. Through segmentation of the left ventricle from cardiac cine images, physiological parameters can be obtained. However, manual segmentation of the left ventricle requires significant time and effort, so automated segmentation is the desired and practical alternative. This paper introduces a novel framework for the automated segmentation of the epi- and endocardial walls of the left ventricle, directly from the cardiac images, using a fully convolutional neural network similar to the U-net. There is an acute class imbalance in cardiac images because left ventricle tissue comprises a very small proportion of each image. This imbalance negatively affects the learning process of the network by biasing it toward the majority class. To overcome the class imbalance problem, we introduce a novel loss function into our framework in place of the traditional binary cross-entropy loss, which causes learning bias in the model. Our new loss maximizes the overall accuracy while penalizing the learning bias caused by binary cross-entropy. Our method obtained promising segmentation accuracies for the epi- and endocardial walls (Dice 0.94 and 0.96, respectively) compared with the traditional loss (Dice 0.89 and 0.87, respectively).
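    The abstract does not give the closed form of the proposed loss; as a hedged illustration of the kind of overlap-based term commonly used against this class imbalance (a standard soft Dice loss, not the authors' loss), consider:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask.

    Unlike BCE, the Dice overlap does not reward predicting the dominant
    background class, which is why overlap-based terms are a common remedy
    for the imbalance described above. Standard formulation only.
    """
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
```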

    A pyramidal deep learning pipeline for kidney whole-slide histology images classification

    Renal cell carcinoma is the most common type of kidney cancer. There are several subtypes of renal cell carcinoma with distinct clinicopathologic features. Among the subtypes, clear cell renal cell carcinoma is the most common and tends to portend a poor prognosis. In contrast, clear cell papillary renal cell carcinoma has an excellent prognosis. These two subtypes are primarily classified based on histopathologic features; however, a subset of cases can have a significant degree of histopathologic overlap. In cases with ambiguous histologic features, the correct diagnosis depends on the pathologist's experience and the use of immunohistochemistry. We propose a new method to address this diagnostic task based on a deep learning pipeline for automated classification. The model can detect tumor and non-tumoral portions of the kidney and classify the tumor as either clear cell renal cell carcinoma or clear cell papillary renal cell carcinoma. Our framework consists of three convolutional neural networks; the whole-slide kidney images are divided into patches of three different sizes for input to the networks. Our approach provides patchwise and pixelwise classification. The kidney histology dataset consists of 64 whole-slide images. Our framework produces an image map that classifies the slide at the pixel level, and we applied generalized Gauss-Markov random field smoothing to maintain consistency in the map. Our approach classified the four classes accurately and surpassed other state-of-the-art methods, such as ResNet (pixel accuracy: 0.89 ResNet18, 0.92 proposed). We conclude that deep learning has the potential to augment the pathologist's capabilities by providing automated classification for histopathological images.
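    A minimal sketch of the multi-scale patch extraction that feeds the three networks is given below; the patch sizes are illustrative assumptions, since the abstract does not state them.

```python
def multiscale_patches(slide, center, sizes=(64, 128, 256)):
    """Extract co-centered square patches at three scales from a slide region.

    slide:  2-D (or 2-D + channels) array of a whole-slide image region.
    center: (row, col) shared center for all patches.
    sizes:  patch side lengths; illustrative values only.
    """
    r, c = center
    patches = []
    for s in sizes:
        half = s // 2
        patches.append(slide[r - half:r + half, c - half:c + half])
    return patches
```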

    A deep learning-based approach for automatic segmentation and quantification of the left ventricle from cardiac cine MR images

    © 2020 Elsevier Ltd. Cardiac MRI has been widely used for noninvasive assessment of cardiac anatomy and function as well as heart diagnosis. Estimating physiological heart parameters for heart diagnosis essentially requires accurate segmentation of the left ventricle (LV) from cardiac MRI. Therefore, we propose a novel deep learning approach for the automated segmentation and quantification of the LV from cardiac cine MR images, aiming to achieve lower errors in the estimated heart parameters than previous studies. Our framework starts with accurate localization of the LV blood-pool center point using a fully convolutional neural network (FCN) architecture called FCN1. Then, a region of interest (ROI) that contains the LV is extracted from all heart sections. The extracted ROIs are used for segmentation of the LV cavity and myocardium via a novel FCN architecture called FCN2. The FCN2 network has several bottleneck layers and a smaller memory footprint than conventional architectures such as U-net. Furthermore, a new loss function, the radial loss, which minimizes the distance between the predicted and true LV contours, is introduced into our model. Following myocardial segmentation, functional and mass parameters of the LV are estimated. The Automated Cardiac Diagnosis Challenge (ACDC-2017) dataset was used to validate our framework, which gave better segmentation, accurate estimation of cardiac parameters, and lower errors than other methods applied to the same dataset. Furthermore, we showed that our segmentation approach generalizes well across different datasets by testing its performance on a locally acquired dataset. To sum up, we propose a deep learning approach that can be translated into a clinical tool for heart diagnosis.
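    One illustrative reading of a contour-distance ("radial") term, comparing radii of the predicted and true contours around a common center, is sketched below; this is an assumption about the general idea, not the paper's exact loss.

```python
import numpy as np

def mean_radial_distance(pred_contour, true_contour, center, n_rays=64):
    """Mean absolute difference between the radii of two LV contours.

    Both contours are (N, 2) point arrays. Each contour is reduced to an
    average radius per angular bin around the shared center, and the radii
    of matching bins are compared. Illustrative only.
    """
    edges = np.linspace(-np.pi, np.pi, n_rays + 1)

    def binned_radii(contour):
        vec = np.asarray(contour, dtype=float) - np.asarray(center, dtype=float)
        ang = np.arctan2(vec[:, 1], vec[:, 0])
        rad = np.linalg.norm(vec, axis=1)
        bins = np.clip(np.digitize(ang, edges) - 1, 0, n_rays - 1)
        return np.array([rad[bins == b].mean() if np.any(bins == b) else 0.0
                         for b in range(n_rays)])

    return np.mean(np.abs(binned_radii(pred_contour) - binned_radii(true_contour)))
```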

    Left ventricle segmentation for cine MR using deep learning

    Functional analysis of the heart can be performed using cardiac magnetic resonance imaging, and cine imaging is the most widely used cardiac modality. After careful segmentation of the left ventricle (LV), physiological heart indexes can be calculated to evaluate heart function. However, the physician spends significant time and effort segmenting the LV, so an automated LV segmentation tool is needed to save time and effort. This chapter proposes a new approach for the automatic segmentation of the epicardial and endocardial boundaries of the LV. The segmentation is performed on the original cine cardiac images using a fully convolutional neural network known as the U-net. Cardiac images suffer from a severe class imbalance problem: the LV region comprises a tiny proportion of the image compared with the background. Because the background represents the majority class, the network becomes biased toward it during learning. To avoid the class imbalance problem, we introduce a new loss function into our network. We do not use the traditional binary cross-entropy loss alone, because it encourages learning bias in the framework; instead, we modify the loss function to maximize accuracy while reducing the learning bias caused by binary cross-entropy. Our approach yields good segmentation performance for the epi- and endocardial boundaries (Dice 0.941 and 0.962, respectively) compared with the conventional loss function (Dice 0.893 and 0.872, respectively).
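    To see why the imbalance matters numerically, a toy calculation with assumed dimensions (a 256 x 256 slice and a 40 x 40 LV region, both hypothetical) shows that an all-background prediction already scores very high pixel accuracy:

```python
import numpy as np

# Hypothetical 256 x 256 slice where the LV occupies a 40 x 40 region.
mask = np.zeros((256, 256), dtype=bool)
mask[108:148, 108:148] = True          # assumed LV region
all_background = np.zeros_like(mask)   # a network biased to the majority class
accuracy = np.mean(all_background == mask)
print(f"all-background pixel accuracy: {accuracy:.4f}")  # ~0.9756
```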

    A deep learning framework for automated classification of histopathological kidney whole-slide images

    Background: Renal cell carcinoma is the most common type of malignant kidney tumor and is responsible for 14,830 deaths per year in the United States. Among the four most common subtypes of renal cell carcinoma, clear cell renal cell carcinoma has the worst prognosis, whereas clear cell papillary renal cell carcinoma appears to have no malignant potential. Distinguishing between these two subtypes can be difficult due to morphologic overlap on examination of histopathological preparations stained with hematoxylin and eosin. Ancillary techniques, such as immunohistochemistry, can be helpful, but they are not universally available. We propose and evaluate a new deep learning framework for the tumor classification task of distinguishing clear cell renal cell carcinoma from clear cell papillary renal cell carcinoma. Methods: Our deep learning framework is composed of three convolutional neural networks. We divided whole-slide kidney images into patches of three different sizes, with each network processing a specific patch size. Our framework provides patchwise and pixelwise classification. The histopathological kidney data comprise 64 image slides belonging to 4 categories: fat, parenchyma, clear cell renal cell carcinoma, and clear cell papillary renal cell carcinoma. The final output of our framework is an image map in which each pixel is classified into one class. To maintain consistency, we processed the map with Gauss-Markov random field smoothing. Results: Our framework succeeded in classifying the four classes and showed superior performance compared to well-established state-of-the-art methods (pixel accuracy: 0.89 ResNet18, 0.92 proposed). Conclusions: Deep learning techniques have significant potential for cancer diagnosis.
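    As a rough stand-in for the map-smoothing step, the sketch below applies simple local majority voting to a pixel-wise class map; the paper itself uses generalized Gauss-Markov random field smoothing, which is not reproduced here, and the window size is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def majority_smooth(label_map, n_classes=4, window=15):
    """Smooth a pixel-wise class map by local majority voting.

    label_map: 2-D integer array of per-pixel class labels (0..n_classes-1).
    window:    side of the square voting neighborhood (assumed value).
    A simple stand-in for spatial regularization, not the generalized
    Gauss-Markov random field smoothing used in the paper.
    """
    # Per-class local vote shares via a box filter over one-hot planes.
    votes = np.stack([uniform_filter((label_map == k).astype(float), size=window)
                      for k in range(n_classes)], axis=0)
    return np.argmax(votes, axis=0)
```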