
    Spatio-Temporal Modelling of Perfusion Cardiovascular MRI

    Myocardial perfusion MRI provides valuable insight into how coronary artery and microvascular diseases affect myocardial tissue. Stenosis in a coronary vessel leads to reduced maximum blood flow (MBF), but collateral vessels may secure the blood supply of the myocardium, albeit with altered tracer kinetics. To date, quantitative analysis of myocardial perfusion MRI has only been performed at a local level, largely ignoring the contextual information inherent in different myocardial segments. This paper proposes to quantify the spatial dependencies between the local kinetics via a Hierarchical Bayesian Model (HBM). In the proposed framework, all local systems are modelled simultaneously along with their dependencies, thus allowing more robust, context-driven estimation of local kinetics. Detailed validation on both simulated and patient data is provided.
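
    The context-driven idea can be illustrated with a small sketch: noisy per-segment kinetic estimates are regularised by coupling neighbouring myocardial segments through a Gaussian Markov random field prior and solving for the maximum a posteriori values. This is only a toy stand-in for the hierarchical Bayesian model described above; the ring-shaped segment layout, noise level sigma, and coupling strength tau are illustrative assumptions, not values from the paper.

```python
import numpy as np

def map_segment_parameters(y, adjacency, sigma=0.15, tau=0.10):
    """MAP estimate of per-segment kinetic parameters given noisy local
    estimates y and an adjacency matrix coupling neighbouring segments
    (Gaussian likelihood with std sigma, GMRF smoothness prior with std tau)."""
    n = len(y)
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency   # graph Laplacian
    # Posterior precision = data precision + prior (smoothness) precision
    A = np.eye(n) / sigma**2 + laplacian / tau**2
    return np.linalg.solve(A, y / sigma**2)

# Toy example: 6 myocardial segments arranged in a ring (illustrative layout)
ring = np.zeros((6, 6))
for s in range(6):
    ring[s, (s + 1) % 6] = ring[(s + 1) % 6, s] = 1.0
noisy_flow = np.array([0.9, 0.85, 0.4, 0.45, 0.8, 0.95])  # toy local estimates
print(map_segment_parameters(noisy_flow, ring))
```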

    Role of deep learning techniques in non-invasive diagnosis of human diseases.

    Machine learning, a sub-discipline of artificial intelligence, concentrates on algorithms able to learn and/or adapt their structure (e.g., parameters) based on a set of observed data. The adaptation is performed by optimizing a cost function. Machine learning has attracted great attention in the biomedical community because it offers promise for improving the sensitivity and/or specificity of disease detection and diagnosis. It can also increase the objectivity of decision making and decrease the time and effort required of health care professionals during disease detection and diagnosis. The potential impact of machine learning is greater than ever due to the increase in medical data being acquired, the development of novel modalities, and the complexity of medical data. In all of these scenarios, machine learning can provide new tools for interpreting the complex datasets that confront clinicians. Much of the excitement about applying machine learning to biomedical research comes from the development of deep learning, which is modeled after computation in the brain. Deep learning can help attain insights that would be impossible to obtain through manual analysis. Deep learning algorithms, and in particular convolutional neural networks, differ from traditional machine learning approaches: they are known for their ability to learn complex representations that enhance pattern recognition from raw data, whereas traditional machine learning requires human engineering and domain expertise to design feature extractors and structure the data. With increasing demands on radiologists, there is a growing need to automate diagnosis, a need that deep learning is able to address. In this dissertation, we present four successful applications of deep learning to disease diagnosis, all of which utilize medical images. In the first application, we introduce a deep-learning-based computer-aided diagnostic system for the early detection of acute renal transplant rejection. The system is based on the fusion of imaging markers (apparent diffusion coefficients derived from diffusion-weighted magnetic resonance imaging) and clinical biomarkers (creatinine clearance and serum plasma creatinine). The fused data are then used as input to train and test a convolutional neural network based classifier. The proposed system was tested on scans collected from 56 subjects from geographically diverse populations and different scanner types/image collection protocols. The overall accuracy of the proposed system is 92.9%, with 93.3% sensitivity and 92.3% specificity in distinguishing non-rejected kidney transplants from rejected ones. In the second application, we propose a novel deep learning approach for the automated segmentation and quantification of the left ventricle (LV) from cardiac cine MR images. We aimed to achieve lower errors for the estimated heart parameters than previous studies by proposing a novel deep learning segmentation method. Using fully convolutional neural networks, we proposed novel methods for extracting a region of interest that contains the left ventricle and for segmenting the left ventricle. Following myocardial segmentation, functional and mass parameters of the left ventricle are estimated.
The Automated Cardiac Diagnosis Challenge dataset was used to validate our framework, which gave better segmentation, more accurate estimation of cardiac parameters, and lower error than other methods applied to the same dataset. Furthermore, we showed that our segmentation approach generalizes well across datasets by testing its performance on a locally acquired dataset. In the third application, we propose a novel deep learning approach for the automated quantification of strain from cardiac cine MR images of mice. For strain analysis, we developed a Laplace-based approach to track the LV wall points by solving the Laplace equation between the LV contours of each two successive image frames over the cardiac cycle. Following tracking, strain estimation is performed using a Lagrangian-based approach. This new automated system for strain analysis was validated by comparing its outcomes with tagged MR images from the same mice; there were no significant differences between the strain data obtained from our algorithm using cine imaging and those from tagged MR imaging. In the fourth application, we demonstrate how a deep learning approach can be utilized for the automated classification of kidney histopathological images. Our approach can classify four classes: fat, parenchyma, clear cell renal cell carcinoma, and a recently described cancer, clear cell papillary renal cell carcinoma. Our framework consists of three convolutional neural networks; whole-slide kidney images were divided into patches of three different sizes to be input to the networks. Our approach provides both patch-wise and pixel-wise classification. It classified the four classes accurately and surpassed other state-of-the-art methods such as ResNet (pixel accuracy: 0.89 for ResNet18 versus 0.93 for the proposed method). In conclusion, the results of our proposed systems demonstrate the potential of deep learning for efficient, reproducible, fast, and affordable disease diagnosis.
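
    As a concrete illustration of the segmentation component in the second application, the sketch below shows a minimal fully convolutional network that maps a cine MR frame to per-pixel left-ventricle logits. It is a toy example under assumed layer sizes and the PyTorch framework, not the dissertation's actual architecture.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Downsample twice, then upsample back to full resolution with
    transposed convolutions, producing per-pixel class logits."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 1/2 resolution
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 1/4 resolution
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, n_classes, 2, stride=2),   # back to full size
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One 256x256 cine frame in, a 2-class per-pixel logit map out
logits = TinyFCN()(torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 2, 256, 256])
```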

    Comprehensive Brain MRI Segmentation in High Risk Preterm Newborns

    Most extremely preterm newborns exhibit cerebral atrophy/growth disturbances and white matter signal abnormalities on MRI at term-equivalent age. MRI brain volumes could serve as biomarkers for evaluating the effects of neonatal intensive care and predicting neurodevelopmental outcomes. This requires detailed, accurate, and reliable brain MRI segmentation methods. We describe our efforts to develop such methods in high-risk newborns using a combination of manual and automated segmentation tools. After intensive efforts to accurately define structural boundaries, two trained raters independently performed manual segmentation of nine subcortical structures using axial T2-weighted MRI scans from 20 randomly selected extremely preterm infants. All scans were re-segmented by both raters to assess reliability. High intra-rater reliability was achieved, as assessed by repeatability and intra-class correlation coefficients (ICC range: 0.97 to 0.99) for all manually segmented regions. Inter-rater reliability was slightly lower (ICC range: 0.93 to 0.99). A semi-automated segmentation approach was developed that combined the parametric strengths of the Hidden Markov Random Field Expectation Maximization algorithm with a non-parametric Parzen window classifier, resulting in accurate white matter, gray matter, and CSF segmentation. Final manual correction of misclassification errors improved accuracy (similarity index range: 0.87 to 0.89) and facilitated objective quantification of white matter signal abnormalities. The semi-automated and manual methods were seamlessly integrated to generate full brain segmentation within two hours. This comprehensive approach can facilitate the evaluation of large cohorts to rigorously evaluate the utility of regional brain volumes as biomarkers of neonatal care and surrogate endpoints for neurodevelopmental outcomes.
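
    The non-parametric half of the described approach can be sketched as a Parzen-window (kernel density) tissue classifier over voxel intensities, as below. The HMRF-EM spatial regularisation is omitted, and the class intensity samples are illustrative assumptions rather than values from the study.

```python
import numpy as np
from scipy.stats import gaussian_kde

def parzen_classify(intensities, class_samples, priors=None):
    """Assign each voxel intensity to the tissue class whose Parzen-window
    density, weighted by its prior probability, is highest."""
    labels = list(class_samples)
    if priors is None:
        priors = {c: 1.0 / len(labels) for c in labels}
    kdes = {c: gaussian_kde(class_samples[c]) for c in labels}
    scores = np.stack([priors[c] * kdes[c](intensities) for c in labels])
    return np.array(labels)[np.argmax(scores, axis=0)]

# Toy T2-weighted intensity samples for three tissue classes (illustrative)
rng = np.random.default_rng(1)
samples = {
    "CSF": rng.normal(220, 15, 200),
    "GM":  rng.normal(150, 12, 200),
    "WM":  rng.normal(110, 10, 200),
}
voxels = np.array([105.0, 148.0, 230.0, 132.0])
print(parzen_classify(voxels, samples))  # labels each voxel as WM, GM or CSF
```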

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.

    Analysis of contrast-enhanced medical images.

    Early detection of human organ diseases is of great importance for accurate diagnosis and the institution of appropriate therapies, and can potentially prevent progression to end-stage disease by detecting precursors through evaluation of organ functionality. It also assists clinicians in therapy evaluation, tracking disease progression, and surgical operations. Advances in functional and contrast-enhanced (CE) medical imaging have enabled accurate noninvasive evaluation of organ functionality, owing to their ability to provide superior anatomical and functional information about the tissue of interest. The main objective of this dissertation is to develop a computer-aided diagnostic (CAD) system for analyzing complex data from CE magnetic resonance imaging (MRI). The developed CAD system has been tested in three case studies: (i) early detection of acute renal transplant rejection; (ii) evaluation of myocardial perfusion in patients with ischemic heart disease after heart attack; and (iii) early detection of prostate cancer. However, developing a noninvasive CAD system for the analysis of CE medical images faces multiple challenges, including, but not limited to, image noise and inhomogeneity, nonlinear signal intensity changes of the images over the time course of data acquisition, appearance and shape changes (deformations) of the organ of interest during data acquisition, and determination of the best features (indexes) that describe the perfusion of a contrast agent (CA) into the tissue. To address these challenges, this dissertation focuses on building new mathematical models and learning techniques that facilitate accurate analysis of CA perfusion in living organs, including: (i) accurate mathematical models for the segmentation of the object of interest, which integrate object shape and appearance features in terms of pixel/voxel-wise image intensities and their spatial interactions; (ii) motion correction techniques that combine both global and local models and exploit geometric features, rather than image intensities, to avoid problems associated with nonlinear intensity variations of the CE images; and (iii) fusion of multiple features using a genetic algorithm. The proposed techniques have been integrated into CAD systems that have been tested in, but are not limited to, three clinical studies. First, a noninvasive CAD system is proposed for the early and accurate diagnosis of acute renal transplant rejection using dynamic contrast-enhanced MRI (DCE-MRI). Acute rejection, the immunological response of the human immune system to a foreign kidney, is the most severe cause of renal dysfunction among the diagnostic possibilities, which also include acute tubular necrosis and immune drug toxicity. In the U.S., approximately 17,736 renal transplants are performed annually, and given the limited number of donors, salvage of the transplanted kidney is an important medical concern. Thus far, biopsy remains the gold standard for the assessment of renal transplant dysfunction, but it is used only as a last resort because of its invasive nature, high cost, and potential morbidity. The diagnostic accuracy of the proposed CAD system, based on the analysis of 50 independent in-vivo cases, was 96% with a 95% confidence interval. These results clearly demonstrate the promise of the proposed image-based diagnostic CAD system as a supplement to current technologies, such as nuclear imaging and ultrasonography, for determining the type of kidney dysfunction.
Second, a comprehensive CAD system is developed for the characterization of myocardial perfusion and clinical status in heart failure and novel myoregeneration therapy using cardiac first-pass MRI (FP-MRI). Heart failure is considered the most important cause of morbidity and mortality in cardiovascular disease, affecting approximately 6 million U.S. patients annually. Ischemic heart disease is considered the most common underlying cause of heart failure. Therefore, detection of heart failure in its earliest forms is essential to prevent its relentless progression to premature death. While current medical studies focus on detecting pathological tissue and assessing contractile function of the diseased heart, this dissertation addresses the key issue of the effects of myoregeneration therapy on the associated blood nutrient supply. Quantitative and qualitative assessment in a cohort of 24 perfusion data sets demonstrated the ability of the proposed framework to reveal regional perfusion improvements with therapy and transmural perfusion differences across the myocardial wall; thus, it can aid in the follow-up of treatment for patients undergoing myoregeneration therapy. Finally, an image-based CAD system for early detection of prostate cancer using DCE-MRI is introduced. Prostate cancer is the most frequently diagnosed malignancy among men and remains the second leading cause of cancer-related death in the USA, with more than 238,000 new cases and about 30,000 deaths in 2013. Therefore, early diagnosis of prostate cancer can improve the effectiveness of treatment and increase the patient’s chance of survival. Currently, needle biopsy is the gold standard for the diagnosis of prostate cancer. However, it is an invasive procedure with high cost and potential morbidity, and it carries a higher possibility of producing false-positive diagnoses due to the relatively small needle biopsy samples. Application of the proposed CAD system yielded promising results in a cohort of 30 patients and could, in the near future, supplement current technologies for determining the type of prostate cancer. The developed techniques have been compared to state-of-the-art methods and demonstrated higher accuracy, as shown in this dissertation. The proposed models (higher-order spatial interaction models, shape models, motion correction models, and perfusion analysis models) can be used in many of today’s CAD applications for early detection of a variety of diseases and medical conditions, and are expected to notably improve the accuracy of CAD decisions based on the automated analysis of CE images.
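
    A minimal sketch of the perfusion-feature (index) extraction that such CE analysis relies on is given below: peak enhancement, time to peak, wash-in slope, and area under the enhancement curve are computed from a single time-intensity curve. The baseline window and the synthetic curve are illustrative assumptions, not the dissertation's acquisition protocol or feature set.

```python
import numpy as np

def perfusion_indexes(signal, times, n_baseline=3):
    """Compute simple contrast-agent kinetic indexes from one voxel/ROI
    time-intensity curve: peak enhancement, time to peak, wash-in slope
    and area under the enhancement curve."""
    baseline = signal[:n_baseline].mean()
    enh = signal - baseline
    peak = int(np.argmax(enh))
    auc = float(((enh[:-1] + enh[1:]) / 2.0 * np.diff(times)).sum())  # trapezoid rule
    return {
        "peak_enhancement": float(enh[peak]),
        "time_to_peak_s": float(times[peak] - times[0]),
        "wash_in_slope": float(enh[peak] / (times[peak] - times[0] + 1e-9)),
        "auc": auc,
    }

# Synthetic curve: flat baseline, rapid uptake after ~15 s, slow washout
t = np.arange(0.0, 120.0, 5.0)                 # seconds
dt = np.maximum(t - 15.0, 0.0)
curve = 100.0 + 80.0 * (1.0 - np.exp(-dt / 12.0)) * np.exp(-dt / 90.0)
print(perfusion_indexes(curve, t))
```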

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signals, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, while proposing certain directions as the most viable for clinical use.

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too much to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA; in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules, a manifestation of the disease. These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. Treatment of these nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of the early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer decreased functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases yield elasticity, ventilation, and texture features that provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies will lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today’s clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of ensuring the co-alignment of intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heart beats, and differences in scanning parameters, allowing accurate extraction of the functionality features of the lung fields.
The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced; it combines the previous two image processing and analysis steps with a feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov Gibbs random field (MGRF) model that accurately captures the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues’ elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage, enabling earlier intervention.
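
    The ventilation and elasticity descriptors can be sketched directly from a displacement field, as below: the Jacobian determinant of the deformation gradient serves as a local volume-change (ventilation) surrogate, and a Green-Lagrange strain is derived from the gradient of the displacement. The synthetic field and voxel spacing are illustrative assumptions; the 4D-CT registration itself is not reproduced here.

```python
import numpy as np

def functionality_maps(disp, spacing=(1.0, 1.0, 1.0)):
    """disp: displacement field of shape (3, Z, Y, X), in the same units as
    `spacing`, mapping voxels between successive respiratory phases.
    Returns per-voxel Jacobian determinant (volume-change / ventilation
    surrogate) and maximal principal Green-Lagrange strain (elasticity)."""
    # grads[i, j] = d(disp_i)/d(axis_j), each of shape (Z, Y, X)
    grads = np.stack([np.stack(np.gradient(disp[i], *spacing), axis=0)
                      for i in range(3)], axis=0)
    F = np.moveaxis(grads, (0, 1), (-2, -1)) + np.eye(3)   # deformation gradient I + grad(u)
    jac_det = np.linalg.det(F)
    E = 0.5 * (np.swapaxes(F, -1, -2) @ F - np.eye(3))     # Green-Lagrange strain tensor
    max_strain = np.linalg.eigvalsh(E)[..., -1]            # largest principal strain
    return jac_det, max_strain

# Toy field: uniform 2% expansion along the z axis (illustrative)
shape = (8, 16, 16)
z = np.arange(shape[0], dtype=float).reshape(-1, 1, 1) * np.ones(shape)
disp = np.stack([0.02 * z, np.zeros(shape), np.zeros(shape)])
jac, strain = functionality_maps(disp)
print(jac.mean(), strain.mean())   # ~1.02 and ~0.0202
```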