
    Integrated Graph Theoretic, Radiomics, and Deep Learning Framework for Personalized Clinical Diagnosis, Prognosis, and Treatment Response Assessment of Body Tumors

    Purpose: A new paradigm is emerging in radiology with the advent of increased computational capabilities and algorithms. The future of radiological reading rooms is heading towards a close collaboration between computer scientists and radiologists. The goal of computational radiology is to probe the underlying tissue using advanced algorithms and imaging parameters and to produce a personalized diagnosis that can be correlated with pathology. This thesis presents a complete computational radiology framework (I-GRAD) for personalized clinical diagnosis, prognosis, and treatment planning using an integration of graph theory, radiomics, and deep learning.
    Methods: The I-GRAD framework has three major components: image segmentation, feature extraction, and clinical decision support. Image segmentation: I developed the multiparametric deep learning (MPDL) tissue signature model for segmentation of normal and abnormal tissue from multiparametric (mp) radiological images. The MPDL segmentation network was constructed from stacked sparse autoencoders (SSAE) with five hidden layers. The MPDL network parameters were optimized using k-fold cross-validation, and the network was additionally tested on an independent dataset. Feature extraction: I developed the radiomic feature mapping (RFM) and contribution scattergram (CSg) methods for characterizing spatial and inter-parametric relationships in multiparametric imaging datasets. The radiomic feature maps were created by filtering radiological images with first- and second-order statistical texture filters, followed by the development of standardized features for radiological correlation to biology and clinical decision support. The contribution scattergram was constructed to visualize and understand the inter-parametric relationships of breast MRI as a complex network; this multiparametric imaging network was modeled using manifold learning and evaluated with graph theoretic analysis. Feature integration: The clinical and radiological features extracted from multiparametric radiological images and clinical records were integrated using a hybrid multiview manifold learning technique termed the Informatics Radiomics Integration System (IRIS). IRIS uses hierarchical clustering in combination with manifold learning to visualize the high-dimensional patient space on a two-dimensional heatmap, which highlights the similarity and dissimilarity between patients and variables.
    Results: All the algorithms and techniques presented in this dissertation were developed and validated using breast cancer as a model for diagnosis and prognosis with multiparametric breast magnetic resonance imaging (MRI). The MPDL deep learning method achieved excellent Dice similarity coefficients of 0.87±0.05 and 0.84±0.07 for segmentation of malignant and benign breast lesions, respectively. Each of the methods, MPDL, RFM, and CSg, demonstrated excellent results for breast cancer diagnosis, with areas under the receiver operating characteristic (ROC) curve (AUC) of 0.85, 0.91, and 0.87, respectively. Furthermore, IRIS separated patients at low risk of breast cancer recurrence from patients at medium and high risk with an AUC of 0.93, compared with OncotypeDX, a 21-gene assay for breast cancer recurrence.
    Conclusion: By integrating advanced computer science methods into the radiological setting, the I-GRAD framework presented in this thesis can be used to model radiological imaging data in combination with clinical and histopathological data, producing new tools for personalized diagnosis, prognosis, and treatment planning by physicians.
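    As a rough illustration of the segmentation component, the sketch below (Python with Keras, one possible toolkit choice) wires up a small stacked sparse autoencoder-style classifier over per-voxel multiparametric signatures, together with a Dice helper of the kind used to report the segmentation results. The layer widths, the L1 activity penalty standing in for the sparsity constraint, and the synthetic data shapes are illustrative assumptions, not the actual MPDL configuration.

    import numpy as np
    from tensorflow.keras import layers, regularizers, Model

    def build_ssae_classifier(n_params=5, hidden=(64, 48, 32, 16, 8), n_classes=2):
        # Five sparse hidden layers; an L1 activity penalty approximates the sparsity constraint.
        inp = layers.Input(shape=(n_params,))          # one multiparametric voxel signature
        x = inp
        for width in hidden:
            x = layers.Dense(width, activation="relu",
                             activity_regularizer=regularizers.l1(1e-4))(x)
        out = layers.Dense(n_classes, activation="softmax")(x)  # tissue class per voxel
        model = Model(inp, out)
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    def dice(mask_a, mask_b):
        # Dice similarity coefficient between two binary segmentation masks.
        inter = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * inter / (mask_a.sum() + mask_b.sum())

    # Hypothetical per-voxel data: one row per voxel, one column per MR parameter.
    X = np.random.rand(10000, 5).astype("float32")
    y = np.random.randint(0, 2, size=10000)
    model = build_ssae_classifier()
    model.fit(X, y, epochs=2, batch_size=256, validation_split=0.2, verbose=0)
    pred = model.predict(X, verbose=0).argmax(axis=1)
    print("Dice vs. labels on the synthetic data:", round(dice(pred.astype(bool), y.astype(bool)), 3))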

    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly reduce recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
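    A minimal sketch of the robustness argument, assuming a hypothetical one-dimensional feature stream: fitting a Gaussian and a Student's t-distribution (here via scipy, an arbitrary tooling choice) to the same outlier-contaminated samples shows the Gaussian location estimate being dragged towards the outliers while the t location stays near the clean data. This illustrates the distributional property only, not the paper's HMM implementation.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical 1-D feature stream: clean samples around 0 plus a few gross outliers.
    clean = rng.normal(loc=0.0, scale=1.0, size=480)
    outliers = rng.normal(loc=15.0, scale=1.0, size=20)
    x = np.concatenate([clean, outliers])

    # Gaussian fit: the mean is pulled towards the outliers.
    mu_g, sigma_g = stats.norm.fit(x)

    # Student's t fit: heavy tails absorb the outliers, so the location stays close to 0.
    df_t, loc_t, scale_t = stats.t.fit(x)

    print(f"Gaussian   : mean={mu_g:.2f}, std={sigma_g:.2f}")
    print(f"Student's t: df={df_t:.1f}, loc={loc_t:.2f}, scale={scale_t:.2f}")

    Within an HMM, the same effect applies per state: replacing Gaussian (mixture) observation densities with t-mixtures limits the influence of outlying feature vectors on the state likelihoods.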

    Semantic Assisted, Multiresolution Image Retrieval in 3D Brain MR Volumes

    Content-Based Image Retrieval (CBIR) is an important research area in the field of multimedia information retrieval. The application of CBIR in the medical domain has been attempted before; however, the use of CBIR in medical diagnostics is a daunting task. The goal of diagnostic medical image retrieval is to provide diagnostic support by displaying relevant past cases, along with proven pathologies as ground truths. Moreover, medical image retrieval can be extremely useful as a training tool for medical students and residents, for follow-up studies, and for research purposes. Despite an impressive amount of research in the area of CBIR, its acceptance for mainstream and practical applications is quite limited. Research in CBIR has mostly been conducted as an academic pursuit rather than as the solution to a need. For example, many researchers proposed CBIR systems whose image databases consist of a heterogeneous mixture of man-made objects and natural scenes, while ignoring the practical uses of such systems. Furthermore, the intended use of a CBIR system is important in addressing the problem of the "Semantic Gap": the requirements for semantics in an image retrieval system for pathological applications are quite different from those intended for training and education. Many researchers have also underestimated the level of accuracy required for a useful and practical image retrieval system. The human eye is extremely dexterous and efficient in visual information processing; consequently, CBIR systems should be highly precise in image retrieval to be useful to human users. Unsurprisingly, for these and other reasons, most of the proposed systems have not found useful real-world applications.
    In this dissertation, an attempt is made to address the challenging problem of developing a retrieval system for medical diagnostics applications. More specifically, a system for semantic retrieval of Magnetic Resonance (MR) images in 3D brain volumes is proposed. The proposed retrieval system has the potential to be useful for clinical experts where the human eye may fail. Previously proposed systems used imprecise segmentation and feature extraction techniques, which are not suitable for the precise matching requirements of image retrieval in this application domain. This dissertation uses a multiscale representation for image retrieval, which is robust against noise and MR inhomogeneity. To achieve a higher degree of accuracy in the presence of misalignments, an image registration based retrieval framework is developed. Additionally, to speed up retrieval, a fast discrete wavelet based feature space is proposed. Further improvement in speed is achieved by semantically classifying the human brain into various "Semantic Regions" using an SVM based machine learning approach. A novel and fast identification system is also proposed for identifying the 3D volume to which a given 2D image slice belongs; to this end, SVM output probabilities are used for ranking and identification of patient volumes. The proposed retrieval systems are tested not only under noise conditions but also on healthy and abnormal cases, resulting in promising retrieval performance with respect to multi-modality, accuracy, speed, and robustness. This dissertation furnishes medical practitioners with a valuable set of tools for semantic retrieval of 2D images where the human eye may fail.
    Specifically, the proposed retrieval algorithms provide medical practitioners with the ability to retrieve 2D MR brain images accurately and to monitor disease progression in various lobes of the human brain, with the capability to monitor disease progression in multiple patients simultaneously. Additionally, the proposed semantic classification scheme can be extremely useful for semantic-based categorization, clustering, and annotation of images in MR brain databases. This research framework may evolve in a natural progression towards more powerful and robust retrieval systems, and it provides researchers in semantic-based retrieval with a foundation for expanding existing toolsets to solve retrieval problems.
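    The sketch below gives a rough, hypothetical picture of the slice-to-volume identification step: wavelet sub-band energies (computed here with PyWavelets) serve as a fast feature space, and SVM output probabilities rank the candidate patient volumes. The wavelet choice, slice sizes, and synthetic labels are assumptions for illustration, not the dissertation's actual feature design.

    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def wavelet_features(slice_2d, wavelet="haar", level=2):
        # Compact feature vector: energy of each wavelet sub-band of a 2D MR slice.
        coeffs = pywt.wavedec2(slice_2d, wavelet=wavelet, level=level)
        feats = [np.mean(coeffs[0] ** 2)]                   # approximation energy
        for (cH, cV, cD) in coeffs[1:]:                     # detail energies per level
            feats.extend([np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)])
        return np.array(feats)

    # Hypothetical training data: slices labelled by the patient volume they come from.
    rng = np.random.default_rng(1)
    slices = rng.random((60, 64, 64))           # 60 slices of 64x64
    volume_ids = np.repeat(np.arange(6), 10)    # 6 patient volumes, 10 slices each

    X = np.stack([wavelet_features(s) for s in slices])
    clf = SVC(kernel="rbf", probability=True).fit(X, volume_ids)

    # Identification of a query slice: rank candidate volumes by SVM output probability.
    query = wavelet_features(rng.random((64, 64))).reshape(1, -1)
    probs = clf.predict_proba(query)[0]
    ranking = clf.classes_[np.argsort(probs)[::-1]]
    print("Volumes ranked by probability:", ranking)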

    A graph-based approach for the retrieval of multi-modality medical images

    Medical imaging has revolutionised modern medicine and is now an integral aspect of diagnosis and patient monitoring. The development of new imaging devices for a wide variety of clinical cases has spurred an increase in the data volume acquired in hospitals. These large data collections offer opportunities for search-based applications in evidence-based diagnosis, education, and biomedical research. However, conventional search methods that operate upon manual annotations are not feasible at this data volume. Content-based image retrieval (CBIR) is an image search technique that uses automatically derived visual features as search criteria and has demonstrable clinical benefits. However, very few studies have investigated CBIR of multi-modality medical images, which are having a major impact in healthcare, e.g., combined positron emission tomography and computed tomography (PET-CT) for cancer diagnosis. In this thesis, we propose a new graph-based method for the CBIR of multi-modality medical images. We derive a graph representation that emphasises the spatial relationships between modalities by structurally constraining the graph based on image features, e.g., spatial proximity of tumours and organs. We also introduce a graph similarity calculation algorithm that prioritises the relationships between tumours and related organs. To enable effective human interpretation of retrieved multi-modality images, we also present a user interface that displays graph abstractions alongside complex multi-modality images. Our results demonstrated that our method achieved high precision when retrieving images on the basis of tumour location within organs. The evaluation of our proposed UI design by user surveys revealed that it improved the ability of users to interpret and understand the similarity between retrieved PET-CT images. The work in this thesis advances the state of the art by enabling a novel approach to the retrieval of multi-modality medical images.
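    As a toy illustration of the general idea, the sketch below builds attributed graphs with networkx, with nodes for tumours and organs and edges carrying spatial proximity, and scores pairs of cases while weighting tumour-organ edges more heavily. The node labels, weights, and the shortcut of matching regions by name across cases are illustrative assumptions, not the thesis's actual graph construction or similarity algorithm.

    import networkx as nx

    def pet_ct_graph(regions, adjacencies):
        # Attributed graph: nodes are tumours/organs, edges carry normalised spatial proximity.
        g = nx.Graph()
        for name, kind in regions:                 # kind: "tumour" or "organ"
            g.add_node(name, kind=kind)
        for a, b, dist in adjacencies:             # dist in [0, 1]
            g.add_edge(a, b, dist=dist)
        return g

    def similarity(g1, g2, w_tumour=2.0, w_other=1.0):
        # Crude similarity that weights shared tumour-organ relationships more heavily.
        score, norm = 0.0, 0.0
        for a, b, d in g1.edges(data=True):
            kinds = {g1.nodes[a]["kind"], g1.nodes[b]["kind"]}
            w = w_tumour if kinds == {"tumour", "organ"} else w_other
            norm += w
            if g2.has_edge(a, b):
                score += w * (1.0 - abs(d["dist"] - g2[a][b]["dist"]))  # closer layout -> higher score
        return score / norm if norm else 0.0

    # Hypothetical query and database cases with matched anatomical labels.
    q = pet_ct_graph([("lung_L", "organ"), ("tumour_1", "tumour")], [("lung_L", "tumour_1", 0.1)])
    db = pet_ct_graph([("lung_L", "organ"), ("tumour_1", "tumour")], [("lung_L", "tumour_1", 0.3)])
    print(f"similarity = {similarity(q, db):.2f}")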

    Reasoning with Uncertainty in Deep Learning for Safer Medical Image Computing

    Deep learning is now ubiquitous in the research field of medical image computing. As such technologies progress towards clinical translation, the question of safety becomes critical. Once deployed, machine learning systems unavoidably face situations where the correct decision or prediction is ambiguous. However, current methods disproportionately rely on deterministic algorithms, lacking a mechanism to represent and manipulate uncertainty. In safety-critical applications such as medical imaging, reasoning under uncertainty is crucial for developing a reliable decision-making system. Probabilistic machine learning provides a natural framework to quantify the degree of uncertainty over different variables of interest, be it the prediction, the model parameters and structures, or the underlying data (images and labels). Probability distributions are used to represent all the uncertain unobserved quantities in a model and how they relate to the data, and probability theory is used as a language to compute and manipulate these distributions. In this thesis, we explore probabilistic modelling as a framework to integrate uncertainty information into deep learning models, and demonstrate its utility in various high-dimensional medical imaging applications. In the process, we make several fundamental enhancements to current methods. We categorise our contributions into three groups according to the types of uncertainties being modelled: (i) predictive, (ii) structural, and (iii) human uncertainty.
    Firstly, we discuss the importance of quantifying predictive uncertainty and understanding its sources for developing a risk-averse and transparent medical image enhancement application. We demonstrate how a measure of predictive uncertainty can be used as a proxy for predictive accuracy in the absence of ground truths. Furthermore, assuming the structure of the model is flexible enough for the task, we introduce a way to decompose the predictive uncertainty into its orthogonal sources, i.e., aleatoric and parameter uncertainty. We show the potential utility of such decoupling in providing quantitative “explanations” of model performance.
    Secondly, we introduce our recent attempts at learning model structures directly from data. One work proposes a method based on variational inference to learn a posterior distribution over connectivity structures within a neural network architecture for multi-task learning, and shares some preliminary results in the MR-only radiotherapy planning application. Another work explores how the training algorithm of decision trees could be extended to grow the architecture of a neural network to adapt to the given availability of data and the complexity of the task.
    Lastly, we develop methods to model the “measurement noise” (e.g., biases and skill levels) of human annotators, and integrate this information into the learning process of the neural network classifier. In particular, we show that explicitly modelling the uncertainty involved in the annotation process not only leads to an improvement in robustness to label noise, but also yields useful insights into the patterns of errors that characterise individual experts.
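    One common way to realise the predictive-uncertainty decomposition described above is Monte Carlo sampling from a stochastic classifier (e.g., MC dropout). The numpy sketch below operates on simulated per-pass class probabilities and splits the predictive entropy into an expected-entropy (aleatoric) term and a mutual-information (parameter/epistemic) term; it illustrates the entropy-based decomposition only, not necessarily the thesis's specific estimator.

    import numpy as np

    def decompose_uncertainty(mc_probs, eps=1e-12):
        # mc_probs: shape (T, C) -- class probabilities from T stochastic forward
        # passes (e.g., MC dropout) for a single input with C classes.
        mean_p = mc_probs.mean(axis=0)
        total = -np.sum(mean_p * np.log(mean_p + eps))                            # predictive entropy
        aleatoric = -np.mean(np.sum(mc_probs * np.log(mc_probs + eps), axis=1))   # expected entropy
        epistemic = total - aleatoric                                             # mutual information
        return total, aleatoric, epistemic

    # Hypothetical MC samples: the model keeps changing its mind -> large epistemic term.
    rng = np.random.default_rng(0)
    samples = rng.dirichlet(alpha=[0.3, 0.3], size=50)
    print("total={:.3f} aleatoric={:.3f} epistemic={:.3f}".format(*decompose_uncertainty(samples)))

    The mutual-information term grows when the stochastic passes disagree with one another, the signature of parameter uncertainty, while the expected-entropy term grows when each pass is itself uncertain, the signature of inherent ambiguity in the data.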

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    Libro de actas (Book of Proceedings), XXXV Congreso Anual de la Sociedad Española de Ingeniería Biomédica

    596 p. CASEIB2017 is once again the national reference forum for the scientific exchange of knowledge and experience and for the promotion of R&D&I in biomedical engineering. It is a meeting point for scientists, industry professionals, biomedical engineers, and clinical professionals interested in the latest developments in research, education, and the industrial and clinical application of biomedical engineering. In this edition, more than 160 high-level scientific papers will be presented in relevant areas of biomedical engineering, such as signal and image processing, biomedical instrumentation, telemedicine, modelling of biomedical systems, intelligent systems and sensors, robotics, surgical planning and simulation, biophotonics, and biomaterials. Of particular note are the sessions devoted to the competition for the Premio José María Ferrero Corral and the competition session for students of the Biomedical Engineering degree, both of which aim to encourage the participation of young students and researchers.