115 research outputs found

    Efficient Multi-Scale 3D CNN with Fully Connected CRF for Accurate Brain Lesion Segmentation

    We propose a dual-pathway, 11-layer-deep, three-dimensional Convolutional Neural Network for the challenging task of brain lesion segmentation. The devised architecture is the result of an in-depth analysis of the limitations of current networks proposed for similar applications. To overcome the computational burden of processing 3D medical scans, we have devised an efficient and effective dense training scheme which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data. Further, we analyze the development of deeper, thus more discriminative, 3D CNNs. In order to incorporate both local and larger contextual information, we employ a dual pathway architecture that processes the input images at multiple scales simultaneously. For post-processing of the network’s soft segmentation, we use a 3D fully connected Conditional Random Field which effectively removes false positives. Our pipeline is extensively evaluated on three challenging tasks of lesion segmentation in multi-channel MRI patient data with traumatic brain injuries, brain tumors, and ischemic stroke. We improve on the state-of-the-art for all three applications, with top-ranking performance on the public benchmarks BRATS 2015 and ISLES 2015. Our method is computationally efficient, which allows its adoption in a variety of research and clinical settings. The source code of our implementation is made publicly available.
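
    A minimal sketch of the dual-pathway idea described above, assuming PyTorch; it is not the authors' released implementation, and the layer widths, padding and downsampling factor are illustrative only. One pathway processes the patch at full resolution for local detail, the other a downsampled copy of the same region for wider context, and their features are fused into per-voxel class scores (the soft segmentation that the paper then refines with a fully connected CRF).

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class DualPathway3DCNN(nn.Module):
            def __init__(self, in_channels=4, n_classes=2, width=30):
                super().__init__()
                def pathway():
                    return nn.Sequential(
                        nn.Conv3d(in_channels, width, kernel_size=3, padding=1),
                        nn.PReLU(),
                        nn.Conv3d(width, width, kernel_size=3, padding=1),
                        nn.PReLU(),
                    )
                self.full_res = pathway()   # local detail
                self.low_res = pathway()    # larger spatial context
                self.classifier = nn.Conv3d(2 * width, n_classes, kernel_size=1)

            def forward(self, x):
                # Full-resolution pathway.
                hi = self.full_res(x)
                # Context pathway: downsample, process, upsample back to the same grid.
                lo = F.interpolate(x, scale_factor=1 / 3, mode='trilinear', align_corners=False)
                lo = self.low_res(lo)
                lo = F.interpolate(lo, size=hi.shape[2:], mode='trilinear', align_corners=False)
                # Fuse both pathways and produce per-voxel class scores.
                return self.classifier(torch.cat([hi, lo], dim=1))

        # Dense training predicts many adjacent voxels of a patch in a single pass:
        logits = DualPathway3DCNN()(torch.randn(1, 4, 27, 27, 27))  # -> (1, 2, 27, 27, 27)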

    Bayesian generative learning of brain and spinal cord templates from neuroimaging datasets

    In the field of neuroimaging, Bayesian modelling techniques have been widely adopted and recognised as powerful tools for extracting quantitative anatomical and functional information from medical scans. Nevertheless, the potential of Bayesian inference has not yet been fully exploited, as many available tools rely on point estimation techniques, such as maximum likelihood estimation, rather than on full Bayesian inference. The aim of this thesis is to explore the value of approximate learning schemes, such as variational Bayes, for performing inference from brain and spinal cord MRI data. The applications explored in this work mainly concern image segmentation and atlas construction, with a particular emphasis on the problem of learning shape and intensity priors from large training data sets of structural MR scans. The resulting computational tools are intended to enable integrated brain and spinal cord morphometric analyses, as opposed to the approach most commonly adopted in neuroimaging, which consists of optimising separate tools for brain and spine morphometrics.
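
    For contrast with the full Bayesian treatment advocated above, a hedged sketch of the kind of point-estimation baseline the thesis moves beyond: maximum-likelihood EM for a Gaussian mixture over voxel intensities, a standard generative segmentation model. In a variational Bayes treatment the means, variances and weights below would carry posterior distributions rather than single values. The function name and the initialisation are illustrative.

        import numpy as np

        def gmm_em(intensities, k=3, n_iter=50):
            x = np.asarray(intensities, dtype=float)
            n = x.size
            # Initialise means from quantiles, shared variance, uniform weights.
            mu = np.quantile(x, np.linspace(0.1, 0.9, k))
            var = np.full(k, x.var())
            w = np.full(k, 1.0 / k)
            for _ in range(n_iter):
                # E-step: responsibilities (posterior class probabilities per voxel).
                log_p = (-0.5 * (x[:, None] - mu) ** 2 / var
                         - 0.5 * np.log(2 * np.pi * var) + np.log(w))
                log_p -= log_p.max(axis=1, keepdims=True)
                r = np.exp(log_p)
                r /= r.sum(axis=1, keepdims=True)
                # M-step: point estimates of the parameters (what VB generalises).
                nk = r.sum(axis=0)
                mu = (r * x[:, None]).sum(axis=0) / nk
                var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
                w = nk / n
            return mu, var, w, r

        mu, var, w, resp = gmm_em(np.random.randn(10000) * 10 + 100, k=3)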

    Integrated navigation and visualisation for skull base surgery

    Skull base surgery involves the management of tumours located on the underside of the brain and the base of the skull. Skull base tumours are intricately associated with several critical neurovascular structures, making surgery challenging and high risk. Vestibular schwannoma (VS) is a benign nerve sheath tumour arising from one of the vestibular nerves and is the commonest pathology encountered in skull base surgery. The goal of modern VS surgery is maximal tumour removal whilst preserving neurological function and maintaining quality of life, but despite advanced neurosurgical techniques, facial nerve paralysis remains a potentially devastating complication of this surgery. This thesis describes the development and integration of various advanced navigation and visualisation techniques to increase the precision and accuracy of skull base surgery. A novel Diffusion Magnetic Resonance Imaging (dMRI) acquisition and processing protocol for imaging the facial nerve in patients with VS was developed to improve preoperative delineation of the facial nerve. An automated Artificial Intelligence (AI)-based framework was developed to segment VS from MRI scans. A user-friendly navigation system capable of integrating dMRI and tractography of the facial nerve, 3D tumour segmentation and intraoperative 3D ultrasound was developed and validated using an anatomically realistic acoustic phantom model of a head including the skull, brain and VS. The optical properties of five types of human brain tumour (meningioma, pituitary adenoma, schwannoma, low- and high-grade glioma) and nine different types of healthy brain tissue were examined across a wavelength spectrum of 400 nm to 800 nm in order to inform the development of an Intraoperative Hyperspectral Imaging (iHSI) system. Finally, functional and technical requirements of an iHSI system were established, and a prototype was developed and tested in a first-in-patient study.

    A Review on Brain Tumor Segmentation Based on Deep Learning Methods with Federated Learning Techniques

    Brain tumors have become a severe medical complication in recent years due to their high fatality rate. Radiologists segment tumors manually, which is time-consuming, error-prone, and expensive. In recent years, automated segmentation based on deep learning has demonstrated promising results in solving computer vision problems such as image classification and segmentation. Brain tumor segmentation has recently become a prevalent task in medical imaging, aiming to determine tumor location, size, and shape using automated methods. Many researchers have worked on various machine and deep learning approaches, most of them convolutional, to determine optimal solutions. In this review paper, we discuss the most effective segmentation techniques based on datasets that are widely used and publicly available. We also provide a survey of federated learning methodologies that can enhance global segmentation performance while preserving privacy. A comprehensive literature review of more than 100 papers is provided to generalize the most recent techniques in segmentation and multi-modality information. Finally, we concentrate on unsolved problems in brain tumor segmentation and on client-based federated model training strategies. Based on this review, future researchers should be able to identify the most promising paths to solving these issues.
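
    A hedged sketch of the client-based federated training strategy surveyed here, in the spirit of FedAvg and assuming PyTorch: each site trains a local copy of the segmentation model on its private scans, and only the model weights are sent back and averaged on the server. The function names and the size-weighted averaging are illustrative, not a specific method from the reviewed papers.

        import copy
        import torch

        def federated_round(global_model, client_loaders, local_epochs=1, lr=1e-3):
            client_states, client_sizes = [], []
            for loader in client_loaders:
                model = copy.deepcopy(global_model)      # local copy per hospital/site
                opt = torch.optim.SGD(model.parameters(), lr=lr)
                loss_fn = torch.nn.CrossEntropyLoss()
                model.train()
                for _ in range(local_epochs):
                    for images, masks in loader:         # private data never leaves the site
                        opt.zero_grad()
                        loss = loss_fn(model(images), masks)  # masks: integer class labels
                        loss.backward()
                        opt.step()
                client_states.append(model.state_dict())
                client_sizes.append(len(loader.dataset))
            # Server: average floating-point parameters, weighted by local dataset size;
            # non-float buffers are kept from the first client.
            total = float(sum(client_sizes))
            avg = copy.deepcopy(client_states[0])
            for key in avg:
                if avg[key].is_floating_point():
                    avg[key] = sum(s[key] * (n / total)
                                   for s, n in zip(client_states, client_sizes))
            global_model.load_state_dict(avg)
            return global_model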

    Image Analysis for the Life Sciences - Computer-assisted Tumor Diagnostics and Digital Embryomics

    Current research in the life sciences involves the analysis of such a huge amount of image data that automation is required. This thesis presents several ways in which pattern recognition techniques may contribute to improved tumor diagnostics and to the elucidation of vertebrate embryonic development. Chapter 1 studies an approach for exploiting spatial context to improve the estimation of metabolite concentrations from magnetic resonance spectroscopy imaging (MRSI) data, with the aim of more robust tumor detection, and compares it against a novel alternative. Chapter 2 describes a software library for training, testing and validating classification algorithms that estimate tumor probability based on MRSI. It allows flexible adaptation to changed experimental conditions, classifier comparison and quality control without the need for expertise in pattern recognition. Chapter 3 studies several models for learning tumor classifiers that account for the common unreliability of human segmentations. For the first time, models are used for this task that additionally employ the objective image information. Chapter 4 encompasses two contributions to an image analysis pipeline for automatically reconstructing zebrafish embryonic development from time-resolved microscopy: two approaches for nucleus segmentation are experimentally compared, and a procedure for tracking nuclei over time is presented and evaluated.

    Dynamical models and machine learning for supervised segmentation

    This thesis is concerned with the problem of how to outline regions of interest in medical images when the boundaries are weak or ambiguous and the region shapes are irregular. The focus on machine learning and interactivity leads to a common theme of the need to balance conflicting requirements. First, any machine learning method must strike a balance between how much it can learn and how well it generalises. Second, interactive methods must balance minimal user demand with maximal user control. To address the problem of weak boundaries, methods of supervised texture classification are investigated that do not use explicit texture features. These methods enable prior knowledge about the image to benefit any segmentation framework. A chosen dynamic contour model, based on probabilistic boundary tracking, combines these image priors with efficient modes of interaction. We show the benefits of the texture classifiers over intensity- and gradient-based image models, in both classification and boundary extraction. To address the problem of irregular region shape, we devise a new type of statistical shape model (SSM) that does not use explicit boundary features or assume high-level similarity between region shapes. First, the models are used for shape discrimination, to constrain any segmentation framework by way of regularisation. Second, the SSMs are used for shape generation, allowing probabilistic segmentation frameworks to draw shapes from a prior distribution. The generative models also include novel methods to constrain shape generation according to information from both the image and user interactions. The shape models are first evaluated in terms of discrimination capability, and shown to outperform other shape descriptors. Experiments also show that the shape models can benefit a standard type of segmentation algorithm by providing shape regularisers. We finally show how to exploit the shape models in supervised segmentation frameworks, and evaluate their benefits in user trials.
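
    For illustration only, a conventional PCA point-distribution shape model in NumPy, showing the two roles mentioned above: shape discrimination (scoring how plausible a candidate shape is under the learned prior) and shape generation (drawing new shapes from that prior). The thesis's own SSM deliberately avoids explicit boundary features, so this classical landmark-based model is a baseline for contrast, not the proposed method; the class and method names are illustrative.

        import numpy as np

        class PCAShapeModel:
            def __init__(self, training_shapes, n_modes=5):
                # training_shapes: (n_shapes, 2 * n_landmarks) flattened landmark coordinates.
                X = np.asarray(training_shapes, dtype=float)
                self.mean = X.mean(axis=0)
                U, s, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
                self.modes = Vt[:n_modes]                         # principal modes of variation
                self.var = (s[:n_modes] ** 2) / (X.shape[0] - 1)  # variance captured per mode

            def mahalanobis(self, shape):
                # Discrimination: distance of a candidate shape from the training distribution.
                b = self.modes @ (np.asarray(shape, dtype=float) - self.mean)
                return float(np.sqrt(np.sum(b ** 2 / self.var)))

            def sample(self, rng=None):
                # Generation: draw mode coefficients from the Gaussian shape prior.
                rng = rng or np.random.default_rng()
                b = rng.standard_normal(len(self.var)) * np.sqrt(self.var)
                return self.mean + b @ self.modes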

    Machine Learning in Medical Image Analysis

    Machine learning is playing a pivotal role in medical image analysis. Many algorithms based on machine learning have been applied in medical imaging to solve classification, detection, and segmentation problems. In particular, with the wide application of deep learning approaches, the performance of medical image analysis has been significantly improved. In this thesis, we investigate machine learning methods for two key challenges in medical image analysis: the first is segmentation of medical images; the second is learning with weak supervision in the context of medical imaging. The first main contribution of the thesis is a series of novel approaches for image segmentation. First, we propose a framework based on multi-scale image patches and random forests to segment small vessel disease (SVD) lesions on computed tomography (CT) images. This framework is validated in terms of spatial similarity, estimated lesion volumes, and visual score ratings, and is compared with human experts. The results show that the proposed framework performs as well as human experts. Second, we propose a generic convolutional neural network (CNN) architecture called DRINet for medical image segmentation. The DRINet approach is robust across three different types of segmentation task: multi-class cerebrospinal fluid (CSF) segmentation on brain CT images, multi-organ segmentation on abdominal CT images, and multi-class tumour segmentation on brain magnetic resonance (MR) images. Finally, we propose a CNN-based framework to segment acute ischemic lesions on diffusion weighted (DW)-MR images, where the lesions are highly variable in terms of position, shape, and size. Promising results were achieved on a large clinical dataset. The second main contribution of the thesis is two novel strategies for learning with weak supervision. First, we propose a strategy called context restoration to make use of images without annotations. The context restoration strategy is a proxy learning process based on a CNN, which extracts semantic features from images without using annotations. It was validated on classification, localization, and segmentation problems and was superior to existing strategies. Second, we propose a patch-based framework using multi-instance learning to distinguish normal from abnormal SVD on CT images, where only coarse-grained labels are available. Our framework was observed to work better than classic methods and clinical practice.
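
    A hedged sketch of the corruption step behind the context restoration strategy described above: pairs of small patches in an unlabelled image are swapped, and a CNN is then trained (for example with an L2 loss) to restore the original image, so that semantic features are learned without annotations. The patch size, number of swaps and the 2D setting are illustrative, not the thesis's exact parameters.

        import numpy as np

        def swap_patches(image, n_swaps=10, patch=8, rng=None):
            rng = rng or np.random.default_rng()
            corrupted = image.copy()
            h, w = image.shape[:2]
            for _ in range(n_swaps):
                # Pick two random top-left corners and swap the patches (they may overlap).
                y1, x1 = rng.integers(0, h - patch), rng.integers(0, w - patch)
                y2, x2 = rng.integers(0, h - patch), rng.integers(0, w - patch)
                a = corrupted[y1:y1 + patch, x1:x1 + patch].copy()
                corrupted[y1:y1 + patch, x1:x1 + patch] = corrupted[y2:y2 + patch, x2:x2 + patch]
                corrupted[y2:y2 + patch, x2:x2 + patch] = a
            return corrupted  # network input; the original image is the training target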

    Nephroblastoma in MRI Data

    The main objective of this work is the mathematical analysis of nephroblastoma in MRI sequences. At the beginning we provide two different datasets for segmentation and classification. Based on the first dataset, we analyze the current clinical practice regarding therapy planning on the basis of annotations of a single radiologist. We show with our benchmark that this approach is not optimal and that there can be significant differences between human annotators, even among radiologists. In addition, we demonstrate that the approximation of the tumor shape currently used is too coarse-grained and thus prone to errors. We address this problem and develop a method for interactive segmentation that allows an intuitive and accurate annotation of the tumor. While the first part of this thesis is mainly concerned with the segmentation of Wilms’ tumors, the second part deals with the reliability of diagnosis and the planning of the course of therapy. The second dataset we compiled allows us to develop a method that dramatically improves the differential diagnosis between nephroblastoma and its precursor lesion, nephroblastomatosis. Finally, we show that even the standard MRI modality for Wilms’ tumors is sufficient to estimate the developmental tendencies of nephroblastoma under chemotherapy.

    Advancing efficiency and robustness of neural networks for imaging

    Enabling machines to see and analyze the world is a longstanding research objective. Advances in computer vision have the potential to influence many aspects of our lives, as they can enable machines to tackle a variety of tasks. Great progress in computer vision has been made, catalyzed by recent progress in machine learning and especially by the breakthroughs achieved with deep artificial neural networks. The goal of this work is to alleviate limitations of deep neural networks that hinder their large-scale adoption for real-world applications. To this end, it investigates methodologies for constructing and training deep neural networks with low computational requirements. Moreover, it explores strategies for achieving robust performance on unseen data. Of particular interest is the application of segmenting volumetric medical scans, because of the technical challenges it poses as well as its clinical importance. The developed methodologies are generic and of relevance to a broader computer vision and machine learning audience. More specifically, this work introduces an efficient 3D convolutional neural network architecture, which achieves high performance for segmentation of volumetric medical images, an application previously hindered by the high computational requirements of 3D networks. It then investigates the sensitivity of network performance to hyper-parameter configuration, which we interpret as overfitting the model configuration to the data available during development. It is shown that ensembling a set of models with diverse configurations mitigates this and improves generalization. The thesis then explores how to utilize unlabelled data for learning representations that generalize better. It investigates domain adaptation and introduces an architecture for adversarial networks tailored for adaptation of segmentation networks. Finally, a novel semi-supervised learning method is proposed that introduces a graph in the latent space of a neural network to capture relations between labelled and unlabelled samples. It then regularizes the embedding to form a compact cluster per class, which improves generalization.
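
    A minimal sketch, assuming PyTorch, of the configuration-ensembling idea mentioned above: the soft segmentations of several models trained with diverse hyper-parameter configurations are averaged before taking the per-voxel argmax, reducing the effect of any single configuration having been overfitted to the development data. The function name and tensor layout are illustrative.

        import torch

        @torch.no_grad()
        def ensemble_segment(models, volume):
            # volume: (1, channels, D, H, W); each model returns per-voxel class logits.
            probs = None
            for model in models:
                model.eval()
                p = torch.softmax(model(volume), dim=1)
                probs = p if probs is None else probs + p
            probs /= len(models)
            return probs.argmax(dim=1)  # final label map from the averaged soft prediction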