
    Unresolved Object Detection Using Synthetic Data Generation and Artificial Neural Networks

    This research presents and solves constrained real-world problems of using synthetic data to train artificial neural networks (ANNs) to detect unresolved moving objects in wide field of view (WFOV) electro-optical/infrared (EO/IR) satellite motion imagery. Objectives include demonstrating the use of the Air Force Institute of Technology (AFIT) Sensor and Scene Emulation Tool (ASSET) as an effective tool for generating EO/IR motion imagery representative of real WFOV sensors, and describing the ANN architectures, training, and testing results obtained. Deep learning with a 3-D convolutional neural network (3D ConvNet), a long short-term memory (LSTM) network, and a U-Net is used to solve the problem of EO/IR unresolved object detection. U-Net is shown to be a promising ANN architecture for this task: in two of the experiments, it achieved 90% and 88% pixel prediction accuracy. In addition, the results show that ASSET is capable of generating the information needed to train deep learning models.
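The pixel prediction accuracy reported above can be illustrated with a minimal sketch: the fraction of pixels whose predicted label matches the ground-truth mask. The function name and the toy masks below are hypothetical, not taken from the study.

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels whose predicted label matches the ground truth."""
    assert pred.shape == target.shape
    return float((pred == target).mean())

# Hypothetical 4x4 detection masks: 1 = object pixel, 0 = background.
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(pixel_accuracy(pred, target))  # 15/16 = 0.9375
```

Note that for sparse targets such as unresolved point objects, plain pixel accuracy is dominated by the background class, which is why segmentation work usually reports overlap measures as well.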

    Magnetic resonance image-based brain tumour segmentation methods: a systematic review

    Background: Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital in method development. Purpose: To determine what automated brain tumour segmentation techniques can medical imaging specialists and clinicians use to identify tumour components, compared to manual segmentation. Methods: We conducted a systematic review of 572 brain tumour segmentation studies during 2015–2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods, as applied to magnetic resonance imaging techniques. Particularly, we synthesised each method as per the utilised magnetic resonance imaging sequences, study population, technical approach (such as deep learning) and performance score measures (such as Dice score). Statistical tests: We compared median Dice score in segmenting the whole tumour, tumour core and enhanced tumour. Results: We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are used the most in various segmentation algorithms. However, there is limited use of perfusion-weighted and diffusion-weighted magnetic resonance imaging. Moreover, we found that the U-Net deep learning technology is cited the most, and has high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation. 
Conclusion: U-Net is a promising deep learning technology for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where there are limited datasets available.
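The Dice score used as the performance measure throughout the review can be sketched as follows; this is a minimal NumPy illustration of the standard definition, not code from any of the reviewed studies.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient: 2*|A intersect B| / (|A| + |B|).

    A small eps keeps the ratio defined when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 1-D "masks": 3 of 4 predicted foreground voxels overlap the target.
pred = np.array([1, 1, 1, 1, 0, 0])
target = np.array([0, 1, 1, 1, 1, 0])
print(round(dice_score(pred, target), 2))  # 0.75
```

A Dice score of 0.9, as reported for U-Net above, means the overlap between automated and manual segmentations is large relative to the total foreground in both.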

    Automatic assessment of glioma burden: A deep learning algorithm for fully automated volumetric and bi-dimensional measurement

    Background: Longitudinal measurement of glioma burden with MRI is the basis for treatment response assessment. In this study, we developed a deep learning algorithm that automatically segments abnormal fluid attenuated inversion recovery (FLAIR) hyperintensity and contrast-enhancing tumor, quantitating tumor volumes as well as the product of maximum bidimensional diameters according to the Response Assessment in Neuro-Oncology (RANO) criteria (AutoRANO). Methods: Two cohorts of patients were used for this study. One consisted of 843 preoperative MRIs from 843 patients with low- or high-grade gliomas from 4 institutions, and the second consisted of 713 longitudinal postoperative MRI visits from 54 patients with newly diagnosed glioblastomas (each with 2 pretreatment “baseline” MRIs) from 1 institution. Results: The automatically generated FLAIR hyperintensity volume, contrast-enhancing tumor volume, and AutoRANO were highly repeatable for the double-baseline visits, with intraclass correlation coefficients (ICCs) of 0.986, 0.991, and 0.977, respectively, on the cohort of postoperative GBM patients. Furthermore, there was high agreement between manually and automatically measured tumor volumes, with ICC values of 0.915, 0.924, and 0.965 for preoperative FLAIR hyperintensity, postoperative FLAIR hyperintensity, and postoperative contrast-enhancing tumor volumes, respectively. Lastly, the ICCs for comparing manually and automatically derived longitudinal changes in tumor burden were 0.917, 0.966, and 0.850 for FLAIR hyperintensity volume, contrast-enhancing tumor volume, and RANO measures, respectively. Conclusions: Our automated algorithm demonstrates potential utility for evaluating tumor burden in complex posttreatment settings, although further validation in multicenter clinical trials will be needed prior to widespread implementation.
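As a rough illustration of the volumetric measurement such an algorithm automates, a binary segmentation mask can be converted to millilitres once the voxel spacing is known from the image header. This is a minimal sketch; the function name, mask, and spacing values are hypothetical, not the study's implementation.

```python
import numpy as np

def tumor_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary segmentation mask in millilitres.

    spacing_mm: voxel size (dz, dy, dx) in millimetres, e.g. taken
    from the MRI header; 1 mL = 1000 mm^3.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

# Hypothetical mask: 2000 foreground voxels at 1 x 1 x 1 mm resolution -> 2 mL.
mask = np.ones((20, 10, 10), dtype=np.uint8)
print(tumor_volume_ml(mask, (1.0, 1.0, 1.0)))  # 2.0
```

Repeatability of such measurements across the double-baseline visits is what the reported ICC values quantify.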

    Advancing efficiency and robustness of neural networks for imaging

    Enabling machines to see and analyze the world is a longstanding research objective. Advances in computer vision have the potential of influencing many aspects of our lives, as they can enable machines to tackle a variety of tasks. Great progress in computer vision has been made, catalyzed by recent progress in machine learning and especially the breakthroughs achieved by deep artificial neural networks. The goal of this work is to alleviate limitations of deep neural networks that hinder their large-scale adoption for real-world applications. To this end, it investigates methodologies for constructing and training deep neural networks with low computational requirements. Moreover, it explores strategies for achieving robust performance on unseen data. Of particular interest is the application of segmenting volumetric medical scans, because of the technical challenges it imposes as well as its clinical importance. The developed methodologies are generic and of relevance to a broader computer vision and machine learning audience. More specifically, this work introduces an efficient 3D convolutional neural network architecture, which achieves high performance for segmentation of volumetric medical images, an application previously hindered by the high computational requirements of 3D networks. It then investigates the sensitivity of network performance to hyper-parameter configuration, which we interpret as overfitting the model configuration to the data available during development. It is shown that ensembling a set of models with diverse configurations mitigates this and improves generalization. The thesis then explores how to utilize unlabelled data for learning representations that generalize better. It investigates domain adaptation and introduces an architecture for adversarial networks tailored for adaptation of segmentation networks.
Finally, a novel semi-supervised learning method is proposed that introduces a graph in the latent space of a neural network to capture relations between labelled and unlabelled samples. It then regularizes the embedding to form a compact cluster per class, which improves generalization.
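The configuration-ensembling idea described above can be sketched as averaging the class-probability maps of several differently configured models before taking the argmax. This is a hypothetical illustration of the general technique, not the thesis's implementation.

```python
import numpy as np

def ensemble_predict(prob_maps: list) -> np.ndarray:
    """Average per-class probabilities from several models, then argmax.

    Averaging over models with diverse hyper-parameter configurations
    hedges against any single configuration having overfit the
    development data.
    """
    mean_probs = np.mean(prob_maps, axis=0)
    return mean_probs.argmax(axis=-1)

# Three hypothetical models scoring one voxel over 2 classes.
p1 = np.array([[0.6, 0.4]])
p2 = np.array([[0.3, 0.7]])
p3 = np.array([[0.4, 0.6]])
print(ensemble_predict([p1, p2, p3]))  # class 1 wins: mean is [0.433, 0.567]
```

Two of the three models prefer class 1, and the averaged probabilities reflect that even though the first model disagrees.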

    Biomedical Image Processing and Classification

    Biomedical image processing is an interdisciplinary field involving a variety of disciplines, e.g., electronics, computer science, physics, mathematics, physiology, and medicine. Several imaging techniques have been developed, providing many approaches to the study of the human body. Biomedical image processing is finding an increasing number of important applications in, for example, the study of the internal structure or function of an organ and the diagnosis or treatment of a disease. If associated with classification methods, it can support the development of computer-aided diagnosis (CAD) systems, which could help medical doctors in refining their clinical picture.

    Deep generative modelling of the imaged human brain

    Human-machine symbiosis is a very promising opportunity for the field of neurology, given that interpreting the imaged human brain is a trivial feat for neither entity. However, before machine learning systems can be used in real-world clinical situations, many issues with automated analysis must first be solved. In this thesis I aim to address what I consider the three biggest hurdles to the adoption of automated machine learning interpretative systems. For each issue, I will first explain its importance to the reader given the overarching narratives of both neurology and machine learning, and then showcase my proposed solutions to these issues through the use of deep generative models of the imaged human brain. First, I address what is an uncontroversial and universal sign of intelligence: the ability to extrapolate knowledge to unseen cases. Human neuroradiologists have studied the anatomy of the healthy brain and can therefore, with some success, identify most pathologies present on an imaged brain, even without ever having been previously exposed to them. Current discriminative machine learning systems require vast amounts of labelled data in order to accurately identify diseases. In this first part I provide a generative framework that permits machine learning models to more efficiently leverage unlabelled data for better diagnoses, with either no labels or small amounts of them. Secondly, I address a major ethical concern in medicine: equitable evaluation of all patients, regardless of demographics or other identifying characteristics. This is, unfortunately, something that even human practitioners fail at, making the matter ever more pressing: unaddressed biases in data will become biases in the models. To address this concern I suggest a framework through which a generative model synthesises demographically counterfactual brain imaging to successfully reduce the proliferation of demographic biases in discriminative models.
Finally, I tackle the challenge of spatial anatomical inference, a task at the centre of the field of lesion-deficit mapping, which, given brain lesions and associated cognitive deficits, attempts to discover the true functional anatomy of the brain. I provide a new Bayesian generative framework and implementation that allows for greatly improved results on this challenge, hopefully paving part of the road towards a greater and more complete understanding of the human brain.

    An Artificial Intelligence Approach to Tumor Volume Delineation

    Postponed access: the file will be accessible after 2023-11-14. Master's thesis for radiography/biomedical laboratory science (RABD395MAMD-HELS).