272 research outputs found

    On the Reliability of Diffusion Neuroimaging

    Over the last few years, diffusion imaging techniques like DTI, DSI, or Q-Ball have received increasing…

    Dense 4D nanoscale reconstruction of living brain tissue

    Three-dimensional (3D) reconstruction of living brain tissue down to an individual synapse level would create opportunities for decoding the dynamics and structure–function relationships of the brain’s complex and dense information processing network; however, this has been hindered by insufficient 3D resolution, inadequate signal-to-noise ratio and prohibitive light burden in optical imaging, whereas electron microscopy is inherently static. Here we solved these challenges by developing an integrated optical/machine-learning technology, LIONESS (live information-optimized nanoscopy enabling saturated segmentation). This leverages optical modifications to stimulated emission depletion microscopy in comprehensively, extracellularly labeled tissue and previous information on sample structure via machine learning to simultaneously achieve isotropic super-resolution, high signal-to-noise ratio and compatibility with living tissue. This allows dense deep-learning-based instance segmentation and 3D reconstruction at a synapse level, incorporating molecular, activity and morphodynamic information. LIONESS opens up avenues for studying the dynamic functional (nano-)architecture of living brain tissue

    Determination of Thermal Dose Model Parameters Using Magnetic Resonance Imaging

    Magnetic Resonance Temperature Imaging (MRTI) is a powerful technique for noninvasively monitoring temperature during minimally invasive thermal therapy procedures. When coupled with thermal dose models, MRTI feedback provides the clinician with a real-time estimate of tissue damage by functioning as a surrogate for post-treatment verification imaging. This aids in maximizing the safety and efficacy of treatment by facilitating adaptive control of the damaged volume during therapy. The underlying thermal dose parameters are derived from laboratory experiments that do not necessarily reflect the surrogate imaging endpoints used for treatment verification. Thus, there is interest and opportunity in deriving model parameters from clinical procedures that are tailored to radiologic endpoints. The objective of this work is to develop and investigate the feasibility of a methodology for extracting thermal dose model parameters from MR data acquired during ablation procedures. To this end, two approaches are investigated. One is to optimize model parameters using post-treatment imaging outcomes. The other is to use a multi-parametric pulse sequence designed for simultaneous monitoring of temperature and damage-dependent MR parameters. These methodologies were developed and investigated in phantom, and feasibility was established using retrospective analysis of in vivo thermal therapy treatments. This technique represents an opportunity to exploit experimental data to obtain thermal dose parameters that are highly specific for clinically relevant endpoints.
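
    A commonly used laboratory-derived thermal dose model of the kind referenced above is cumulative equivalent minutes at 43 °C (CEM43). The sketch below shows how such a dose map might be accumulated per voxel from an MRTI temperature time series; the function name, array shapes, and damage threshold are illustrative assumptions rather than details from this work.

    ```python
    import numpy as np

    def cem43_dose(temps_c, dt_min):
        """Accumulate CEM43 thermal dose per voxel.

        temps_c : (n_timepoints, ...) array of MRTI temperature maps in deg C
        dt_min  : time between temperature samples, in minutes
        """
        temps_c = np.asarray(temps_c, dtype=float)
        # Standard CEM43 rate constant: R = 0.5 at or above 43 deg C, 0.25 below.
        r = np.where(temps_c >= 43.0, 0.5, 0.25)
        return np.sum(r ** (43.0 - temps_c) * dt_min, axis=0)

    # Hypothetical use: ten 64x64 temperature maps sampled 5 s apart.
    temps = 37.0 + 8.0 * np.random.rand(10, 64, 64)
    dose = cem43_dose(temps, dt_min=5.0 / 60.0)
    ablated = dose >= 240.0  # a commonly cited CEM43 threshold for thermal necrosis
    ```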

    Symbiotic deep learning for medical image analysis with applications in real-time diagnosis for fetal ultrasound screening

    The last hundred years have seen a monumental rise in the power and capability of machines to perform intelligent tasks in the stead of human operators. This rise is not expected to slow down any time soon, and what it means for society and humanity as a whole remains to be seen. The overwhelming notion is that, with the right goals in mind, the growing influence of machines on our everyday tasks will enable humanity to give more attention to the truly groundbreaking challenges that we all face together. This will usher in a new age of human-machine collaboration in which humans and machines may work side by side to achieve greater heights for all of humanity. Intelligent systems are useful in isolation, but their true benefits come to the fore in complex systems where the interaction between humans and machines can be made seamless; it is this goal of symbiosis between human and machine, which may democratise complex knowledge, that motivates this thesis. In the recent past, data-driven methods have come to the fore and now represent the state of the art in many different fields. Alongside the shift from rule-based towards data-driven methods, we have also seen a shift in how humans interact with these technologies. Human-computer interaction is changing in response to data-driven methods, and new techniques must be developed to enable the same symbiosis between human and machine for data-driven methods as for previous formula-driven technology. We address five key challenges which need to be overcome for data-driven human-in-the-loop computing to reach maturity. These are (1) the 'Categorisation Challenge', where we examine existing work and form a taxonomy of the different methods being utilised for data-driven human-in-the-loop computing; (2) the 'Confidence Challenge', where data-driven methods must communicate interpretable beliefs in how confident their predictions are; (3) the 'Complexity Challenge', where reasoned communication becomes increasingly important as the complexity of tasks, and of the methods that solve them, increases; (4) the 'Classification Challenge', in which we look at how complex methods can be separated in order to provide greater reasoning in complex classification tasks; and finally (5) the 'Curation Challenge', where we challenge the assumptions around bottleneck creation for the development of supervised learning methods.

    Multiparametric Magnetic Resonance Imaging Artificial Intelligence Pipeline for Oropharyngeal Cancer Radiotherapy Treatment Guidance

    Oropharyngeal cancer (OPC) is a widespread disease and one of the few domestic cancers that is rising in incidence. Radiographic images are crucial for assessment of OPC and aid in radiotherapy (RT) treatment. However, RT planning with conventional imaging approaches requires operator-dependent tumor segmentation, which is the primary source of treatment error. Further, OPC expresses differential tumor/node mid-RT response (rapid response) rates, resulting in significant differences between planned and delivered RT dose. Finally, clinical outcomes for OPC patients can also be variable, which warrants the investigation of prognostic models. Multiparametric MRI (mpMRI) techniques that incorporate simultaneous anatomical and functional information, coupled to artificial intelligence (AI) approaches, could improve clinical decision support for OPC by providing immediately actionable clinical rationale for adaptive RT planning. If tumors could be reproducibly segmented, rapid response could be classified, and prognosis could be reliably determined, overall patient outcomes would be optimized to improve the therapeutic index as a function of more risk-adapted RT volumes. Consequently, there is an unmet need for automated and reproducible imaging which can simultaneously segment tumors and provide predictive value for actionable RT adaptation. This dissertation primarily seeks to explore and optimize image processing, tumor segmentation, and patient outcomes in OPC through a combination of advanced imaging techniques and AI algorithms. In the first specific aim of this dissertation, we develop and evaluate mpMRI pre-processing techniques for use in downstream segmentation, response prediction, and outcome prediction pipelines. Various MRI intensity standardization and registration approaches were systematically compared and benchmarked. Moreover, synthetic image algorithms were developed to decrease MRI scan time in an effort to optimize our AI pipelines. We demonstrated that proper intensity standardization and image registration can improve mpMRI quality for use in AI algorithms, and developed a novel method to decrease mpMRI acquisition time. Subsequently, in the second specific aim of this dissertation, we investigated underlying questions regarding the implementation of RT-related auto-segmentation. Firstly, we quantified interobserver variability for an unprecedentedly large number of observers for various radiotherapy structures in several disease sites (with a particular emphasis on OPC) using a novel crowdsourcing platform. We then trained an AI algorithm on a series of extant matched mpMRI datasets to segment OPC primary tumors. Moreover, we validated and compared our best model's performance to clinical expert observers. We demonstrated that AI-based mpMRI OPC tumor auto-segmentation offers decreased variability and comparable accuracy to clinical experts, and that certain mpMRI input channel combinations could further improve performance. Finally, in the third specific aim of this dissertation, we predicted OPC primary tumor mid-therapy (rapid) treatment response and prognostic outcomes. Using co-registered pre-therapy and mid-therapy primary tumor manual segmentations of OPC patients, we generated and characterized treatment-sensitive and treatment-resistant pre-RT sub-volumes. These sub-volumes were used to train an AI algorithm to predict individual voxel-wise treatment resistance. Additionally, we developed an AI algorithm to predict OPC patient progression-free survival using pre-therapy imaging from an international data science competition (ranking 1st place), and then translated these approaches to mpMRI data. We demonstrated that AI models could be used to predict rapid response and prognostic outcomes using pre-therapy imaging, which could help guide treatment adaptation, though further work is needed. In summary, the completion of these aims facilitates the development of an image-guided, fully automated OPC clinical decision support tool. The resultant deliverables from this project will positively impact patients by enabling optimized therapeutic interventions in OPC. Future work should consider investigating additional imaging timepoints, imaging modalities, uncertainty quantification, perceptual and ethical considerations, and prospective studies for eventual clinical implementation. A dynamic version of this dissertation is publicly available and assigned a digital object identifier through Figshare (doi: 10.6084/m9.figshare.22141871).
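
    The abstract above does not give its evaluation code, but agreement between an auto-segmentation and an expert contour is conventionally summarized with a volumetric overlap metric such as the Dice similarity coefficient. A minimal sketch, assuming binary tumor masks stored as NumPy arrays (the mask names and shapes are hypothetical):

    ```python
    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """Dice similarity coefficient between two binary segmentation masks."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        intersection = np.logical_and(a, b).sum()
        denom = a.sum() + b.sum()
        return 1.0 if denom == 0 else 2.0 * intersection / denom

    # Hypothetical example: compare an AI contour against one expert observer.
    ai_mask = np.zeros((32, 32, 32), dtype=bool)
    ai_mask[10:20, 10:20, 10:20] = True
    expert_mask = np.zeros_like(ai_mask)
    expert_mask[12:22, 10:20, 10:20] = True
    print(f"Dice = {dice_coefficient(ai_mask, expert_mask):.3f}")
    ```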

    Segmentation Of Intracranial Structures From Noncontrast Ct Images With Deep Learning

    Presented in this work is an investigation of the application of artificially intelligent algorithms, namely deep learning, to generate segmentations for use in functional avoidance radiotherapy treatment planning. Specific applications of deep learning for functional avoidance include generating hippocampus segmentations from computed tomography (CT) images and generating synthetic pulmonary perfusion images from four-dimensional CT (4DCT). A single-institution dataset of 390 patients treated with Gamma Knife stereotactic radiosurgery was created. From these patients, the hippocampus was manually segmented on the high-resolution MR image and used for the development of the data processing methodology and model testing. It was determined that an attention-gated 3D residual network performed the best, with 80.2% of contours meeting the clinical trial acceptability criteria. After having determined the highest-performing model architecture, the model was tested on data from the RTOG-0933 Phase II multi-institutional clinical trial for hippocampal avoidance whole-brain radiotherapy. From the RTOG-0933 data, an institutional observer (IO) generated contours to compare the deep learning style and the style of the physicians participating in the Phase II trial. The deep learning model performance was evaluated with contour comparison and radiotherapy treatment planning. Results showed that the deep learning contours generated plans comparable to the IO style, but differed significantly from the Phase II contours, indicating further investigation is required before this technology can be applied clinically. Additionally, motivated by the observed deviation in contouring styles of the trial's participating treating physicians, the utility of applying deep learning as a first-pass quality assurance measure was investigated. To simulate a central review, the IO contours were compared to the treating physician contours in an attempt to identify unacceptable deviations. The deep learning model was found to have an AUC of 0.80 for the left and 0.79 for the right hippocampus, indicating the potential of deep learning as a first-pass quality assurance tool. The methods developed during the hippocampal segmentation task were then translated to the generation of synthetic pulmonary perfusion imaging for use in functional lung avoidance radiotherapy. A clinical dataset of 58 pre- and post-radiotherapy SPECT perfusion studies (32 patients) with contemporaneous 4DCT studies was collected. From the dataset, 50 studies were used to train a 3D residual network, with five-fold validation used to select the highest-performing model instances (N=5). The highest-performing instances were tested on a 5-patient (8-study) hold-out test set. From these predictions, 50th percentile contours of well-perfused lung were generated and compared to contours from the clinical SPECT perfusion images. On the test set the Spearman correlation coefficient was strong (0.70, IQR: 0.61-0.76), and the functional avoidance contours agreed well (Dice of 0.803, IQR: 0.750-0.810; average surface distance of 5.92 mm, IQR: 5.68-7.55 mm). This study indicates the potential of deep learning for the generation of synthetic pulmonary perfusion images but requires an expanded dataset for additional model testing.
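
    As one concrete reading of the perfusion evaluation described above, a 50th-percentile "well-perfused lung" volume can be thresholded from a predicted perfusion map and compared against the SPECT-derived volume, with voxel-wise agreement summarized by a Spearman correlation. The sketch below uses assumed array names and synthetic data; it is not the study's actual pipeline.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    def top_percentile_volume(perfusion, lung_mask, percentile=50.0):
        """Binary volume of lung voxels at or above the given perfusion percentile."""
        threshold = np.percentile(perfusion[lung_mask], percentile)
        return lung_mask & (perfusion >= threshold)

    # Hypothetical inputs: predicted and SPECT perfusion maps plus a lung mask.
    rng = np.random.default_rng(0)
    lung = np.ones((40, 40, 40), dtype=bool)
    spect = rng.random((40, 40, 40))
    predicted = spect + 0.1 * rng.standard_normal(spect.shape)  # noisy surrogate

    rho, _ = spearmanr(spect[lung], predicted[lung])  # voxel-wise agreement
    well_perfused_pred = top_percentile_volume(predicted, lung)
    well_perfused_spect = top_percentile_volume(spect, lung)
    ```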

    Uncertainty-aware Visualization in Medical Imaging - A Survey

    Medical imaging (image acquisition, image transformation, and image visualization) is a standard tool for clinicians in order to make diagnoses, plan surgeries, or educate students. Each of these steps is affected by uncertainty, which can highly influence the decision-making process of clinicians. Visualization can help in understanding and communicating these uncertainties. In this manuscript, we aim to summarize the current state-of-the-art in uncertainty-aware visualization in medical imaging. Our report is based on the steps involved in medical imaging as well as its applications. Requirements are formulated to examine the considered approaches. In addition, this manuscript shows which approaches can be combined to form uncertainty-aware medical imaging pipelines. Based on our analysis, we are able to point to open problems in uncertainty-aware medical imaging

    Towards Data-Driven Large Scale Scientific Visualization and Exploration

    Technological advances have enabled us to acquire extremely large datasets, but it remains a challenge to store, process, and extract information from them. This dissertation builds upon recent advances in machine learning, visualization, and user interaction to facilitate exploration of large-scale scientific datasets. First, we use data-driven approaches to computationally identify regions of interest in the datasets. Second, we use visual presentation for effective user comprehension. Third, we provide interactions for human users to integrate domain knowledge and semantic information into this exploration process. Our research shows how to extract, visualize, and explore informative regions on very large 2D landscape images, 3D volumetric datasets, high-dimensional volumetric mouse brain datasets with thousands of spatially-mapped gene expression profiles, and geospatial trajectories that evolve over time. The contributions of this dissertation include: (1) We introduce a sliding-window saliency model that discovers regions of user interest in very large images; (2) We develop visual segmentation of intensity-gradient histograms to identify meaningful components from volumetric datasets; (3) We extract boundary surfaces from a wealth of volumetric gene expression mouse brain profiles to personalize the reference brain atlas; (4) We show how to efficiently cluster geospatial trajectories by mapping each sequence of locations to a high-dimensional point with the kernel distance framework. We aim to discover patterns, relationships, and anomalies that would lead to new scientific, engineering, and medical advances. This work represents one of the first steps toward better visual understanding of large-scale scientific data by combining machine learning and human intelligence.
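
    Contribution (4) above maps each trajectory to a single high-dimensional point so that Euclidean distances approximate kernel distances between trajectories. One standard way to realize such a mapping is an RBF kernel mean embedding approximated with random Fourier features; the sketch below illustrates that idea on made-up trajectories and is not the dissertation's implementation.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def rff_embedding(trajectory, omegas, phases):
        """Map a sequence of 2D locations to a fixed-length vector.

        Averaging random Fourier features over the points approximates the
        RBF kernel mean embedding, so Euclidean distance between embeddings
        approximates the kernel distance between trajectories.
        """
        projections = trajectory @ omegas + phases           # (n_points, n_features)
        features = np.sqrt(2.0 / omegas.shape[1]) * np.cos(projections)
        return features.mean(axis=0)

    rng = np.random.default_rng(1)
    n_features, bandwidth = 256, 0.5
    omegas = rng.standard_normal((2, n_features)) / bandwidth
    phases = rng.uniform(0.0, 2.0 * np.pi, size=n_features)

    # Hypothetical geospatial trajectories: (x, y) sequences of varying length.
    trajectories = [rng.standard_normal((rng.integers(20, 80), 2)) + offset
                    for offset in (0.0, 0.0, 5.0, 5.0)]
    embeddings = np.stack([rff_embedding(t, omegas, phases) for t in trajectories])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
    ```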