
    Methods for Epigenetic Analyses from Long-Read Sequencing Data

    Epigenetics, particularly the study of DNA methylation, is a cornerstone of our understanding of human development and disease. DNA methylation has been included among the "hallmarks of cancer" due to its importance as a biomarker and its contribution to carcinogenesis and cancer cell plasticity. Long-read sequencing technologies, such as the Oxford Nanopore Technologies platform, have advanced the study of structural variations while also allowing direct measurement of DNA methylation on the same reads. This has opened new avenues of analysis, such as long-range allele-specific methylation analysis, methylation analysis on structural variations, or relating nearby epigenetic modalities on the same read to one another. Basecalling and methylation calling of Nanopore reads is a computationally expensive task that requires complex machine learning architectures. Read-level methylation calls require different approaches to data management and analysis than those developed for methylation frequencies measured from short-read technologies or array data. The two-dimensional nature of methylation calls associated with both reads and genomic positions, including methylation caller uncertainties, makes them much more costly to store than one-dimensional methylation frequencies. Methods for the storage, retrieval, and analysis of such data therefore require careful consideration. Downstream analysis tasks, such as methylation segmentation or differential methylation calling, can benefit from read-level information and allow uncertainty propagation; these avenues had not been considered in existing tools. In my work, I explored the potential of long-read DNA methylation analysis and tackled some of the challenges of data management and downstream analysis using state-of-the-art software architecture and machine learning methods.
I defined a storage standard for reference-anchored, read-assigned DNA methylation calls, including methylation calling uncertainties and read annotations such as haplotype or sample information. This storage container is defined as a schema for the Hierarchical Data Format version 5 (HDF5), includes an index for rapid access by genomic coordinate, and is optimized for parallel computing with even load balancing. It further includes a Python API for creation, modification, and data access, with convenience functions for extracting important quality statistics via a command-line interface. Furthermore, I developed software for the segmentation and differential methylation testing of DNA methylation calls from Nanopore sequencing. This implementation takes advantage of the performance benefits provided by the high-performance storage container. It includes a Bayesian methylome segmentation algorithm that performs consensus segmentation of multiple sample- and/or haplotype-assigned DNA methylation profiles while considering methylation calling uncertainties. Based on this segmentation, the software can then perform differential methylation testing, with a large number of options for statistical testing and multiple-testing correction. I benchmarked all tools on both simulated and publicly available real data, showing performance benefits over previously existing and concurrently developed solutions. Next, I applied the methods in a cancer study of a chromothriptic sample from a patient with Sonic Hedgehog medulloblastoma. Here I report regulatory genomic regions differentially methylated before and after treatment, allele-specific methylation in the tumor, and methylation on chromothriptic structures. Finally, I developed specialized methylation callers for the combined DNA methylation profiling of CpG, GpC, and context-free adenine methylation.
These callers can be used to measure chromatin accessibility in a NOMe-seq-like setup, showing the potential of long-read sequencing for profiling transcription factor co-binding. In conclusion, this thesis presents and benchmarks new algorithmic and infrastructural solutions for the analysis of DNA methylation data from long-read sequencing.
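The storage-and-retrieval idea described above can be illustrated with a small sketch. This is not the thesis's HDF5 schema or API; it is a minimal NumPy stand-in with illustrative column names and values, showing how position-sorted read-level calls with caller uncertainties (log-likelihood ratios) support coordinate-range queries and uncertainty-aware collapsing to methylation frequencies:

```python
import numpy as np

# One row per read-level methylation call: genomic position, read id,
# haplotype, and the caller's log-likelihood ratio (LLR), positive meaning
# "methylated". Positions are kept sorted so ranges resolve by binary search.
pos = np.array([100, 100, 250, 400, 400, 900])
read_id = np.array([0, 1, 0, 1, 2, 2])
haplotype = np.array([1, 2, 1, 2, 1, 1])
llr = np.array([3.2, -1.5, 0.4, 2.8, -4.0, 1.1])

def query_range(start, end):
    """Slice of rows whose position falls in [start, end)."""
    lo = np.searchsorted(pos, start, side="left")
    hi = np.searchsorted(pos, end, side="left")
    return slice(lo, hi)

def methylation_frequency(rows, llr_threshold=2.0):
    """Collapse read-level calls to a per-region frequency, discarding calls
    whose |LLR| is below the confidence threshold (uncertainty filtering)."""
    confident = np.abs(llr[rows]) >= llr_threshold
    calls = llr[rows][confident] > 0
    return calls.mean() if calls.size else float("nan")

rows = query_range(200, 450)        # hits the calls at 250, 400, 400
freq = methylation_frequency(rows)  # 250 filtered out; 1 of 2 remaining methylated
```

In the real container these columns would live as HDF5 datasets with a genomic index and chunking chosen for balanced parallel access; the binary-search retrieval and LLR filtering shown here are the transferable ideas.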

    Need for speed: Achieving fast image processing in acute stroke care

    This thesis investigates the use of high-performance computing (HPC) techniques in developing imaging biomarkers to support the clinical workflow for acute stroke patients. In the first part of the thesis, we evaluate different HPC technologies and how they can be leveraged by image analysis applications used in acute stroke care. Specifically, Chapter 2 evaluates how computers with multiple computing devices can be used to accelerate medical imaging applications. The size of CT perfusion (CTP) datasets makes data transfers to computing devices time-consuming and therefore unsuitable in acute situations; Chapter 3 addresses this by proposing a novel data compression technique that allows CTP images to be processed efficiently on GPUs. Chapter 4 further evaluates the usefulness of the algorithm proposed in Chapter 3 with two different applications: a double-threshold segmentation and a time-intensity profile similarity (TIPS) bilateral filter for reducing noise in CTP scans. Finally, Chapter 5 presents a cloud platform for deploying high-performance medical applications for acute stroke care. In the second part of the thesis, Chapter 6 presents a convolutional neural network (CNN) for the detection and volumetric segmentation of subarachnoid hemorrhages (SAH) in non-contrast CT scans, and Chapter 7 proposes another CNN-based method to quantify final infarct volumes in follow-up non-contrast CT scans from ischemic stroke patients.
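As a concrete example, the double threshold segmentation evaluated in Chapter 4 amounts to keeping voxels whose intensity lies between two bounds. A minimal NumPy sketch follows; the thresholds and array sizes are illustrative, not taken from the thesis:

```python
import numpy as np

def double_threshold(volume, low, high):
    """Binary mask of voxels whose intensity lies in [low, high]; in a CTP
    summary map this keeps a plausible tissue range while excluding air and
    bone below it and contrast-filled vessels above it."""
    return (volume >= low) & (volume <= high)

rng = np.random.default_rng(0)
ctp = rng.uniform(0.0, 100.0, size=(4, 32, 32))  # synthetic stand-in volume
mask = double_threshold(ctp, 20.0, 80.0)
```

The operation is embarrassingly parallel per voxel, which is why it pairs well with GPU execution and with compressed on-device data layouts such as the one proposed in Chapter 3.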

    3D Segmentation & Measurement of Macular Holes

    Macular holes are blinding conditions in which a hole develops in the central part of the retina, resulting in reduced central vision. The prognosis and treatment options are related to a number of variables, including the macular hole's size and shape. In this work we introduce a method to segment and measure macular holes in three-dimensional (3D) data. High-resolution spectral-domain optical coherence tomography (SD-OCT) allows precise imaging of macular hole geometry in three dimensions, but measurement by human observers is time-consuming and prone to high inter- and intra-observer variability, and is characteristically performed in 2D rather than 3D. This work introduces several novel techniques to automatically retrieve accurate 3D measurements of the macular hole, including surface area, base area, base diameter, top area, top diameter, height, and minimum diameter. Specifically, it introduces a multi-scale 3D level set segmentation approach based on a state-of-the-art level set method, together with novel curvature-based cutting and 3D measurement procedures. The algorithm is fully automatic, and we validate the extracted measurements both qualitatively and quantitatively; the results show the method to be robust across a variety of scenarios. A segmentation software package is presented targeting medical and biological applications, with a high level of visual feedback and several usability enhancements over existing packages. Specifically, it provides a substantially faster graphics processing unit (GPU) implementation of the local Gaussian distribution fitting (LGDF) energy model, which can segment inhomogeneous objects with poorly defined boundaries, as often encountered in biomedical images. It also provides interactive brushes to guide the segmentation process in a semi-automated framework.
The speed of the implementation allows us to visualise the active surface in real time with a built-in ray tracer, and users may halt evolution at any timestep to correct an implausible segmentation by painting new blocking regions or new seeds. Quantitative and qualitative validation is presented, demonstrating the practical efficacy of the interactive elements on a variety of real-world datasets. Macular hole size is known to be one of the strongest predictors of surgical success, both anatomically and functionally, and is used to guide the choice of treatment and optimum surgical approach and to predict outcome. Our automated 3D image segmentation algorithm extracts 3D shape-based macular hole measurements, describing their dimensions and morphology, and is able to measure macular hole dimensions robustly and accurately. This thesis thus makes a significant contribution to clinical applications, particularly in the field of macular hole segmentation and shape analysis.
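To give a feel for the measurement step, the following toy sketch derives two of the quantities listed above (height and minimum diameter) from a binary 3D segmentation. It is a deliberate simplification: the thesis uses curvature-based cutting and more careful 3D procedures, whereas this sketch only takes per-slice equivalent-circle diameters on a synthetic mask:

```python
import numpy as np

def hole_measurements(mask, voxel_size=(1.0, 1.0, 1.0)):
    """Toy 3D measurements from a binary macular-hole mask (z, y, x).
    Height spans the occupied slices; the minimum diameter is the smallest
    equivalent in-plane diameter over those slices."""
    dz, dy, dx = voxel_size
    occupied = np.flatnonzero(mask.any(axis=(1, 2)))
    height = (occupied[-1] - occupied[0] + 1) * dz
    # Equivalent-circle diameter per slice from its cross-sectional area.
    areas = mask[occupied].sum(axis=(1, 2)) * dy * dx
    diameters = 2.0 * np.sqrt(areas / np.pi)
    return height, float(diameters.min())

mask = np.zeros((10, 64, 64), dtype=bool)
mask[2:7, 20:40, 20:40] = True  # a crude block-shaped "hole"
height, min_diam = hole_measurements(mask, voxel_size=(2.0, 1.0, 1.0))
```

Real SD-OCT masks are irregular, so the published method's cutting procedures matter; the point here is only that once a reliable 3D segmentation exists, the clinically used measurements reduce to simple geometry.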

    Advancing fluorescent contrast agent recovery methods for surgical guidance applications

    Fluorescence-guided surgery (FGS) utilizes fluorescent contrast agents and specialized optical instruments to assist surgeons in intraoperatively identifying tissue-specific characteristics, such as perfusion, malignancy, and molecular function. In doing so, FGS represents a powerful surgical navigation tool for solving clinical challenges not easily addressed by other conventional imaging methods. With growing translational efforts, major hurdles in the FGS field include insufficient tools for understanding contrast agent uptake behaviors, the inability to image tissue beyond a couple of millimeters, and the performance limitations of currently approved contrast agents in accurately and rapidly labeling disease. The developments presented in this thesis aim to address these shortcomings. Current preclinical fluorescence imaging tools often sacrifice either 3D scale or spatial resolution. To address this gap in high-resolution, whole-body preclinical imaging tools, the crux of this work lies in the development of a hyperspectral cryo-imaging system and image-processing techniques that accurately recapitulate high-resolution 3D biodistributions in whole-animal experiments. Specifically, the goal is to correct each cryo-imaging dataset so that it becomes a useful reporter of whole-body biodistribution in relevant disease models. To investigate the potential benefits of seeing deeper during FGS, we investigated short-wave infrared (SWIR) imaging for recovering fluorescence beyond the conventional top few millimeters. Through phantom, preclinical, and clinical SWIR imaging, we were able to 1) validate the capability of SWIR imaging with conventional NIR-I fluorophores, 2) demonstrate the translational benefits of SWIR-ICG angiography in a large animal model, and 3) detect micro-dose levels of an EGFR-targeted NIR-I probe during a Phase 0 clinical trial.
Lastly, we evaluated contrast agent performance for FGS glioma resection and breast cancer margin assessment. To evaluate the glioma-labeling performance of untargeted contrast agents, 3D agent biodistributions were compared voxel-by-voxel to gold-standard Gd-MRI and pathology slides. Finally, building on expertise in dual-probe ratiometric imaging at Dartmouth, a 10-patient clinical pilot study was carried out to assess the technique's efficacy for rapid margin assessment. In summary, this thesis advances FGS by introducing novel fluorescence imaging devices, techniques, and agents that overcome challenges in understanding whole-body agent biodistributions, recovering agent distributions at greater depths, and verifying agents' performance for specific FGS applications.
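The dual-probe ratiometric principle mentioned above can be sketched in a few lines: dividing the targeted-probe image by a co-administered untargeted control cancels delivery effects shared by both probes, leaving contrast driven by specific binding. The simulation below is purely illustrative (synthetic images, made-up uptake values), not data or code from the study:

```python
import numpy as np

def ratiometric_map(targeted, untargeted, eps=1e-6):
    """Paired-agent ratio image: shared, non-specific effects (perfusion,
    permeability) divide out, so the ratio tracks specific binding."""
    return targeted / (untargeted + eps)

rng = np.random.default_rng(1)
perfusion = rng.uniform(0.5, 1.5, size=(64, 64))     # shared delivery component
binding = np.zeros((64, 64))
binding[20:40, 20:40] = 2.0                          # tumour-specific uptake
targeted = perfusion * (1.0 + binding)
untargeted = perfusion
ratio = ratiometric_map(targeted, untargeted)
```

In the synthetic tumour region the ratio approaches 3 while the heterogeneous background stays near 1, which is why ratiometric readouts can support rapid margin assessment where raw fluorescence intensity cannot.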

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired with state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, confronts users with an ever-growing amount of data, with terabytes of imaging data created within a single day. With gentler and more high-performance imaging possible, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into, or embedded within, full applications. To better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh data and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets by tracking the user's eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude.
We further introduce ideas for moving towards virtual reality-based laser ablation, and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas for rendering the highly efficient Adaptive Particle Representation. Finally, we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience. Outline: Abstract; Foreword and Acknowledgements; Overview and Contributions. Part 1, Introduction: 1. Fluorescence Microscopy; 2. Introduction to Visual Processing; 3. A Short Introduction to Cross Reality; 4. Eye Tracking and Gaze-based Interaction. Part 2, VR and AR for Systems Biology: 5. scenery: VR/AR for Systems Biology; 6. Rendering; 7. Input Handling and Integration of External Hardware; 8. Distributed Rendering; 9. Miscellaneous Subsystems; 10. Future Development Directions. Part 3, Case Studies: 11. Bionic Tracking: Using Eye Tracking for Cell Tracking; 12. Towards Interactive Virtual Reality Laser Ablation; 13. Rendering the Adaptive Particle Representation; 14. sciview: Integrating scenery into ImageJ2 & Fiji. Part 4, Conclusion: 15. Conclusions and Outlook. Backmatter and Appendices: A. Questionnaire for VR Ablation User Study; B. Full Correlations in VR Ablation Questionnaire; C. Questionnaire for Bionic Tracking User Study; List of Tables; List of Figures; Bibliography; Selbstständigkeitserklärung.
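The core sampling step behind a Bionic-Tracking-style interaction, following a cell by looking at it, can be caricatured in a few lines: cast the gaze ray through the volume and take the brightest sample along it as the tracked position. This is a simplified NumPy sketch of that one step only, not scenery's actual implementation (which handles calibration, temporal linking, and rendering); all names and sizes here are illustrative:

```python
import numpy as np

def track_point(volume, origin, direction, n_samples=64, max_dist=32.0):
    """Sample the volume along a gaze ray (nearest-voxel lookup) and
    return the coordinates of the brightest sample."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    ts = np.linspace(0.0, max_dist, n_samples)
    pts = np.asarray(origin, dtype=float) + ts[:, None] * direction
    idx = np.clip(np.round(pts).astype(int), 0, np.array(volume.shape) - 1)
    vals = volume[idx[:, 0], idx[:, 1], idx[:, 2]]
    return pts[np.argmax(vals)]

vol = np.zeros((32, 32, 32))
vol[10, 16, 16] = 1.0                                  # a single bright "cell"
p = track_point(vol, origin=(0, 16, 16), direction=(1, 0, 0))
```

Repeating this per timepoint while the user keeps the cell in view yields a track, which is what replaces click-by-click manual annotation.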

    The Anesthesia Continuing Education Market and the Value Creation From a Sustainable Unified Platform

    Practicing anesthesia professionals in the United States are all governed by various profession-specific regulatory bodies that mandate continuing education (CE) requirements. To date, no unified resource exists for anesthesia professionals (i.e., Anesthesiologists, Certified Registered Nurse Anesthetists, and Anesthesiologist Assistants) to explore the CE offerings available within the marketplace. This study endeavored to convey the potential value of a unified anesthesia CE resource. It investigated how to cultivate a sustainable platform to potentially improve how anesthesia professionals search available CE offerings and to potentially enhance how anesthesia CE providers reach anesthesia professionals. This qualitative study was conducted utilizing an integrative review of the literature. The key concepts identified and investigated were network effect, segmentation, first to market, best of breed, search costs, transaction costs, minimally viable product, evolutionary phases of platforms, platform theory, platform business model, platform economy, and types of platforms. Inductive content analysis was chosen as the organizational method for the resultant qualitative data. The goal of the analysis was to create a conceptual, practical, and strategically applicable platform paradigm for the anesthesia CE marketplace, driven by the insights and amalgamations from the literature. The analyzed concepts, dimensions, and indicators of platform success and their applications potentially facilitate anesthesia professionals' CE explorations and CE providers' marketing efforts, as well as contextualize the overarching impacts and implications for the anesthesia CE industry and beyond. The conclusion portrays these impacts and implications.

    Improved 3D MR Image Acquisition and Processing in Congenital Heart Disease

    Congenital heart disease (CHD) is the most common type of birth defect, affecting about 1% of the population. MRI is an essential tool in the assessment of CHD, including diagnosis, intervention planning and follow-up. Three-dimensional MRI can provide particularly rich visualization and information. However, it is often complicated by long scan times, cardiorespiratory motion, injection of contrast agents, and complex and time-consuming postprocessing. This thesis comprises four pieces of work that respond to some of these challenges. The first aims to enable fast acquisition of 3D time-resolved cardiac imaging during free breathing. Rapid imaging was achieved using an efficient spiral sequence and a sparse parallel imaging reconstruction. The feasibility of this approach was demonstrated on a population of 10 patients with CHD, and areas of improvement were identified. The second piece of work is an integrated software tool designed to simplify and accelerate the development of machine learning (ML) applications in MRI research. It also exploits the strengths of recently developed ML libraries for efficient MR image reconstruction and processing. The third piece of work aims to reduce contrast dose in contrast-enhanced MR angiography (MRA), which would reduce the risks and costs associated with contrast agents. A deep learning-based contrast enhancement technique was developed and shown to improve image quality in real low-dose MRA in a population of 40 children and adults with CHD. The fourth and final piece of work aims to simplify the creation of computational models for hemodynamic assessment of the great arteries. A deep learning technique for 3D segmentation of the aorta and the pulmonary arteries was developed and shown to enable accurate calculation of clinically relevant biomarkers in a population of 10 patients with CHD.
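To illustrate the last step, turning a 3D vessel segmentation into a clinical biomarker, here is a toy computation on a synthetic aortic mask. The abstract does not specify which biomarkers were calculated, so this sketch invents a simple one: the equivalent luminal diameter per axial slice and its narrowest-to-widest ratio, a coarctation-style severity index:

```python
import numpy as np

def narrowing_index(aorta_mask, voxel_mm=(1.0, 1.0, 1.0)):
    """Equivalent luminal diameter per axial slice of a binary aortic
    segmentation (z, y, x), returned as the narrowest-to-widest ratio."""
    dz, dy, dx = voxel_mm
    areas = aorta_mask.sum(axis=(1, 2)) * dy * dx
    diam = 2.0 * np.sqrt(areas[areas > 0] / np.pi)
    return float(diam.min() / diam.max())

# Synthetic vessel: a tube of radius 8 voxels with a narrowed mid-segment.
mask = np.zeros((20, 32, 32), dtype=bool)
yy, xx = np.ogrid[:32, :32]
for z in range(20):
    r = 4 if 8 <= z < 12 else 8
    mask[z] = (yy - 16) ** 2 + (xx - 16) ** 2 <= r ** 2
ci = narrowing_index(mask)
```

Once segmentation is automated by a network, measurements of this kind become reproducible and essentially free, which is the practical payoff described in the abstract.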

    Translating computational modelling tools for clinical practice in congenital heart disease

    Increasingly large numbers of medical centres worldwide are equipped to acquire 3D images of patients using magnetic resonance (MR) or computed tomography (CT) scanners. The interpretation of patient 3D image data has significant implications for clinical decision-making and treatment planning. Even in their raw form, MR and CT images have become critical to routine practice. However, in congenital heart disease (CHD), lesions are often anatomically and physiologically complex, and in many cases 3D imaging alone can fail to provide conclusive information for the clinical team. In the past 20-30 years, image-derived modelling applications have advanced considerably. Tools such as computational fluid dynamics (CFD) and virtual reality (VR) have demonstrated valuable uses in the management of CHD, but due to current software limitations these applications have remained largely confined to research settings and have yet to become part of clinical practice. The overall aim of this project was to explore new routes for making conventional computational modelling software more accessible to CHD clinics. The first objective was to create an automatic and fast pipeline for performing vascular CFD simulations. By leveraging machine learning, a solution was built using synthetically generated aortic anatomies, and it was able to predict 3D aortic pressure and velocity flow fields with accuracy comparable to conventional CFD. The second objective was to design a virtual reality (VR) application tailored to supporting the surgical planning and teaching of CHD. The solution was a Unity-based application including numerous specialised tools, such as mesh-editing features and online networking for group learning.
Overall, the outcomes of this ongoing project give strong indications that the integration of VR and CFD into clinical settings is possible, and has the potential to extend 3D imaging and support the diagnosis, management and teaching of CHD.
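The ML-accelerated CFD idea can be miniaturised: learn, from precomputed simulations, a fast map from anatomical and flow parameters to a haemodynamic quantity. The sketch below is purely illustrative; it replaces CFD with an analytic stand-in (velocity from continuity, v = Q / A, and a quadratic pressure-loss law) and fits a linear least-squares surrogate, whereas the thesis work predicts full 3D pressure and velocity fields:

```python
import numpy as np

# Synthetic "training data": parameters with pressure drops from an analytic
# stand-in for CFD. Units and the loss coefficient are arbitrary.
rng = np.random.default_rng(42)
d = rng.uniform(1.0, 3.0, 200)         # vessel diameter
q = rng.uniform(50.0, 150.0, 200)      # volumetric flow
v = q / (np.pi * (d / 2.0) ** 2)       # mean velocity from continuity
dp = 4.0e-3 * v ** 2                   # quadratic pressure-loss law

# Linear least-squares surrogate on hand-chosen features of (d, q).
X = np.column_stack([np.ones_like(q), q ** 2, q ** 2 / d ** 4])
coef, *_ = np.linalg.lstsq(X, dp, rcond=None)

def predict_dp(d_new, q_new):
    """Surrogate prediction: evaluating this is orders of magnitude faster
    than running a flow simulation."""
    return float(np.array([1.0, q_new ** 2, q_new ** 2 / d_new ** 4]) @ coef)
```

Because the chosen features contain the true functional form, the surrogate here recovers it almost exactly; in the real pipeline the "features" are learned from synthetic anatomies, but the train-offline, predict-instantly structure is the same.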