
    Fast Learning-based Registration of Sparse 3D Clinical Images

    We introduce SparseVM, a method that registers clinical-quality 3D MR scans both faster and more accurately than previously possible. Deformable alignment, or registration, of clinical scans is a fundamental task for many clinical neuroscience studies. However, most registration algorithms are designed for high-resolution research-quality scans. In contrast to research-quality scans, clinical scans are often sparse, missing up to 86% of the slices available in research-quality scans. Existing methods for registering these sparse images are either inaccurate or extremely slow. We present a learning-based registration method, SparseVM, that is more accurate and orders of magnitude faster than the most accurate clinical registration methods. To our knowledge, it is the first method to use deep learning specifically tailored to registering clinical images. We demonstrate our method on a clinically-acquired MRI dataset of stroke patients and on a simulated sparse MRI dataset. Our code is available as part of the VoxelMorph package at http://voxelmorph.mit.edu/. Comment: This version was accepted to CHIL. It builds on the previous version of the paper and includes more experimental results.
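
    The core pattern shared with VoxelMorph is to have a network predict a dense displacement field and apply it with a differentiable spatial transformer, training on an image-similarity loss plus a smoothness penalty. Below is a minimal 2D PyTorch sketch of that pattern; it is not the SparseVM implementation, and the architecture, loss weights, and image sizes are illustrative assumptions.

        # Minimal VoxelMorph-style registration sketch (2D, illustrative only).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class RegNet(nn.Module):
            """Predicts a 2-channel displacement field from a (moving, fixed) pair."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(2, 16, 3, padding=1), nn.LeakyReLU(0.2),
                    nn.Conv2d(16, 16, 3, padding=1), nn.LeakyReLU(0.2),
                    nn.Conv2d(16, 2, 3, padding=1),
                )

            def forward(self, moving, fixed):
                return self.net(torch.cat([moving, fixed], dim=1))

        def warp(image, flow):
            # Identity grid plus displacement, normalized to [-1, 1] for grid_sample.
            _, _, h, w = image.shape
            ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                                    torch.arange(w, dtype=torch.float32),
                                    indexing="ij")
            gx = (xs + flow[:, 0]) * 2 / (w - 1) - 1
            gy = (ys + flow[:, 1]) * 2 / (h - 1) - 1
            return F.grid_sample(image, torch.stack([gx, gy], dim=-1),
                                 align_corners=True)

        # One illustrative training step: similarity loss + smoothness of the flow.
        model = RegNet()
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        moving, fixed = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
        opt.zero_grad()
        flow = model(moving, fixed)
        loss = F.mse_loss(warp(moving, flow), fixed) \
             + 0.01 * flow.diff(dim=-1).pow(2).mean()
        loss.backward()
        opt.step()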

    On postglacial sea level—III. Incorporating sediment redistribution

    We derive a generalized theory for gravitationally self-consistent, static sea level variations on earth models of arbitrary complexity that takes into account the redistribution of sediments. The theory is an extension of previous work that incorporated, into the governing equations, shoreline migration due to local sea level variations and changes in the geometry of grounded, marine-based ice. In addition, we use viscoelastic Love number theory to present a version of the new theory valid for spherically symmetric earth models. The Love number theory accounts for the gravitational, deformational and rotational effects of the sediment redistribution. As a first, illustrative application of the new theory, we compute the perturbation in sea level driven by an idealized pulse of sediment transport into the Gulf of Mexico. We demonstrate that incorporating a gravitationally self-consistent water load in this case significantly improves the accuracy of sea level predictions relative to previous simplified treatments of the sediment redistribution.
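
    For orientation, the classical gravitationally self-consistent ("Farrell-Clark") sea-level equation that this theory generalizes can be written schematically as follows; the notation is generic rather than the paper's own, and the sediment term in the surface load is the extension at issue:

        \Delta SL(\theta,\phi,t) =
            \frac{\Delta\Phi(\theta,\phi,t)}{g} - \Delta R(\theta,\phi,t)
            + \frac{\Delta\Phi_{\mathrm{unif}}(t)}{g},
        \qquad
        \mathcal{L}(\theta,\phi,t) =
            \rho_{\mathrm{ice}}\,\Delta I + \rho_{\mathrm{w}}\,C\,\Delta SL
            + \rho_{\mathrm{sed}}\,\Delta H,

    where \Delta\Phi/g is the geoid perturbation, \Delta R the radial solid-earth displacement, the uniform potential shift enforces mass conservation, C is the ocean function, \Delta I is the change in ice thickness, and \Delta H is the change in sediment thickness entering the surface load \mathcal{L}.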

    Deep Group-wise Variational Diffeomorphic Image Registration

    Deep neural networks are increasingly used for pair-wise image registration. We propose to extend current learning-based image registration to allow simultaneous registration of multiple images. To achieve this, we build upon the pair-wise variational and diffeomorphic VoxelMorph approach and present a general mathematical framework that enables both registration of multiple images to their geodesic average and registration in which any of the available images can be used as a fixed image. In addition, we provide a likelihood between multiple images based on normalized mutual information, a well-known image similarity metric in registration, and a prior that allows for explicit control over the viscous fluid energy to effectively regularize deformations. We trained and evaluated our approach using intra-patient registration of breast MRI and thoracic 4DCT exams acquired over multiple time points. Comparison with Elastix and VoxelMorph demonstrates competitive quantitative performance of the proposed method in terms of image similarity and reference landmark distances, at significantly faster registration speeds.
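
    Normalized mutual information, the similarity underlying the likelihood above, can be computed from a joint intensity histogram. The numpy sketch below uses one common definition, NMI = (H(X) + H(Y)) / H(X, Y); the paper's exact multi-image formulation may differ.

        # NMI between two images from a joint histogram (one common definition).
        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        def nmi(x, y, bins=32):
            joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

        a = np.random.rand(64, 64)
        print(nmi(a, a))                       # identical images: 2 under this definition
        print(nmi(a, np.random.rand(64, 64)))  # independent images: close to 1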

    Diffusion tensor driven image registration: a deep learning approach

    Tracking microstructural changes in the developing brain relies on accurate inter-subject image registration. However, most methods rely on either structural or diffusion data to learn the spatial correspondences between two or more images, without taking into account the complementary information provided by using both. Here we propose a deep learning registration framework which combines the structural information provided by T2-weighted (T2w) images with the rich microstructural information offered by diffusion tensor imaging (DTI) scans. We perform a leave-one-out cross-validation study where we compare the performance of our multi-modality registration model with a baseline model trained on structural data only, in terms of Dice scores and differences in fractional anisotropy (FA) maps. Our results show that in terms of average Dice scores our model performs better in subcortical regions when compared to using structural data only. Moreover, average sum-of-squared differences between warped and fixed FA maps show that our proposed model performs better at aligning the diffusion data.
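
    The FA maps used for evaluation come directly from the diffusion tensors. Below is a minimal numpy sketch of the standard formula FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda|| over the tensor eigenvalues; the example tensor is invented.

        # Fractional anisotropy from diffusion tensor eigenvalues.
        import numpy as np

        def fractional_anisotropy(tensor):
            """tensor: (..., 3, 3) symmetric diffusion tensor(s)."""
            lam = np.linalg.eigvalsh(tensor)              # eigenvalues, shape (..., 3)
            dev = lam - lam.mean(axis=-1, keepdims=True)  # deviation from mean diffusivity
            num = np.sqrt((dev ** 2).sum(axis=-1))
            den = np.sqrt((lam ** 2).sum(axis=-1))
            return np.sqrt(1.5) * num / np.maximum(den, 1e-30)  # guard all-zero tensors

        # Strong diffusion along one axis gives high FA (here about 0.8).
        D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
        print(fractional_anisotropy(D))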

    ParMap, an algorithm for the identification of small genomic insertions and deletions in nextgen sequencing data

    Background: Next-generation sequencing produces high-throughput data, albeit with greater error and shorter reads than traditional Sanger sequencing methods. This complicates the detection of genomic variations, especially small insertions and deletions. Findings: Here we describe ParMap, a statistical algorithm for the identification of complex genetic variants, such as small insertions and deletions, using partially mapped reads in nextgen sequencing data. Conclusions: We report ParMap's successful application to the mutation analysis of chromosome X exome-captured leukemia DNA samples.
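
    As a toy illustration of why partially mapped reads are informative (this is not ParMap's statistical algorithm), a read whose prefix anchors to the reference but whose tail re-anchors at an offset position implies an indel of that size; all sequences below are invented.

        # Locate a small indel from a partially mapped read (toy sketch).
        def find_indel(read, reference, seed_len=8):
            prefix, suffix = read[:seed_len], read[-seed_len:]
            p = reference.find(prefix)                 # anchor the read prefix
            s = reference.find(suffix, p + seed_len)   # re-anchor the tail downstream
            if p < 0 or s < 0:
                return None
            expected = p + len(read) - seed_len  # tail position if the read matched exactly
            shift = s - expected                 # >0: deletion in read; <0: insertion
            return ("deletion" if shift > 0 else "insertion", abs(shift)) if shift else None

        ref  = "ACGTACGGTTCAGGCTAAGCCTTAACGGTTCA"
        read = "ACGTACGGTTCAAGCCTTAACGGTTCA"   # skips "GGCTA": a 5 bp deletion
        print(find_indel(read, ref))           # -> ('deletion', 5)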

    Spatiotemporal PET reconstruction using ML-EM with learned diffeomorphic deformation

    Patient movement in emission tomography deteriorates reconstruction quality because of motion blur. Gating the data improves the situation somewhat: each gate contains a movement phase which is approximately stationary. A standard method is to use only the data from a few gates, with little movement between them. However, the corresponding loss of data entails an increase of noise. Motion correction algorithms have been implemented to take into account all the gated data, but they do not scale well, especially not in 3D. We propose a novel motion correction algorithm which addresses the scalability issue. Our approach is to combine an enhanced ML-EM algorithm with deep learning based movement registration. The training is unsupervised and uses artificial data. We expect this approach to scale very well to higher resolutions and to 3D, as the overall cost of our algorithm is only marginally greater than that of a standard ML-EM algorithm. We show that we can significantly decrease the noise corresponding to a limited number of gates.
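
    For reference, the classical ML-EM iteration that such methods enhance is x_{k+1} = x_k / (A^T 1) * A^T(y / (A x_k)). The numpy sketch below runs it on a toy system matrix; the motion-corrected, learned-deformation machinery of the paper is omitted, and all sizes are invented.

        # Classical ML-EM on a toy emission tomography problem.
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((50, 20))                   # system matrix: 50 detector bins, 20 voxels
        x_true = rng.random(20)
        y = rng.poisson(A @ x_true * 100) / 100.0  # noisy measured counts

        x = np.ones(20)                            # uniform initial image
        sens = A.T @ np.ones(50)                   # sensitivity image A^T 1
        for _ in range(100):
            ratio = y / np.maximum(A @ x, 1e-12)   # guard empty projections
            x = x / sens * (A.T @ ratio)

        print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error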

    Reconstructing Video from Interferometric Measurements of Time-Varying Sources

    Very long baseline interferometry (VLBI) makes it possible to recover images of astronomical sources with extremely high angular resolution. Most recently, the Event Horizon Telescope (EHT) has extended VLBI to short millimeter wavelengths with a goal of achieving angular resolution sufficient for imaging the event horizons of nearby supermassive black holes. VLBI provides measurements related to the underlying source image through a sparse set of spatial frequencies. An image can then be recovered from these measurements by making assumptions about the underlying image. One of the most important assumptions made by conventional imaging methods is that over the course of a night's observation the image is static. However, for quickly evolving sources, such as the galactic center's supermassive black hole (Sgr A*) targeted by the EHT, this assumption is violated and these conventional imaging approaches fail. In this work we propose a new way to model VLBI measurements that allows us to recover both the appearance and dynamics of an evolving source by reconstructing a video rather than a static image. By modeling VLBI measurements using a Gaussian Markov model, we are able to propagate information across observations in time to reconstruct a video, while simultaneously learning about the dynamics of the source's emission region. We demonstrate our proposed Expectation-Maximization (EM) algorithm, StarWarps, on realistic synthetic observations of black holes, and show how it substantially improves results compared to conventional imaging algorithms. Additionally, we demonstrate StarWarps on real VLBI data of the M87 jet from the VLBA.
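
    The temporal propagation can be pictured as filtering in a linear-Gaussian (Gaussian Markov) dynamical model: each frame evolves as x_t = F x_{t-1} + noise and is observed through sparse measurements y_t = H_t x_t + noise. The sketch below shows a generic Kalman-style predict/update step on a toy problem; it illustrates the modeling idea only, not the StarWarps EM algorithm, and every dimension and noise level is invented.

        # Generic Kalman predict/update step for a Gaussian Markov model.
        import numpy as np

        def kalman_step(mean, cov, y, F, Q, H, R):
            mean_p = F @ mean                      # predict through the dynamics
            cov_p = F @ cov @ F.T + Q
            S = H @ cov_p @ H.T + R                # innovation covariance
            K = cov_p @ H.T @ np.linalg.inv(S)     # Kalman gain
            mean = mean_p + K @ (y - H @ mean_p)   # fold in the sparse data
            cov = (np.eye(len(mean)) - K @ H) @ cov_p
            return mean, cov

        n = 16                                     # tiny 16-pixel "frame"
        F, Q = np.eye(n), 0.01 * np.eye(n)         # near-static dynamics
        mean, cov = np.zeros(n), np.eye(n)
        rng = np.random.default_rng(1)
        for t in range(10):
            H = rng.random((4, n))                 # 4 sparse measurements per step
            y = H @ np.ones(n) + 0.1 * rng.standard_normal(4)
            mean, cov = kalman_step(mean, cov, y, F, Q, H, 0.01 * np.eye(4))
        print(mean.round(2))                       # drifts toward the true frame (all ones)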

    Mutation Detection with Next-Generation Resequencing through a Mediator Genome

    The affordability of next generation sequencing (NGS) is transforming the field of mutation analysis in bacteria. The genetic basis for phenotype alteration can be identified directly by sequencing the entire genome of the mutant and comparing it to the wild-type (WT) genome, thus identifying acquired mutations. A major limitation for this approach is the need for an a priori sequenced reference genome for the WT organism, as the short reads of most current NGS approaches usually prohibit de novo genome assembly. To overcome this limitation we propose a general framework that utilizes the genomes of related organisms as mediators for comparing WT and mutant bacteria. Under this framework, both mutant and WT genomes are sequenced with NGS, and the short sequencing reads are mapped to the mediator genome. Variations between the mutant and the mediator that recur in the WT are ignored, thus pinpointing the differences between the mutant and the WT. To validate this approach we sequenced the genome of Bdellovibrio bacteriovorus 109J, an obligatory bacterial predator, and its prey-independent mutant, and compared both to the mediator species Bdellovibrio bacteriovorus HD100. Although the mutant and the mediator sequences differed in more than 28,000 nucleotide positions, our approach enabled pinpointing the single causative mutation. Experimental validation in 53 additional mutants further established the implicated gene. Our approach extends the applicability of NGS-based mutant analyses beyond the domain of available reference genomes.
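
    The filtering step reduces to a set difference over variant calls: variants the mutant shares with the WT (relative to the mediator) are background divergence, and what remains is a candidate causative mutation. A toy sketch with invented (position, ref_base, alt_base) tuples:

        # Mediator-genome filtering as a set difference (toy data).
        wt_vs_mediator     = {(1021, "A", "G"), (5430, "T", "C"), (9888, "G", "A")}
        mutant_vs_mediator = {(1021, "A", "G"), (5430, "T", "C"), (9888, "G", "A"),
                              (7777, "C", "T")}  # the extra call

        candidates = mutant_vs_mediator - wt_vs_mediator
        print(candidates)  # {(7777, 'C', 'T')}: the mutant-specific change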

    SHRiMP: Accurate Mapping of Short Color-space Reads

    The development of Next Generation Sequencing technologies, capable of sequencing hundreds of millions of short reads (25–70 bp each) in a single run, is opening the door to population genomic studies of non-model species. In this paper we present SHRiMP - the SHort Read Mapping Package: a set of algorithms and methods to map short reads to a genome, even in the presence of a large amount of polymorphism. Our method is based upon a fast read mapping technique, separate thorough alignment methods for regular letter-space as well as AB SOLiD (color-space) reads, and a statistical model for false positive hits. We use SHRiMP to map reads from a newly sequenced Ciona savignyi individual to the reference genome. We demonstrate that SHRiMP can accurately map reads to this highly polymorphic genome, while confirming high heterozygosity of C. savignyi in this second individual. SHRiMP is freely available at http://compbio.cs.toronto.edu/shrimp
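
    The AB SOLiD reads mentioned above are in "color space": each signal encodes a two-base transition rather than a single base, and under the usual 2-bit base assignment the color equals the XOR of adjacent base codes. A small sketch (sequence invented):

        # SOLiD-style color-space encoding of a base sequence.
        CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

        def to_color_space(seq):
            return [CODE[a] ^ CODE[b] for a, b in zip(seq, seq[1:])]

        print(to_color_space("ATGGCA"))  # -> [3, 1, 0, 3, 1]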