48,719 research outputs found

    Aggregated motion estimation for real-time MRI reconstruction

    Real-time magnetic resonance imaging (MRI) methods generally shorten the measurement time by acquiring less data than the sampling theorem requires. To obtain a proper image from such undersampled data, the reconstruction is commonly posed as the solution of an inverse problem, regularized by a priori assumptions about the object. While practical realizations have hitherto been surprisingly successful, strong assumptions about the continuity of image features may affect the temporal fidelity of the estimated images. Here we propose a novel approach for the reconstruction of serial real-time MRI data that integrates the deformations between nearby frames into the data consistency term. The deformations are not required to be affine or rigid, and no additional measurements are needed. Moreover, the method handles multi-channel MRI data by simultaneously determining the image and its coil sensitivity profiles in a nonlinear formulation, which also adapts to non-Cartesian (e.g., radial) sampling schemes. Experimental results on a motion phantom with controlled speed and in vivo measurements of rapid tongue movements demonstrate improved preservation of temporal fidelity and removal of residual artifacts.
    Comment: This is a preliminary technical report. A polished version is published by Magnetic Resonance in Medicine. Magnetic Resonance in Medicine 201
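    The motion-aggregated data-consistency idea can be illustrated with a toy gradient-descent solver in which the deformations between nearby frames are reduced to integer translations, on single-channel Cartesian data. This is a strong simplification of the paper's non-rigid, multi-coil, non-Cartesian formulation; the function name, solver, and parameters are illustrative assumptions:

    ```python
    import numpy as np

    def reconstruct_frame(ys, masks, shifts, n_iter=200, lr=0.5):
        """Toy motion-aggregated reconstruction (illustrative, not the paper's solver).

        ys[k]     : undersampled k-space data of neighboring frame k
        masks[k]  : binary sampling mask for frame k
        shifts[k] : integer pixel shift approximating the deformation W_k
        Minimizes  sum_k || M_k F W_k(x) - y_k ||^2  by gradient descent,
        so every nearby frame contributes to the data consistency term.
        """
        x = np.zeros_like(np.fft.ifft2(ys[0]))
        for _ in range(n_iter):
            grad = np.zeros_like(x)
            for y, m, s in zip(ys, masks, shifts):
                warped = np.roll(x, s, axis=(0, 1))        # W_k: pure translation
                resid = m * (np.fft.fft2(warped) - y)      # data-consistency residual
                # adjoint: inverse FFT, then undo the translation (W_k^T)
                grad += np.roll(np.fft.ifft2(resid), (-s[0], -s[1]), axis=(0, 1))
            x = x - lr * grad / len(ys)
        return x
    ```

    Because each frame samples a different part of k-space, the union of the masks can cover frequencies that no single undersampled frame contains, which is the benefit of aggregating motion-corrected neighbors.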

    Learning to Predict Image-based Rendering Artifacts with Respect to a Hidden Reference Image

    Image metrics predict the perceived per-pixel difference between a reference image and its degraded (e.g., re-rendered) version. In several important applications, the reference image is not available and image metrics cannot be applied. We devise a neural network architecture and training procedure that allow predicting the MSE, SSIM, or VGG16 image difference from the distorted image alone, without observing the reference. This is enabled by two insights: the first is to inject sufficiently many undistorted natural image patches, which can be found in arbitrary amounts and are known to have no perceivable difference to themselves; this avoids false positives. The second is to balance the learning, carefully ensuring that all image errors are equally likely, which avoids false negatives. Surprisingly, we observe that the resulting no-reference metric, subjectively, can even perform better than the reference-based one, as it had to become robust against misalignments. We evaluate the effectiveness of our approach in an image-based rendering context, both quantitatively and qualitatively. Finally, we demonstrate two applications which reduce light field capture time and provide guidance for interactive depth adjustment.
    Comment: 13 pages, 11 figures
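    The two training insights can be sketched as a batch-construction routine: a fraction of each batch consists of undistorted natural patches with a zero-error target, and the remainder is drawn uniformly over error-magnitude bins. The function name, fractions, and binning scheme are assumptions for illustration, not the paper's actual pipeline:

    ```python
    import numpy as np

    def build_balanced_batch(distorted, errors, natural, batch_size=64,
                             zero_frac=0.25, n_bins=8, rng=None):
        """Hypothetical balanced batch for training a no-reference error predictor."""
        rng = rng or np.random.default_rng()
        errors = np.asarray(errors)

        # Insight 1: inject undistorted natural patches with target 0,
        # teaching the network what "no error" looks like (avoids false positives).
        n_zero = int(batch_size * zero_frac)
        patches = [natural[i] for i in rng.integers(len(natural), size=n_zero)]
        targets = [0.0] * n_zero

        # Insight 2: sample the rest uniformly over error-magnitude bins so that
        # every error level is equally likely in the batch (avoids false negatives).
        edges = np.quantile(errors, np.linspace(0, 1, n_bins + 1)[1:-1])
        bins = np.digitize(errors, edges)
        groups = [np.flatnonzero(bins == b) for b in range(n_bins)]
        groups = [g for g in groups if g.size > 0]
        for _ in range(batch_size - n_zero):
            i = rng.choice(groups[rng.integers(len(groups))])  # bin first, then patch
            patches.append(distorted[i])
            targets.append(errors[i])
        return np.stack(patches), np.asarray(targets)
    ```

    Without the bin-uniform sampling, rare large errors would be drowned out by the many small ones a typical renderer produces, biasing the regressor toward under-predicting error.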

    Wireless Software Synchronization of Multiple Distributed Cameras

    We present a method for precisely time-synchronizing the capture of image sequences from a collection of smartphone cameras connected over WiFi. Our method is entirely software-based, has only modest hardware requirements, and achieves an accuracy of less than 250 microseconds on unmodified commodity hardware. It does not use image content and synchronizes cameras prior to capture. The algorithm operates in two stages. In the first stage, we designate one device as the leader and synchronize each client device's clock to it by estimating network delay. Once clocks are synchronized, the second stage initiates continuous image streaming, estimates the relative phase of image timestamps between each client and the leader, and shifts the streams into alignment. We quantitatively validate our results on a multi-camera rig imaging a high-precision LED array and qualitatively demonstrate significant improvements to multi-view stereo depth estimation and stitching of dynamic scenes. We release as open source 'libsoftwaresync', an Android implementation of our system, to inspire new types of collective capture applications.
    Comment: Main: 9 pages, 10 figures. Supplemental: 3 pages, 5 figures
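    The two stages can be sketched as follows. Stage one is shown as an NTP-style clock-offset estimate from timestamped round trips, and stage two as a modular phase computation between periodic frame timestamps; the function names and this particular estimator are illustrative assumptions, not the released libsoftwaresync API:

    ```python
    def estimate_clock_offset(samples):
        """Stage one sketch: estimate a client's clock offset to the leader.

        Each sample is (t0, t1, t2, t3): t0/t3 are client send/receive times
        (client clock); t1/t2 are leader receive/send times (leader clock).
        The exchange with the smallest round-trip delay is used, since the
        symmetric-delay assumption introduces the least error there.
        """
        t0, t1, t2, t3 = min(samples, key=lambda s: (s[3] - s[0]) - (s[2] - s[1]))
        return ((t1 - t0) + (t2 - t3)) / 2.0

    def relative_phase(client_ts, leader_ts, frame_period):
        """Stage two sketch: smallest shift (modulo the frame period) that
        aligns a client frame timestamp with the leader's stream."""
        d = (leader_ts - client_ts) % frame_period
        return d if d <= frame_period / 2 else d - frame_period
    ```

    Once the phase is known, the client can delay its exposure trigger by that amount so subsequent frames land at the same instants as the leader's.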