
    Angular Upsampling in Infant Diffusion MRI Using Neighborhood Matching in x-q Space

    Diffusion MRI requires sufficient coverage of the diffusion wavevector space, also known as the q-space, to adequately capture the pattern of water diffusion in various directions and scales. As a result, the acquisition time can be prohibitive for individuals who are unable to stay still in the scanner for an extensive period of time, such as infants. To address this problem, in this paper we harness non-local self-similar information in the x-q space of diffusion MRI data for q-space upsampling. Specifically, we first perform neighborhood matching to establish the relationships of signals in x-q space. The signal relationships are then used to regularize an ill-posed inverse problem related to the estimation of high angular resolution diffusion MRI data from its low-resolution counterpart. Our framework allows information from curved white matter structures to be used for effective regularization of the otherwise ill-posed problem. Extensive evaluations using synthetic and infant diffusion MRI data demonstrate the effectiveness of our method. Compared with the widely adopted interpolation methods using spherical radial basis functions and spherical harmonics, our method is able to produce high angular resolution diffusion MRI data with greater quality, both qualitatively and quantitatively. Comment: 15 pages, 12 figures
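    The estimation step described above amounts to solving a regularized least-squares problem. The following is a minimal sketch of that kind of formulation, not the authors' implementation: a graph-Laplacian penalty built from hypothetical neighborhood-matching weights W regularizes the recovery of high angular resolution signals from a sampling operator A. All names, shapes, and the solver choice are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def upsample_q_space(y, A, W, lam=0.1):
    """Sketch: estimate high angular resolution signals x from low-resolution
    measurements y by solving
        min_x ||A x - y||^2 + lam * x^T (D - W) x,
    where W holds (hypothetical) neighborhood-matching similarity weights in
    x-q space and D - W is the corresponding graph Laplacian.

    y : (m,) observed signals
    A : (m, n) sparse sampling operator
    W : (n, n) sparse symmetric similarity weights
    """
    L = diags(np.asarray(W.sum(axis=1)).ravel()) - W   # graph Laplacian D - W
    lhs = A.T @ A + lam * L                            # normal-equations matrix
    rhs = A.T @ y
    x, info = cg(lhs, rhs)                             # conjugate-gradient solve
    return x
```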

    Image Reconstruction from Undersampled Confocal Microscopy Data using Multiresolution Based Maximum Entropy Regularization

    We consider the problem of reconstructing 2D images from randomly under-sampled confocal microscopy samples. The well-known and widely celebrated total variation regularization, which is the L1 norm of derivatives, turns out to be unsuitable for this problem; it is unable to handle noise and under-sampling together. This issue is linked to the phase transition phenomenon observed in compressive sensing research, which is essentially the breakdown of total variation methods when the sampling density falls below a certain threshold. The severity of this breakdown is determined by the so-called mutual incoherence between the derivative operators and the measurement operator. In our problem, the mutual incoherence is low, and hence total variation regularization gives serious artifacts in the presence of noise even when the sampling density is not very low. There have been very few attempts to develop regularization methods that perform better than total variation regularization for this problem. We develop a multi-resolution based regularization method that is adaptive to image structure. In our approach, the desired reconstruction is formulated as a series of coarse-to-fine multi-resolution reconstructions; at each level, the regularization is constructed to be adaptive to the image structure, where the information for adaptation is obtained from the reconstruction at the coarser resolution level. This adaptation is achieved using the maximum entropy principle, where the required adaptive regularization is determined as the maximizer of the entropy subject to constraints extracted from the coarse reconstruction. We demonstrate the superiority of the proposed regularization method over existing ones using several reconstruction examples.
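    To make the coarse-to-fine idea concrete, here is a self-contained sketch under simplifying assumptions: a weighted quadratic-smoothness prior stands in for the paper's maximum-entropy regularization, and the weights for the fine pass are lowered near edges detected in a coarse, uniformly regularized reconstruction. All function and parameter names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def weighted_smooth_reconstruct(y, mask, w, lam=0.5, n_iter=300, step=0.2):
    """Fill unsampled pixels by gradient descent on
       0.5*||mask*(x - y)||^2 + 0.5*lam * sum(w * |grad x|^2).
    y, mask, w : float arrays of the same 2D shape (mask is 1 at sampled pixels).
    """
    x = y.astype(float).copy()
    for _ in range(n_iter):
        gx = np.gradient(x, axis=1)
        gy = np.gradient(x, axis=0)
        # div(w * grad x) approximates the negative gradient of the smoothness term
        div = np.gradient(w * gx, axis=1) + np.gradient(w * gy, axis=0)
        grad = mask * (x - y) - lam * div
        x -= step * grad
    return x

def reconstruct_coarse_to_fine(y, mask, lam=0.5):
    # Coarse pass with uniform weights, then a fine pass whose weights are
    # reduced across edges seen in the coarse result (structure adaptivity).
    coarse = weighted_smooth_reconstruct(y, mask, np.ones_like(y, dtype=float), lam)
    smoothed = gaussian_filter(coarse, 1)
    edges = np.hypot(sobel(smoothed, 0), sobel(smoothed, 1))
    w = 1.0 / (1.0 + edges / (edges.mean() + 1e-8))   # weak smoothing at edges
    return weighted_smooth_reconstruct(y, mask, w, lam)
```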

    Depth Superresolution using Motion Adaptive Regularization

    The spatial resolution of depth sensors is often significantly lower than that of conventional optical cameras. Recent work has explored the idea of improving the resolution of depth using a higher-resolution intensity image as side information. In this paper, we demonstrate that further incorporating temporal information from videos can significantly improve the results. In particular, we propose a novel approach that improves depth resolution by exploiting the space-time redundancy in the depth and intensity data using motion-adaptive low-rank regularization. Experiments confirm that the proposed approach substantially improves the quality of the estimated high-resolution depth. Our approach can be a first component in systems that use vision techniques relying on high-resolution depth information.
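    One common way to realize a low-rank prior of this kind is singular-value soft-thresholding applied to motion-aligned patch stacks. The sketch below shows only that generic step under assumed inputs; it is not the paper's motion-adaptive algorithm, and the names and the threshold tau are illustrative.

```python
import numpy as np

def low_rank_patch_update(patch_stack, tau=0.1):
    """patch_stack: (n_pixels_per_patch, n_frames) matrix whose columns are
    depth patches gathered along a motion trajectory; returns a low-rank
    approximation obtained by soft-thresholding the singular values."""
    U, s, Vt = np.linalg.svd(patch_stack, full_matrices=False)
    s = np.maximum(s - tau, 0.0)        # shrink singular values toward low rank
    return (U * s) @ Vt
```

    In a full pipeline, an update of this form would typically alternate with a data-fidelity step tying the estimate to the observed low-resolution depth and intensity.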

    End-to-End Learning of Video Super-Resolution with Motion Compensation

    Learning approaches have shown great success in the task of super-resolving an image given a low-resolution input. Video super-resolution aims to additionally exploit the information from multiple images. Typically, the images are related via optical flow and consecutive image warping. In this paper, we provide an end-to-end video super-resolution network that, in contrast to previous works, includes the estimation of optical flow in the overall network architecture. We analyze the usage of optical flow for video super-resolution and find that common off-the-shelf image warping does not allow video super-resolution to benefit much from optical flow. We rather propose an operation for motion compensation that performs warping from low to high resolution directly. We show that with this network configuration, video super-resolution can benefit from optical flow, and we obtain state-of-the-art results on popular test sets. We also show that the processing of whole images rather than independent patches is responsible for a large increase in accuracy. Comment: Accepted to GCPR 2017
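    As a rough illustration of warping from low to high resolution directly, the sketch below upscales a low-resolution optical-flow field and samples the low-resolution frame at motion-compensated positions on the high-resolution grid. It is a standalone approximation with assumed array shapes, not the differentiable network layer proposed in the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def warp_lr_to_hr(lr_frame, flow_lr, scale=4):
    """lr_frame: (h, w) low-resolution image; flow_lr: (2, h, w) optical flow
    in low-resolution pixel units. Returns lr_frame warped directly onto the
    (scale*h, scale*w) high-resolution grid."""
    # Upscale the flow field and convert its units to high-resolution pixels.
    flow_hr = zoom(flow_lr, (1, scale, scale), order=1) * scale
    H, W = flow_hr.shape[1:]
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Motion-compensated sampling positions, expressed in LR coordinates.
    src_y = (yy + flow_hr[0]) / scale
    src_x = (xx + flow_hr[1]) / scale
    return map_coordinates(lr_frame, [src_y, src_x], order=1, mode="nearest")
```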