
    Regularized Fourier ptychography using an online plug-and-play algorithm

    Full text link
    The plug-and-play priors (PnP) framework has recently been shown to achieve state-of-the-art results in regularized image reconstruction by leveraging a sophisticated denoiser within an iterative algorithm. In this paper, we propose a new online PnP algorithm for Fourier ptychographic microscopy (FPM) based on the accelerated proximal gradient method (APGM). Specifically, the proposed algorithm uses only a subset of measurements at every iteration, which makes it scalable to a large set of measurements. We validate the algorithm by showing that it can lead to significant performance gains on both simulated and experimental data. https://arxiv.org/abs/1811.00120 (Published version)
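The online PnP iteration the abstract describes can be sketched generically: an accelerated proximal gradient loop in which the proximal step is replaced by a plug-in denoiser and the data-fidelity gradient is computed on a random subset of measurements. The operator interface, step size, and identity-like denoiser below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def online_pnp_apgm(y, A_list, denoise, gamma=0.1, batch=4, iters=600, rng=None):
    """Sketch of an online PnP accelerated proximal gradient loop.

    y       : list of measurement vectors, one per illumination
    A_list  : list of (forward, adjoint) operator pairs (hypothetical interface)
    denoise : callable playing the role of the proximal operator (the plug-in prior)
    """
    rng = rng or np.random.default_rng(0)
    n = len(y)
    x = np.zeros_like(A_list[0][1](y[0]))            # initialize in image space
    s, t = x.copy(), 1.0
    for _ in range(iters):
        # Online step: gradient of the data fidelity on a random mini-batch only
        idx = rng.choice(n, size=batch, replace=False)
        grad = sum(A_list[i][1](A_list[i][0](s) - y[i]) for i in idx) / batch
        x_new = denoise(s - gamma * grad)            # denoiser replaces the prox step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))  # Nesterov momentum sequence
        s = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

Any denoiser with a compatible signature (BM3D, a CNN, or a simple shrinkage) can be plugged in for `denoise`; the step size `gamma` must be matched to the Lipschitz constant of the mini-batch gradient, as in standard APGM.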

    SIMBA: scalable inversion in optical tomography using deep denoising priors

    Full text link
    Two features desired in a three-dimensional (3D) optical tomographic image reconstruction algorithm are the ability to reduce imaging artifacts and to do fast processing of large data volumes. Traditional iterative inversion algorithms are impractical in this context due to their heavy computational and memory requirements. We propose and experimentally validate a novel scalable iterative mini-batch algorithm (SIMBA) for fast and high-quality optical tomographic imaging. SIMBA enables high-quality imaging by combining two complementary information sources: the physics of the imaging system characterized by its forward model and the imaging prior characterized by a denoising deep neural net. SIMBA easily scales to very large 3D tomographic datasets by processing only a small subset of measurements at each iteration. We establish the theoretical fixed-point convergence of SIMBA under nonexpansive denoisers for convex data-fidelity terms. We validate SIMBA on both simulated and experimentally collected intensity diffraction tomography (IDT) datasets. Our results show that SIMBA can significantly reduce the computational burden of 3D image formation without sacrificing the imaging quality. https://arxiv.org/abs/1911.13241 (First author draft)
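The fixed-point convergence claimed here hinges on the denoiser being nonexpansive (Lipschitz constant at most 1). As an illustration, one can empirically sanity-check this property for a simple denoiser; soft-thresholding, the proximal operator of an l1 penalty, is used below as a stand-in for the deep denoiser the paper employs, and the Monte-Carlo probe is only a lower bound on the true Lipschitz constant:

```python
import numpy as np

def soft_threshold(v, tau=0.1):
    # Proximal operator of tau*||.||_1 -- a provably nonexpansive denoiser
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def empirical_lipschitz(D, dim=64, trials=500, rng=None):
    """Monte-Carlo lower bound on the Lipschitz constant of denoiser D:
    max over random pairs of ||D(u) - D(v)|| / ||u - v||."""
    rng = rng or np.random.default_rng(0)
    best = 0.0
    for _ in range(trials):
        u = rng.standard_normal(dim)
        v = rng.standard_normal(dim)
        best = max(best, np.linalg.norm(D(u) - D(v)) / np.linalg.norm(u - v))
    return best
```

For a trained deep denoiser the same probe gives only a necessary check; certifying nonexpansiveness requires a constraint imposed during training, as in the spectral-normalization line of work.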

    Plug-and-Play Methods Provably Converge with Properly Trained Denoisers

    Full text link
    Plug-and-play (PnP) is a non-convex framework that integrates modern denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms. An advantage of PnP is that one can use pre-trained denoisers when there is not sufficient data for end-to-end training. Although PnP has recently been studied extensively with great empirical success, theoretical analysis addressing even the most basic question of convergence has been insufficient. In this paper, we theoretically establish convergence of PnP-FBS and PnP-ADMM, without using diminishing stepsizes, under a certain Lipschitz condition on the denoisers. We then propose real spectral normalization, a technique for training deep learning-based denoisers to satisfy the proposed Lipschitz condition. Finally, we present experimental results validating the theory. Comment: Published in the International Conference on Machine Learning, 2019
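The Lipschitz condition on the denoiser is enforced layer by layer by bounding each layer's spectral norm. The paper's "real" spectral normalization handles convolutional layers properly; the sketch below shows only the standard dense-layer version via power iteration, as a simplified stand-in rather than the paper's exact method:

```python
import numpy as np

def spectral_norm(W, iters=100, rng=None):
    """Estimate the largest singular value of W by power iteration."""
    rng = rng or np.random.default_rng(0)
    v = rng.standard_normal(W.shape[1])
    for _ in range(iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

def normalize_layer(W, target=1.0):
    """Rescale W so its spectral norm is at most `target` -- the per-layer
    constraint that spectral normalization enforces during training."""
    s = spectral_norm(W)
    return W if s <= target else W * (target / s)
```

Applying this rescaling to every layer bounds the Lipschitz constant of the whole network by the product of the per-layer targets, which is what makes the convergence guarantees applicable.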

    Convolutional Dictionary Regularizers for Tomographic Inversion

    Full text link
    There has been a growing interest in the use of data-driven regularizers to solve inverse problems associated with computational imaging systems. The convolutional sparse representation model has recently gained attention, driven by the development of fast algorithms for solving the dictionary learning and sparse coding problems for sufficiently large images and data sets. Nevertheless, this model has seen very limited application to tomographic reconstruction problems. In this paper, we present a model-based tomographic reconstruction algorithm using a learnt convolutional dictionary as a regularizer. The key contribution is the use of a data-dependent weighting scheme for the l1 regularization to construct an effective denoising method that is integrated into the inversion using the Plug-and-Play reconstruction framework. Using simulated data sets, we demonstrate that our approach can improve performance over traditional regularizers based on a Markov random field model and a patch-based sparse representation model for sparse- and limited-view tomographic data sets.
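The data-dependent weighting of the l1 term amounts to giving each dictionary coefficient its own shrinkage threshold. The sketch below shows a generic weighted soft-thresholding step together with one common reweighting rule (in the reweighted-l1 spirit); the specific weighting rule is an illustrative assumption, not the paper's scheme:

```python
import numpy as np

def weighted_l1_prox(coeffs, lam, weights):
    """Proximal (soft-threshold) step for a weighted l1 penalty
    sum_i lam * w_i * |c_i|: each coefficient gets its own threshold."""
    tau = lam * weights
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)

def reweight(coeffs, eps=1e-2):
    """One generic data-dependent weighting rule (illustrative stand-in):
    small coefficients receive large weights and are suppressed harder,
    large coefficients are barely penalized."""
    return 1.0 / (np.abs(coeffs) + eps)
```

In a PnP reconstruction, this weighted shrinkage applied in the convolutional dictionary's coefficient domain plays the role of the denoising step, while the tomographic forward model supplies the data-fidelity update.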