37 research outputs found

    Meta-Prior: Meta learning for Adaptive Inverse Problem Solvers

    Deep neural networks have become a foundational tool for addressing imaging inverse problems. They are typically trained for a specific task, with a supervised loss to learn a mapping from the observations to the image to be recovered. However, real-world imaging challenges often lack ground truth data, rendering traditional supervised approaches ineffective. Moreover, for each new imaging task, a new model needs to be trained from scratch, wasting time and resources. To overcome these limitations, we introduce a novel approach based on meta-learning. Our method trains a meta-model on a diverse set of imaging tasks, which allows the model to be efficiently adapted to specific tasks with few fine-tuning steps. We show that the proposed method extends to the unsupervised setting, where no ground truth data is available. In its bilevel formulation, the outer level uses a supervised loss that evaluates how well the fine-tuned model performs, while the inner loss can be either supervised or unsupervised, relying only on the measurement operator. This allows the meta-model to leverage a few ground truth samples for each task while generalizing to new imaging tasks. We show that in simple settings this approach recovers the Bayes-optimal estimator, illustrating the soundness of our approach. We also demonstrate our method's effectiveness on various tasks, including image processing and magnetic resonance imaging.
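    The bilevel structure described in the abstract can be sketched with a first-order meta-learning loop on a toy linear inverse problem. Everything below (the linear model, the task distribution, the FOMAML-style first-order approximation) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

# Toy first-order meta-learning sketch (hypothetical; illustrative only).
# A linear reconstruction model x_hat = W @ y is meta-trained over several
# "tasks", each defined by its own measurement operator A. Inner loop: a few
# gradient steps on the task loss; outer loop: a first-order (FOMAML-style)
# meta update using the gradient evaluated at the adapted weights.

rng = np.random.default_rng(0)
d = 8                                                     # signal dimension
tasks = [np.eye(d) + 0.3 * rng.standard_normal((d, d))    # measurement operators
         for _ in range(5)]

def task_loss_grad(W, A, n=64):
    """Supervised loss ||W A x - x||^2 on random ground-truth signals x."""
    X = rng.standard_normal((d, n))
    Y = A @ X                              # observations y = A x
    R = W @ Y - X                          # reconstruction residual
    return np.mean(R ** 2), 2.0 * (R @ Y.T) / (d * n)

def adapt(W0, A, steps=3, lr=0.5):
    """Few-step fine-tuning of the meta-weights for one task."""
    W = W0.copy()
    for _ in range(steps):
        _, g = task_loss_grad(W, A)
        W -= lr * g
    return W

W_meta = np.zeros((d, d))
meta_losses = []
for _ in range(100):
    grads, losses = [], []
    for A in tasks:
        W = adapt(W_meta, A)               # inner level: fine-tune per task
        loss, g = task_loss_grad(W, A)     # outer level: loss at adapted W
        losses.append(loss)
        grads.append(g)
    meta_losses.append(np.mean(losses))
    W_meta -= 0.5 * np.mean(grads, axis=0) # first-order meta update
```

    After meta-training, a few inner steps adapt `W_meta` to any one of the tasks, and the adapted model should outperform the same fine-tuning applied from scratch.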

    Building firmly nonexpansive convolutional neural networks

    Building nonexpansive Convolutional Neural Networks (CNNs) is a challenging problem that has recently gained a lot of attention from the image processing community. In particular, it appears to be the key to obtaining convergent Plug-and-Play algorithms. This problem, which relies on accurate control of the Lipschitz constant of the convolutional layers, has also been investigated for Generative Adversarial Networks to improve robustness to adversarial perturbations. However, to the best of our knowledge, no efficient method has yet been developed to build nonexpansive CNNs. In this paper, we develop an optimization algorithm that can be incorporated into the training of a network to ensure the nonexpansiveness of its convolutional layers. This is shown to allow us to build firmly nonexpansive CNNs. We apply the proposed approach to train a CNN for an image denoising task and show its effectiveness through simulations.
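    The Lipschitz control at the heart of this problem can be illustrated in a simple special case. Assuming a single-channel convolution with circular ("wrap") boundary conditions (not the optimisation algorithm proposed in the paper), the convolution is diagonalised by the DFT, so its exact Lipschitz constant (spectral norm) is the largest magnitude of the kernel's 2D Fourier transform; dividing the kernel by that value yields a nonexpansive (1-Lipschitz) layer:

```python
import numpy as np

# Minimal sketch: exact spectral norm of a single-channel 2D convolution
# under circular padding, computed from the kernel's Fourier transform,
# and a rescaling that makes the layer nonexpansive (1-Lipschitz).

def circ_conv(image, kernel):
    """2D convolution with circular boundary conditions, via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(image)
                                * np.fft.fft2(kernel, s=image.shape)))

def conv_spectral_norm(kernel, image_shape):
    """Exact spectral norm of the circular convolution with this kernel."""
    return np.abs(np.fft.fft2(kernel, s=image_shape)).max()

def make_nonexpansive(kernel, image_shape):
    """Rescale the kernel so the induced convolution is 1-Lipschitz."""
    return kernel / conv_spectral_norm(kernel, image_shape)

rng = np.random.default_rng(1)
k = make_nonexpansive(rng.standard_normal((3, 3)), (32, 32))
```

    Since the rescaled layer is linear, nonexpansiveness amounts to `circ_conv(x, k) - circ_conv(y, k)` having norm at most that of `x - y` for any pair of images. Multi-channel layers require a per-frequency singular value decomposition instead of a scalar magnitude.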

    Scalable precision wide-field imaging in radio interferometry: II. AIRI validated on ASKAP data

    Accompanying Part I, this sequel delineates a validation of the recently proposed AI for Regularisation in radio-interferometric Imaging (AIRI) algorithm on observations from the Australian Square Kilometre Array Pathfinder (ASKAP). The monochromatic AIRI-ASKAP images showcased in this work are formed using the same parallelised and automated imaging framework described in Part I, "uSARA validated on ASKAP data". Using a Plug-and-Play approach, AIRI differs from uSARA by substituting a trained denoising deep neural network (DNN) for the proximal operator in the regularisation step of the forward-backward algorithm during deconvolution. We build a trained shelf of DNN denoisers which target the estimated image dynamic ranges of our selected data. Furthermore, we quantify variations of AIRI reconstructions when selecting the nearest DNN on the shelf versus using a universal DNN with the highest dynamic range, opening the door to a more complete framework that not only delivers image estimation but also quantifies epistemic model uncertainty. We continue our comparative analysis of source structure, diffuse flux measurements, and spectral index maps of selected target sources as imaged by AIRI and the algorithms in Part I, uSARA and WSClean. Overall, we see an improvement over uSARA and WSClean in the reconstruction of diffuse components in AIRI images. The scientific potential delivered by AIRI is evident in further imaging precision, more accurate spectral index maps, and a significant acceleration in deconvolution time, whereby AIRI is four times faster than its sub-iterative sparsity-based counterpart uSARA. Accepted for publication in MNRAS.
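    The Plug-and-Play substitution described above (a denoiser in place of the proximal operator inside a forward-backward iteration) can be illustrated on a toy inpainting problem. The smoothing filter and pixel mask below are hypothetical stand-ins for AIRI's trained DNN denoiser and the radio-interferometric measurement operator:

```python
import numpy as np

# Plug-and-Play forward-backward sketch (illustrative only). Each iteration
# takes a gradient step on the data-fidelity term 0.5 * ||A x - y||^2, then
# applies a denoiser in place of the proximal operator of a regulariser.

rng = np.random.default_rng(2)
n = 64
x_true = np.zeros((n, n))
x_true[16:48, 16:48] = 1.0                      # toy piecewise-constant image
mask = rng.random((n, n)) < 0.5                 # measurement operator A (masking)
y = mask * (x_true + 0.05 * rng.standard_normal((n, n)))

def denoise(x):
    """Stand-in denoiser: average each pixel with its four neighbours."""
    return 0.5 * x + 0.125 * (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                              + np.roll(x, 1, 1) + np.roll(x, -1, 1))

x = np.zeros((n, n))
step = 1.0                                      # ||A||^2 = 1 for a binary mask
for _ in range(100):
    grad = mask * (mask * x - y)                # gradient of the data fidelity
    x = denoise(x - step * grad)                # denoiser replaces the prox
```

    The averaging filter is a convex combination of shifts, hence nonexpansive, which is the property a trained denoiser must also satisfy for the convergence guarantees discussed elsewhere in this listing.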

    Learning priors for scalable computational imaging algorithms, from theory to application in radio astronomy

    This thesis investigates scalable and robust algorithms for image reconstruction, with applications to astronomical imaging. Our approach relies on the versatile framework of plug-and-play (PnP) algorithms, which allows us to take advantage both of the robustness of optimisation algorithms, through data-fidelity enforcing terms, and of the power of deep neural networks (DNNs) as prior-encoding operators for image restoration tasks. The first section of this thesis deals with theoretical aspects of PnP algorithms from the standpoint of optimisation theory. We investigate conditions that ensure the well-definedness of PnP algorithms and propose two different methods to enforce the associated Lipschitz constraints on the DNNs of interest. As a result, our DNNs behave as resolvents of maximally monotone operators, ensuring a characterisation of the limit point of the associated convergent PnP algorithm. The second section of this thesis applies the proposed methods to radio astronomical imaging, a context where the robustness and scalability of the considered algorithms are paramount. To compensate for the absence of ground-truth data in radio astronomy, we propose a synthetic training dataset with adaptive dynamic range that serves as the basis for training our DNNs. Our PnP algorithms reach similar (if not better) quality to that of the state of the art, both on simulated and real data, while being significantly faster. James Watt Scholarship.

    Investigating Model Robustness Against Sensor Variation

    Large datasets of geospatial satellite images are available online, exhibiting significant variations in both image quality and content. These variations in image quality stem from the image processing pipeline and image acquisition settings, resulting in subtle differences within datasets of images acquired with the same satellites. Recent progress in the field of image processing has considerably enhanced capabilities in noise and artifact removal, as well as image super-resolution. Consequently, this opens up possibilities for homogenizing geospatial image datasets by reducing intra-dataset variations in image quality. In this work, we show that conventional image detection and segmentation neural networks trained on geospatial data are robust neither to noise and artifact removal preprocessing, nor to mild resolution variations.

    Stochastic MM Subspace Algorithms

    In this paper, we propose a version of the MM subspace algorithm in a stochastic setting. We prove the convergence of the algorithm and demonstrate its good practical performance.
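    For context, a plain (deterministic, full-space) majorize-minimize iteration on a smoothed-l1 regularised least-squares problem looks as follows; the paper's stochastic and subspace ingredients are not reproduced here, and all problem details are illustrative:

```python
import numpy as np

# Plain MM iteration for min_x 0.5*||A x - y||^2 + lam * sum_i sqrt(x_i^2 + delta).
# At each step, concavity of the square root gives the quadratic majorant
# sqrt(x^2 + delta) <= sqrt(xk^2 + delta) + (x^2 - xk^2) / (2*sqrt(xk^2 + delta)),
# so the surrogate is a weighted ridge problem solved in closed form.
# MM guarantees a monotonically decreasing objective.

rng = np.random.default_rng(3)
m, n, lam, delta = 80, 40, 0.1, 1e-6
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:5] = 3.0                                      # sparse ground truth
y = A @ x_true + 0.1 * rng.standard_normal(m)

def objective(x):
    return 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.sqrt(x ** 2 + delta))

x = np.zeros(n)
obj = [objective(x)]
for _ in range(30):
    w = 1.0 / np.sqrt(x ** 2 + delta)                 # majorant curvatures
    x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    obj.append(objective(x))
```

    A stochastic variant, loosely speaking, would build the surrogate from a mini-batch of the rows of `A` at each iteration; the subspace variant restricts each minimisation to a low-dimensional span of previous directions.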
