101 research outputs found

    Online Super-Resolution For Fibre-Bundle-Based Confocal Laser Endomicroscopy

    Probe-based Confocal Laser Endomicroscopy (pCLE) produces microscopic images enabling real-time in vivo optical biopsy. However, the miniaturisation of the optical hardware, specifically the reliance on an optical fibre bundle as an imaging guide, fundamentally limits image quality by producing artefacts, noise, and relatively low contrast and resolution. The reconstruction approaches in clinical pCLE products do not fully alleviate these problems, so image quality remains a barrier that curbs the full potential of pCLE, and enhancing it in real time remains a challenge. The research in this thesis responds to this need. I have developed dedicated online super-resolution methods that account for the physics of the image acquisition process. These methods have the potential to replace existing reconstruction algorithms without interfering with the fibre design or the hardware of the device. In this thesis, novel processing pipelines are proposed for enhancing the image quality of pCLE. First, I explored a learning-based super-resolution method that maps from the low-resolution to the high-resolution space. Because high-resolution pCLE data do not exist, I proposed to simulate them, based on the physics of pCLE acquisition, and use them as ground truth. However, pCLE images are reconstructed from irregularly distributed fibre signals, and grid-based Convolutional Neural Networks are not designed to take irregular data as input. To alleviate this problem, I designed a new trainable layer that embeds Nadaraya-Watson regression. Finally, I proposed a novel blind super-resolution approach that deploys unsupervised zero-shot learning accompanied by a down-sampling kernel crafted for pCLE. I evaluated these new methods in two ways: with a robust image quality assessment and with a perceptual quality test assessed by clinical experts. The results demonstrate that the proposed super-resolution pipelines are superior to the current reconstruction algorithm in terms of image quality and clinician preference.
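
    For readers unfamiliar with Nadaraya-Watson regression, the sketch below illustrates the general idea of such a layer: scattered fibre signals are interpolated onto a regular pixel grid with a Gaussian kernel whose bandwidth is a trainable parameter. It is a minimal illustration only, not the layer developed in the thesis; all names and shapes are assumptions, and the dense weight matrix is kept only for clarity (a practical version would restrict each pixel to nearby fibre cores).

```python
# Minimal sketch of a Nadaraya-Watson interpolation layer (PyTorch).
# Illustrative only; not the thesis implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NadarayaWatsonLayer(nn.Module):
    """Gaussian-kernel Nadaraya-Watson regression from scattered fibre
    signals onto a regular pixel grid, with a trainable bandwidth."""

    def __init__(self, grid_hw, init_bandwidth=1.0):
        super().__init__()
        self.h, self.w = grid_hw
        ys, xs = torch.meshgrid(
            torch.arange(self.h, dtype=torch.float32),
            torch.arange(self.w, dtype=torch.float32),
            indexing="ij",
        )
        # (H*W, 2) pixel-centre coordinates of the reconstruction grid
        self.register_buffer("grid", torch.stack([xs, ys], dim=-1).reshape(-1, 2))
        # Trainable kernel bandwidth, kept positive via softplus in forward()
        self.raw_bw = nn.Parameter(torch.tensor(float(init_bandwidth)))

    def forward(self, fibre_xy, fibre_val):
        # fibre_xy : (B, N, 2) irregular fibre-core positions, in pixel units
        # fibre_val: (B, N)    signal measured at each fibre core
        b = fibre_xy.shape[0]
        bw = F.softplus(self.raw_bw) + 1e-6
        grid = self.grid.unsqueeze(0).expand(b, -1, -1)      # (B, H*W, 2)
        d2 = torch.cdist(grid, fibre_xy) ** 2                # (B, H*W, N)
        # Normalised Gaussian kernel weights == Nadaraya-Watson weights
        weights = torch.softmax(-d2 / (2 * bw ** 2), dim=-1)
        recon = torch.bmm(weights, fibre_val.unsqueeze(-1))  # (B, H*W, 1)
        return recon.reshape(b, 1, self.h, self.w)
```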

    Learned Multi-View Texture Super-Resolution

    We present a super-resolution method capable of creating a high-resolution texture map for a virtual 3D object from a set of lower-resolution images of that object. Our architecture unifies the concepts of (i) multi-view super-resolution based on the redundancy of overlapping views and (ii) single-view super-resolution based on a learned prior of high-resolution (HR) image structure. The principle of multi-view super-resolution is to invert the image formation process and recover the latent HR texture from multiple lower-resolution projections. We map that inverse problem into a block of suitably designed neural network layers, and combine it with a standard encoder-decoder network for learned single-image super-resolution. Wiring the image formation model into the network avoids having to learn perspective mapping from textures to images, and elegantly handles a varying number of input views. Experiments demonstrate that the combination of multi-view observations and learned prior yields improved texture maps. Comment: 11 pages, 5 figures; 2019 International Conference on 3D Vision (3DV).
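
    As a rough illustration of the pattern described in the abstract, the sketch below wires a fixed, differentiable texture-to-view sampling operator into the network, back-projects and averages an arbitrary number of views, and refines the fused texture with a small CNN carrying the learned prior. All names, the grid_sample-based formation model, and the simple averaging fusion are assumptions, not the paper's actual architecture.

```python
# Sketch of combining a known image-formation operator with a learned
# refinement network for multi-view texture super-resolution (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiViewTextureSR(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.refine = nn.Sequential(   # stand-in for the learned single-image prior
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def backproject(self, view, tex_to_view_grid):
        # Invert the (known) perspective mapping: resample the view image
        # back into texture space. tex_to_view_grid gives, for each texel,
        # its (x, y) location in the view image, normalised to [-1, 1].
        return F.grid_sample(view, tex_to_view_grid, align_corners=False)

    def forward(self, views, grids):
        # views: list of (B, C, Hv, Wv) low-resolution images of the object
        # grids: list of (B, Ht, Wt, 2) texture->view sampling grids
        backprojected = [self.backproject(v, g) for v, g in zip(views, grids)]
        fused = torch.stack(backprojected, dim=0).mean(dim=0)  # any number of views
        return fused + self.refine(fused)                      # residual refinement
```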

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications, both on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Issues such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.

    Variational and learning models for image and time series inverse problems

    Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge on the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks preserving their main advantages. We develop several highly efficient methods based on both model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes outperform many other existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme based on a deep-learning denoiser trained on the gradient domain. In the second part, we address natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep-learning-based methods. We boost the performance of supervised strategies, such as trained convolutional and recurrent networks, and of unsupervised strategies, such as Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
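
    To make the model-driven / data-driven combination concrete, here is a minimal sketch of a generic Plug-and-Play forward-backward iteration, in which the proximal step of the regulariser is replaced by a denoiser. The thesis's scheme is a gradient-domain variant with a convergence analysis; the operators and the toy denoiser below are placeholders.

```python
# Generic Plug-and-Play forward-backward iteration (PyTorch); `blur`, `blur_T`
# (the forward operator and its adjoint) and `denoiser` are placeholders.
import torch
import torch.nn.functional as F


def pnp_forward_backward(y, blur, blur_T, denoiser, step=1.0, iters=50):
    """Estimate x from y = blur(x) + noise.

    y        : observed image, shape (B, C, H, W)
    blur     : callable implementing the forward operator A
    blur_T   : callable implementing the adjoint A^T
    denoiser : callable playing the role of the proximal map of the
               (implicit, learned) regulariser
    """
    x = y.clone()
    for _ in range(iters):
        grad = blur_T(blur(x) - y)     # gradient of the data term ||Ax - y||^2 / 2
        x = denoiser(x - step * grad)  # proximal step replaced by a learned denoiser
    return x


# Usage sketch with an identity forward operator and a box-filter "denoiser":
if __name__ == "__main__":
    y = torch.rand(1, 1, 64, 64)
    identity = lambda v: v
    box = lambda v: F.avg_pool2d(v, 3, stride=1, padding=1)
    x_hat = pnp_forward_backward(y, identity, identity, box, step=0.5, iters=20)
```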

    ROBUST DEEP LEARNING METHODS FOR SOLVING INVERSE PROBLEMS IN MEDICAL IMAGING

    The medical imaging field has a long history of incorporating machine learning algorithms to address inverse problems in image acquisition and analysis. With the impressive successes of deep neural networks on natural images, we seek to answer the obvious question: do these successes also transfer to the medical image domain? The answer may seem straightforward on the surface. Tasks like image-to-image transformation, segmentation, and detection have direct applications for medical images. For example, metal artifact reduction for Computed Tomography (CT) and reconstruction from undersampled k-space signals for Magnetic Resonance (MR) imaging can be formulated as image-to-image transformations; lesion/tumor detection and segmentation are obvious applications for higher-level vision tasks. While these tasks may be similar in formulation, many practical constraints and requirements arise when solving them for medical images. Patient data is highly sensitive and usually only accessible from individual institutions, which constrains the available ground truth, the dataset size, and the computational resources these institutions can devote to training performant models. Due to the mission-critical nature of healthcare applications, requirements such as robustness and speed are also stringent. As such, the big-data, dense-computation, supervised learning paradigm of mainstream deep learning is often insufficient in these settings. In this dissertation, we investigate ways to benefit from the powerful representational capacity of deep neural networks while still satisfying the above-mentioned constraints and requirements. The first part of this dissertation focuses on adapting supervised learning to account for variations such as medical image modality, image quality, architecture design, and task. The second part focuses on improving model robustness on unseen data through domain adaptation, which ameliorates performance degradation due to distribution shifts. The last part focuses on self-supervised learning and learning from synthetic data, with a focus on tomographic imaging; this is essential in many situations where the desired ground truth may not be accessible.
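
    As an illustration of one of the formulations mentioned above, the sketch below casts undersampled MR reconstruction as an image-to-image problem: the network input is a zero-filled reconstruction obtained by masking k-space, and the target is the fully sampled image. The random mask and all names are illustrative assumptions; real acquisitions typically undersample along phase-encode lines rather than at random pixels.

```python
# Toy construction of (input, target) pairs for image-to-image MR
# reconstruction from undersampled k-space (PyTorch).
import torch
import torch.fft


def zero_filled_input(image, keep_fraction=0.25):
    # image: (B, 1, H, W) fully sampled magnitude image used for training
    kspace = torch.fft.fft2(image)
    mask = (torch.rand_like(image) < keep_fraction).float()  # toy random mask
    undersampled = kspace * mask                              # discard most samples
    return torch.fft.ifft2(undersampled).abs()                # zero-filled reconstruction


# A standard supervised loop would then minimise, e.g.,
#   loss = ||cnn(zero_filled_input(x)) - x||_1
# over pairs drawn from fully sampled training scans.
```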

    Bayesian plug & play methods for inverse problems in imaging.

    Doctoral thesis in Applied Mathematics (Université de Paris) and in Electrical Engineering (Universidad de la República). This thesis deals with Bayesian methods for solving ill-posed inverse problems in imaging with learnt image priors. The first part of this thesis (Chapter 3) concentrates on two particular problems, namely joint denoising and decompression, and multi-image super-resolution. After an extensive study of the noise statistics for these problems in the transformed (wavelet or Fourier) domain, we derive two novel algorithms to solve this particular inverse problem. One of them is based on a multi-scale self-similarity prior and can be seen as a transform-domain generalization of the celebrated Non-Local Bayes algorithm to the case of non-Gaussian noise. The second one uses a neural-network denoiser to implicitly encode the image prior, and a splitting scheme to incorporate this prior into an optimization algorithm to find a MAP-like estimator. The second part of this thesis concentrates on the Variational AutoEncoder (VAE) model and some of its variants, which show its capability to explicitly capture the probability distribution of high-dimensional datasets such as images. Based on these VAE models, we propose two ways to incorporate them as priors for general inverse problems in imaging:
    • The first one (Chapter 4) computes a joint (space-latent) MAP estimator named Joint Posterior Maximization using an Autoencoding Prior (JPMAP). We show theoretical and experimental evidence that the proposed objective function satisfies a weak bi-convexity property which is sufficient to guarantee that our optimization scheme converges to a stationary point. Experimental results also show the higher quality of the solutions obtained by our JPMAP approach with respect to other non-convex MAP approaches, which more often get stuck in spurious local optima.
    • The second one (Chapter 5) develops a Gibbs-like posterior sampling algorithm for the exploration of posterior distributions of inverse problems, using multiple chains and a VAE as image prior. We show how to use those samples to obtain MMSE estimates and their corresponding uncertainty.
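
    The sketch below conveys only the general shape of a joint (image, latent) MAP objective with a decoder prior, optimised here by plain gradient descent on both variables. The actual JPMAP objective and its alternating scheme, which exploits the weak bi-convexity mentioned above, differ; `A`, `decoder`, and all weights are placeholders.

```python
# Rough sketch of a joint (x, z) MAP objective with a VAE-decoder prior.
import torch


def joint_objective(x, z, y, A, decoder, sigma2=1e-2, gamma2=1e-2):
    data_fit = ((A(x) - y) ** 2).sum() / (2 * sigma2)         # data fidelity
    coupling = ((x - decoder(z)) ** 2).sum() / (2 * gamma2)   # keep x near the decoder manifold
    latent_prior = (z ** 2).sum() / 2                         # standard Gaussian prior on z
    return data_fit + coupling + latent_prior


def gradient_map_estimate(y, A, decoder, z_dim, steps=500, lr=1e-2):
    # Assumes A maps images to observations of the same shape (e.g. deblurring);
    # otherwise initialise x from a back-projection of y instead of y itself.
    x = y.clone().requires_grad_(True)
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([x, z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        joint_objective(x, z, y, A, decoder).backward()
        opt.step()
    return x.detach(), z.detach()
```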

    MRI Artefact Augmentation: Robust Deep Learning Systems and Automated Quality Control

    Quality control (QC) of magnetic resonance imaging (MRI) is essential to establish whether a scan or dataset meets a required set of standards. In MRI, many potential artefacts must be identified so that problematic images can either be excluded or accounted for in further image processing or analysis. To date, the gold standard for the identification of these issues is visual inspection by experts. A primary source of MRI artefacts is patient movement, which can affect clinical diagnosis and impact the accuracy of deep learning systems. In this thesis, I present a method to simulate motion artefacts from artefact-free images to augment convolutional neural networks (CNNs), increasing training appearance variability and robustness to motion artefacts. I show that models trained with artefact augmentation generalise better and are more robust to real-world artefacts, with negligible cost to performance on clean data. I argue that it is often better to optimise frameworks end-to-end with artefact augmentation rather than learning to retrospectively remove artefacts, thus enforcing robustness to artefacts in the feature-level representation of the data. The labour-intensive and subjective nature of QC has increased interest in automated methods. To address this, I approach MRI quality estimation as the uncertainty in performing a downstream task, using probabilistic CNNs to predict segmentation uncertainty as a function of the input data. Extending this framework, I introduce a novel decoupled uncertainty model, enabling separate uncertainty predictions for different types of image degradation. Trained with an extended k-space artefact augmentation pipeline, the model provides informative measures of uncertainty on problematic real-world scans classified by QC raters and enables sources of segmentation uncertainty to be identified. Suitable quality for algorithmic processing may differ from an image's perceptual quality. Exploring this, I pose MRI visual quality assessment as an image restoration task. Using Bayesian CNNs to recover clean images from noisy data, I show that the uncertainty indicates the possible recoverability of an image. A multi-task network combining uncertainty-aware artefact recovery with tissue segmentation highlights the distinction between visual and algorithmic quality, implying that, depending on the downstream task, less data need be discarded for purely visual quality reasons.
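
    In the spirit of the augmentation described above, though not the thesis pipeline, the toy sketch below simulates mid-scan in-plane motion by assembling a hybrid k-space whose late phase-encode lines come from a translated copy of the slice; all parameter names are illustrative.

```python
# Toy k-space motion artefact augmentation for a single 2D slice (NumPy).
import numpy as np


def simulate_motion_artefact(image, shift_px=4, corrupt_from=0.6):
    """Corrupt the last (1 - corrupt_from) fraction of phase-encode lines."""
    h, w = image.shape
    k_clean = np.fft.fftshift(np.fft.fft2(image))
    # k-space of the same slice translated by shift_px pixels along x,
    # standing in for the patient having moved partway through the scan
    moved = np.roll(image, shift_px, axis=1)
    k_moved = np.fft.fftshift(np.fft.fft2(moved))
    # Hybrid k-space: early lines from the still image, late lines from
    # the moved image, which produces the familiar ghosting artefacts.
    k_mixed = k_clean.copy()
    start = int(corrupt_from * h)
    k_mixed[start:, :] = k_moved[start:, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_mixed)))
```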

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications that combine synthetic aperture radar (SAR) and deep learning, with the aim of further promoting the development of intelligent SAR image interpretation. SAR is an important active microwave imaging sensor whose all-day, all-weather imaging capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in remote sensing, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in computer vision, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of many applications. This reprint provides a platform for researchers to address these challenges and present innovative, cutting-edge results on applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    Deep Learning Methods for Remote Sensing

    Remote sensing is a field in which important physical characteristics of an area are extracted from radiation generally captured by satellite cameras, sensors onboard aerial vehicles, etc. The captured data help researchers develop solutions for sensing and detecting characteristics such as forest fires, flooding, changes in urban areas, crop diseases, and soil moisture. The recent impressive progress in artificial intelligence (AI) and deep learning has sparked innovations in technologies, algorithms, and approaches, leading to results that were unachievable until recently in multiple areas, among them remote sensing. This book consists of sixteen peer-reviewed papers covering new advances in the use of AI for remote sensing.