Bayesian super-resolution with application to radar target recognition
This thesis is concerned with methods to facilitate automatic target recognition using images generated from a group of associated radar systems. Target
recognition algorithms require access to a database of previously recorded or
synthesized radar images for the targets of interest, or a database of features
based on those images. However, the resolution of a new image acquired under
non-ideal conditions may not be as good as that of the images used to generate
the database. Therefore it is proposed to use super-resolution techniques to
match the resolution of new images with the resolution of database images.
A comprehensive review of the literature is given for super-resolution when
used either on its own or in conjunction with target recognition. A new
super-resolution algorithm is developed that is based on numerical Markov chain
Monte Carlo Bayesian statistics. This algorithm allows uncertainty in the
super-resolved image to be taken into account in the target recognition process. It
is shown that the Bayesian approach improves the probability of correct target
classification over standard super-resolution techniques.
The new super-resolution algorithm is demonstrated using a simple synthetically generated data set and is compared to other similar algorithms. A variety
of effects that degrade super-resolution performance, such as defocus, are analyzed and techniques to compensate for these are presented. Performance of the
super-resolution algorithm is then tested as part of a Bayesian target recognition
framework using measured radar data.
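The core idea of the thesis, sampling the posterior over the super-resolved image so that its uncertainty can feed into recognition, can be illustrated with a minimal toy sketch. Everything here (the 1-D signal, the averaging forward model, the smoothness prior, and all constants) is an assumption for illustration, not the thesis's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy forward model: the low-res observation y is a 2x block
# average of the high-res signal x plus Gaussian noise.
n_hi, factor, sigma = 8, 2, 0.1
D = np.zeros((n_hi // factor, n_hi))
for i in range(n_hi // factor):
    D[i, i * factor:(i + 1) * factor] = 1.0 / factor

x_true = np.sin(np.linspace(0, np.pi, n_hi))
y = D @ x_true + sigma * rng.normal(size=n_hi // factor)

def log_post(x):
    # Gaussian likelihood plus a smoothness prior on neighbouring pixels
    lik = -np.sum((y - D @ x) ** 2) / (2 * sigma ** 2)
    prior = -0.5 * np.sum(np.diff(x) ** 2) / 0.25
    return lik + prior

# Random-walk Metropolis-Hastings over the super-resolved image
x = D.T @ y                      # crude upsampled initialisation
lp = log_post(x)
samples = []
for t in range(5000):
    prop = x + 0.05 * rng.normal(size=n_hi)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    if t >= 1000:                # discard burn-in
        samples.append(x.copy())

samples = np.array(samples)
x_mean = samples.mean(axis=0)    # posterior-mean reconstruction
x_std = samples.std(axis=0)      # per-pixel uncertainty for recognition
```

The per-pixel spread `x_std` is the quantity a downstream Bayesian classifier could weight by, which is the mechanism the thesis argues improves classification over point-estimate super-resolution.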
Bayesian Image Quality Transfer with CNNs: Exploring Uncertainty in dMRI Super-Resolution
In this work, we investigate the value of uncertainty modeling in 3D
super-resolution with convolutional neural networks (CNNs). Deep learning has
shown success in a plethora of medical image transformation problems, such as
super-resolution (SR) and image synthesis. However, the highly ill-posed nature
of such problems results in inevitable ambiguity in the learning of networks.
We propose to account for intrinsic uncertainty through a per-patch
heteroscedastic noise model and for parameter uncertainty through approximate
Bayesian inference in the form of variational dropout. We show that the
combined benefits of both lead to state-of-the-art SR performance on
diffusion MR brain images in terms of errors compared to ground truth. We
further show that the reduced error scores produce tangible benefits in
downstream tractography. In addition, the probabilistic nature of the methods
naturally confers a mechanism to quantify uncertainty over the super-resolved
output. We demonstrate through experiments on both healthy and pathological
brains the potential utility of such an uncertainty measure in the risk
assessment of the super-resolved images for subsequent clinical use.
Comment: Accepted paper at MICCAI 201
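The intrinsic-uncertainty half of the approach, the per-patch heteroscedastic noise model, can be sketched as a loss in which the network predicts a log-variance map alongside the super-resolved patch. This is a minimal numpy stand-in for illustration only (the paper combines it with variational dropout inside a CNN, which is not shown here):

```python
import numpy as np

def heteroscedastic_nll(y_true, y_pred, log_var):
    """Per-patch heteroscedastic Gaussian negative log-likelihood.

    The model outputs both a prediction y_pred and a log-variance map
    log_var; a large predicted variance down-weights the squared error
    but pays a penalty through the + 0.5 * log_var term.
    """
    return float(np.mean(0.5 * np.exp(-log_var) * (y_true - y_pred) ** 2
                         + 0.5 * log_var))

y_true = np.zeros((4, 4))
y_pred = np.ones((4, 4))   # every pixel wrong by 1.0
confident = heteroscedastic_nll(y_true, y_pred, np.full((4, 4), -2.0))
uncertain = heteroscedastic_nll(y_true, y_pred, np.full((4, 4), 2.0))
# a confidently wrong prediction is penalised more than an uncertain
# wrong one, which is how calibrated variance maps are learned
```

The predicted variance map is exactly the kind of per-voxel uncertainty measure the abstract proposes for risk assessment of super-resolved scans.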
Micro-CT Synthesis and Inner Ear Super Resolution via Generative Adversarial Networks and Bayesian Inference
Existing medical image super-resolution methods rely on pairs of low- and
high- resolution images to learn a mapping in a fully supervised manner.
However, such image pairs are often not available in clinical practice. In this
paper, we address the super-resolution problem in a real-world scenario using
unpaired data, synthesizing Micro-CT images of the temporal bone structure,
which is embedded in the inner ear, at eight times higher linear resolution.
We explore cycle-consistency generative adversarial networks for the
super-resolution task and equip the translation approach with Bayesian
inference. We further introduce the Hu Moment distance, an evaluation metric
to quantify the shape of the temporal bone. We evaluate our method on a public
inner ear CT dataset and observe both visual and quantitative improvement
over state-of-the-art deep-learning-based methods. In addition, we perform a
multi-rater visual evaluation experiment and find that trained experts
consistently give the proposed method the highest quality scores among all
methods. Furthermore, we are able to quantify uncertainty in the unpaired
translation task, and the uncertainty map can provide structural information of
the temporal bone.
Comment: final version in ISBI 202
Bayesian Image Reconstruction using Deep Generative Models
Machine learning models are commonly trained end-to-end and in a supervised
setting, using paired (input, output) data. Examples include recent
super-resolution methods that train on pairs of (low-resolution,
high-resolution) images. However, these end-to-end approaches require
re-training every time there is a distribution shift in the inputs (e.g., night
images vs daylight) or relevant latent variables (e.g., camera blur or hand
motion). In this work, we leverage state-of-the-art (SOTA) generative models
(here StyleGAN2) for building powerful image priors, which enable application
of Bayes' theorem for many downstream reconstruction tasks. Our method,
Bayesian Reconstruction through Generative Models (BRGM), uses a single
pre-trained generator model to solve different image restoration tasks, i.e.,
super-resolution and in-painting, by combining it with different forward
corruption models. We keep the weights of the generator model fixed, and
reconstruct the image by estimating the Bayesian maximum a-posteriori (MAP)
estimate over the input latent vector that generated the reconstructed image.
We further use variational inference to approximate the posterior distribution
over the latent vectors, from which we sample multiple solutions. We
demonstrate BRGM on three large and diverse datasets: (i) 60,000 images from
the Flickr Faces High Quality dataset, (ii) 240,000 chest X-rays from MIMIC III,
and (iii) a combined collection of 5 brain MRI datasets with 7,329 scans.
Across all three datasets and without any dataset-specific hyperparameter
tuning, our simple approach yields performance competitive with current
task-specific state-of-the-art methods on super-resolution and in-painting,
while being more generalisable and without requiring any training. Our source
code and pre-trained models are available online:
https://razvanmarinescu.github.io/brgm/
Comment: 27 pages, 17 figures, 5 tables
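The MAP-over-latents step of BRGM can be sketched with a toy stand-in for the generator. Everything concrete here is an assumption: a fixed linear decoder plays the role of the pre-trained StyleGAN2, an average-pooling matrix plays the role of the forward corruption model, and the constants are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a pre-trained generator: a fixed linear decoder
# G(z) = W z (the paper uses StyleGAN2; linear here so the sketch runs).
W = rng.normal(size=(16, 4))
def G(z):
    return W @ z

# Assumed forward corruption model: 16 -> 4 average pooling
A = np.kron(np.eye(4), np.ones((1, 4)) / 4)
z_true = rng.normal(size=4)
y = A @ G(z_true)                # observed low-resolution image

# MAP over the latent with the generator weights kept fixed:
# minimise ||y - A G(z)||^2 / (2 sigma^2) + ||z||^2 / 2 by gradient descent
sigma2 = 0.1
z = np.zeros(4)
for _ in range(500):
    resid = A @ (W @ z) - y
    grad = (W.T @ (A.T @ resid)) / sigma2 + z   # data term + Gaussian prior
    z -= 0.01 * grad

x_map = G(z)                     # reconstructed high-resolution image
```

Swapping the corruption matrix `A` (masking instead of pooling, say) retargets the same fixed generator to in-painting, which is the generalisation the abstract emphasises; the variational-inference extension for sampling multiple solutions is not shown.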
Generalized Expectation Maximization Framework for Blind Image Super Resolution
Learning-based methods for blind single image super resolution (SISR) conduct
the restoration by a learned mapping between high-resolution (HR) images and
their low-resolution (LR) counterparts degraded with arbitrary blur kernels.
However, these methods mostly require an independent step to estimate the blur
kernel, leading to error accumulation between steps. We propose an end-to-end
learning framework for the blind SISR problem, which enables image restoration
within a unified Bayesian framework with either full- or semi-supervision. The
proposed method, namely SREMN, integrates learning techniques into the
generalized expectation-maximization (GEM) algorithm and infers HR images from
the maximum likelihood estimate (MLE). Extensive experiments show the
superiority of the proposed method in comparison with existing work and its
novelty in semi-supervised learning.
Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models
Many problems of low-level computer vision and image processing, such as
denoising, deconvolution, tomographic reconstruction or super-resolution, can
be addressed by maximizing the posterior distribution of a sparse linear model
(SLM). We show how higher-order Bayesian decision-making problems, such as
optimizing image acquisition in magnetic resonance scanners, can be addressed
by querying the SLM posterior covariance, unrelated to the density's mode. We
propose a scalable algorithmic framework, with which SLM posteriors over full,
high-resolution images can be approximated for the first time, solving a
variational optimization problem which is convex iff posterior mode finding is
convex. These methods successfully drive the optimization of sampling
trajectories for real-world magnetic resonance imaging through Bayesian
experimental design, which has not been attempted before. Our methodology
provides new insight into similarities and differences between sparse
reconstruction and approximate Bayesian inference, and has important
implications for compressive sensing of real-world images.
Comment: 34 pages, 6 figures, technical report (submitted)
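The key distinction the abstract draws, that experimental design queries the posterior covariance rather than the mode, can be sketched in a deliberately simplified setting: a Gaussian linear model with exact posterior updates, standing in for the paper's sparse model and scalable variational approximation. All names and constants below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
d, sigma2 = 5, 0.1
candidates = rng.normal(size=(100, d))     # possible measurement vectors

A = np.empty((0, d))                       # measurements chosen so far
for _ in range(3):
    # posterior covariance under prior N(0, I) and noise variance sigma2
    cov = np.linalg.inv(np.eye(d) + A.T @ A / sigma2)
    # predictive variance of each candidate measurement a: a^T cov a
    var = np.einsum("ij,jk,ik->i", candidates, cov, candidates)
    best = int(np.argmax(var))             # most informative next query
    A = np.vstack([A, candidates[best]])
```

The scores `var` depend only on the covariance, not on the posterior mode, so a pure MAP reconstruction could not drive this selection; that is the higher-order decision-making the abstract refers to for optimizing MRI sampling trajectories.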
Bayesian region selection for adaptive dictionary-based Super-Resolution
The performance of dictionary-based super-resolution (SR) strongly depends on the
contents of the training dataset. Nevertheless, many dictionary-based SR methods randomly select patches from a larger set of training images to build their dictionaries [8, 14, 19, 20], thus relying on patches being diverse enough. This paper describes
a dictionary building method for SR based on adaptively selecting an optimal subset of
patches out of the training images. Each training image is divided into sub-image entities,
named regions, of such a size that texture consistency is preserved and high-frequency
(HF) energy is present. For each input patch to super-resolve, the best-fitting region is
found through a Bayesian selection. In order to handle the high number of regions in
the training dataset, a local Naive Bayes Nearest Neighbor (NBNN) approach is used.
Trained with this adapted subset of patches, sparse coding SR is applied to recover the
high-resolution image. Experimental results demonstrate that using our adaptive algo-
rithm produces an improvement in SR performance with respect to non-adaptive training.Peer ReviewedPostprint (published version
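The local NBNN selection step can be sketched as follows. The regions, their descriptor distributions, and the scoring are all toy assumptions; the real method extracts regions from training images with texture-consistency and high-frequency-energy criteria not modelled here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training regions: each is a bag of patch descriptors.
regions = {name: rng.normal(loc=mu, size=(50, 8))
           for name, mu in [("flat", 0.0), ("edge", 2.0), ("texture", -2.0)]}

def nbnn_score(query_descs, bag):
    # NBNN: sum over query descriptors of the squared distance to each
    # descriptor's nearest neighbour inside the region's bag
    d2 = ((query_descs[:, None, :] - bag[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

def select_region(query_descs):
    # pick the best-fitting region (lowest total NN distance)
    return min(regions, key=lambda r: nbnn_score(query_descs, regions[r]))

query = rng.normal(loc=2.0, size=(5, 8))  # descriptors from an edge-like patch
best = select_region(query)
```

Because each query descriptor only needs its nearest neighbour per region, the search scales to many regions, which is why the paper adopts a local NBNN approach before running sparse-coding SR on the selected patches.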
Bayesian Dictionary Learning for Single and Coupled Feature Spaces
Over-complete bases offer the flexibility to represent a much wider range of signals with more elementary basis atoms than the signal dimension. The use of over-complete dictionaries for sparse representation has become a recent trend and is increasingly recognized as providing high performance for applications such as denoising, image super-resolution, inpainting, compression, blind source separation and linear unmixing. This dissertation studies dictionary learning for single or coupled feature spaces and its application in image restoration tasks. A Bayesian strategy using a beta process prior is applied to solve both problems.
Firstly, we illustrate how to generalize the existing beta process dictionary learning method (BP) to learn a dictionary for a single feature space. The advantage of this approach is that the number of dictionary atoms and their relative importance may be inferred non-parametrically.
Next, we propose a new beta process joint dictionary learning method (BP-JDL) for coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. Compared to previous coupled-feature-space dictionary learning algorithms, our algorithm not only provides dictionaries customized to each feature space, but also yields a more consistent and accurate mapping between the two feature spaces. This is due to the unique property of the beta process model that the sparse representation can be decomposed into values and dictionary atom indicators, so the algorithm learns sparse representations that correspond to the same dictionary atoms with the same sparsity but different values in the two coupled feature spaces.
Two applications, single image super-resolution and inverse halftoning, are chosen to evaluate the performance of the proposed Bayesian approach. In both cases, the Bayesian approach, whether for a single feature space or coupled feature spaces, outperforms state-of-the-art methods in the respective domains.