Deep AutoRegressive Networks
We introduce a deep, generative autoencoder capable of learning hierarchies
of distributed representations from data. Successive deep stochastic hidden
layers are equipped with autoregressive connections, which enable the model to
be sampled from quickly and exactly via ancestral sampling. We derive an
efficient approximate parameter estimation method based on the minimum
description length (MDL) principle, which can be seen as maximising a
variational lower bound on the log-likelihood, with a feedforward neural
network implementing approximate inference. We demonstrate state-of-the-art
generative performance on a number of classic data sets: several UCI data sets,
MNIST and Atari 2600 games.
Comment: Appears in Proceedings of the 31st International Conference on
Machine Learning (ICML), Beijing, China, 2014
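The fast, exact ancestral sampling the abstract highlights comes from the autoregressive connections within each stochastic layer: unit j is sampled conditioned only on the units drawn before it. A minimal sketch of that idea for a single binary layer (the weights, sizes, and sigmoid parameterisation here are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def sample_autoregressive_layer(W, b, rng):
    """Ancestral sampling from one stochastic binary layer with
    autoregressive (lower-triangular) connections: each unit h_j is
    drawn conditioned on the previously sampled units h_1..h_{j-1}."""
    n = len(b)
    h = np.zeros(n)
    for j in range(n):
        # the logit for unit j depends only on units already sampled
        logit = b[j] + W[j, :j] @ h[:j]
        p = 1.0 / (1.0 + np.exp(-logit))
        h[j] = rng.random() < p
    return h

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))   # only the strict lower triangle is used
b = rng.normal(size=8)
h = sample_autoregressive_layer(W, b, rng)
```

Because each conditional is available in closed form, one pass through the units yields an exact sample, with no Markov-chain burn-in.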
WKGM: Weight-K-space Generative Model for Parallel Imaging Reconstruction
Deep learning based parallel imaging (PI) has made great progress in recent
years in accelerating magnetic resonance imaging (MRI). Nevertheless, existing
methods still have notable limitations in robustness and flexibility. In this
work, we propose a method to explore the
k-space domain learning via robust generative modeling for flexible
calibration-less PI reconstruction, coined weight-k-space generative model
(WKGM). Specifically, WKGM is a generalized k-space domain model, where the
k-space weighting technology and high-dimensional space augmentation design are
efficiently incorporated for score-based generative model training, resulting
in good and robust reconstructions. In addition, WKGM is flexible and thus can
be synergistically combined with various traditional k-space PI models, which
can make full use of the correlation between multi-coil data and
realize calibration-less PI. Even though our model was trained on only 500
images, experimental results with varying sampling patterns and acceleration
factors demonstrate that WKGM can attain state-of-the-art reconstruction
results with the well-learned k-space generative prior.
Comment: 11 pages, 12 figures
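As a rough illustration of the k-space weighting idea: one can scale each frequency by a function of its distance from the k-space center so that the heavily concentrated low-frequency energy is flattened before score-model training, then undo the weighting at reconstruction time. The weight form and exponent below are assumptions for illustration, not the published WKGM weighting:

```python
import numpy as np

def weight_kspace(kspace, p=0.5, eps=1e-6):
    """Hypothetical k-space weighting: multiply each (centered) frequency
    by a power of its radius, damping the dominant low-frequency energy.
    Expects fftshifted k-space; `p` and the weight form are illustrative."""
    ny, nx = kspace.shape[-2:]
    ky = np.fft.fftshift(np.fft.fftfreq(ny))
    kx = np.fft.fftshift(np.fft.fftfreq(nx))
    radius = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)
    w = (radius + eps) ** p          # small near center, ~1 at the edges
    return w * kspace, w

rng = np.random.default_rng(0)
kspace = np.fft.fftshift(np.fft.fft2(rng.normal(size=(64, 64))))
weighted, w = weight_kspace(kspace)
recovered = weighted / w             # the weighting is exactly invertible
```

Because the weighting is invertible, it changes only the statistics the score model sees during training, not the information content of the data.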
JPEG Artifact Correction using Denoising Diffusion Restoration Models
Diffusion models can be used as learned priors for solving various inverse
problems. However, most existing approaches are restricted to linear inverse
problems, limiting their applicability to more general cases. In this paper, we
build upon Denoising Diffusion Restoration Models (DDRM) and propose a method
for solving some non-linear inverse problems. We leverage the pseudo-inverse
operator used in DDRM and generalize this concept for other measurement
operators, which allows us to use pre-trained unconditional diffusion models
for applications such as JPEG artifact correction. We empirically demonstrate
the effectiveness of our approach across various quality factors, attaining
performance levels that are on par with state-of-the-art methods trained
specifically for the JPEG restoration task.
Comment: Presented at NeurIPS 2022 Workshop on Score-Based Methods. Code:
https://github.com/bahjat-kawar/ddrm-jpe
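For the linear case that DDRM builds on, the pseudo-inverse enters through a measurement-consistency step: the range-space component of the current diffusion estimate is replaced by the pseudo-inverse reconstruction, while the null-space component (the detail the prior must supply) is kept. A simplified, noiseless sketch of that step for a generic linear operator H, not the full DDRM update:

```python
import numpy as np

def pinv_consistency_step(H, y, x_est):
    """Replace the row-space component of x_est with the pseudo-inverse
    reconstruction H^+ y and keep the null-space component of x_est.
    Noiseless linear sketch of a DDRM-style consistency step."""
    H_pinv = np.linalg.pinv(H)
    range_proj = H_pinv @ H                    # projector onto row space of H
    return H_pinv @ y + (np.eye(H.shape[1]) - range_proj) @ x_est

rng = np.random.default_rng(0)
x_true = rng.normal(size=16)
H = rng.normal(size=(8, 16))                   # undersampling measurement operator
y = H @ x_true                                 # observed measurements
x = pinv_consistency_step(H, y, rng.normal(size=16))
# after the step, the estimate exactly satisfies H @ x == y
```

The paper's contribution is generalizing the role of this pseudo-inverse to non-linear measurement operators such as JPEG compression, where no exact H⁺ exists.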
A Deep Learning Approach to Structured Signal Recovery
In this paper, we develop a new framework for sensing and recovering
structured signals. In contrast to compressive sensing (CS) systems that employ
linear measurements, sparse representations, and computationally complex
convex/greedy algorithms, we introduce a deep learning framework that supports
both linear and mildly nonlinear measurements, that learns a structured
representation from training data, and that efficiently computes a signal
estimate. In particular, we apply a stacked denoising autoencoder (SDA) as an
unsupervised feature learner. The SDA enables us to capture statistical
dependencies between the different elements of certain signals and improve
signal recovery performance compared to the CS approach.
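A minimal sketch of the denoising-autoencoder idea behind this kind of framework: a network is trained to map corrupted signals back to clean structured signals, and recovery is then just a forward pass. The single hidden layer, toy sinusoidal signals, and hyperparameters below are illustrative assumptions, not the paper's SDA:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy structured signals: sinusoids of a few discrete frequencies.
n, hidden = 32, 64
t = np.linspace(0, 1, n)
signals = np.sin(2 * np.pi * np.outer(rng.integers(1, 4, 512), t))

W1 = rng.normal(0, 0.1, (hidden, n)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, (n, hidden)); b2 = np.zeros(n)
lr, losses = 0.05, []

for _ in range(300):
    x = signals
    x_noisy = x + 0.3 * rng.normal(size=x.shape)  # corruption process
    h = np.tanh(x_noisy @ W1.T + b1)              # encoder
    x_hat = h @ W2.T + b2                         # linear decoder
    err = x_hat - x                               # reconstruct the CLEAN signal
    losses.append(np.mean(err ** 2))
    # backpropagation through the two layers
    gW2 = err.T @ h / len(x); gb2 = err.mean(0)
    dh = (err @ W2) * (1 - h ** 2)
    gW1 = dh.T @ x_noisy / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# recovery is a single forward pass on a corrupted signal
clean = np.sin(2 * np.pi * 2 * t)
noisy = clean + 0.3 * rng.normal(size=n)
recovered = np.tanh(noisy @ W1.T + b1) @ W2.T + b2
```

Training against clean targets from corrupted inputs is what lets the network absorb the signal structure that a fixed sparsity model would have to encode by hand.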