Geometry-Aware Neighborhood Search for Learning Local Models for Image Reconstruction
Local learning of sparse image models has proven very effective for solving
inverse problems in many computer vision applications. To learn such
models, the data samples are often clustered using the K-means algorithm with
the Euclidean distance as a dissimilarity metric. However, the Euclidean
distance may not always be a good dissimilarity measure for comparing data
samples lying on a manifold. In this paper, we propose two algorithms for
determining a local subset of training samples from which a good local model
can be computed for reconstructing a given input test sample, where we take
into account the underlying geometry of the data. The first algorithm, called
Adaptive Geometry-driven Nearest Neighbor search (AGNN), is an adaptive scheme
which can be seen as an out-of-sample extension of the replicator graph
clustering method for local model learning. The second method, called
Geometry-driven Overlapping Clusters (GOC), is a less complex nonadaptive
alternative for training subset selection. The proposed AGNN and GOC methods
are evaluated in image super-resolution, deblurring and denoising applications
and shown to outperform spectral clustering, soft clustering, and geodesic
distance-based subset selection in most settings.
Comment: 15 pages, 10 figures, and 5 tables
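The baseline the abstract contrasts against, clustering patch vectors with K-means under the Euclidean metric, can be sketched as a minimal Lloyd's-algorithm loop. This is only the standard baseline step; the AGNN and GOC methods themselves are not reproduced, and the patch dimensions and cluster count below are illustrative assumptions.

```python
import numpy as np

def kmeans(samples, k, iters=20, seed=0):
    """Plain Lloyd's K-means with the Euclidean distance as the
    dissimilarity metric (the baseline clustering step; no manifold
    geometry is taken into account here)."""
    rng = np.random.default_rng(seed)
    centers = samples[rng.choice(len(samples), size=k, replace=False)]
    for _ in range(iters):
        # Euclidean distance from every sample to every center.
        d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = samples[labels == j].mean(axis=0)
    return labels, centers

# Toy data: 200 flattened 8x8 "patches" as 64-dimensional vectors.
patches = np.random.default_rng(1).normal(size=(200, 64))
labels, centers = kmeans(patches, k=5)
```

Each cluster would then supply the local training subset from which a local sparse model is learned; the geometry-aware methods replace the Euclidean assignment above with neighborhoods that respect the data manifold.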
A Compressive Multi-Mode Superresolution Display
Compressive displays are an emerging technology exploring the co-design of
new optical device configurations and compressive computation. Previously,
research has shown how to improve the dynamic range of displays and facilitate
high-quality light field or glasses-free 3D image synthesis. In this paper, we
introduce a new multi-mode compressive display architecture that supports
switching between 3D and high dynamic range (HDR) modes as well as a new
super-resolution mode. The proposed hardware consists of readily-available
components and is driven by a novel splitting algorithm that computes the pixel
states from a target high-resolution image. In effect, the display pixels
present a compressed representation of the target image that is perceived as a
single, high-resolution image.
Comment: Technical report
Enhancing SDO/HMI images using deep learning
The Helioseismic and Magnetic Imager (HMI) provides continuum images and
magnetograms with a cadence better than one per minute. It has been
continuously observing the Sun 24 hours a day for the past 7 years. The
trade-off between full-disk coverage and spatial resolution means that HMI
cannot resolve the smallest-scale events in the solar atmosphere. Our aim is
to develop a new method to enhance HMI data, simultaneously deconvolving and
super-resolving images and magnetograms. The resulting images will mimic
observations with a diffraction-limited telescope twice the diameter of HMI.
Our method, which we call Enhance, is based on two deep fully convolutional
neural networks that input patches of HMI observations and output deconvolved
and super-resolved data. The neural networks are trained on synthetic data
obtained from simulations of the emergence of solar active regions. We have
obtained deconvolved and super-resolved HMI images. To solve this ill-posed
problem, which admits infinitely many solutions, we use a neural network
approach to add prior information from the simulations. We test Enhance
against Hinode data degraded to the resolution of a 28 cm diameter telescope,
finding very good consistency. The code is open source.
Comment: 13 pages, 10 figures. Accepted for publication in Astronomy & Astrophysics
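The synthetic training pairs described above (simulation output degraded to HMI-like resolution) can be sketched roughly as follows. The Gaussian point-spread function, the patch size, and the factor-2 downsampling are illustrative assumptions, not the paper's exact degradation pipeline.

```python
import numpy as np

def gaussian_psf(n, sigma):
    # Centered Gaussian point-spread function, normalized to unit sum.
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def degrade(hires, sigma=1.5, factor=2):
    """Blur with the PSF (circular convolution via FFT) and downsample,
    producing one (network input, target) training pair."""
    psf = np.fft.ifftshift(gaussian_psf(hires.shape[0], sigma))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(hires) * np.fft.fft2(psf)))
    return blurred[::factor, ::factor]

# Stand-in for a high-resolution simulation patch.
hires = np.random.default_rng(0).random((64, 64))
lores = degrade(hires)   # network input; `hires` is the training target
```

A fully convolutional network is then trained to map `lores` back to `hires`, which is how the deconvolution and super-resolution priors enter the model.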
A Framework for Fast Image Deconvolution with Incomplete Observations
In image deconvolution problems, the diagonalization of the underlying
operators by means of the FFT usually yields very large speedups. When there
are incomplete observations (e.g., in the case of unknown boundaries), standard
deconvolution techniques normally involve non-diagonalizable operators,
resulting in rather slow methods, or, otherwise, use inexact convolution
models, resulting in the occurrence of artifacts in the enhanced images. In
this paper, we propose a new deconvolution framework for images with incomplete
observations that allows us to work with diagonalized convolution operators,
and therefore is very fast. We iteratively alternate the estimation of the
unknown pixels and of the deconvolved image, using, e.g., an FFT-based
deconvolution method. This framework is an efficient, high-quality alternative
to existing methods of dealing with the image boundaries, such as edge
tapering. It can be used with any fast deconvolution method. We give an example
in which a state-of-the-art method that assumes periodic boundary conditions is
extended, through the use of this framework, to unknown boundary conditions.
Furthermore, we propose a specific implementation of this framework, based on
the alternating direction method of multipliers (ADMM). We provide a proof of
convergence for the resulting algorithm, which can be seen as a "partial" ADMM,
in which not all variables are dualized. We report experimental comparisons
with other primal-dual methods, where the proposed one performed at the level
of the state of the art. Four different kinds of applications were tested in
the experiments: deconvolution, deconvolution with inpainting, superresolution,
and demosaicing, all with unknown boundaries.
Comment: IEEE Trans. Image Process., to be published. 15 pages, 11 figures.
MATLAB code available at
https://github.com/alfaiate/DeconvolutionIncompleteOb
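The alternation described in the abstract, estimating the unobserved pixels and the deconvolved image in turn so that every deconvolution step keeps a diagonalized (periodic) convolution operator, can be sketched as follows. A simple Wiener-style filter stands in for the fast deconvolution step; the mask layout, PSF, and regularization constant are illustrative assumptions, not the paper's ADMM-based implementation.

```python
import numpy as np

def wiener_deconv(y, psf_f, eps=1e-2):
    # FFT-diagonalized (periodic-boundary) Wiener-style deconvolution.
    return np.real(np.fft.ifft2(np.conj(psf_f) * np.fft.fft2(y)
                                / (np.abs(psf_f) ** 2 + eps)))

def deconv_incomplete(y_obs, mask, psf, iters=30):
    """Alternate between estimating the unobserved pixels and the
    deconvolved image; each deconvolution uses only fast FFT products."""
    psf_f = np.fft.fft2(np.fft.ifftshift(psf))
    y = y_obs.copy()
    for _ in range(iters):
        x = wiener_deconv(y, psf_f)                        # fast FFT step
        reblur = np.real(np.fft.ifft2(np.fft.fft2(x) * psf_f))
        y = np.where(mask, y_obs, reblur)                  # fill unknown pixels
    return x

# Demo: a blurred image whose 4-pixel border was never observed.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[31:34, 31:34] = 1.0 / 9.0                              # centered 3x3 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
mask = np.zeros((64, 64), dtype=bool)
mask[4:-4, 4:-4] = True                                    # interior observed only
x = deconv_incomplete(np.where(mask, blurred, 0.0), mask, psf)
```

The re-blur-and-fill step plays the role of the unknown-pixel estimation, while any fast FFT-based deconvolution method can be swapped into `wiener_deconv`.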