A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation
Stochastic approximation techniques play an important role in solving many
problems encountered in machine learning or adaptive signal processing. In
these contexts, the statistics of the data are often unknown a priori or their
direct computation is too costly, so they must be estimated online
from the observed signals. For batch optimization of an objective function
being the sum of a data fidelity term and a penalization (e.g. a sparsity
promoting function), Majorize-Minimize (MM) methods have recently attracted
much interest since they are fast, highly flexible, and effective in ensuring
convergence. The goal of this paper is to show how these methods can be
successfully extended to the case when the data fidelity term corresponds to a
least squares criterion and the cost function is replaced by a sequence of
stochastic approximations of it. In this context, we propose an online version
of an MM subspace algorithm and we study its convergence by using suitable
probabilistic tools. Simulation results illustrate the good practical
performance of the proposed algorithm combined with a memory-gradient
subspace, when applied to both non-adaptive and adaptive filter identification
problems.
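The batch MM principle this abstract builds on can be sketched for a penalized least-squares problem. The snippet below is a minimal batch illustration, not the paper's online subspace algorithm: it uses a smoothed-l1 penalty whose quadratic majorant turns each MM step into a weighted ridge update. All function names, sizes, and parameter values are illustrative.

```python
import numpy as np

def mm_penalized_ls(H, y, lam=0.1, delta=1e-6, iters=100):
    """Minimize ||y - H x||^2 + lam * sum_i sqrt(x_i^2 + delta)
    by Majorize-Minimize: the smooth sparsity-promoting penalty is
    majorized by a quadratic at the current iterate, so each step
    reduces to a weighted ridge (half-quadratic) update."""
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        # curvature of the quadratic majorant of the penalty at x
        w = lam / (2.0 * np.sqrt(x ** 2 + delta))
        x = np.linalg.solve(H.T @ H + np.diag(w), H.T @ y)
    return x

# toy sparse-recovery example with illustrative dimensions
rng = np.random.default_rng(0)
H = rng.standard_normal((100, 20))
x_true = np.zeros(20)
x_true[2], x_true[7] = 3.0, -2.0
y = H @ x_true + 0.01 * rng.standard_normal(100)
x_hat = mm_penalized_ls(H, y)
```

Each majorant touches the penalty at the current iterate and lies above it elsewhere, which is what guarantees the monotone decrease of the objective that the abstract credits MM methods with.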
An Efficient Algorithm for Video Super-Resolution Based On a Sequential Model
In this work, we propose a novel procedure for video super-resolution, that
is, the recovery of a sequence of high-resolution images from its low-resolution
counterpart. Our approach is based on a "sequential" model (i.e., each
high-resolution frame is supposed to be a displaced version of the preceding
one) and considers the use of sparsity-enforcing priors. We tackle the joint
recovery of the high-resolution images and of the motion fields relating them. This
leads to a large-dimensional, non-convex and non-smooth problem. We propose an
algorithmic framework to address the latter. Our approach relies on fast
gradient evaluation methods and modern optimization techniques for
non-differentiable/non-convex problems. Unlike some previous works, we
show that there exists a provably convergent method with a complexity linear in
the problem dimensions. We assess the proposed optimization method on several
video benchmarks and emphasize its good performance with respect to the state
of the art.
Comment: 37 pages, SIAM Journal on Imaging Sciences, 201
Multigrid Backprojection Super-Resolution and Deep Filter Visualization
We introduce a novel deep-learning architecture for image upscaling by large
factors (e.g. 4x, 8x) based on examples of pristine high-resolution images. Our
target is to reconstruct high-resolution images from their downscaled versions.
The proposed system performs a multi-level progressive upscaling, starting from
small factors (2x) and updating for higher factors (4x and 8x). The system is
recursive as it repeats the same procedure at each level. It is also residual
since we use the network to update the outputs of a classic upscaler. The
network residuals are improved by Iterative Back-Projections (IBP) computed in
the features of a convolutional network. To work at multiple levels, we extend
the standard back-projection algorithm using a recursion analogous to
Multi-Grid algorithms commonly used as solvers of large systems of linear
equations. We finally show how the network can be interpreted as a standard
upsampling-and-filter upscaler with a space-variant filter that adapts to the
geometry. This approach allows us to visualize how the network learns to
upscale. Our system reaches state-of-the-art quality for models with
relatively few parameters.
Comment: Spotlight paper in the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
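The Iterative Back-Projection step this abstract builds on predates the deep variant described here. A minimal classical sketch, assuming a nearest-neighbour upscaler and a box-filter downscaler as stand-ins for the paper's learned operators:

```python
import numpy as np

def upscale(img, f):
    # nearest-neighbour upscaler (stand-in for the "classic upscaler")
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

def downscale(img, f):
    # box-filter downscaling (stand-in for the acquisition model)
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def ibp(lr, f=2, iters=30, step=1.0):
    """Iterative Back-Projection: refine the high-resolution estimate
    until its simulated low-resolution version matches the observation."""
    hr = upscale(lr, f)
    for _ in range(iters):
        residual = lr - downscale(hr, f)       # error in the LR domain
        hr = hr + step * upscale(residual, f)  # back-project the error
    return hr

rng = np.random.default_rng(1)
lr = rng.standard_normal((4, 4))
hr = ibp(lr, f=2)
```

The multigrid extension in the paper applies this correction recursively across levels (2x, then 4x, then 8x) rather than at a single scale.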
On the optimality of a new class of 2D recursive filters
The purpose of this paper is to prove the minimum-variance property of a new class of 2D, recursive, finite-dimensional filters. The filtering algorithms are derived from general basic assumptions underlying the stochastic modelling of an image as a 2D Gaussian random field. An appealing feature of the proposed algorithms is that the image pixels are estimated one at a time; this makes it possible to save computation time and memory with respect to filtering procedures based on strip processing. Experimental results show the effectiveness of the new filtering schemes.
WARP: Wavelets with adaptive recursive partitioning for multi-dimensional data
Effective identification of asymmetric and local features in images and other
data observed on multi-dimensional grids plays a critical role in a wide range
of applications including biomedical and natural image processing. Moreover,
the ever-increasing amount of image data, in terms of both the resolution per
image and the number of images processed per application, requires algorithms
and methods for such applications to be computationally efficient. We develop a
new probabilistic framework for multi-dimensional data to overcome these
challenges through incorporating data adaptivity into discrete wavelet
transforms, thereby allowing them to adapt to the geometric structure of the
data while maintaining the linear computational scalability. By exploiting a
connection between the local directionality of wavelet transforms and recursive
dyadic partitioning on the grid points of the observation, we obtain the
desired adaptivity through adding to the traditional Bayesian wavelet
regression framework an additional layer of Bayesian modeling on the space of
recursive partitions over the grid points. We derive the corresponding
inference recipe in the form of a recursive representation of the exact
posterior, and develop a class of efficient recursive message passing
algorithms for achieving exact Bayesian inference with a computational
complexity linear in the resolution and sample size of the images. While our
framework is applicable to a range of problems including multi-dimensional
signal processing, compression, and structural learning, we illustrate how it
works and evaluate its performance in the context of 2D and 3D image
reconstruction using real images from the ImageNet database. We also apply the
framework to analyze a data set from retinal optical coherence tomography.
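For intuition, the non-adaptive counterpart of such a transform is the classical Haar wavelet, whose recursive coarse/detail split mirrors a fixed dyadic partition of the grid points. The sketch below is a 1D illustration of that fixed recursion, not the paper's adaptive WARP construction:

```python
import numpy as np

def haar_recursive(x):
    """Full orthonormal Haar decomposition of a length-2^k signal.
    Each level splits the signal into coarse averages and detail
    differences, analogous to one level of dyadic partitioning;
    the coarse half is then decomposed recursively."""
    x = np.asarray(x, dtype=float)
    if len(x) == 1:
        return x
    coarse = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([haar_recursive(coarse), detail])

x = np.arange(8.0)
coeffs = haar_recursive(x)
```

The total work is linear in the signal length, which is the scalability the abstract says the adaptive version preserves; WARP additionally places a Bayesian prior over which partition is used at each level.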
Interpolating autoregressive processes: a bound on the restoration error
An upper bound is obtained for the restoration-error variance of a sample-restoration method for autoregressive processes presented by A.J.E.M. Janssen et al. (ibid., vol. ASSP-34, pp. 317-330, Apr. 1986). The derived upper bound is lower when the autoregressive process has poles close to the unit circle of the complex plane; this situation corresponds to a peaky signal spectrum. The bound is valid for the case in which one sample is unknown in a realization of an autoregressive process of arbitrary finite order.
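The restoration method whose error is being bounded can be illustrated for a single missing sample: with known AR coefficients, minimizing the prediction-error energy gives a closed-form estimate from the neighbouring samples. The helper below is a Janssen-style sketch; its name and arguments are my own.

```python
import numpy as np

def interpolate_missing(x, m, a):
    """Least-squares restoration of one missing sample x[m] in an AR(p)
    realization with known coefficients a (x_t = sum_k a_k x_{t-k} + e_t).
    Minimizing the total prediction-error energy over x[m] yields a
    linear combination of the 2p surrounding samples."""
    a = np.asarray(a, dtype=float)
    p = len(a)
    b = np.concatenate(([1.0], -a))  # prediction-error filter
    # autocorrelation of the prediction-error filter, lags 0..p
    r = np.array([np.dot(b[:len(b) - k], b[k:]) for k in range(p + 1)])
    return -sum(r[abs(k)] * x[m + k]
                for k in range(-p, p + 1) if k != 0) / r[0]

# noiseless AR(1) realization: the interpolator recovers x[5] exactly
x = 0.9 ** np.arange(10)
est = interpolate_missing(x, 5, [0.9])
```

For AR(1) this reduces to a(x[m-1] + x[m+1]) / (1 + a^2), the conditional mean given the two neighbours, which is why the error variance shrinks as the pole approaches the unit circle.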