Empirical Bayes and Full Bayes for Signal Estimation
We consider signals that follow a parametric distribution where the parameter
values are unknown. To estimate such signals from noisy measurements in scalar
channels, we study the empirical performance of an empirical Bayes (EB)
approach and a full Bayes (FB) approach. We then apply EB and FB to solve
compressed sensing (CS) signal estimation problems by successively denoising a
scalar Gaussian channel within an approximate message passing (AMP) framework.
Our numerical results show that FB achieves better performance than EB in
scalar channel denoising problems when the signal dimension is small. In the CS
setting, the signal dimension must be large enough for AMP to work well; for
large signal dimensions, AMP performs similarly with FB and EB.
Comment: This work was presented at the Information Theory and Applications
workshop (ITA), San Diego, CA, Feb. 201
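The AMP-based estimation loop described above can be sketched as follows. This is a minimal illustration, not the paper's method: it substitutes a plain soft-threshold denoiser for the EB/FB denoisers studied in the paper, and the function names and parameters (`amp`, `n_iter`) are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    # componentwise soft thresholding, standing in for an EB/FB denoiser
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(A, y, n_iter=50):
    """Basic AMP sketch for y = A x + noise, A of size M x N."""
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        tau = np.sqrt(np.mean(z ** 2))   # estimate of the effective noise level
        pseudo = x + A.T @ z             # behaves like x observed in scalar Gaussian noise
        x_new = soft_threshold(pseudo, tau)
        b = np.count_nonzero(x_new) / M  # Onsager correction coefficient
        z = y - A @ x_new + b * z        # residual with Onsager correction
        x = x_new
    return x
```

The `b * z` Onsager term is what keeps the pseudo-data `x + A.T @ z` behaving like the signal plus i.i.d. Gaussian noise, which is exactly the scalar-channel denoising problem the EB and FB approaches target.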
Kernel Belief Propagation
We propose a nonparametric generalization of belief propagation, Kernel
Belief Propagation (KBP), for pairwise Markov random fields. Messages are
represented as functions in a reproducing kernel Hilbert space (RKHS), and
message updates are simple linear operations in the RKHS. KBP makes none of the
assumptions commonly required in classical BP algorithms: the variables need
not arise from a finite domain or a Gaussian distribution, nor must their
relations take any particular parametric form. Rather, the relations between
variables are represented implicitly, and are learned nonparametrically from
training data. KBP has the advantage that it may be used on any domain where
kernels are defined (R^d, strings, groups), even where explicit parametric
models are not known, or closed form expressions for the BP updates do not
exist. The computational cost of message updates in KBP is polynomial in the
training data size. We also propose a constant time approximate message update
procedure by representing messages using a small number of basis functions. In
experiments, we apply KBP to image denoising, depth prediction from still
images, and protein configuration prediction: KBP is faster than competing
classical and nonparametric approaches (by orders of magnitude, in some cases),
while providing significantly more accurate results.
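The core idea of representing messages in an RKHS can be illustrated schematically. This is a simplified sketch, not the KBP algorithm itself: the real message update also involves conditional embedding operators learned from training data, whereas here the linear operator, regularization constant, and all variable names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2), a kernel on R^d
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 2))   # training samples anchoring the RKHS expansion
alpha = rng.normal(size=50)          # a message m(x) = sum_i alpha[i] * k(X_train[i], x)

def eval_message(x_query, X_train, alpha, gamma=1.0):
    # evaluating an RKHS-represented message at query points
    return gaussian_kernel(x_query, X_train, gamma) @ alpha

# a KBP-style update: the new coefficients are a linear map of the old ones;
# here the map is built only from the regularized Gram matrix, whereas KBP
# learns it nonparametrically from training data
K = gaussian_kernel(X_train, X_train)
T = np.linalg.solve(K + 1e-3 * np.eye(50), K)
alpha_new = T @ alpha
```

The point the abstract makes is visible here: once messages live in an RKHS, an update is a finite linear operation on coefficient vectors, with cost polynomial in the training-set size, regardless of whether the variables are discrete, Gaussian, or anything else with a kernel.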
An Overview of Multi-Processor Approximate Message Passing
Approximate message passing (AMP) is an algorithmic framework for solving
linear inverse problems from noisy measurements, with exciting applications
such as reconstructing images, audio, hyperspectral images, and various other
signals, including those acquired in compressive signal acquisition systems. The
growing prevalence of big data systems has increased interest in large-scale
problems, which may involve huge measurement matrices that are unsuitable for
conventional computing systems. To address the challenge of large-scale
processing, multiprocessor (MP) versions of AMP have been developed. We provide
an overview of two such MP-AMP variants. In row-MP-AMP, each computing node
stores a subset of the rows of the matrix and processes corresponding
measurements. In column-MP-AMP, each node stores a subset of columns and is
solely responsible for reconstructing a portion of the signal. We will discuss
pros and cons of both approaches, summarize recent research results for each,
and explain when each one may be a viable approach. Aspects that are
highlighted include some recent results on state evolution for both MP-AMP
algorithms, and the use of data compression to reduce communication in the MP
network.
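The row-partitioned variant can be sketched as below. This is a hedged illustration under simplifying assumptions: each "node" is just a row block of the matrix held in one process, the denoiser is a soft threshold rather than anything from the surveyed papers, and the function name `row_mp_amp` is hypothetical. Communication and the data-compression aspects mentioned above are not modeled.

```python
import numpy as np

def row_mp_amp(A_parts, y_parts, n_iter=50):
    """Row-MP-AMP sketch: each node holds a row block A_p of A and the
    corresponding measurements y_p; a fusion step sums the nodes'
    contributions A_p^T z_p before a global denoising step."""
    N = A_parts[0].shape[1]
    M = sum(Ap.shape[0] for Ap in A_parts)
    x = np.zeros(N)
    z_parts = [yp.copy() for yp in y_parts]
    for _ in range(n_iter):
        # fusion: sum of per-node contributions equals the centralized A.T @ z
        pseudo = x + sum(Ap.T @ zp for Ap, zp in zip(A_parts, z_parts))
        tau = np.sqrt(sum(np.sum(zp ** 2) for zp in z_parts) / M)
        x_new = np.sign(pseudo) * np.maximum(np.abs(pseudo) - tau, 0.0)
        b = np.count_nonzero(x_new) / M  # Onsager correction coefficient
        # each node updates its own residual locally
        z_parts = [yp - Ap @ x_new + b * zp
                   for Ap, yp, zp in zip(A_parts, y_parts, z_parts)]
        x = x_new
    return x
```

Because the per-node residuals simply partition the centralized residual across rows, this row-partitioned iteration is mathematically equivalent to centralized AMP, which is why state evolution results carry over to this setting.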
Recovery from Linear Measurements with Complexity-Matching Universal Signal Estimation
We study the compressed sensing (CS) signal estimation problem where an input
signal is measured via a linear matrix multiplication under additive noise.
While this setup usually assumes sparsity or compressibility in the input
signal during recovery, the signal structure that can be leveraged is often not
known a priori. In this paper, we consider universal CS recovery, where the
statistics of a stationary ergodic signal source are estimated simultaneously
with the signal itself. Inspired by Kolmogorov complexity and minimum
description length, we focus on a maximum a posteriori (MAP) estimation
framework that leverages universal priors to match the complexity of the
source. Our framework can also be applied to general linear inverse problems
where more measurements than in CS might be needed. We provide theoretical
results that support the algorithmic feasibility of universal MAP estimation
using a Markov chain Monte Carlo implementation, which is computationally
challenging. We incorporate techniques that accelerate the algorithm while
providing reconstruction quality comparable to, and in many cases better than,
that of existing algorithms. Experimental results show the promise of universality in
CS, particularly for low-complexity sources that do not exhibit standard
sparsity or compressibility.
Comment: 29 pages, 8 figures
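A toy version of MAP estimation via MCMC in this spirit can be sketched as follows. This is an illustrative assumption-laden sketch, not the paper's algorithm: it restricts the signal to a finite alphabet, replaces the universal prior with a crude empirical-entropy coding length, and uses a plain Metropolis sampler; all names (`mcmc_map`, `beta`, `n_sweeps`) are hypothetical.

```python
import numpy as np

def mcmc_map(A, y, alphabet, sigma2=0.01, n_sweeps=100, beta=1.0, seed=0):
    """Toy Metropolis sampler for MAP estimation over a finite alphabet:
    the energy trades the data fit ||y - A x||^2 / (2*sigma2) against a
    coding-length penalty for x (empirical entropy standing in for a
    universal prior)."""
    rng = np.random.default_rng(seed)
    M, N = A.shape
    x = rng.choice(alphabet, size=N)

    def coding_length(x):
        # empirical entropy of the symbol sequence, in nats
        _, counts = np.unique(x, return_counts=True)
        p = counts / len(x)
        return -len(x) * np.sum(p * np.log(p))

    def energy(x):
        r = y - A @ x
        return r @ r / (2 * sigma2) + coding_length(x)

    e = energy(x)
    for _ in range(n_sweeps):
        for i in rng.permutation(N):
            x_prop = x.copy()
            x_prop[i] = rng.choice(alphabet)  # single-coordinate proposal
            e_prop = energy(x_prop)
            d = e_prop - e
            if d <= 0 or rng.random() < np.exp(-beta * d):  # Metropolis accept
                x, e = x_prop, e_prop
    return x
```

The energy mirrors the complexity-matching idea above: low-complexity sequences get a small coding-length penalty, so the sampler is pulled toward estimates that both fit the measurements and are cheap to describe.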