Rate-Distortion via Markov Chain Monte Carlo
We propose an approach to lossy source coding, utilizing ideas from Gibbs
sampling, simulated annealing, and Markov Chain Monte Carlo (MCMC). The idea is
to sample a reconstruction sequence from a Boltzmann distribution associated
with an energy function that incorporates the distortion between the source and
reconstruction, the compressibility of the reconstruction, and the point sought
on the rate-distortion curve. To sample from this distribution, we use a 'heat
bath algorithm': starting from an initial candidate reconstruction (say the
original source sequence), at every iteration, an index i is chosen and the
i-th sequence component is replaced by drawing from the conditional probability
distribution for that component given all the rest. At the end of this process,
the encoder conveys the reconstruction to the decoder using universal lossless
compression. The complexity of each iteration is independent of the sequence
length and only linearly dependent on a certain context parameter (which grows
sub-logarithmically with the sequence length). We show that the proposed
algorithms achieve optimum rate-distortion performance in the limits of large
number of iterations, and sequence length, when employed on any stationary
ergodic source. Experimentation shows promising initial results. Employing our
lossy compressors on noisy data, with appropriately chosen distortion measure
and level, followed by a simple de-randomization operation, results in a family
of denoisers that compares favorably (both theoretically and in practice) with
other MCMC-based schemes, and with the Discrete Universal Denoiser (DUDE).Comment: 35 pages, 16 figures, Submitted to IEEE Transactions on Information
Theor
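The heat-bath update described above can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: it assumes a binary source, uses a zeroth-order empirical entropy as the rate proxy (the paper uses higher-order context statistics with a constant-time per-iteration update, both simplified here by naively recomputing the full energy), and the `slope` parameter and logarithmic annealing schedule are illustrative choices.

```python
import numpy as np

def empirical_entropy(seq, alphabet_size=2):
    # zeroth-order empirical entropy in bits per symbol
    counts = np.bincount(seq, minlength=alphabet_size)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def energy(xhat, x, slope):
    # rate proxy (entropy times length) plus slope-weighted Hamming distortion;
    # the slope selects the point sought on the rate-distortion curve
    n = len(xhat)
    return n * empirical_entropy(xhat) + slope * np.sum(xhat != x)

def heat_bath_compress(x, slope=2.0, sweeps=50, T0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    xhat = x.copy()                      # initialize at the source sequence
    n = len(x)
    for t in range(sweeps):
        T = T0 / np.log(2 + t)           # simulated-annealing temperature
        for i in rng.permutation(n):
            # conditional Boltzmann distribution of symbol i given the rest
            energies = []
            for a in (0, 1):
                xhat[i] = a
                energies.append(energy(xhat, x, slope))
            e = np.array(energies)
            p = np.exp(-(e - e.min()) / T)
            p /= p.sum()
            xhat[i] = rng.choice(2, p=p)
    return xhat
```

In a full encoder, the returned `xhat` would then be passed to a universal lossless compressor; with a large `slope`, distortion dominates the energy and the sampler stays near the original sequence.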
Pricing High-Dimensional American Options Using Local Consistency Conditions
We investigate a new method for pricing high-dimensional American options. The method is of finite difference type but is also related to Monte Carlo techniques in that it involves a representative sampling of the underlying variables. An approximating Markov chain is built using this sampling, and linear programming is used to satisfy local consistency conditions at each point, related to the infinitesimal generator or transition density. The algorithm for constructing the matrix can be parallelised easily; moreover, once it has been obtained, it can be reused to generate quick solutions for a large class of related problems. We provide pricing results for geometric average options in up to ten dimensions, and compare these with accurate benchmarks.
Keywords: option pricing; inequality; Markov chains
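The local consistency conditions mentioned above require the transition probabilities out of each sampled state to match the first two moments of the diffusion increment. A minimal one-dimensional sketch, with illustrative names: when the number of candidate neighbors equals the number of conditions, the linear program reduces to a square linear system (with more neighbors, a proper LP solver would enforce nonnegativity as an inequality constraint, which is simplified to an assertion here).

```python
import numpy as np

def local_transition_probs(x, neighbors, mu, sigma, dt):
    # Local consistency: probabilities sum to one and match the drift
    # mu*dt and variance sigma^2*dt of the diffusion over a step dt.
    dx = np.asarray(neighbors, dtype=float) - x
    A = np.vstack([np.ones_like(dx),   # probabilities sum to 1
                   dx,                 # match the drift
                   dx**2])            # match the second moment
    b = np.array([1.0, mu * dt, sigma**2 * dt])
    p = np.linalg.solve(A, b)          # square case: one neighbor per condition
    assert np.all(p >= -1e-12), "infeasible stencil: enlarge the neighbor set"
    return np.clip(p, 0.0, 1.0)
```

Each row of the approximating chain's transition matrix is built this way, one sampled state at a time, which is why the construction parallelises easily.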
Recovery from Linear Measurements with Complexity-Matching Universal Signal Estimation
We study the compressed sensing (CS) signal estimation problem where an input
signal is measured via a linear matrix multiplication under additive noise.
While this setup usually assumes sparsity or compressibility in the input
signal during recovery, the signal structure that can be leveraged is often not
known a priori. In this paper, we consider universal CS recovery, where the
statistics of a stationary ergodic signal source are estimated simultaneously
with the signal itself. Inspired by Kolmogorov complexity and minimum
description length, we focus on a maximum a posteriori (MAP) estimation
framework that leverages universal priors to match the complexity of the
source. Our framework can also be applied to general linear inverse problems
where more measurements than in CS might be needed. We provide theoretical
results that support the algorithmic feasibility of universal MAP estimation
using a Markov chain Monte Carlo implementation, which is computationally
challenging. We incorporate some techniques to accelerate the algorithm while
providing comparable and in many cases better reconstruction quality than
existing algorithms. Experimental results show the promise of universality in
CS, particularly for low-complexity sources that do not exhibit standard
sparsity or compressibility.
Comment: 29 pages, 8 figures.
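The MAP framework above can be sketched with a toy annealed Metropolis sampler. This is an illustration under strong simplifying assumptions, not the paper's method: the signal is quantized to an assumed grid, the universal prior is replaced by a zeroth-order empirical-entropy codelength, and `beta`, the grid, and the annealing schedule are all illustrative.

```python
import numpy as np

def codelength(q, levels):
    # zeroth-order empirical-entropy proxy (in bits) for the universal
    # codelength of the quantized signal; a stand-in for a richer prior
    counts = np.bincount(q, minlength=levels)
    nz = counts[counts > 0]
    p = nz / q.size
    return -(nz * np.log2(p)).sum()

def universal_map_mcmc(y, A, levels=4, iters=3000, beta=2.0, seed=0):
    # Minimize beta*||y - A x||^2 + codelength(x) over quantized signals,
    # trading data fidelity against signal complexity (MAP with a
    # complexity-matching prior); beta plays the role of 1/(2*sigma^2).
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    grid = np.linspace(0.0, 1.0, levels)   # assumed quantization grid
    xq = rng.integers(levels, size=n)

    def obj(q):
        r = y - A @ grid[q]
        return beta * (r @ r) + codelength(q, levels)

    cur = obj(xq)
    for t in range(iters):
        i = rng.integers(n)                # propose changing one component
        prop = xq.copy()
        prop[i] = rng.integers(levels)
        new = obj(prop)
        T = 1.0 / np.log(2 + t)            # annealing schedule
        if new < cur or rng.random() < np.exp(-(new - cur) / T):
            xq, cur = prop, new
    return grid[xq]
```

For low-complexity sources (e.g. piecewise-constant signals), the codelength term steers the sampler toward simple reconstructions even when the signal is not sparse in any fixed basis.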