A vector quantization approach to universal noiseless coding and quantization
A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first-stage code can be regarded as a vector quantizer that “quantizes” the input data of length n to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as (k/2)n^{-1} log n, when the universe of sources has finite dimension k. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as O(n^{-1}) when the universe of sources is countable, and as O(n^{-1+ϵ}) when the universe of sources is infinite-dimensional, under appropriate conditions.
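As context for the design step this abstract mentions, here is a minimal sketch of a generalized Lloyd iteration for a one-stage, fixed-rate vector quantizer under squared-error distortion. All names are hypothetical; this is not the paper's two-stage construction or its induced rate/distortion measures, only the basic partition/centroid alternation it builds on.

```python
# Minimal sketch of the (generalized) Lloyd algorithm for a fixed-rate
# vector quantizer: for a fixed number of iterations, alternate a
# nearest-codeword partition step and a centroid update step.
# Squared-error distortion; training vectors and codewords are tuples.

def lloyd(training, codebook, iters=50):
    for _ in range(iters):
        # Partition step: assign each training vector to its nearest codeword.
        cells = [[] for _ in codebook]
        for x in training:
            j = min(range(len(codebook)),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(x, codebook[c])))
            cells[j].append(x)
        # Centroid step: replace each codeword by the mean of its cell
        # (the optimal representative under squared error).
        for j, cell in enumerate(cells):
            if cell:
                dim = len(cell[0])
                codebook[j] = tuple(sum(x[d] for x in cell) / len(cell)
                                    for d in range(dim))
    return codebook
```

On two well-separated clusters, the iteration moves the codewords onto the cluster means, giving a locally optimal codebook in the same sense the abstract uses for the first-stage quantizer.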
On the Equivalence between Herding and Conditional Gradient Algorithms
We show that the herding procedure of Welling (2009) takes exactly the form
of a standard convex optimization algorithm, namely a conditional gradient
algorithm minimizing a quadratic moment discrepancy. This link enables us to
invoke convergence results from convex optimization and to consider faster
alternatives for the task of approximating integrals in a reproducing kernel
Hilbert space. We study the behavior of the different variants through
numerical simulations. The experiments indicate that while we can improve over
herding on the task of approximating integrals, the original herding algorithm
more often tends to approach the maximum entropy distribution, shedding more
light on the learning bias behind herding.
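To make the conditional-gradient view concrete, below is a hedged sketch of kernel herding on a finite candidate set: each step greedily picks the point with the largest remaining moment discrepancy against the target mean embedding, which is exactly a conditional gradient (Frank-Wolfe) step with uniform step sizes. The kernel choice, bandwidth, and all names are illustrative assumptions, not the paper's setup.

```python
import math

def kernel(x, y, bw=1.0):
    # Gaussian (RBF) kernel on scalars; bandwidth is an arbitrary choice here.
    return math.exp(-(x - y) ** 2 / (2 * bw ** 2))

def herd(candidates, n_select):
    # mu[x]: mean embedding of the (uniform) target distribution over the
    # candidates, evaluated at x.
    mu = {x: sum(kernel(x, y) for y in candidates) / len(candidates)
          for x in candidates}
    selected = []
    for t in range(n_select):
        # Herding / conditional-gradient step: choose the candidate whose
        # feature map has the largest inner product with the residual
        # mu - (empirical mean embedding of the points picked so far).
        score = lambda x: mu[x] - sum(kernel(x, s)
                                      for s in selected) / (t + 1)
        selected.append(max(candidates, key=score))
    return selected
```

On a symmetric candidate set the greedy selection alternates between the two sides, so the empirical mean of the herded points tracks the target mean, which is the O(1/t) integral-approximation behavior the conditional-gradient correspondence predicts.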
A globally convergent matricial algorithm for multivariate spectral estimation
In this paper, we first describe a matricial Newton-type algorithm designed
to solve the multivariable spectrum approximation problem. We then prove its
global convergence. Finally, we apply this approximation procedure to
multivariate spectral estimation, and test its effectiveness through
simulation. Simulation shows that, in the case of short observation records,
this method may provide a valid alternative to standard multivariable
identification techniques such as MATLAB's PEM and MATLAB's N4SID.
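The paper's matricial Newton algorithm is specific to the multivariable spectrum approximation problem and is not reproduced here. As a generic, self-contained illustration of what a Newton-type iteration in matrix form looks like, the sketch below applies the classical Newton iteration X_{k+1} = (X_k + X_k^{-1} A)/2 for the principal square root of a 2x2 matrix; all helper names are hypothetical and unrelated to the authors' algorithm.

```python
# Generic illustration of a Newton-type iteration carried out on matrices
# (NOT the spectrum-approximation algorithm of the paper): Newton's method
# for the principal square root of a 2x2 matrix A, started at X_0 = A so
# that all iterates commute with A.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

def sqrtm_newton(A, iters=20):
    X = [row[:] for row in A]
    for _ in range(iters):
        # Newton step in matricial form: X <- (X + X^{-1} A) / 2.
        Y = mat_mul(mat_inv(X), A)
        X = [[(X[i][j] + Y[i][j]) / 2 for j in range(2)] for i in range(2)]
    return X
```

For symmetric positive definite input the iteration inherits the quadratic local convergence of scalar Newton, which is the kind of fast, provably convergent behavior the abstract claims for its (much more structured) spectral estimation iteration.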