Discrete denoising of heterogeneous two-dimensional data
We consider discrete denoising of two-dimensional data with characteristics
that may be varying abruptly between regions.
Using a quadtree decomposition technique and space-filling curves, we extend
the recently developed S-DUDE (Shifting Discrete Universal DEnoiser), which was
tailored to one-dimensional data, to the two-dimensional case. Our scheme
competes with a genie that has access, in addition to the noisy data, also to
the underlying noiseless data, and can employ different two-dimensional
sliding window denoisers along distinct regions obtained by a quadtree
decomposition with m leaves, in a way that minimizes the overall loss. We
show that, regardless of what the underlying noiseless data may be, the
two-dimensional S-DUDE performs essentially as well as this genie, provided
that the number of distinct regions satisfies m = o(n), where n is the total
size of the data. The resulting algorithm complexity is still linear in both
n and m, as in the one-dimensional case. Our experimental results show that
the two-dimensional S-DUDE can be effective when the characteristics of the
underlying clean image vary across different regions in the data. Comment: 16 pages, submitted to IEEE Transactions on Information Theory
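The abstract above describes reducing the two-dimensional denoising problem to the one-dimensional setting via a quadtree decomposition and space-filling curves. As an illustrative sketch of the linearization idea only (using the simple Z-order curve as a stand-in; this is not the paper's actual scheme), one can flatten a 2^k x 2^k image into a sequence over which a one-dimensional sliding-window denoiser could then run:

```python
def z_order_index(x, y, bits):
    """Interleave the bits of (x, y) to get the Z-order (Morton) index."""
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (2 * b + 1)  # x bits at odd positions
        idx |= ((y >> b) & 1) << (2 * b)      # y bits at even positions
    return idx

def linearize(image):
    """Flatten a 2^k x 2^k image into a 1-D list along the Z-order curve,
    so that most curve-adjacent samples are also spatially close."""
    n = len(image)                 # assumes a square image of size 2^k
    bits = n.bit_length() - 1
    order = sorted((z_order_index(x, y, bits), x, y)
                   for x in range(n) for y in range(n))
    return [image[x][y] for _, x, y in order]
```

A locality-preserving curve matters here because a sliding-window denoiser on the flattened sequence then mostly sees spatially neighboring pixels in its context window.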
Scanning and Sequential Decision Making for Multi-Dimensional Data - Part I: the Noiseless Case
We investigate the problem of scanning and prediction ("scandiction", for
short) of multidimensional data arrays. This problem arises in several aspects
of image and video processing, such as predictive coding, for example, where an
image is compressed by coding the error sequence resulting from scandicting it.
Thus, it is natural to ask what is the optimal method to scan and predict a
given image, what is the resulting minimum prediction loss, and whether there
exist specific scandiction schemes which are universal in some sense.
Specifically, we investigate the following problems: First, modeling the data
array as a random field, we wish to examine whether there exists a scandiction
scheme which is independent of the field's distribution, yet asymptotically
achieves the same performance as if this distribution was known. This question
is answered in the affirmative for the set of all spatially stationary random
fields and under mild conditions on the loss function. We then discuss the
scenario where a non-optimal scanning order is used, yet accompanied by an
optimal predictor, and derive bounds on the excess loss compared to optimal
scanning and prediction.
This paper is the first part of a two-part paper on sequential decision
making for multi-dimensional data. It deals with clean, noiseless data arrays.
The second part deals with noisy data arrays, namely, with the case where the
decision maker observes only a noisy version of the data, yet it is judged with
respect to the original, clean data. Comment: 46 pages, 2 figures. Revised version: title changed, section 1
revised, section 3.1 added, a few minor/technical corrections made
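The scandiction problem pairs a scan order with a sequential predictor and measures cumulative prediction loss. As a minimal sketch of that setting (assuming a raster scan, a trivial last-value predictor, and squared-error loss, none of which the paper claims is optimal), the loss of a scan/predictor pair can be evaluated as follows:

```python
def raster_scandict(field, predict=lambda past: past[-1] if past else 0.0):
    """Scan a 2-D array in raster order, predict each sample from the
    already-visited past, and return the cumulative squared-error loss.
    The raster order and last-value predictor are illustrative stand-ins
    for the scan/predictor pairs the paper analyzes."""
    past, loss = [], 0.0
    for row in field:
        for x in row:
            loss += (x - predict(past)) ** 2
            past.append(x)
    return loss
```

Comparing this quantity across different scan orders for the same predictor is exactly the kind of excess-loss question the abstract raises for non-optimal scans.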
Harmonious Hilbert curves and other extradimensional space-filling curves
This paper introduces a new way of generalizing Hilbert's two-dimensional
space-filling curve to arbitrary dimensions. The new curves, called harmonious
Hilbert curves, have the unique property that for any d' < d, the d-dimensional
curve is compatible with the d'-dimensional curve with respect to the order in
which the curves visit the points of any d'-dimensional axis-parallel space
that contains the origin. Similar generalizations to arbitrary dimensions are
described for several variants of Peano's curve (the original Peano curve, the
coil curve, the half-coil curve, and the Meurthe curve). The d-dimensional
harmonious Hilbert curves and the Meurthe curves have neutral orientation: as
compared to the curve as a whole, arbitrary pieces of the curve have each of d!
possible rotations with equal probability. Thus one could say these curves are
`statistically invariant' under rotation---unlike the Peano curves, the coil
curves, the half-coil curves, and the familiar generalization of Hilbert curves
by Butz and Moore.
In addition, prompted by an application in the construction of R-trees, this
paper shows how to construct a 2d-dimensional generalized Hilbert or Peano
curve that traverses the points of a certain d-dimensional diagonally placed
subspace in the order of a given d-dimensional generalized Hilbert or Peano
curve.
Pseudocode is provided for comparison operators based on the curves presented
in this paper. Comment: 40 pages, 10 figures, pseudocode included
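The paper provides pseudocode for comparison operators based on its curves. As a rough illustration of what a curve-based comparator looks like (shown here for the simpler Z-order curve, not the harmonious Hilbert curves themselves), two d-dimensional points can be compared without materializing the full interleaved keys:

```python
def less_msb(a, b):
    """True iff a < b and the most significant set bit of a is below b's."""
    return a < b and a < (a ^ b)

def zorder_less(p, q):
    """Compare two d-dimensional integer points by Z-order: find the
    dimension whose coordinates first differ at the highest bit position,
    then compare the points in that dimension. Earlier dimensions win
    ties, i.e. dimension 0 is the most significant in the interleaving."""
    dim, max_xor = 0, 0
    for k in range(len(p)):
        x = p[k] ^ q[k]
        if less_msb(max_xor, x):
            dim, max_xor = k, x
    return p[dim] < q[dim]
```

Such a comparator is what an R-tree construction (the application mentioned above) needs: a total order on points that can be plugged directly into a sort routine.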
Correlation functions of integrable models: a description of the ABACUS algorithm
Recent developments in the theory of integrable models have provided the
means of calculating dynamical correlation functions of some important
observables in systems such as Heisenberg spin chains and one-dimensional
atomic gases. This article explicitly describes how such calculations are
generally implemented in the ABACUS C++ library, emphasizing the universal
treatment of different cases that follows from unifying features within the
Bethe Ansatz. Comment: 30 pages, 8 figures, Proceedings of the CRM (Montreal) workshop on
Integrable Quantum Systems and Solvable Statistical Mechanics Models
Mesh-based video coding for low bit-rate communications
In this paper, a new method for low bit-rate content-adaptive mesh-based video coding is proposed. Intra-frame coding of this method employs feature map extraction for node distribution at specific threshold levels, achieving a higher density of initial nodes in regions that contain high-frequency features and, conversely, a sparse placement of initial nodes in smooth regions. Insignificant nodes are then largely removed by a subsequent node elimination scheme. The Hilbert scan is applied before quantization and entropy coding to reduce the amount of transmitted information. For moving images, only a subset of nodes may change their position and color parameters from frame to frame, so it is sufficient to transmit only these changed parameters. The proposed method is well-suited for video coding at very low bit rates, as processing results demonstrate that it provides good subjective and objective image quality at a lower number of required bits.
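The Hilbert scan mentioned above reorders pixels along a Hilbert curve so that consecutive samples in the transmitted sequence are always spatially adjacent, which benefits the subsequent quantization and entropy coding. As an illustrative sketch (using the standard iterative index-to-coordinate construction; the paper's own implementation details are not given in the abstract):

```python
def hilbert_d2xy(order, d):
    """Convert index d along a Hilbert curve covering a 2^order x 2^order
    grid into (x, y) coordinates (standard iterative construction)."""
    x = y = 0
    t, s = d, 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:               # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(image):
    """Read a 2^k x 2^k image along the Hilbert curve; consecutive samples
    of the output are always spatially adjacent pixels."""
    n = len(image)                # assumes a square image of size 2^k
    order = n.bit_length() - 1
    return [image[y][x] for x, y in
            (hilbert_d2xy(order, d) for d in range(n * n))]
```

Because neighboring pixels tend to have similar values, the scanned sequence has smaller sample-to-sample differences than a raster scan, which is what makes it attractive ahead of quantization and entropy coding.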