Maximizing entropy of image models for 2-D constrained coding
This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints: we revisit the hard square constraint, which forbids neighboring 1s, and provide novel results for the constraint that no 2 × 2 square is uniform, i.e., contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated, and binary PRF satisfying the constraints are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint. The entropy of the Markov random field defined by the 2-D constraint is estimated to be (upper bounded by) 0.8570 bits/symbol using the iterative technique of Belief Propagation on 2 × 2 finite lattices. Based on combinatorial bounding techniques, the maximum entropy for the constraint was determined to be 0.848 bits/symbol.
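The hard square figures quoted in abstracts like this one can be cross-checked with a standard transfer-matrix computation on strips of increasing width (a minimal sketch of that textbook technique, not of the paper's PRF or belief-propagation machinery; the function name and conventions are illustrative):

```python
import numpy as np
from itertools import product

def hard_square_capacity(width):
    """Capacity (bits/symbol) of the hard-square constraint on an infinite
    strip of the given width, via the largest eigenvalue of the transfer
    matrix; the values decrease toward the 2-D capacity of about 0.5879
    as the strip widens."""
    # Admissible rows: no two horizontally adjacent 1s.
    rows = [r for r in product((0, 1), repeat=width)
            if all(not (a and b) for a, b in zip(r, r[1:]))]
    n = len(rows)
    T = np.zeros((n, n))
    for i, r in enumerate(rows):
        for j, s in enumerate(rows):
            # Two rows may be stacked iff they share no 1 in any column.
            if all(not (a and b) for a, b in zip(r, s)):
                T[i, j] = 1.0
    lam = float(np.max(np.abs(np.linalg.eigvals(T))))
    return np.log2(lam) / width
```

For width 1 this reduces to the golden-ratio capacity log2((1+√5)/2) ≈ 0.694; wider strips give per-column capacities that decrease toward the 2-D hard-square value of approximately 0.5879 bits/symbol.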
Minimum Conditional Description Length Estimation for Markov Random Fields
In this paper we discuss a method, which we call Minimum Conditional Description Length (MCDL), for estimating the parameters of a subset of sites within a Markov random field. We assume that the edges are known for the entire graph G = (V, E). Then, for a subset S ⊆ V, we estimate the parameters for nodes and edges in S, as well as for edges incident to a node in S, by finding the exponential parameter for that subset that yields the best compression conditioned on the values on the boundary ∂S. Our estimate is derived from a temporally stationary sequence of observations on the set S. We discuss how this method can also be applied to estimate a spatially invariant parameter from a single configuration, and in so doing, derive the Maximum Pseudo-Likelihood (MPL) estimate. Comment: Information Theory and Applications (ITA) workshop, February 201
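The MPL estimate that the abstract recovers as a special case can be illustrated on a binary (Ising) field: the conditional probability of each spin given its neighbors is a logistic function of the local field, and the product of these conditionals is maximized over the coupling. A minimal numpy sketch under that assumption (grid search over a scalar coupling; the Gibbs sampler only generates test data, and all names are illustrative):

```python
import numpy as np

def neighbor_sum(s):
    """Sum of the four nearest-neighbor spins at each site (free boundary)."""
    h = np.zeros_like(s, dtype=float)
    h[1:, :] += s[:-1, :]
    h[:-1, :] += s[1:, :]
    h[:, 1:] += s[:, :-1]
    h[:, :-1] += s[:, 1:]
    return h

def pseudo_loglik(beta, s):
    """Log pseudo-likelihood: sum over sites of log P(s_i | neighbors),
    with P(s_i | nbrs) = 1 / (1 + exp(-2 * beta * s_i * local_field))."""
    h = neighbor_sum(s)
    return float(np.sum(-np.log1p(np.exp(-2.0 * beta * s * h))))

def mpl_estimate(s, grid=None):
    """Maximum pseudo-likelihood estimate of the coupling by grid search."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 201)
    return float(grid[np.argmax([pseudo_loglik(b, s) for b in grid])])

def gibbs_sample(beta, n, sweeps, rng):
    """Heat-bath Gibbs sampler for an n-by-n Ising grid (test data only)."""
    s = rng.choice(np.array([-1, 1]), size=(n, n))
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                h = 0.0
                if i > 0: h += s[i - 1, j]
                if i < n - 1: h += s[i + 1, j]
                if j > 0: h += s[i, j - 1]
                if j < n - 1: h += s[i, j + 1]
                p = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
                s[i, j] = 1 if rng.random() < p else -1
    return s
```

Unlike the full likelihood, the pseudo-likelihood needs no partition function, which is what makes the conditional-description-length view tractable.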
Counting solutions from finite samplings
We formulate the solution counting problem within the framework of the inverse Ising problem and use fast belief propagation equations to estimate the entropy, whose value provides an estimate of the true one. We test this idea on both diluted models (random 2-SAT and 3-SAT problems) and a fully connected model (the binary perceptron), and show that when the constraint density is small, this estimate can be very close to the true value. The information stored by the salamander retina under natural movie stimuli can also be estimated, and our result is consistent with that obtained by the Monte Carlo method. Of particular significance, the sizes of other metastable states of this real neuronal network are also predicted. Comment: 9 pages, 4 figures and 1 table, further discussions added
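On toy instances the quantity being estimated is directly computable: the entropy density is log2 of the number of satisfying assignments divided by the number of variables. A brute-force sketch for random 2-SAT (illustrative names; the paper's belief propagation replaces this enumeration for large systems):

```python
import itertools
import random

def random_2sat(n_vars, n_clauses, rng):
    """Random 2-SAT instance: each clause has two distinct literals,
    a literal being a (variable index, negated flag) pair."""
    clauses = []
    for _ in range(n_clauses):
        i, j = rng.sample(range(n_vars), 2)
        clauses.append(((i, rng.random() < 0.5), (j, rng.random() < 0.5)))
    return clauses

def count_solutions(n_vars, clauses):
    """Exhaustive count of satisfying assignments; the entropy density
    of the abstract is then log2(count) / n_vars."""
    count = 0
    for bits in itertools.product((0, 1), repeat=n_vars):
        if all((bits[i] ^ ni) or (bits[j] ^ nj)
               for (i, ni), (j, nj) in clauses):
            count += 1
    return count
```

For instance, a single clause (x0 ∨ x1) over two variables leaves 3 of the 4 assignments, an entropy density of log2(3)/2 ≈ 0.79 bits per variable.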
Scanning and Sequential Decision Making for Multidimensional Data -- Part II: The Noisy Case
We consider the problem of sequential decision making for random fields corrupted by noise. In this scenario, the decision maker observes a noisy version of the data, yet is judged with respect to the clean data. In particular, we first consider the problem of scanning and sequentially filtering noisy random fields. In this case, the sequential filter is given the freedom to choose the path over which it traverses the random field (e.g., a noisy image or video sequence), so it is natural to ask what the best achievable performance is and how sensitive this performance is to the choice of the scan. We formally define the problem of scanning and filtering, derive a bound on the best achievable performance, and quantify the excess loss incurred when nonoptimal scanners are used, compared to optimal scanning and filtering. We then discuss the problem of scanning and prediction for noisy random fields. This setting is a natural model for applications such as restoration and coding of noisy images. We formally define the problem of scanning and prediction of a noisy multidimensional array and relate the optimal performance to the clean scandictability defined by Merhav and Weissman. Moreover, bounds on the excess loss due to suboptimal scans are derived, and a universal prediction algorithm is suggested. This paper is the second part of a two-part paper; the first part dealt with scanning and sequential decision making on noiseless data arrays.
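A toy version of scanning-and-prediction can be written down directly: fix the raster scan and run a universal sequential probability assignment over it, accumulating log-loss. A sketch using the Krichevsky-Trofimov estimator (our choice of predictor and loss, not the paper's general setting; names are illustrative):

```python
import numpy as np

def raster_scan_logloss(field):
    """Scan a binary field in raster (row-by-row) order and sequentially
    predict each symbol with a Krichevsky-Trofimov estimator; returns
    the average log-loss in bits/symbol, an empirical stand-in for the
    performance of a fixed scan plus universal predictor."""
    counts = np.array([0.5, 0.5])  # KT (add-1/2) initial counts
    loss = 0.0
    for x in field.ravel():        # raster scan order
        p = counts[x] / counts.sum()
        loss -= np.log2(p)
        counts[x] += 1.0
    return float(loss / field.size)
```

A constant field yields a loss near zero while an i.i.d. fair-coin field yields a loss near 1 bit/symbol; comparing such losses across different scan orders is the empirical analogue of the scan-sensitivity question the abstract raises.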
Asymptotic theory of semiparametric Z-estimators for stochastic processes with applications to ergodic diffusions and time series
This paper generalizes a part of the theory of Z-estimation, which has been developed mainly in the context of modern empirical processes, to the case of stochastic processes, typically semimartingales. We present a general theorem to derive the asymptotic behavior of the solution to an estimating equation with an abstract nuisance parameter when the compensator of the estimating function is random. As its application, we consider the estimation problem in an ergodic diffusion process model where the drift coefficient contains an unknown, finite-dimensional parameter and the diffusion coefficient is indexed by a nuisance parameter from an infinite-dimensional space. An example for the nuisance parameter space is a class of smooth functions. We establish the asymptotic normality and efficiency of a Z-estimator for the drift coefficient. As another application, we present a similar result also in an ergodic time series model. Comment: Published in at http://dx.doi.org/10.1214/09-AOS693 the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
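A concrete instance of a Z-estimator for a drift parameter: for an Ornstein-Uhlenbeck-type diffusion observed discretely, the drift estimate is the root of an explicit estimating equation. A numpy sketch (Euler-discretized data, known diffusion coefficient, no nuisance parameter, so far simpler than the semiparametric setting of the paper; names are illustrative):

```python
import numpy as np

def simulate_ou(theta, sigma, dt, n, rng):
    """Euler scheme for the ergodic diffusion dX = -theta * X dt + sigma dW
    (an Ornstein-Uhlenbeck process, used here only as test data)."""
    x = np.empty(n + 1)
    x[0] = 0.0
    noise = rng.standard_normal(n) * sigma * np.sqrt(dt)
    for k in range(n):
        x[k + 1] = x[k] - theta * x[k] * dt + noise[k]
    return x

def z_estimate_drift(x, dt):
    """Z-estimator for theta: the root of the estimating equation
    sum_k X_k * (X_{k+1} - X_k + theta * X_k * dt) = 0,
    which here has the closed form below."""
    dx = np.diff(x)
    xk = x[:-1]
    return float(-np.sum(xk * dx) / (dt * np.sum(xk * xk)))
```

In the paper's semiparametric setting the same estimating-equation idea is used, but the diffusion coefficient is an infinite-dimensional nuisance parameter rather than a known constant.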
Wavelet-Based Entropy Measures to Characterize Two-Dimensional Fractional Brownian Fields
The aim of this work was to extend the results of Perez et al. (Physica A (2006), 365 (2), 282–288) to the two-dimensional (2D) fractional Brownian field. In particular, we defined Shannon entropy using the wavelet spectrum, from which the Hurst exponent is estimated by regressing the logarithm of the squared coefficients over the levels of resolution. Using the same methodology, we also defined two other entropies in 2D: the Tsallis and Rényi entropies. A simulation study was performed to show the ability of the method to characterize 2D (in this case, α = 2) self-similar processes.
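The wavelet-spectrum entropy described above can be sketched with a plain orthonormal Haar decomposition: compute the mean detail energy at each resolution level, normalize the energies into a distribution, and take its Shannon entropy. For a 2-D fractional Brownian field the mean detail energy should grow roughly like 2^(j(2H+2)) with level j, so the Hurst exponent can also be read off a regression of the log energies on the level (a numpy sketch with illustrative names; the paper's exact wavelet and normalization may differ):

```python
import numpy as np

def haar_level(a):
    """One orthonormal 2D Haar step: returns the coarse approximation and
    the pooled detail coefficients (horizontal, vertical, diagonal)."""
    a00, a01 = a[0::2, 0::2], a[0::2, 1::2]
    a10, a11 = a[1::2, 0::2], a[1::2, 1::2]
    approx = (a00 + a01 + a10 + a11) / 2.0
    dh = (a00 + a01 - a10 - a11) / 2.0
    dv = (a00 - a01 + a10 - a11) / 2.0
    dd = (a00 - a01 - a10 + a11) / 2.0
    return approx, np.concatenate([dh.ravel(), dv.ravel(), dd.ravel()])

def wavelet_entropy(field, levels):
    """Shannon entropy (bits) of the relative mean detail energies across
    resolution levels: maximal (log2 of the level count) when energy is
    spread evenly over scales, as for white noise; lower when energy
    concentrates at a few scales."""
    a = field.astype(float)
    energies = []
    for _ in range(levels):
        a, d = haar_level(a)
        energies.append(np.mean(d * d))
    p = np.array(energies)
    p = p / p.sum()
    return float(-np.sum(p * np.log2(p)))
```

The Haar step is orthonormal, so the total energy of the field is preserved across the decomposition, which is what makes the per-level energies a meaningful spectrum.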
Maximum Entropy Production Principle for Stock Returns
In our previous studies we have investigated the structural complexity of
time series describing stock returns on New York's and Warsaw's stock
exchanges, by employing two estimators of Shannon's entropy rate based on
Lempel-Ziv and Context Tree Weighting algorithms, which were originally used
for data compression. Such structural complexity of the time series describing
logarithmic stock returns can be used as a measure of the inherent (model-free)
predictability of the underlying price formation processes, testing the
Efficient-Market Hypothesis in practice. We have also correlated the estimated
predictability with the profitability of standard trading algorithms, and found
that these do not use the structure inherent in the stock returns to any
significant degree. To find a way to use the structural complexity of the stock returns for the purpose of prediction, we propose the Maximum Entropy Production Principle as applied to stock returns, and test it on the two markets mentioned, examining whether it is possible to enhance the prediction of stock returns based on their structural complexity and the proposed principle. Comment: 14 pages, 5 figures
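The Lempel-Ziv entropy-rate estimator mentioned above can be sketched in a few lines: parse the symbol stream into distinct LZ78 phrases and use the fact that c phrases over n symbols give an estimate of roughly c*log2(c)/n bits per symbol. A sketch (illustrative names; the paper's Context Tree Weighting estimator is not shown):

```python
import math
import random

def lz78_entropy_rate(seq):
    """LZ78-based entropy-rate estimate in bits/symbol: incrementally
    parse seq into distinct phrases; with c phrases over n symbols the
    estimate is c * log2(c) / n (converging slowly from above)."""
    phrases = {}
    w = ""
    for ch in seq:
        w += ch
        if w not in phrases:       # new phrase: record it and restart
            phrases[w] = len(phrases)
            w = ""
    c = len(phrases) + (1 if w else 0)  # count any unfinished phrase
    return c * math.log2(c) / len(seq)
```

Applied to, say, the sign sequence of returns, a highly structured (e.g. periodic) stream yields a low estimate, while a fair coin-flip stream yields an estimate near 1 bit/symbol, which is the model-free predictability reading used in the abstract.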