A comparative study for estimating the parameters of the second order moving average process
A moving average process represents a time series as a finite linear combination of uncorrelated random variables. Our main interest is to compare a classical estimation method, namely Exact Maximum Likelihood Estimation (EMLE), with the Generalized Maximum Entropy (GME) approach for estimating the parameters of second order moving average processes. To apply EMLE, we derive the probability density function of the series to obtain the exact likelihood function; differentiating this function with respect to the parameters yields the exact maximum likelihood estimates. The idea of GME, on the other hand, is to write the unknown parameters and error terms as expected values of proper probability distributions defined over chosen supports. We carry out a simulation study to compare the two estimation techniques.
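As a concrete illustration of the likelihood side of this comparison, the sketch below simulates an MA(2) series and maximizes a Gaussian likelihood numerically. Note one simplification relative to the abstract: it uses the conditional likelihood (presample innovations set to zero) rather than the exact likelihood derived in the paper, and the parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
theta1, theta2 = 0.6, 0.3                 # illustrative MA(2) parameters
n = 2000
e = rng.standard_normal(n + 2)
# MA(2): y_t = e_t + theta1*e_{t-1} + theta2*e_{t-2}
y = e[2:] + theta1 * e[1:-1] + theta2 * e[:-2]

def neg_cond_loglik(theta, y):
    # Recover innovations recursively, conditional on e_0 = e_{-1} = 0,
    # then evaluate the concentrated Gaussian log-likelihood. This is the
    # conditional likelihood, a common simplification of exact ML.
    t1, t2 = theta
    eps = np.zeros(len(y) + 2)
    for t in range(len(y)):
        eps[t + 2] = y[t] - t1 * eps[t + 1] - t2 * eps[t]
    rss = np.sum(eps[2:] ** 2)
    return 0.5 * len(y) * (np.log(rss / len(y)) + 1.0)

fit = minimize(neg_cond_loglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
print(fit.x)  # should be close to (0.6, 0.3) for a sample this long
```

For a long sample the conditional and exact estimates are close; the exact likelihood matters mainly for short series or parameters near the invertibility boundary.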
Estimation of Kumaraswamy Distribution Parameters Using the Principle of Maximum Entropy
This paper proposes using the maximum entropy approach to estimate the parameters of the Kumaraswamy distribution subject to moment constraints. Kumaraswamy [7] introduced this doubly bounded probability density function, originally to model hydrological phenomena, noting that it applies to bounded natural phenomena whose values lie between two limits. The distribution shares several properties with the beta distribution and has the extra advantage of possessing a closed-form distribution function, but it remained unknown to most statisticians until Jones [6] developed it as a beta-type distribution with some tractability advantages, in particular a fairly simple quantile function and an explicit formula for the L-moments. We use the principle of maximum entropy to propose new estimators for the Kumaraswamy parameters and compare them with maximum likelihood and Bayesian estimation methods. A simulation study investigates the performance of the estimators in terms of their mean square errors and their efficiency.
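The tractability advantages mentioned above are easy to see in code: the Kumaraswamy CDF and quantile are both closed-form, so sampling and maximum likelihood fitting (shown here as the baseline the paper compares against, not its MaxEnt estimator) need only a few lines. The parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def kw_pdf(x, a, b):
    # Kumaraswamy pdf on (0, 1): f(x) = a b x^(a-1) (1 - x^a)^(b-1)
    return a * b * x ** (a - 1) * (1 - x ** a) ** (b - 1)

def kw_quantile(u, a, b):
    # Closed-form quantile, inverted from F(x) = 1 - (1 - x^a)^b
    return (1 - (1 - u) ** (1 / b)) ** (1 / a)

rng = np.random.default_rng(1)
a_true, b_true = 2.0, 5.0
x = kw_quantile(rng.uniform(size=5000), a_true, b_true)  # inverse-CDF sampling

def nll(p):
    a, b = np.exp(p)  # log-parameterisation keeps a, b > 0
    return -np.sum(np.log(kw_pdf(x, a, b)))

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
print(np.exp(fit.x))  # ML estimates, close to (2.0, 5.0)
```

The closed-form quantile is exactly what makes simulation studies of competing estimators cheap for this distribution.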
Similarity based smoothing in language modeling
In this paper, we improve our previously proposed Similarity Based Smoothing (SBS) algorithm. The idea of SBS is to map words or parts of sentences to a Euclidean space and approximate the language model in that space. The bottleneck of the original algorithm was training a regularized logistic regression model, which could not cope with real-world data. We replace the logistic regression with regularized maximum entropy estimation and a Gaussian mixture approach to model the language in the Euclidean space, showing other ways to use the main idea of SBS. We show that the regularized maximum entropy model is flexible enough to handle conditional probability density estimation, enabling parallel computation with significantly fewer iteration steps. The experimental results demonstrate the success of our method: we achieve a 14% improvement on a real-world corpus.
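A toy sketch of the core idea, with synthetic data: words are points in a Euclidean space (random embeddings stand in for the learned map), and a regularized maximum entropy (softmax) model estimates p(next word | context). Vocabulary size, embedding dimension, and the data-generating rule are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
V, d, n = 6, 3, 400                      # vocab size, embedding dim, samples
E = rng.standard_normal((V, d))          # assumed word embeddings
ctx = rng.integers(0, V, size=n)         # context word ids
nxt = (ctx + rng.integers(0, 2, size=n)) % V  # synthetic next-word targets

W = np.zeros((d, V))                     # maxent model weights
lam, lr = 1e-2, 0.5                      # L2 regularization strength, step size
onehot = np.eye(V)[nxt]
for _ in range(300):
    logits = E[ctx] @ W
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)    # softmax: the maxent conditional model
    W -= lr * (E[ctx].T @ (p - onehot) / n + lam * W)

loss = -np.mean(np.log(p[np.arange(n), nxt]))
print(loss)  # well below the uniform baseline log(V) ≈ 1.79
```

Because the model conditions on a continuous embedding rather than discrete word identities, similar words automatically share probability mass, which is the smoothing effect SBS exploits.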
Bayesian entropy estimation applied to non-gaussian robust image segmentation
We introduce a new approach for robust image segmentation that combines two strategies within a Bayesian framework. The first is to use a Markov random field (MRF), which allows us to introduce prior information in order to preserve image edges. The second follows from the fact that the probability density function (pdf) of the likelihood function is non-Gaussian or unknown, so it must be approximated by an estimated version obtained via classical non-parametric (kernel) density estimation. This leads us to a new maximum a posteriori (MAP) estimator based on jointly minimizing the entropy of the estimated pdf of the likelihood function and the MRF energy, named the MAP entropy estimator (MAPEE). Experiments on different kinds of images degraded with impulsive (salt-and-pepper) noise give very satisfactory and promising segmentation results.
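The entropy term at the heart of this estimator can be sketched in isolation: estimate the residual pdf with a kernel density estimator and evaluate its differential entropy by a plug-in average. Synthetic Laplace residuals stand in here for the non-Gaussian image likelihood; the MRF part is omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
residuals = rng.laplace(scale=1.0, size=2000)  # non-Gaussian stand-in data

kde = gaussian_kde(residuals)                  # classical kernel pdf estimate
# Plug-in (resubstitution) differential entropy of the estimated pdf
entropy = -np.mean(np.log(kde(residuals)))
print(entropy)  # roughly the Laplace entropy 1 + ln 2 ≈ 1.69
```

Minimizing such an entropy over segmentation labels concentrates the residual distribution, which is why it serves as a robust data-fidelity term when Gaussian assumptions fail.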
New approach of entropy estimation for robust image segmentation
In this work we introduce a new approach for robust image segmentation. The idea is to combine two strategies within a Bayesian framework. The first is to use a Markov random field (MRF), which allows us to introduce prior information in order to preserve the edges in the image. The second follows from the fact that the probability density function (pdf) of the likelihood function is non-Gaussian or unknown, so it must be approximated by an estimated version obtained via classical non-parametric (kernel) density estimation. Together, these two strategies lead us to a new maximum a posteriori (MAP) estimator based on jointly minimizing the entropy of the estimated pdf of the likelihood function and the MRF energy, named the MAP entropy estimator (MAPEE). Experiments on different kinds of images degraded with impulsive noise give very satisfactory and promising segmentation results.
Wavelet Domain Image Separation
In this paper, we consider the problem of blind signal and image separation
using a sparse representation of the images in the wavelet domain. We consider
the problem in a Bayesian estimation framework using the fact that the
distribution of the wavelet coefficients of real world images can naturally be
modeled by an exponential power probability density function. The Bayesian
approach, which has been used with success in blind source separation, also
allows us to include any prior information we may have on the mixing matrix
elements as well as on the hyperparameters (parameters of the prior laws of
the noise and the sources). We consider two cases: first, the case where the
wavelet coefficients are assumed to be i.i.d. and second the case where we
model the correlation between the coefficients of two adjacent scales by a
first order Markov chain. This paper reports only on the first case; results
for the second case will be reported in the near future. The estimation
computations are done via a Markov Chain Monte Carlo (MCMC) procedure.
Simulations show the performance of the proposed method.
Keywords: blind source separation, wavelets, Bayesian estimation, MCMC,
Metropolis-Hastings algorithm.
Comment: Presented at MaxEnt2002, the 22nd International Workshop on Bayesian
and Maximum Entropy methods (Aug. 3-9, 2002, Moscow, Idaho, USA). To appear
in Proceedings of the American Institute of Physics.
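The MCMC machinery referred to above reduces, at each step, to a Metropolis-Hastings accept/reject move. The sketch below shows that move on a toy one-dimensional target (a standard normal), not on the actual posterior over mixing matrix and hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_target(x):
    # Unnormalized log-density of N(0, 1), standing in for the posterior
    return -0.5 * x ** 2

x, chain = 0.0, []
for _ in range(20000):
    prop = x + rng.normal(scale=1.0)   # symmetric random-walk proposal
    # Metropolis acceptance rule (proposal symmetry cancels in the ratio)
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    chain.append(x)

samples = np.array(chain[5000:])       # discard burn-in
print(samples.mean(), samples.std())   # approximately 0 and 1
```

In the blind separation setting the same loop runs over blocks of variables (sources, mixing matrix, hyperparameters), with each block's conditional posterior playing the role of the target.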
Characteristic lengths and maximum entropy estimation from probe signals in the ellipsoidal bubble regime
The bubble size, surface and volume distributions in two and three phase flows are essential to determine energy and
mass transfer processes. Traditional approaches commonly use a conditional probability density function of chord-lengths
to calculate the bubble size distribution when the bubble size, shape and velocity are known. However, the
approach used in this paper obtains the above distributions from statistical relations, requiring only the moments inferred
from the measurements given by a sampling probe. Using image analysis of bubbles injected in a water tank, and placing
an ideal probe on the image, a sample of bubble diameter, shape factor and velocity angle are obtained. The samples of the
bubble chord-length are synthetically generated from these variables. Thus, we propose a semi-parametric approach based
on maximum entropy (MaxEnt) distribution estimation subject to a number of moment constraints, avoiding the use
of the complex backward transformation. The method therefore allows us to obtain the distributions in closed form. The
probability density functions of the most important length scales (D, D20, D30, D32), obtained by applying the semi-parametric
approach proposed here in the ellipsoidal bubble regime, are compared with experimental measurements.
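The MaxEnt step described above can be sketched numerically: on a bounded support, the maximum entropy density under moment constraints has the exponential-family form p(x) ∝ exp(λ₁x + λ₂x²), and the multipliers are found by minimizing the convex dual (log-partition minus target moments). The target moments below are assumed values, not moments inferred from real probe signals.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 400)        # bounded support grid
dx = x[1] - x[0]
target = np.array([0.4, 0.2])         # assumed E[X] and E[X^2] constraints
phi = np.vstack([x, x ** 2])          # moment functions on the grid

def dual(lam):
    # Dual objective: log-partition minus lam . target (convex in lam)
    logZ = np.log(np.sum(np.exp(lam @ phi)) * dx)
    return logZ - lam @ target

fit = minimize(dual, x0=np.zeros(2), method="BFGS")
p = np.exp(fit.x @ phi)
p /= np.sum(p) * dx                   # normalized MaxEnt density on the grid
print(np.sum(x * p) * dx)             # recovers the first target moment, ≈ 0.4
```

Because the solution is an explicit exponential of the moment functions, the resulting distribution is available in closed form, which is the tractability advantage the abstract highlights.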