2,653 research outputs found

    Universal Coding on Infinite Alphabets: Exponentially Decreasing Envelopes

    Full text link
    This paper deals with the problem of universal lossless coding on a countably infinite alphabet. It focuses on classes of sources defined by an envelope condition on the marginal distribution, namely exponentially decreasing envelope classes with exponent $\alpha$. The minimax redundancy of exponentially decreasing envelope classes is proved to be equivalent to $\frac{1}{4\alpha\log e}\log^2 n$. A coding strategy is then proposed, with a Bayes redundancy equivalent to the maximin redundancy. Finally, an adaptive algorithm is provided, whose redundancy is equivalent to the minimax redundancy.
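    Written out as a display, the rate quoted above reads as follows (the envelope set $\Lambda_{C,\alpha}$ and the constant $C$ are notational assumptions on my part; only the $\alpha$-exponential decay condition and the equivalence itself are taken from the abstract):
    \[
      \Lambda_{C,\alpha} = \bigl\{\, P : P(k) \le C\, e^{-\alpha k} \ \text{for all } k \ge 1 \,\bigr\},
      \qquad
      R^{+}\!\bigl(\Lambda_{C,\alpha}^{\otimes n}\bigr) \sim \frac{1}{4\alpha \log e}\,\log^{2} n
      \quad (n \to \infty),
    \]
    where $R^{+}$ denotes the minimax redundancy over blocks of length $n$.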

    Fast rates for noisy clustering

    Get PDF
    The effect of errors in variables in empirical minimization is investigated. Given a loss $\ell$ and a set of decision rules $\mathcal{G}$, we prove a general upper bound for empirical minimization based on a deconvolution kernel and a noisy sample $Z_i = X_i + \epsilon_i$, $i = 1, \ldots, n$. We apply this general upper bound to give the rate of convergence of the expected excess risk in noisy clustering. A recent bound from \citet{levrard} proves that this rate is $\mathcal{O}(1/n)$ in the direct case, under Pollard's regularity assumptions. Here the effect of noisy measurements gives a rate of the form $\mathcal{O}(1/n^{\frac{\gamma}{\gamma+2\beta}})$, where $\gamma$ is the Hölder regularity of the density of $X$ whereas $\beta$ is the degree of ill-posedness.
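    Side by side, the two rates quoted in the abstract read as below (the excess-risk notation $\hat g_n$, $g^{\star}$ is my assumption; the rates themselves are the ones stated above):
    \[
      \mathbb{E}\bigl[\ell(\hat g_n) - \ell(g^{\star})\bigr] = \mathcal{O}\!\left(\frac{1}{n}\right)
      \ \text{(direct observations $X_i$)},
      \qquad
      \mathbb{E}\bigl[\ell(\hat g_n) - \ell(g^{\star})\bigr] = \mathcal{O}\!\left(n^{-\frac{\gamma}{\gamma+2\beta}}\right)
      \ \text{(noisy observations $Z_i = X_i + \epsilon_i$)}.
    \]
    For instance, with Hölder regularity $\gamma = 2$ and degree of ill-posedness $\beta = 1$, the noisy rate becomes $n^{-1/2}$, strictly slower than the $1/n$ rate of the direct case.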

    A Bernstein-Von Mises Theorem for discrete probability distributions

    Full text link
    We investigate the asymptotic normality of the posterior distribution in the discrete setting, when the model dimension increases with the sample size. We consider a probability mass function $\theta_0$ on $\mathbb{N}\setminus\{0\}$ and a sequence of truncation levels $(k_n)_n$ satisfying $k_n^3 \leq n \inf_{i\leq k_n}\theta_0(i)$. Let $\hat{\theta}$ denote the maximum likelihood estimate of $(\theta_0(i))_{i\leq k_n}$ and let $\Delta_n(\theta_0)$ denote the $k_n$-dimensional vector whose $i$-th coordinate is defined by $\sqrt{n}(\hat{\theta}_n(i)-\theta_0(i))$ for $1\leq i\leq k_n$. We check that, under mild conditions on $\theta_0$ and on the sequence of prior probabilities on the $k_n$-dimensional simplices, after centering and rescaling, the variation distance between the posterior distribution recentered around $\hat{\theta}_n$ and rescaled by $\sqrt{n}$, and the $k_n$-dimensional Gaussian distribution $\mathcal{N}(\Delta_n(\theta_0), I^{-1}(\theta_0))$, converges in probability to $0$. This theorem can be used to prove the asymptotic normality of Bayesian estimators of Shannon and Rényi entropies. The proofs are based on concentration inequalities for centered and non-centered chi-square (Pearson) statistics. The latter make it possible to establish posterior concentration rates with respect to the Fisher distance rather than the Hellinger distance, as is commonplace in nonparametric Bayesian statistics.
    Comment: Published at http://dx.doi.org/10.1214/08-EJS262 in the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org)
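    A minimal numerical sketch of the Bernstein-von Mises phenomenon in the discrete setting, not taken from the paper: it uses a fixed small truncation level, a uniform Dirichlet prior, and illustrative parameter values, all of which are my own assumptions; it only checks that the posterior covariance of $\sqrt{n}(\theta - \hat{\theta})$ matches the inverse Fisher information of the multinomial model.

```python
import numpy as np

# Hypothetical illustration of the discrete Bernstein-von Mises phenomenon:
# for a multinomial model with a Dirichlet prior, posterior draws of
# sqrt(n) * (theta - theta_hat) should look approximately Gaussian with
# covariance I^{-1}(theta_0) when n is large relative to the dimension.

rng = np.random.default_rng(0)

k = 5                                  # truncation level (small, for illustration only)
theta0 = np.ones(k) / k                # true pmf on {1, ..., k}
n = 100_000                            # sample size

counts = rng.multinomial(n, theta0)
theta_hat = counts / n                 # maximum likelihood estimate

# Uniform Dirichlet(1, ..., 1) prior; the posterior is Dirichlet(1 + counts).
posterior_draws = rng.dirichlet(1.0 + counts, size=20_000)
centered = np.sqrt(n) * (posterior_draws - theta_hat)

# Empirical posterior covariance of the first k-1 (free) coordinates versus the
# inverse Fisher information of the multinomial: diag(theta0) - theta0 theta0^T.
emp_cov = np.cov(centered[:, :-1], rowvar=False)
fisher_inv = np.diag(theta0[:-1]) - np.outer(theta0[:-1], theta0[:-1])
print(np.max(np.abs(emp_cov - fisher_inv)))   # should be small for large n
```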

    Incremental Learning of Nonparametric Bayesian Mixture Models

    Get PDF
    Clustering is a fundamental task in many vision applications. To date, most clustering algorithms work in a batch setting, where training examples must be gathered in a large group before learning can begin. Here we explore incremental clustering, in which data can arrive continuously. We present a novel incremental model-based clustering algorithm based on nonparametric Bayesian methods, which we call Memory Bounded Variational Dirichlet Process (MB-VDP). The number of clusters is determined flexibly by the data, and the approach can be used to automatically discover object categories. The computational cost of producing model updates is bounded and does not grow with the amount of data processed. The technique is well suited to very large datasets, and we show that our approach outperforms existing online alternatives for learning nonparametric Bayesian mixture models.
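    A hypothetical sketch of the general idea of bounded-memory incremental clustering. This is not the MB-VDP algorithm described above; it is a much simpler DP-means-style online rule, shown only to illustrate processing points one at a time with a per-update cost that does not grow with the amount of data already seen.

```python
import numpy as np

def incremental_cluster(stream, new_cluster_penalty=4.0):
    """Assign streaming points to clusters, opening a new cluster whenever a
    point is farther (in squared distance) than `new_cluster_penalty` from
    every existing centroid. Memory use scales with the number of clusters,
    not with the number of points processed."""
    centroids = []   # running centroid of each cluster
    counts = []      # number of points absorbed by each cluster
    labels = []
    for x in stream:
        x = np.asarray(x, dtype=float)
        if centroids:
            dists = [np.sum((x - c) ** 2) for c in centroids]
            j = int(np.argmin(dists))
        if not centroids or dists[j] > new_cluster_penalty:
            centroids.append(x.copy())           # open a new cluster at this point
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            counts[j] += 1                        # online mean update for cluster j
            centroids[j] += (x - centroids[j]) / counts[j]
            labels.append(j)
    return centroids, labels

# Example: two well-separated Gaussian blobs arriving as a stream.
rng = np.random.default_rng(0)
stream = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(5, 0.3, (100, 2))])
centroids, labels = incremental_cluster(stream)
print(len(centroids))   # expected: 2 clusters for this toy stream
```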