
    Spatially homogeneous Maxwellian molecules in a neighborhood of the equilibrium

    This note deals with the long-time behavior of the solution to the spatially homogeneous Boltzmann equation for Maxwellian molecules, when the initial datum belongs to a suitable neighborhood of the Maxwellian equilibrium. In particular, it contains a quantification of the rate of exponential convergence, obtained by simple arguments.
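    In schematic form (the symbols here are ours, not the note's), a quantitative statement of this kind reads: if the initial datum $f_0$ lies in a suitable neighborhood of the Maxwellian equilibrium $M$, then
    $$\|f(\cdot,t) - M\| \le C\, e^{-\lambda t}, \qquad t \ge 0,$$
    for a suitable norm, a constant $C$ depending on $f_0$, and an explicitly quantified rate $\lambda > 0$.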

    The role of the central limit theorem in discovering sharp rates of convergence to equilibrium for the solution of the Kac equation

    In Dolera, Gabetta and Regazzini [Ann. Appl. Probab. 19 (2009) 186-201] it is proved that the total variation distance between the solution $f(\cdot,t)$ of Kac's equation and the Gaussian density with zero mean and variance $\sigma^2$ has an upper bound which goes to zero with an exponential rate equal to $-1/4$ as $t\to+\infty$. In the present paper, we determine a lower bound which decreases exponentially to zero with this same rate, provided that a suitable symmetrized form of $f_0$ has nonzero fourth cumulant $\kappa_4$. Moreover, we show that upper bounds like $\bar{C}_{\delta}\,e^{-(1/4)t}\rho_{\delta}(t)$ are valid for some $\rho_{\delta}$ vanishing at infinity when $\int_{\mathbb{R}}|v|^{4+\delta}f_0(v)\,dv<+\infty$ for some $\delta$ in $[0,2[$ and $\kappa_4=0$. Generalizations of this statement are presented, together with some remarks about non-Gaussian initial conditions which yield the insuperable barrier of $-1$ for the rate of convergence. Comment: Published at http://dx.doi.org/10.1214/09-AAP623 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
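    Schematically, and with illustrative constants $c, \bar{C} > 0$ that are not the paper's, the upper and lower bounds combine into a two-sided estimate of the form
    $$c\, e^{-t/4} \le d_{TV}\big(f(\cdot,t), g_{\sigma}\big) \le \bar{C}\, e^{-t/4}$$
    for large $t$, where $g_{\sigma}$ denotes the limiting Gaussian density, valid when the fourth cumulant $\kappa_4$ of the symmetrized initial datum is nonzero; when $\kappa_4 = 0$ and a moment of order $4+\delta$ is finite, the upper bound is further multiplied by the vanishing factor $\rho_{\delta}(t)$.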

    A Berry-Esseen theorem for Pitman's $\alpha$-diversity

    This paper is concerned with the study of the random variable $K_n$ denoting the number of distinct elements in a random sample $(X_1, \dots, X_n)$ of exchangeable random variables driven by the two-parameter Poisson-Dirichlet distribution $PD(\alpha,\theta)$. For $\alpha\in(0,1)$, Theorem 3.8 in \cite{Pit(06)} shows that $\frac{K_n}{n^{\alpha}}\stackrel{\text{a.s.}}{\longrightarrow} S_{\alpha,\theta}$ as $n\rightarrow+\infty$. Here, $S_{\alpha,\theta}$ is a random variable distributed according to the so-called scaled Mittag-Leffler distribution. Our main result states that $$\sup_{x \geq 0} \Big| \mathbb{P}\Big[\frac{K_n}{n^{\alpha}} \leq x \Big] - \mathbb{P}[S_{\alpha,\theta} \leq x] \Big| \leq \frac{C(\alpha, \theta)}{n^{\alpha}}$$ holds with an explicit constant $C(\alpha, \theta)$. The key ingredients of the proof are a novel probabilistic representation of $K_n$ as a compound distribution and new, refined versions of certain quantitative bounds for the Poisson approximation and the compound Poisson distribution.
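    Purely as an illustration of the limit $K_n/n^{\alpha} \to S_{\alpha,\theta}$ (and not of the paper's proof technique), one can simulate $K_n$ via the standard two-parameter Chinese restaurant process; the sketch below uses names and parameter values of our choosing, assuming $\alpha \in (0,1)$ and $\theta > 0$.

        import random

        def simulate_Kn(n, alpha, theta, seed=0):
            """K_n = number of occupied tables after n customers in the
            two-parameter (alpha, theta) Chinese restaurant process."""
            rng = random.Random(seed)
            counts = []                      # counts[j] = current size of table j
            for i in range(n):               # i customers are already seated
                k = len(counts)
                # open a new table with probability (theta + k*alpha) / (i + theta)
                if rng.random() * (i + theta) < theta + k * alpha:
                    counts.append(1)
                else:
                    # otherwise join table j with probability (counts[j] - alpha) / (i + theta)
                    u = rng.random() * (i - k * alpha)
                    acc = 0.0
                    for j, c in enumerate(counts):
                        acc += c - alpha
                        if u < acc:
                            counts[j] += 1
                            break
            return len(counts)

        n, alpha, theta = 10_000, 0.5, 1.0
        print(simulate_Kn(n, alpha, theta) / n ** alpha)   # roughly one draw of S_{alpha, theta}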

    Uniform rates of the Glivenko-Cantelli convergence and their use in approximating Bayesian inferences

    This paper deals with the problem of quantifying the approximation of a probability measure by means of an empirical (in a wide sense) random probability measure, depending on the first $n$ terms of a sequence of random elements. In Section 2, one studies the range of oscillation near zero of the Wasserstein distance of order $p$, $W^{(p)}$, between $\mathfrak{p}_0$ and $\hat{\mathfrak{p}}_n$, assuming that the $\tilde{\xi}_i$'s are i.i.d. with $\mathfrak{p}_0$ as common law. Theorem 2.3 deals with the case in which $\mathfrak{p}_0$ is fixed as a generic element of the space of all probability measures on $(\mathbb{R}^d, \mathscr{B}(\mathbb{R}^d))$ and $\hat{\mathfrak{p}}_n$ coincides with the empirical measure. In Theorem 2.4 (Theorem 2.5, respectively) $\mathfrak{p}_0$ is a $d$-dimensional Gaussian distribution (an element of a distinguished type of statistical exponential family, respectively) and $\hat{\mathfrak{p}}_n$ is another $d$-dimensional Gaussian distribution with estimated mean and covariance matrix (another element of the same family with an estimated parameter, respectively). These new results improve on allied recent works (see, e.g., [31]) since they also provide uniform bounds with respect to $n$, meaning that the finiteness of the $p$-th moment of the random variable $\sup_{n \geq 1} b_n W^{(p)}(\mathfrak{p}_0, \hat{\mathfrak{p}}_n)$ is proved for some suitable diverging sequence $b_n$ of positive numbers. In Section 3, under the hypothesis that the $\tilde{\xi}_i$'s are exchangeable, one studies the range of the random oscillation near zero of the Wasserstein distance between the conditional distribution (also called posterior) of the directing measure of the sequence, given $\tilde{\xi}_1, \dots, \tilde{\xi}_n$, and the point mass at $\hat{\mathfrak{p}}_n$. In a similar vein, a bound for the approximation of predictive distributions is given. Finally, Theorems 3.3 to 3.5 reconsider Theorems 2.3 to 2.5, respectively, from a Bayesian perspective.
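    As a rough numerical companion to the i.i.d. case of Section 2 (not the paper's estimators, rates or constants), the decay of the Wasserstein distance between the empirical measure and the true law can be observed in dimension one with SciPy; the large reference sample below is only a stand-in for $\mathfrak{p}_0$.

        import numpy as np
        from scipy.stats import wasserstein_distance

        rng = np.random.default_rng(0)
        reference = rng.standard_normal(200_000)     # large sample standing in for the true law p_0
        for n in (100, 1_000, 10_000):
            sample = rng.standard_normal(n)          # xi_1, ..., xi_n i.i.d. from p_0
            w1 = wasserstein_distance(sample, reference)
            print(n, w1, w1 * np.sqrt(n))            # in d = 1, sqrt(n) * W_1 stays roughly bounded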

    Frequentistic approximations to Bayesian prevision of exchangeable random elements

    Given a sequence $\xi_1, \xi_2, \dots$ of $X$-valued, exchangeable random elements, let $q(\xi^{(n)})$ and $p_m(\xi^{(n)})$ stand for posterior and predictive distribution, respectively, given $\xi^{(n)} = (\xi_1, \dots, \xi_n)$. We provide an upper bound for $\limsup b_n\, d_{[[X]]}(q(\xi^{(n)}), \delta_{\hat{e}_n})$ and $\limsup b_n\, d_{[X^m]}(p_m(\xi^{(n)}), \hat{e}_n^m)$, where $\hat{e}_n$ is the empirical measure, $b_n$ is a suitable sequence of positive numbers increasing to $+\infty$, and $d_{[[X]]}$ and $d_{[X^m]}$ denote distinguished weak probability distances on $[[X]]$ and $[X^m]$, respectively, with the proviso that $[S]$ denotes the space of all probability measures on $S$. A characteristic feature of our work is that the aforesaid bounds are established under the law of the $\xi_n$'s, unlike the more common literature on Bayesian consistency, where they are studied with respect to product measures $(p_0)^\infty$, as $p_0$ varies among the admissible determinations of a random probability measure.

    A Bayesian nonparametric approach to count-min sketch under power-law data streams

    The count-min sketch (CMS) is a randomized data structure that provides estimates of tokens’ frequencies in a large data stream, using a compressed representation of the data obtained by random hashing. In this paper, we rely on a recent Bayesian nonparametric (BNP) view of the CMS to develop a novel learning-augmented CMS under power-law data streams. We assume that tokens in the stream are drawn from an unknown discrete distribution, which is endowed with a normalized inverse Gaussian process (NIGP) prior. Then, using distributional properties of the NIGP, we compute the posterior distribution of a token’s frequency in the stream, given the hashed data, and in turn the corresponding BNP estimates. Applications to synthetic and real data show that our approach achieves remarkable performance in the estimation of low-frequency tokens. This is known to be a desirable feature in natural language processing, where the power-law behaviour of the data makes low-frequency tokens common.
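    For context, a bare-bones (non-Bayesian) count-min sketch is sketched below; the widths, hash construction and toy stream are our choices, and the paper's approach replaces the plain minimum rule at query time with BNP posterior estimates under the NIGP prior.

        import hashlib

        class CountMinSketch:
            def __init__(self, width=2000, depth=5):
                self.width, self.depth = width, depth
                self.table = [[0] * width for _ in range(depth)]

            def _hash(self, token, row):
                # one hash function per row, derived from SHA-1 with a row-specific salt
                h = hashlib.sha1(f"{row}:{token}".encode()).hexdigest()
                return int(h, 16) % self.width

            def update(self, token, count=1):
                for row in range(self.depth):
                    self.table[row][self._hash(token, row)] += count

            def query(self, token):
                # classic estimate: minimum over rows (never below the true frequency)
                return min(self.table[row][self._hash(token, row)] for row in range(self.depth))

        cms = CountMinSketch()
        stream = ["a"] * 1000 + ["b"] * 10 + ["c"]   # toy, roughly power-law-like stream
        for tok in stream:
            cms.update(tok)
        print(cms.query("a"), cms.query("c"))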

    De Finetti's theorem: rate of convergence in Kolmogorov distance

    This paper provides a quantitative version of de Finetti's law of large numbers. Given an infinite sequence $\{X_n\}_{n \geq 1}$ of exchangeable Bernoulli variables, it is well known that $\frac{1}{n} \sum_{i = 1}^n X_i \stackrel{a.s.}{\longrightarrow} Y$ for a suitable random variable $Y$ taking values in $[0,1]$. Here, we consider the rate of convergence in law of $\frac{1}{n} \sum_{i = 1}^n X_i$ towards $Y$, with respect to the Kolmogorov distance. After showing that any rate of the type $1/n^{\alpha}$ can be obtained for any $\alpha \in (0,1]$, we find a sufficient condition on the probability distribution of $Y$ for the achievement of the optimal rate of convergence, that is, $1/n$. Our main result improves on the existing literature: in particular, with respect to \cite{MPS}, we study a stronger metric, while, with respect to \cite{Mna}, we weaken the regularity hypothesis on the probability distribution of $Y$.
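    A minimal simulation of the setting, with an arbitrary Beta mixing law of our choosing: draw $Y$, generate Bernoulli variables that are i.i.d. given $Y$ (hence exchangeable), and watch the empirical mean approach $Y$. The paper's rates concern the Kolmogorov distance between the corresponding laws rather than this pathwise convergence.

        import numpy as np

        rng = np.random.default_rng(1)
        y = rng.beta(2.0, 5.0)                 # mixing variable Y in [0, 1]; Beta(2, 5) is an arbitrary choice
        for n in (10, 100, 1_000, 10_000):
            x = rng.binomial(1, y, size=n)     # exchangeable Bernoulli variables: i.i.d. given Y
            print(n, abs(x.mean() - y))        # |S_n/n - Y| shrinks as n grows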