
    Maximal uniform convergence rates in parametric estimation problems

    This paper considers parametric estimation problems with independent, identically distributed data from nonregular models. It focuses on rate efficiency, in the sense of the maximal possible convergence rates of stochastically bounded estimators, as an optimality criterion that is largely unexplored in parametric estimation. Under mild conditions, the Hellinger metric, defined on the space of parametric probability measures, is shown to be an essentially universally applicable tool for determining maximal possible convergence rates. These rates are shown to be attainable in general classes of parametric estimation problems.
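The rate phenomenon described above can be illustrated with a minimal numerical sketch (our own example, not from the paper): for the nonregular Uniform(0, θ) family the squared Hellinger distance grows linearly in the parameter shift, which corresponds to the fast 1/n rate, whereas for a smooth Gaussian location family it grows quadratically, giving the usual 1/√n rate.

```python
import numpy as np

# Squared Hellinger distance H^2 = 1 - integral of sqrt(p*q), closed forms.
def h2_uniform(theta, delta):
    # Uniform(0, theta) vs Uniform(0, theta + delta)
    return 1 - np.sqrt(theta / (theta + delta))

def h2_gauss(delta):
    # N(0, 1) vs N(delta, 1)
    return 1 - np.exp(-delta**2 / 8)

for d in [1e-1, 1e-2, 1e-3]:
    # uniform: H^2/delta -> 1/2 (linear); gaussian: H^2/delta^2 -> 1/8 (quadratic)
    print(d, h2_uniform(1.0, d) / d, h2_gauss(d) / d**2)
```

Setting H² to the order 1/n then yields a shift of order 1/n in the uniform case but 1/√n in the Gaussian case, matching the Hellinger-based rate criterion of the abstract.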

    A REMARK ON THE CENTRAL LIMIT THEOREM


    On choosing and bounding probability metrics

    When studying convergence of measures, an important issue is the choice of probability metric. In this review, we provide a summary and some new results concerning bounds among ten important probability metrics/distances that are used by statisticians and probabilists. We focus on these metrics because they are either well-known, commonly used, or admit practical bounding techniques. We summarize these relationships in a handy reference diagram, and also give examples to show how rates of convergence can depend on the metric chosen. Comment: To appear, International Statistical Review. Related work at http://www.math.hmc.edu/~su/papers.htm
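Two of the bounds surveyed in this kind of review can be spot-checked numerically. With the normalization d_H = sqrt(sum (sqrt(p_i) - sqrt(q_i))^2), total variation satisfies d_H²/2 ≤ d_TV ≤ d_H; the sketch below (our own construction, using random discrete distributions) verifies both inequalities:

```python
import numpy as np

rng = np.random.default_rng(0)

def tv(p, q):
    # total variation: sup_A |P(A) - Q(A)| = half the L1 distance
    return 0.5 * np.abs(p - q).sum()

def hellinger(p, q):
    # d_H = sqrt(sum (sqrt(p_i) - sqrt(q_i))^2)
    return np.sqrt(((np.sqrt(p) - np.sqrt(q))**2).sum())

# check d_H^2 / 2 <= d_TV <= d_H on random distributions over 10 points
for _ in range(1000):
    p, q = rng.dirichlet(np.ones(10)), rng.dirichlet(np.ones(10))
    h, t = hellinger(p, q), tv(p, q)
    assert h**2 / 2 <= t + 1e-12 and t <= h + 1e-12
```

Such numerical checks are a quick way to confirm which direction of a bound holds under a given normalization before relying on it in a rate argument.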

    Maximum likelihood drift estimation for a threshold diffusion

    We study the maximum likelihood estimator of the drift parameters of a stochastic differential equation, with both drift and diffusion coefficients constant on the positive and negative axes, yet discontinuous at zero. This threshold diffusion is called drifted Oscillating Brownian motion. For this continuously observed diffusion, the maximum likelihood estimator coincides with a quasi-likelihood estimator with constant diffusion term. We show that this estimator is the limit, as observations become dense in time, of the (quasi-)maximum likelihood estimator based on discrete observations. Over a long time horizon, the asymptotic behavior of the positive and negative occupation times governs that of the estimators. Unlike most known results in the literature, we do not restrict ourselves to the ergodic framework: indeed, depending on the signs of the drift, the process may be ergodic, transient or null recurrent. For each regime, we establish whether or not the estimators are consistent; if they are, we prove the convergence in long time of the properly rescaled difference of the estimators towards a normal or mixed normal distribution. These theoretical results are backed by numerical simulations.
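A quasi-likelihood drift estimator of this form can be sketched for discrete observations via an Euler scheme. The parameter names and values below are our own illustration of the ergodic regime (drift pointing toward zero on both sides), not the paper's notation: each side's drift is estimated by the sum of increments over the occupation time of that side.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler scheme for drifted Oscillating Brownian motion (illustrative values)
th_p, th_m = -1.0, 1.0      # drifts on the positive / negative side
sg_p, sg_m = 1.0, 2.0       # diffusion coefficients on each side
T, n = 500.0, 500_000
dt = T / n
x = np.empty(n + 1)
x[0] = 0.0
dw = rng.normal(0.0, np.sqrt(dt), n)
for k in range(n):
    pos = x[k] > 0
    x[k + 1] = x[k] + (th_p if pos else th_m) * dt + (sg_p if pos else sg_m) * dw[k]

# quasi-MLE of each drift: sum of increments over the occupation time
pos = x[:-1] > 0
th_p_hat = np.diff(x)[pos].sum() / (pos.sum() * dt)
th_m_hat = np.diff(x)[~pos].sum() / ((~pos).sum() * dt)
print(th_p_hat, th_m_hat)
```

In this ergodic configuration both occupation times grow linearly in T, so both estimates concentrate around the true drifts; in the transient regimes described in the abstract, one side's occupation time stays bounded and the corresponding estimator fails to be consistent.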

    Optimal quantum estimation in spin systems at criticality

    It is a general fact that the coupling constant of an interacting many-body Hamiltonian does not correspond to any observable, and one has to infer its value by an indirect measurement. For this purpose, quantum systems at criticality can be considered as a resource to improve the ultimate quantum limits to the precision of the estimation procedure. In this paper, we consider the one-dimensional quantum Ising model as a paradigmatic example of a many-body system exhibiting criticality, and derive the optimal quantum estimator of the coupling constant for varying size and temperature. We find the optimal external field, which maximizes the quantum Fisher information of the coupling constant, both for few spins and in the thermodynamic limit, and show that at the critical point a precision improvement of order L is achieved. We also show that the measurement of the total magnetization provides optimal estimation for couplings larger than a threshold value, which itself decreases with temperature. Comment: 8 pages, 4 figures
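For small chains, the quantum Fisher information of the coupling can be obtained numerically from the fidelity of neighboring ground states, F_Q ≈ 8(1 − |⟨ψ(J)|ψ(J+δ)⟩|)/δ². The sketch below is our own construction (exact diagonalization of a transverse-field Ising chain at zero temperature), not the paper's method; it shows the sensitivity to J being larger near the critical region h ≈ J than deep in the paramagnetic phase.

```python
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(L, ops):
    # tensor product placing the given single-site operators, identity elsewhere
    mats = [I2] * L
    for i, o in ops:
        mats[i] = o
    return reduce(np.kron, mats)

def ising(L, J, h):
    # H = -J sum_i sz_i sz_{i+1} - h sum_i sx_i, periodic boundary conditions
    H = np.zeros((2**L, 2**L))
    for i in range(L):
        H -= J * site_op(L, [(i, sz), ((i + 1) % L, sz)])
        H -= h * site_op(L, [(i, sx)])
    return H

def ground(H):
    w, v = np.linalg.eigh(H)   # eigenvalues ascending: column 0 is the ground state
    return v[:, 0]

def qfi_coupling(L, J, h, d=1e-4):
    # quantum Fisher information of J via ground-state fidelity
    f = abs(ground(ising(L, J, h)) @ ground(ising(L, J + d, h)))
    return 8 * (1 - f) / d**2

print(qfi_coupling(4, 1.0, 1.0), qfi_coupling(4, 1.0, 3.0))
```

Even for L = 4 spins, the fidelity-based QFI is visibly larger near h = J than at h = 3J, a finite-size echo of the critical enhancement discussed in the abstract.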

    Comparison between the Cramer-Rao and the mini-max approaches in quantum channel estimation

    From a unified viewpoint on quantum channel estimation, we compare the Cramer-Rao and the mini-max approaches; the latter gives the Bayesian bound in the group covariant model. For this purpose, we introduce the local asymptotic mini-max bound, whose maximum is shown to be equal to the asymptotic limit of the mini-max bound. It is shown that the local asymptotic mini-max bound is strictly larger than the Cramer-Rao bound in the phase estimation case, while both bounds coincide when the minimum mean square error decreases with the order O(1/n). We also derive a sufficient condition for the minimum mean square error to decrease with the order O(1/n). Comment: In this revision, some unclear parts are clarified
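The O(1/n) benchmark against which both bounds are compared can be illustrated with a purely classical analogue (our own toy example, not the quantum channel setting): the mean square error of the maximum likelihood estimator of a Bernoulli parameter decreases at the Cramer-Rao rate 1/(nF).

```python
import numpy as np

rng = np.random.default_rng(2)
p = 0.3                       # unknown parameter of a Bernoulli source
fisher = 1 / (p * (1 - p))    # Fisher information per sample

for n in [100, 1000, 10000]:
    # MLE (sample mean) over 20000 repeated experiments of n samples each
    est = rng.binomial(n, p, size=20000) / n
    nmse = n * ((est - p)**2).mean()
    print(n, nmse, 1 / fisher)  # n * MSE hovers at the Cramer-Rao value 1/F
```

Here n·MSE stabilizes at 1/F = p(1−p) = 0.21, the O(1/n) regime in which, per the abstract, the local asymptotic mini-max and Cramer-Rao bounds coincide; phase estimation is precisely a case where this regime fails and the bounds separate.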