    Information-Type Divergence When The Likelihood Ratios Are Bounded

    The so-called $\phi$-divergence is an important characteristic describing "dissimilarity" of two probability distributions. Many traditional measures of separation used in mathematical statistics and information theory, some of which are mentioned in the note, correspond to particular choices of this divergence. An upper bound on a $\phi$-divergence between two probability distributions is derived when the likelihood ratio is bounded. The usefulness of this sharp bound is illustrated by several examples of familiar $\phi$-divergences. An extension of this inequality to $\phi$-divergences between a finite number of probability distributions with pairwise bounded likelihood ratios is also given. 1. Information-type divergences. Let $\phi$ be a convex function defined on the positive half-line, and let $F$ and $G$ be two different probability distributions such that $F$ is absolutely continuous with respect to $G$. The so-called $\phi$-divergence between $F$ and $G$ is defined as $\phi(F \mid G) = \int \phi\left(\frac{dF}{dG}\right) dG = \ldots$
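    As a quick illustration of how particular choices of the convex function $\phi$ recover familiar divergences, here is a minimal Python sketch for discrete distributions; the function name phi_divergence and the example vectors are illustrative, not from the paper.

        import numpy as np

        def phi_divergence(f, g, phi):
            # Discrete phi-divergence: sum_x g(x) * phi(f(x) / g(x)).
            # Requires F absolutely continuous w.r.t. G (g > 0 where f > 0).
            f, g = np.asarray(f, float), np.asarray(g, float)
            return float(np.sum(g * phi(f / g)))

        # Particular choices of phi recover familiar divergences.
        kl   = lambda t: t * np.log(t)          # Kullback-Leibler
        tv   = lambda t: 0.5 * np.abs(t - 1.0)  # total variation
        chi2 = lambda t: (t - 1.0) ** 2         # Pearson chi-squared

        f = np.array([0.5, 0.3, 0.2])
        g = np.array([0.4, 0.4, 0.2])
        # Likelihood ratios f/g lie in [0.75, 1.25] here; the paper's
        # sharp bound caps any phi-divergence under such constraints.
        print(phi_divergence(f, g, kl), phi_divergence(f, g, tv))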

    The Rates Of Convergence Of Bayes Estimators In Change-Point Analysis

    In the asymptotic setting of the change-point estimation problem the limiting behavior of Bayes procedures for the zero-one loss function is studied. The limiting distribution of the difference between the Bayes estimator and the parameter is derived. An explicit formula for the limit of the minimum Bayes risk for the geometric prior distribution is obtained from Spitzer's formula, and the rates of convergence in these limiting relations are determined. Key words and phrases: Bayes risk, change-point problem, convergence rate, geometric distribution, maximum likelihood estimator, Spitzer's formula, zero-one loss function. 1. Asymptotic Behavior of the Bayes Estimator Under Zero-One Loss Function. Assume that the observed data are formed by the random subsample $(x_0, \ldots, x_\nu)$, which is observed first and comes from distribution $F$, and by $(x_{\nu+1}, \ldots, x_{n+1})$ from distribution $G$, $G \neq F$. In other terms, $x = (x_0, x_1, \ldots)$ ..
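    Under zero-one loss the Bayes estimator of the change point is the posterior mode, which can be sketched in Python as below; the geometric prior parameterization and all names are assumptions of this illustration, not the paper's notation.

        import numpy as np
        from scipy import stats

        def bayes_changepoint(x, f_logpdf, g_logpdf, p=0.5):
            # Posterior mode of the change point under zero-one loss,
            # with a geometric(p) prior on the change point nu.
            n = len(x)
            lf = np.cumsum(f_logpdf(x))              # log f(x_1..x_k)
            lg = np.cumsum(g_logpdf(x[::-1]))[::-1]  # log g(x_{k+1}..x_n)
            log_post = np.empty(n - 1)
            for nu in range(1, n):                   # change after index nu
                log_post[nu - 1] = lf[nu - 1] + lg[nu] + nu * np.log(1 - p)
            return int(np.argmax(log_post)) + 1

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0, 1, 60), rng.normal(1, 1, 40)])
        nu_hat = bayes_changepoint(x,
                                   stats.norm(0, 1).logpdf,
                                   stats.norm(1, 1).logpdf)
        print(nu_hat)  # should land near the true change point 60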

    Decision-Making With Augmented Action Spaces

    Abstract: In a multiple decision problem one has to choose, for an observation $x$, the "correct" distribution out of a number of different distributions. When $x$ is a random sample, it is known that the minimum Bayes risk decays at an exponential rate, which coincides with that of the minimax risk and is determined by an information-type divergence between these distributions. There are situations when it is desirable to allow new possible decisions. For example, if the data $x$ do not provide enough support for any of the models, one may want to allow a "no-decision" or "rejection" option. Another example of such a situation is the confidence estimation problem, where the "correct" decisions correspond to one-point sets and new non-standard actions are formed by subsets of the parameter space consisting of at least two elements. In the version of the multiple decision problem with an augmented action space, we derive the optimal exponential rate of the minimum Bayes risk and show that it coincides with the mentioned information-type divergence in the classical multiple decision problem. However, the component of the Bayes risk corresponding to the error occurring when the decision belongs to the standard action space may decrease at a faster exponential rate. In a binomial example the accuracy of two asymptotic formulas for the risks containing oscillating (diverging) factors is compared.
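    The "no-decision" option can be sketched as follows: pick the model with the largest posterior probability, but fall back to the rejection action when no model is supported strongly enough. This is a hedged illustration only; the threshold rule, names, and models below are assumptions, not the paper's construction.

        import numpy as np
        from scipy import stats

        def decide_with_rejection(x, models, priors, threshold=0.9):
            # Augmented action space: the usual decisions 0..k-1 plus a
            # "no-decision" action, returned here as None. The threshold
            # is an illustrative tuning constant, not from the paper.
            log_liks = np.array([np.sum(m.logpdf(x)) for m in models])
            log_post = log_liks + np.log(np.asarray(priors, float))
            post = np.exp(log_post - log_post.max())
            post /= post.sum()
            best = int(np.argmax(post))
            return best if post[best] >= threshold else None

        models = [stats.norm(0.0, 1.0), stats.norm(0.2, 1.0)]
        x = stats.norm(0.1, 1.0).rvs(size=20, random_state=1)
        print(decide_with_rejection(x, models, priors=[0.5, 0.5]))
        # Often None here: the sample supports neither model decisively.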

    Approximate Entropy for Testing Randomness

    In this paper a new concept of approximate entropy is modified and applied to the problem of testing a string of binary bits for randomness. This concept was introduced in a series of papers by S. Pincus and co-authors. The corresponding statistic is designed to measure the degree of randomness of observed sequences. It is based on incremental contrasts of empirical entropies computed from the frequencies of different patterns in the sequence. Sequences with large approximate entropy must have substantial fluctuation or irregularity. Alternatively, small values of this characteristic imply strong regularity, or lack of randomness, in a sequence. Pincus and Kalman (1997) evaluated approximate entropies for binary and decimal expansions of $e$, $\pi$, $\sqrt{2}$, and $\sqrt{3}$ with the surprising conclusion that the expansion of $\sqrt{3}$ demonstrated much more irregularity than that of $\pi$. Tractable small sample distributions are hardly available, and testing randomness is based, as a rule, on fairly long..
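    For concreteness, here is a short Python sketch of an approximate-entropy statistic for binary strings in the style of the NIST SP 800-22 test suite (overlapping m-bit patterns counted with wrap-around); the paper's modification of Pincus's ApEn may differ in its details.

        import numpy as np
        from math import log

        def phi_m(bits, m):
            # phi(m) = sum over observed m-bit patterns of p * ln(p),
            # counting overlapping windows with wrap-around.
            n = len(bits)
            ext = bits + bits[:m - 1]
            counts = {}
            for i in range(n):
                w = ext[i:i + m]
                counts[w] = counts.get(w, 0) + 1
            return sum((c / n) * log(c / n) for c in counts.values())

        def approximate_entropy(bits, m=2):
            # ApEn(m) = phi(m) - phi(m+1); near ln 2 for random bits,
            # near 0 for strongly regular ones.
            return phi_m(bits, m) - phi_m(bits, m + 1)

        rng = np.random.default_rng(0)
        random_bits = ''.join(map(str, rng.integers(0, 2, 1000)))
        print(approximate_entropy(random_bits))   # ~ ln 2 = 0.693
        print(approximate_entropy('01' * 500))    # ~ 0: lack of randomness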

    Asymptotic variance estimation in multivariate distributions

    A version of an asymptotic estimation problem of the unknown variance in a multivariate location-scale parameter family is studied under a general loss function. The asymptotic inadmissibility of the traditional estimator is established. In a particular case we derive an admissible improvement on this estimator. Key words and phrases: variance estimation, quadratic polynomial in normal means, generalized Bayes estimator, Brewster-Zidek estimator, admissibility.
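    The classical univariate instance of this inadmissibility phenomenon is Stein's (1964) variance estimator, sketched below for normal data; the paper's multivariate, general-loss setting is more delicate, so this is background illustration only.

        import numpy as np

        def stein_variance_estimator(x):
            # Stein (1964): the best equivariant estimator S/(n+1) of the
            # normal variance is inadmissible under squared error; taking
            # the minimum with (S + n*xbar^2)/(n+2) gives uniformly
            # smaller risk.
            x = np.asarray(x, float)
            n, xbar = x.size, x.mean()
            S = np.sum((x - xbar) ** 2)
            return min(S / (n + 1), (S + n * xbar ** 2) / (n + 2))

        rng = np.random.default_rng(2)
        x = rng.normal(0.1, 2.0, size=30)
        print(stein_variance_estimator(x), np.var(x, ddof=1))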

    Restricted likelihood representation and decision-theoretic aspects of meta-analysis


    Estimating common vector parameters in interlaboratory studies

    The primary goal of this work is to extend two estimation methods for random effects models to the multiparameter situation. These methods comprise the DerSimonian-Laird estimator, stemming from meta-analysis, and the Mandel-Paule algorithm widely used in interlaboratory studies. The maximum likelihood estimators are also discussed. Two methods of assessing the uncertainty of these estimators are given. Key words and phrases: collaborative study, consensus mean, DerSimonian-Laird estimator, Mandel-Paule algorithm, meta-analysis, random effects model, (restricted) maximum likelihood estimator.
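    The univariate DerSimonian-Laird step that the paper extends to vector parameters can be sketched as follows; the lab means and variances below are illustrative values, not data from the paper.

        import numpy as np

        def dersimonian_laird(y, s2):
            # y: lab/study means; s2: their reported within-lab variances.
            # Returns the random-effects consensus mean and the
            # moment-based between-lab variance tau^2.
            y, s2 = np.asarray(y, float), np.asarray(s2, float)
            w = 1.0 / s2                              # fixed-effects weights
            mu_fe = np.sum(w * y) / np.sum(w)
            Q = np.sum(w * (y - mu_fe) ** 2)          # Cochran's Q statistic
            k = len(y)
            tau2 = max(0.0, (Q - (k - 1)) /
                       (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
            w_re = 1.0 / (s2 + tau2)                  # random-effects weights
            return np.sum(w_re * y) / np.sum(w_re), tau2

        y = [10.1, 9.8, 10.6, 10.2]      # illustrative lab means
        s2 = [0.04, 0.09, 0.05, 0.02]    # illustrative within-lab variances
        print(dersimonian_laird(y, s2))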