
    Entropy and information in neural spike trains: Progress on the sampling problem

    The major problem in information-theoretic analysis of neural responses and other biological data is the reliable estimation of entropy-like quantities from small samples. We apply a recently introduced Bayesian entropy estimator to synthetic data inspired by experiments and to real experimental spike trains. The estimator performs admirably even very deep in the undersampled regime, where other techniques fail. This opens new possibilities for the information-theoretic analysis of experiments, and may be of general interest as an example of learning from limited data. Comment: 7 pages, 4 figures; referee-suggested changes, accepted version.
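    The estimator referred to here is, by all indications, an NSB-style estimator that mixes over Dirichlet priors. As a minimal sketch of its single-prior building block, assuming a symmetric Dirichlet(α) prior on the bin probabilities (the function name and the choice α = 0.5 are illustrative, not the paper's), the posterior-mean entropy has a closed form due to Wolpert and Wolf (1995):

```python
import numpy as np
from scipy.special import digamma

def dirichlet_entropy_posterior_mean(counts, alpha=1.0):
    """Posterior-mean entropy (in nats) of a discrete distribution,
    given observed counts and a symmetric Dirichlet(alpha) prior.
    Closed form: E[H | n] = psi(A+1) - sum_i (a_i/A) psi(a_i+1),
    with a_i = n_i + alpha and A = N + K*alpha."""
    counts = np.asarray(counts, dtype=float)
    K = counts.size            # alphabet size (number of bins)
    N = counts.sum()           # total number of samples
    a = counts + alpha         # posterior Dirichlet parameters
    A = N + K * alpha
    return digamma(A + 1.0) - np.sum((a / A) * digamma(a + 1.0))

# Undersampled example: 1000-symbol alphabet, only 100 samples.
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(1000))
samples = rng.choice(1000, size=100, p=p)
counts = np.bincount(samples, minlength=1000)

true_H = -np.sum(p * np.log(p))
freq = counts[counts > 0] / 100
naive_H = -np.sum(freq * np.log(freq))   # plug-in, biased downward
bayes_H = dirichlet_entropy_posterior_mean(counts, alpha=0.5)
print(f"true {true_H:.3f}  plug-in {naive_H:.3f}  Bayes {bayes_H:.3f}")
```

    In this badly undersampled regime the plug-in estimate is strongly biased downward, while the Bayesian posterior mean typically lands much closer to the true entropy, which is the phenomenon the abstract describes.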

    Nonparametric Bayesian estimation of a Hölder continuous diffusion coefficient

    We consider a nonparametric Bayesian approach to estimating the diffusion coefficient of a stochastic differential equation given discrete-time observations over a fixed time interval. As a prior on the diffusion coefficient, we employ a histogram-type prior with piecewise constant realisations on bins forming a partition of the time interval. Specifically, these constants are realisations of independent inverse Gamma distributed random variables. We justify our approach by deriving the rate at which the corresponding posterior distribution asymptotically concentrates around the data-generating diffusion coefficient. This posterior contraction rate turns out to be optimal for estimation of a Hölder-continuous diffusion coefficient with smoothness parameter $0 < \lambda \leq 1$. Our approach is straightforward to implement, as the posterior distributions turn out to be inverse Gamma again, and leads to good practical results in a wide range of simulation examples. Finally, we apply our method to exchange rate data sets.
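    Since increments of dX_t = s(t) dW_t over a step Δ are N(0, s²Δ) when s is constant on the corresponding bin, the inverse Gamma prior is conjugate and the posterior on each bin is available in closed form. A minimal sketch under these assumptions (the model, the equally spaced observations, and the hyperparameters a0, b0 are illustrative choices, as is the helper name):

```python
import numpy as np

def ig_posterior_per_bin(X, dt, n_bins, a0=2.0, b0=1.0):
    """Conjugate update for a piecewise-constant squared diffusion
    coefficient s^2(t) under independent InvGamma(a0, b0) priors.
    The m increments in bin j satisfy dX ~ N(0, s_j^2 * dt), so the
    posterior for s_j^2 is InvGamma(a0 + m/2, b0 + sum(dX^2)/(2*dt))."""
    dX = np.diff(X)
    post = []
    for d in np.array_split(dX, n_bins):   # partition increments into bins
        a = a0 + d.size / 2.0
        b = b0 + np.sum(d**2) / (2.0 * dt)
        post.append((a, b))                # posterior mean of s_j^2: b/(a-1)
    return post

# Simulate dX_t = s(t) dW_t with s = 1 on [0, 0.5) and s = 2 on [0.5, 1].
rng = np.random.default_rng(1)
n, dt = 2000, 1.0 / 2000
s = np.where(np.arange(n) < n // 2, 1.0, 2.0)
X = np.concatenate([[0.0], np.cumsum(s * np.sqrt(dt) * rng.standard_normal(n))])

for j, (a, b) in enumerate(ig_posterior_per_bin(X, dt, n_bins=4)):
    print(f"bin {j}: posterior mean of s^2 = {b / (a - 1):.3f}")
```

    The first two bins should recover a squared diffusion coefficient near 1 and the last two near 4, illustrating the "inverse Gamma again" conjugacy the abstract highlights.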

    J. K. Ghosh's contribution to statistics: A brief outline

    Professor Jayanta Kumar Ghosh has contributed massively to various areas of Statistics over the last five decades. Here, we survey some of his most important contributions. In roughly chronological order, we discuss his major results in the areas of sequential analysis, foundations, asymptotics, and Bayesian inference. It is seen that he progressed from thinking about data points, to thinking about data summarization, to the limiting cases of data summarization as they relate to parameter estimation, and then to more general aspects of modeling, including prior and model selection. Comment: Published at http://dx.doi.org/10.1214/074921708000000011 in the IMS Collections (http://www.imstat.org/publications/imscollections.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Coherent frequentism

    By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space, called frequentist posteriors, secure the coherence of subjective inference without any prior distribution. The closure of the set of expected losses corresponding to the dual frequentist posteriors constrains decisions without arbitrarily forcing optimization under all circumstances. This decision theory reduces to those that maximize expected utility when the pair of frequentist posteriors is induced by an exact or approximate confidence set estimator, or when an automatic reduction rule is applied to the pair. In such cases, the resulting frequentist posterior is coherent in the sense that, as a probability distribution of the parameter of interest, it satisfies the axioms of the decision-theoretic and logic-theoretic systems typically cited in support of the Bayesian posterior. Unlike the p-value, the confidence level of an interval hypothesis derived from such a measure is suitable as an estimator of the indicator of hypothesis truth, since under general conditions it converges in sample-space probability to 1 if the hypothesis is true and to 0 otherwise. Comment: The confidence-measure theory of inference and decision is explicitly extended to vector parameters of interest. The derivation of upper and lower confidence levels from valid and nonconservative set estimators is formalized.
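    To make the final claim concrete in the simplest setting, a normal mean with known variance: the standard z-interval estimator induces the confidence distribution N(x̄, σ²/n), and the confidence level of the interval hypothesis θ ∈ [a, b] is the mass that distribution assigns to [a, b]. A minimal sketch under these assumptions (function names illustrative):

```python
import numpy as np
from scipy.stats import norm

def confidence_level(xbar, sigma, n, a, b):
    """Confidence-posterior probability that the normal mean lies in
    [a, b], using the confidence distribution N(xbar, sigma^2/n)
    induced by the usual z confidence interval estimator."""
    se = sigma / np.sqrt(n)
    return norm.cdf((b - xbar) / se) - norm.cdf((a - xbar) / se)

# As n grows, the confidence level converges to the truth indicator.
rng = np.random.default_rng(2)
theta, sigma, a, b = 0.3, 1.0, 0.0, 1.0   # hypothesis [0, 1] is true
for n in (10, 100, 10_000):
    xbar = rng.normal(theta, sigma / np.sqrt(n))  # sampled estimate
    print(f"n={n:6d}  confidence level of [0, 1]: "
          f"{confidence_level(xbar, sigma, n, a, b):.4f}")
```

    Because θ = 0.3 lies inside [0, 1], the printed confidence levels approach 1 as n grows; for a false hypothesis they would approach 0, which is the convergence property the abstract contrasts with the p-value.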

    Semiparametric posterior limits

    We review the Bayesian theory of semiparametric inference following Bickel and Kleijn (2012) and Kleijn and Knapik (2013). After an overview of efficiency in parametric and semiparametric estimation problems, we consider the Bernstein-von Mises theorem (see, e.g., Le Cam and Yang (1990)) and generalize it to (LAN) regular and (LAE) irregular semiparametric estimation problems. We formulate a version of the semiparametric Bernstein-von Mises theorem that does not depend on least-favourable submodels, thus bypassing the most restrictive condition in the presentation of Bickel and Kleijn (2012). The results are applied to the (regular) estimation of the linear coefficient in partial linear regression (with a Gaussian nuisance prior) and of the kernel bandwidth in a model of normal location mixtures (with a Dirichlet nuisance prior), as well as the (irregular) estimation of the boundary of the support of a monotone family of densities (with a Gaussian nuisance prior). Comment: 47 pp., 1 figure, submitted for publication. arXiv admin note: substantial text overlap with arXiv:1007.017
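    For reference, the parametric Bernstein-von Mises theorem being generalized states that the posterior is asymptotically normal, centered at an efficient estimator, in total variation; a standard formulation (the notation here is the usual one, not taken verbatim from the paper) is:

```latex
% Parametric Bernstein--von Mises theorem (LAN, regular case): under
% smoothness and testability conditions, the full posterior is
% asymptotically normal in total variation,
\[
  \bigl\| \Pi\bigl(\theta \in \cdot \mid X_1, \dots, X_n\bigr)
        - N\bigl(\hat{\theta}_n,\; n^{-1} I_{\theta_0}^{-1}\bigr)
  \bigr\|_{TV} \;\xrightarrow{\;P_{\theta_0}\;}\; 0,
\]
% where \hat{\theta}_n is any best regular (efficient) estimator and
% I_{\theta_0} the Fisher information. The semiparametric version
% reviewed in the paper asserts the same limit for the marginal
% posterior of the parameter of interest, with I_{\theta_0} replaced
% by the efficient Fisher information \tilde{I}_{\theta_0}.
```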

    Asymptotic Redundancies for Universal Quantum Coding

    Clarke and Barron have recently shown that the Jeffreys invariant prior of Bayesian theory yields the common asymptotic (minimax and maximin) redundancy of universal data compression in a parametric setting. We seek a possible analogue of this result for two-level {\it quantum} systems. We restrict our considerations to prior probability distributions belonging to a certain one-parameter family, $q(u)$, $-\infty < u < 1$. Within this setting, we are able to compute exact redundancy formulas, for which we find the asymptotic limits. We compare our quantum asymptotic redundancy formulas to those derived by naively applying the classical counterparts of Clarke and Barron, and find certain common features. Our results are based on formulas we obtain for the eigenvalues and eigenvectors of the $2^n \times 2^n$ (Bayesian density) matrices $\zeta_n(u)$. These matrices are the weighted averages (with respect to $q(u)$) of all possible tensor products of $n$ identical $2 \times 2$ density matrices representing the two-level quantum systems. We propose a form of {\it universal} coding for the situation in which the density matrix describing an ensemble of quantum signal states is unknown. A sequence of $n$ signals would be projected onto the dominant eigenspaces of $\zeta_n(u)$.
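    The specific one-parameter family $q(u)$ is defined in the paper itself; as an illustrative stand-in only, the sketch below forms the Bayesian density matrix by Monte Carlo averaging of $n$-fold tensor powers over a uniform prior on the Bloch ball and then diagonalizes it, since the dominant eigenspaces of $\zeta_n$ are what the proposed universal coding projects onto. Function names and the choice of prior are hypothetical:

```python
import numpy as np

def random_density_matrix(rng):
    """Draw a 2x2 density matrix with Bloch vector uniform in the unit
    ball (an illustrative stand-in for the paper's family q(u))."""
    while True:                                 # rejection sampling
        r = rng.uniform(-1, 1, size=3)
        if r @ r <= 1.0:
            break
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

def zeta_n(n, n_samples=20_000, seed=3):
    """Monte Carlo estimate of the 2^n x 2^n Bayesian density matrix:
    the prior average of n-fold tensor powers rho \otimes ... \otimes rho."""
    rng = np.random.default_rng(seed)
    Z = np.zeros((2**n, 2**n), dtype=complex)
    for _ in range(n_samples):
        rho = random_density_matrix(rng)
        T = rho
        for _ in range(n - 1):
            T = np.kron(T, rho)
        Z += T
    return Z / n_samples

# Dominant eigenspaces of zeta_n define the universal projection.
Z = zeta_n(n=3)
eigvals = np.linalg.eigvalsh(Z)[::-1]           # descending order
print("eigenvalues of zeta_3:", np.round(eigvals, 4))
```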