852,261 research outputs found

    Efficient information theoretic inference for conditional moment restrictions

    The generalized method of moments estimator may be substantially biased in finite samples, especially when there are large numbers of unconditional moment conditions. This paper develops a class of first-order equivalent, semi-parametrically efficient estimators and tests for conditional moment restriction models, based on a local or kernel-weighted version of the Cressie-Read power divergence family of discrepancies. This approach is similar in spirit to the empirical likelihood methods of Kitamura, Tripathi and Ahn (2004) and Tripathi and Kitamura (2003). These efficient local methods avoid explicit estimation of the conditional Jacobian and variance matrices of the conditional moment restrictions, and provide empirical conditional probabilities for the observations.
    Keywords: Conditional Moment Restrictions, Local Cressie-Read Minimum Discrepancy, GMM, Semi-Parametric Efficiency
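    The abstract does not reproduce the discrepancy family itself; for orientation, a sketch of one common parameterization of the Cressie-Read power divergence between empirical weights π_i and the uniform weights 1/n (conventions vary across papers, so this may differ in detail from the paper's):

        \[
          \mathrm{CR}_\lambda(\pi) \;=\; \frac{1}{\lambda(\lambda+1)} \sum_{i=1}^{n} \pi_i \left[ (n\pi_i)^{\lambda} - 1 \right],
          \qquad \sum_{i=1}^{n} \pi_i = 1 .
        \]

    The limit λ → −1 recovers empirical likelihood and λ → 0 recovers exponential tilting, while λ = 1 gives the Euclidean (continuous-updating) case; the local version weights the moment constraints with a kernel in the conditioning variable before minimizing the discrepancy.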

    On the occurrence times of componentwise maxima and bias in likelihood inference for multivariate max-stable distributions

    Full likelihood-based inference for high-dimensional multivariate extreme value distributions, or max-stable processes, is feasible when incorporating occurrence times of the maxima; without this information, d-dimensional likelihood inference is usually precluded due to the large number of terms in the likelihood. However, some studies have noted bias when performing high-dimensional inference that incorporates such event information, particularly when dependence is weak. We elucidate this phenomenon, showing that for unbiased inference in moderate dimensions, the dimension d should be of a magnitude smaller than the square root of the number of vectors over which one takes the componentwise maximum. A bias reduction technique is suggested and illustrated on the extreme value logistic model.
    Comment: 7 pages
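    To make the occurrence-time idea concrete, here is a minimal sketch (illustrative only: it uses independent unit-Fréchet components rather than the paper's logistic model) that takes componentwise maxima over n vectors and records which observation attained each maximum; components sharing an occurrence time form the partition information that augments the likelihood.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 500, 5  # maxima over n vectors in dimension d; bias emerges once d is no longer << sqrt(n)
        X = 1.0 / rng.exponential(size=(n, d))  # iid unit-Frechet margins (independence is illustrative only)

        M = X.max(axis=0)       # componentwise maxima M_1, ..., M_d
        occ = X.argmax(axis=0)  # occurrence time (row index) of each componentwise maximum

        # Components attaining their maximum in the same row come from the same
        # underlying event; this partition of the d components is the extra
        # information that makes full likelihood inference tractable.
        partition = {}
        for j, t in enumerate(occ):
            partition.setdefault(int(t), []).append(j)
        print(M)
        print(sorted(partition.values()))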

    Bayesian inference with information content model check for Langevin equations

    The Bayesian data analysis framework has proven to be a systematic and effective method of parameter inference and model selection for stochastic processes. In this work we introduce an information content model check which may serve as a goodness-of-fit test, analogous to the chi-square procedure, to complement conventional Bayesian analysis. We demonstrate this extended Bayesian framework on a system of Langevin equations, where coordinate-dependent mobilities and measurement noise hinder the usual mean squared displacement approach.
    Comment: 10 pages, 7 figures, REVTeX, minor revision
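    For orientation, a minimal sketch of Bayesian parameter inference for the simplest such system: a single overdamped Langevin equation with constant diffusion coefficient D, zero drift, and no measurement noise, with a grid posterior standing in for the paper's full framework (the information content check itself is not reproduced here).

        import numpy as np

        rng = np.random.default_rng(1)
        dt, T, D_true = 0.01, 2000, 0.5
        # Overdamped Langevin dynamics with zero drift: dx = sqrt(2 D) dW
        x = np.cumsum(np.sqrt(2 * D_true * dt) * rng.standard_normal(T))

        dx = np.diff(x)
        D_grid = np.linspace(0.1, 1.0, 200)
        # Gaussian increment likelihood dx ~ N(0, 2 D dt), flat prior over the grid
        log_post = np.array([
            -0.5 * np.sum(dx**2 / (2 * D * dt) + np.log(2 * np.pi * 2 * D * dt))
            for D in D_grid
        ])
        log_post -= log_post.max()
        post = np.exp(log_post)
        post /= post.sum()
        print("posterior mean D:", np.sum(D_grid * post))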

    Statistical Mechanics of High-Dimensional Inference

    To model modern large-scale datasets, we need efficient algorithms to infer a set of P unknown model parameters from N noisy measurements. What are the fundamental limits on the accuracy of parameter inference, given finite signal-to-noise ratios, limited measurements, prior information, and computational tractability requirements? How can we combine prior information with measurements to achieve these limits? Classical statistics gives incisive answers to these questions as the measurement density α = N/P → ∞. However, these classical results are not relevant to modern high-dimensional inference problems, which instead occur at finite α. We formulate and analyze high-dimensional inference as a problem in the statistical physics of quenched disorder. Our analysis uncovers fundamental limits on the accuracy of inference in high dimensions, and reveals that widely cherished inference algorithms like maximum likelihood (ML) and maximum a posteriori (MAP) inference cannot achieve these limits. We further find optimal, computationally tractable algorithms that can achieve these limits. Intriguingly, in high dimensions, these optimal algorithms become computationally simpler than MAP and ML, while still outperforming them. For example, such optimal algorithms can lead to as much as a 20% reduction in the amount of data needed to achieve the same performance as MAP. Moreover, our analysis reveals simple relations between optimal high-dimensional inference and low-dimensional scalar Bayesian inference, insights into the nature of generalization and predictive power in high dimensions, information-theoretic limits on compressed sensing, phase transitions in quadratic inference, and connections to central mathematical objects in convex optimization theory and random matrix theory.
    Comment: See http://ganguli-gang.stanford.edu/pdf/HighDimInf.Supp.pdf for supplementary material
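    A minimal simulation of the finite-α setting follows: a sketch under assumed Gaussian signal and noise, with ML taken as ordinary least squares and MAP as ridge regression at the prior-matched penalty (the paper's optimal smoothed estimators are not reproduced).

        import numpy as np

        rng = np.random.default_rng(2)
        P, alpha, sigma = 200, 2.0, 1.0  # P parameters, measurement density alpha = N/P
        N = int(alpha * P)

        w = rng.standard_normal(P)                     # true parameters, prior w ~ N(0, I)
        X = rng.standard_normal((N, P)) / np.sqrt(P)   # 1/sqrt(P) scaling keeps per-measurement SNR finite
        y = X @ w + sigma * rng.standard_normal(N)

        # ML: least squares. MAP: ridge with lambda = sigma^2, the penalty implied
        # by the N(0, I) prior and N(0, sigma^2) noise.
        w_ml = np.linalg.lstsq(X, y, rcond=None)[0]
        w_map = np.linalg.solve(X.T @ X + sigma**2 * np.eye(P), X.T @ y)

        mse = lambda w_hat: np.mean((w_hat - w) ** 2)
        print("ML  per-parameter error:", mse(w_ml))
        print("MAP per-parameter error:", mse(w_map))

    At this α, MAP already beats ML by shrinking toward the prior; the paper's claim is that even MAP remains suboptimal at finite α.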