
    Higher Order Estimating Equations for High-dimensional Models

    We introduce a new method of estimation of parameters in semiparametric and nonparametric models. The method is based on estimating equations that are U-statistics in the observations. The U-statistics are based on higher order influence functions that extend ordinary linear influence functions of the parameter of interest, and represent higher derivatives of this parameter. For parameters for which the representation cannot be perfect, the method leads to a bias-variance trade-off, and results in estimators that converge at a slower than $\sqrt{n}$ rate. In a number of examples the resulting rate can be shown to be optimal. We are particularly interested in estimating parameters in models with a nuisance parameter of high dimension or low regularity, where the parameter of interest cannot be estimated at $\sqrt{n}$-rate, but we also consider efficient $\sqrt{n}$-estimation using novel nonlinear estimators. The general approach is applied in detail to the example of estimating a mean response when the response is not always observed.
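    As a minimal illustration of the U-statistic building block (not the paper's higher order influence function construction), the sketch below averages a kernel over all pairs of observations. The kernel shown, h(a, b) = (a - b)^2 / 2, is the classical choice whose second-order U-statistic is an unbiased estimator of the variance; any other symmetric kernel could be substituted.

```python
import itertools
import numpy as np

def u_statistic(x, kernel):
    """Second-order U-statistic: average of kernel(x_i, x_j) over all pairs i < j."""
    pairs = itertools.combinations(x, 2)
    return np.mean([kernel(a, b) for a, b in pairs])

# Classical example: with h(a, b) = (a - b)^2 / 2, the U-statistic is an
# unbiased estimator of the variance, since E[(X - Y)^2 / 2] = Var(X).
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=500)
print(u_statistic(x, lambda a, b: (a - b) ** 2 / 2))  # close to 4.0
```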

    Semiparametric theory

    In this paper we give a brief review of semiparametric theory, using as a running example the common problem of estimating an average causal effect. Semiparametric models allow at least part of the data-generating process to be unspecified and unrestricted, and can often yield robust estimators that nonetheless behave similarly to those based on parametric likelihood assumptions, e.g., fast rates of convergence to normal limiting distributions. We discuss the basics of semiparametric theory, focusing on influence functions. (Comment: arXiv admin note: text overlap with arXiv:1510.0474)
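    For the running example, the efficient influence function for the average causal effect yields the well-known augmented inverse-probability-weighted (AIPW) estimator. The sketch below is one standard instantiation; the logistic-regression propensity model and linear outcome regressions are illustrative choices, not prescribed by the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(X, A, Y):
    """AIPW estimate of E[Y(1)] - E[Y(0)] from covariates X, binary treatment A, outcome Y."""
    # Nuisance estimates: propensity score and outcome regressions (illustrative models).
    pi = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
    mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
    mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)
    # Average the (uncentered) efficient influence function.
    psi = (A * (Y - mu1) / pi + mu1) - ((1 - A) * (Y - mu0) / (1 - pi) + mu0)
    return psi.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
Y = X.sum(axis=1) + 2.0 * A + rng.normal(size=2000)  # true effect = 2
print(aipw_ate(X, A, Y))
```

    The estimator is doubly robust: it remains consistent if either the propensity model or the outcome regressions are correctly specified.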

    Revisiting maximum-a-posteriori estimation in log-concave models

    Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimisation. Despite its success and wide adoption, MAP estimation is not yet theoretically well understood. The prevalent view in the community is that MAP estimation is not proper Bayesian estimation in a decision-theoretic sense because it does not minimise a meaningful expected loss function (unlike the minimum mean squared error (MMSE) estimator, which minimises the mean squared loss). This paper addresses this theoretical gap by presenting a decision-theoretic derivation of MAP estimation in Bayesian models that are log-concave. A main novelty is that our analysis is based on differential geometry, and proceeds as follows. First, we use the underlying convex geometry of the Bayesian model to induce a Riemannian geometry on the parameter space. We then use differential geometry to identify the so-called natural or canonical loss function for Bayesian point estimation in that Riemannian manifold. For log-concave models, this canonical loss is the Bregman divergence associated with the negative log posterior density. We then show that the MAP estimator is the only Bayesian estimator that minimises the expected canonical loss, and that the posterior mean or MMSE estimator minimises the dual canonical loss. We also study MAP and MMSE estimation performance at large scale and establish a universal bound on the expected canonical error as a function of dimension, offering new insights into the good performance observed in convex problems. These results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple roles that convex geometry plays in imaging problems. (Comment: Accepted for publication in SIAM Imaging Science)
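    In symbols, writing $\phi(x) = -\log p(x \mid y)$ for the negative log posterior (convex by log-concavity), the abstract's central claim can be restated as below; the argument-order convention for the Bregman divergence is an assumption here and may differ from the paper's notation.

```latex
% Bregman divergence generated by the convex potential \phi(x) = -\log p(x|y)
D_\phi(u, x) = \phi(u) - \phi(x) - \langle \nabla\phi(x),\, u - x \rangle .

% MAP minimises the expected canonical loss; the posterior mean (MMSE)
% minimises the dual loss obtained by swapping the arguments:
\hat{x}_{\mathrm{MAP}}  = \arg\min_{u} \, \mathbb{E}\left[ D_\phi(u, x) \mid y \right],
\qquad
\hat{x}_{\mathrm{MMSE}} = \arg\min_{u} \, \mathbb{E}\left[ D_\phi(x, u) \mid y \right].
```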

    Locally stationary long memory estimation

    There exists a wide literature on modelling strongly dependent time series using a long-memory parameter d, including more recent work on semiparametric wavelet estimation. As a generalization of these latter approaches, in this work we allow the long-memory parameter d to vary over time. We embed our approach in the framework of locally stationary processes. We show weak consistency and a central limit theorem for our log-regression wavelet estimator of the time-dependent d in a Gaussian context. Both simulations and a real data example complete our work on providing a fairly general approach.
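    The paper's estimator lets d vary over time; as a simpler point of reference, the sketch below implements a classical constant-d log-regression wavelet estimator, using the standard relation that the variance of wavelet detail coefficients at octave j scales like 2^{2jd} for a long-memory process, so the slope of log2-variance against j is 2d. The use of PyWavelets and the Daubechies db4 filter are illustrative choices, not the paper's exact construction.

```python
import numpy as np
import pywt

def wavelet_log_regression_d(x, wavelet="db4", levels=6):
    """Estimate a constant long-memory parameter d by regressing
    log2 detail-coefficient energy on the octave j (slope = 2d)."""
    coeffs = pywt.wavedec(x, wavelet, level=levels)  # [cA_J, cD_J, ..., cD_1]
    details = coeffs[1:][::-1]                       # reorder to cD_1, ..., cD_J
    j = np.arange(1, levels + 1)
    log2_var = np.log2([np.mean(d ** 2) for d in details])
    slope, _ = np.polyfit(j, log2_var, 1)            # assumes var(cD_j) ~ 2^(2*d*j)
    return slope / 2

# Sanity check on white noise, for which d = 0.
rng = np.random.default_rng(0)
print(wavelet_log_regression_d(rng.normal(size=4096)))  # close to 0
```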