
    Selective machine learning of doubly robust functionals

    While model selection is a well-studied topic in parametric and nonparametric regression or density estimation, the selection of possibly high-dimensional nuisance parameters in semiparametric problems is far less developed. In this paper, we propose a selective machine learning framework for making inferences about a finite-dimensional functional defined on a semiparametric model, when the latter admits a doubly robust estimating function and several candidate machine learning algorithms are available for estimating the nuisance parameters. We introduce two new selection criteria for bias reduction in estimating the functional of interest, each based on a novel definition of pseudo-risk for the functional that embodies the double robustness property and is therefore used to select the pair of learners nearest to fulfilling this property. We establish an oracle property for a multi-fold cross-validation version of the new selection criteria, which states that our empirical criteria perform nearly as well as an oracle with a priori knowledge of the pseudo-risk for each pair of candidate learners. We also describe a smooth approximation to the selection criteria which allows for valid post-selection inference. Finally, we apply the approach to model selection for a semiparametric estimator of the average treatment effect, using an ensemble of candidate machine learners to account for confounding in an observational study.
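    As a point of reference for the estimating function discussed above, the sketch below implements a plain doubly robust (AIPW) estimate of the average treatment effect with interchangeable machine-learning nuisance learners. It is only an illustration under assumed names (aipw_ate, and the scikit-learn learners chosen here); it does not implement the paper's pseudo-risk selection criteria or cross-validation scheme.

```python
# Minimal sketch of a doubly robust (AIPW) estimator of the average treatment
# effect with swappable nuisance learners. Illustration only: it does not
# implement the paper's pseudo-risk selection criteria; function and variable
# names are assumptions.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor

def aipw_ate(X, A, Y, propensity_learner, outcome_learner):
    """Augmented inverse-probability-weighted estimate of E[Y(1)] - E[Y(0)]."""
    # Nuisance 1: propensity score e(X) = P(A = 1 | X)
    e = clone(propensity_learner).fit(X, A).predict_proba(X)[:, 1]
    e = np.clip(e, 1e-3, 1 - 1e-3)        # guard against extreme weights
    # Nuisance 2: outcome regressions m_a(X) = E[Y | A = a, X]
    m1 = clone(outcome_learner).fit(X[A == 1], Y[A == 1]).predict(X)
    m0 = clone(outcome_learner).fit(X[A == 0], Y[A == 0]).predict(X)
    # Doubly robust score: consistent if either nuisance model is correct
    psi = m1 - m0 + A * (Y - m1) / e - (1 - A) * (Y - m0) / (1 - e)
    return psi.mean()

# Hypothetical usage with one candidate pair of learners
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
A = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
Y = 2.0 * A + X @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=500)
print(aipw_ate(X, A, Y,
               LogisticRegression(max_iter=1000),
               RandomForestRegressor(n_estimators=200, random_state=0)))
```

    A selection criterion in the spirit of the abstract would score each candidate (propensity, outcome) pair by an estimate of the pseudo-risk of the functional itself, rather than by the predictive fit of either nuisance model alone.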

    Mean Estimation from One-Bit Measurements

    We consider the problem of estimating the mean of a symmetric log-concave distribution under the constraint that only a single bit per sample from this distribution is available to the estimator. We study the mean squared error as a function of the sample size (and hence the number of bits). We consider three settings: first, a centralized setting, where an encoder may release n bits given a sample of size n, and for which there is no asymptotic penalty for quantization; second, an adaptive setting in which each bit is a function of the current observation and previously recorded bits, where we show that the optimal relative efficiency compared to the sample mean is precisely the efficiency of the median; lastly, we show that in a distributed setting where each bit is only a function of a local sample, no estimator can achieve optimal efficiency uniformly over the parameter space. We additionally complement our results in the adaptive setting by showing that one round of adaptivity is sufficient to achieve optimal mean squared error.
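    For intuition about the adaptive setting described above, here is a toy sketch in which each transmitted bit is the sign of the current observation relative to a running estimate, and the estimate is updated by a Robbins-Monro recursion that converges to the median (which equals the mean for a symmetric distribution). This is an assumed illustration, not the paper's optimal scheme.

```python
# Toy sketch of the adaptive one-bit setting: the bit for sample t is
# sign(x_t - theta_t), i.e. a function of the current observation and of the
# previously recorded bits (summarized by the running estimate theta_t).
# The Robbins-Monro step drives theta toward the median, which coincides
# with the mean for a symmetric distribution. Illustration only.
import numpy as np

def one_bit_adaptive_mean(samples, theta0=0.0, c=1.0):
    theta = theta0
    for t, x in enumerate(samples, start=1):
        bit = 1.0 if x > theta else -1.0   # the single bit sent for sample t
        theta += (c / t) * bit             # decreasing-step stochastic approximation
    return theta

rng = np.random.default_rng(1)
x = rng.laplace(loc=0.7, scale=1.0, size=50_000)  # symmetric log-concave
print(one_bit_adaptive_mean(x))                   # approaches 0.7
```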

    Pinsker estimators for local helioseismology

    A major goal of helioseismology is the three-dimensional reconstruction of the three velocity components of convective flows in the solar interior from sets of wave travel-time measurements. For small-amplitude flows, the forward problem is described to good approximation by a large system of convolution equations. The input observations are highly noisy random vectors with a known dense covariance matrix. This leads to a large statistical linear inverse problem. Whereas for deterministic linear inverse problems several computationally efficient minimax-optimal regularization methods exist, only one minimax-optimal linear estimator exists for statistical linear inverse problems: the Pinsker estimator. However, it is rarely used for real-world problems, either because it is computationally inefficient, requiring a singular value decomposition of the forward operator, or because it is not applicable when the noise covariance matrix is unknown. These limitations do not apply in helioseismology. We present a simplified proof of the optimality properties of the Pinsker estimator and show that it yields significantly better reconstructions than the inversion methods traditionally used in helioseismology, i.e. Regularized Least Squares (Tikhonov regularization) and SOLA (approximate inverse) methods. Moreover, we discuss the incorporation of the mass conservation constraint in the Pinsker scheme using staggered grids. With this improvement, we can reconstruct not only horizontal but also vertical velocity components, which are much smaller in amplitude.
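    To make the contrast with Tikhonov regularization concrete, the sketch below applies a Pinsker-type linear filter in the SVD basis of a toy forward operator, with shrinkage weights (1 - kappa*a_k)_+ tied to a Sobolev-type ellipsoid. It is a minimal illustration under assumed settings (synthetic operator, oracle parameter choice); it is not the helioseismology implementation, which avoids the SVD and handles a dense noise covariance and the mass-conservation constraint.

```python
# Minimal sketch: Pinsker-type filtering versus Tikhonov regularization for a
# toy statistical linear inverse problem y = A x + noise, worked in the SVD
# basis of A. The Pinsker weights (1 - kappa * a_k)_+ correspond to a
# Sobolev-type ellipsoid with coefficients a_k; parameters are picked by an
# oracle grid search purely for illustration. This is not the helioseismology
# pipeline described in the abstract.
import numpy as np

rng = np.random.default_rng(2)
n = 200
# Random orthogonal factors with a prescribed, mildly ill-posed spectrum
U, _, Vt = np.linalg.svd(rng.normal(size=(n, n)))
s = 1.0 / (1.0 + np.arange(n)) ** 1.5
A = U @ np.diag(s) @ Vt

t = np.linspace(0.0, 1.0, n)
x_true = np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)
sigma = 1e-3
y = A @ x_true + sigma * rng.normal(size=n)

b = U.T @ y                # data in the SVD basis: b_k = s_k x_k + noise_k
a = 1.0 + np.arange(n)     # ellipsoid weights a_k

def pinsker(kappa):
    w = np.clip(1.0 - kappa * a, 0.0, 1.0)  # Pinsker shrinkage weights
    return Vt.T @ (w * b / s)

def tikhonov(lam):
    return Vt.T @ (s * b / (s ** 2 + lam))

def rmse(x_hat):
    return np.linalg.norm(x_hat - x_true) / np.sqrt(n)

print("Pinsker  RMSE:", min(rmse(pinsker(k)) for k in np.logspace(-4, 0, 40)))
print("Tikhonov RMSE:", min(rmse(tikhonov(l)) for l in np.logspace(-10, -2, 40)))
```

    The design point mirrored here is that the Pinsker filter shrinks each coefficient according to an assumed smoothness class, whereas Tikhonov applies a single spectral filter controlled by one regularization parameter.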