
    Automatic Debiased Machine Learning of Causal and Structural Effects

    Many causal and structural effects depend on regressions. Examples include average treatment effects, policy effects, average derivatives, regression decompositions, economic average equivalent variation, and parameters of economic structural models. The regressions may be high-dimensional. Plugging machine learners into identifying equations can lead to poor inference due to bias and/or model selection. This paper gives automatic debiasing for estimating equations and valid asymptotic inference for the estimators of effects of interest. The debiasing is automatic in that its construction uses the identifying equations without requiring the full form of the bias correction, and it is performed by machine learning. Novel results include convergence rates for Lasso and Dantzig learners of the bias correction, primitive conditions for asymptotic inference for important examples, and general conditions for GMM. A variety of regression learners and identifying equations are covered. Automatic debiased machine learning (Auto-DML) is applied to estimating the average treatment effect on the treated for the NSW job training data and to estimating demand elasticities from Nielsen scanner data while allowing preferences to be correlated with prices and income.
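    The debiasing idea can be illustrated with the classical doubly robust (AIPW) score for the average treatment effect. The sketch below is not the paper's automatic Riesz-representer construction: it uses simple hand-picked learners (per-arm OLS outcome regressions and a Newton-fitted logistic propensity) on synthetic confounded data, and every variable name is illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20_000

    # Confounder x raises both the treatment probability and the outcome.
    x = rng.normal(size=n)
    p = 1.0 / (1.0 + np.exp(-1.5 * x))           # true propensity score
    d = rng.binomial(1, p)                        # treatment indicator
    true_ate = 2.0
    y = true_ate * d + 3.0 * x + rng.normal(size=n)

    # Naive difference in means is biased because x confounds d and y.
    ate_naive = y[d == 1].mean() - y[d == 0].mean()

    # Outcome regressions per arm (here: OLS on the confounder).
    def fit_predict(xa, ya, x_all):
        X = np.column_stack([np.ones_like(xa), xa])
        beta, *_ = np.linalg.lstsq(X, ya, rcond=None)
        return np.column_stack([np.ones_like(x_all), x_all]) @ beta

    mu1 = fit_predict(x[d == 1], y[d == 1], x)
    mu0 = fit_predict(x[d == 0], y[d == 0], x)

    # Propensity score via a few Newton steps of logistic regression.
    X = np.column_stack([np.ones_like(x), x])
    w = np.zeros(2)
    for _ in range(25):
        ph = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (d - ph)
        H = (X * (ph * (1 - ph))[:, None]).T @ X
        w += np.linalg.solve(H, grad)
    phat = np.clip(1.0 / (1.0 + np.exp(-X @ w)), 1e-3, 1 - 1e-3)

    # Doubly robust score: plug-in estimate plus the bias-correction term.
    psi = mu1 - mu0 + d * (y - mu1) / phat - (1 - d) * (y - mu0) / (1 - phat)
    ate_dr = psi.mean()
    se = psi.std(ddof=1) / np.sqrt(n)

    print(f"naive: {ate_naive:.2f}  AIPW: {ate_dr:.2f} +/- {1.96 * se:.2f}")
    ```

    The correction term inside `psi` plays the role of the bias correction the abstract describes; in Auto-DML it would be learned automatically (e.g. by Lasso) from the identifying equation rather than written down by hand.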

    Group Lasso estimation of high-dimensional covariance matrices

    In this paper, we consider the Group Lasso estimator of the covariance matrix of a stochastic process corrupted by an additive noise. We propose to estimate the covariance matrix in a high-dimensional setting under the assumption that the process has a sparse representation in a large dictionary of basis functions. Using a matrix regression model, we propose a new methodology for high-dimensional covariance matrix estimation based on empirical contrast regularization by a group Lasso penalty. Using such a penalty, the method selects a sparse set of basis functions in the dictionary used to approximate the process, leading to an approximation of the covariance matrix in a low-dimensional space. Consistency of the estimator is studied in Frobenius and operator norms, and an application to sparse PCA is proposed.
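    A minimal sketch of the group Lasso machinery this abstract relies on: proximal gradient descent in which each group of coefficients is shrunk jointly by the block soft-thresholding operator, so whole groups (whole dictionary elements) are selected or discarded together. This is a generic group Lasso solver on a synthetic vector regression, not the paper's matrix-regression covariance estimator; the design, groups, and penalty level are made up for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, n_groups, gsize = 200, 10, 3
    p = n_groups * gsize
    groups = [np.arange(g * gsize, (g + 1) * gsize) for g in range(n_groups)]

    X = rng.normal(size=(n, p))
    beta_true = np.zeros(p)
    beta_true[groups[0]] = [2.0, -1.5, 1.0]      # only the first group is active
    y = X @ beta_true + 0.1 * rng.normal(size=n)

    def group_soft_threshold(v, t):
        """Proximal operator of t * ||v||_2: shrink the whole block toward 0."""
        norm = np.linalg.norm(v)
        return np.zeros_like(v) if norm <= t else (1 - t / norm) * v

    lam = 0.1
    step = n / np.linalg.norm(X, 2) ** 2          # 1 / Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(500):
        grad = X.T @ (X @ beta - y) / n           # gradient of the least-squares contrast
        z = beta - step * grad
        for g in groups:                          # blockwise shrinkage
            beta[g] = group_soft_threshold(z[g], step * lam)

    active = [g for g in range(n_groups) if np.linalg.norm(beta[groups[g]]) > 1e-8]
    print("active groups:", active)
    ```

    The all-or-nothing behavior of `group_soft_threshold` is what makes the penalty select a sparse set of basis functions rather than individual coefficients.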

    Sparsity and cosparsity for audio declipping: a flexible non-convex approach

    This work investigates the empirical performance of the sparse synthesis versus sparse analysis regularization for the ill-posed inverse problem of audio declipping. We develop a versatile non-convex heuristic which can be readily used with both data models. Based on this algorithm, we report that, in most cases, the two models perform almost identically in terms of signal enhancement. However, the analysis version is shown to be amenable to real-time audio processing when certain analysis operators are considered. Both versions outperform state-of-the-art methods in the field, especially for severely saturated signals.
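    The synthesis-model side of the problem can be sketched as iterative hard thresholding in a Fourier dictionary combined with a clipping-consistency projection: unclipped samples must be matched exactly, and clipped samples must stay at or beyond the clipping level. This is a toy stand-in for the paper's non-convex heuristic, assuming a signal that is exactly sparse in the rFFT basis; the sparsity budget and clipping level are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1024
    t = np.arange(n)
    # A signal that is sparse in the Fourier dictionary (three sinusoids).
    clean = (np.sin(2 * np.pi * 8 * t / n)
             + 0.6 * np.sin(2 * np.pi * 21 * t / n)
             + 0.3 * np.sin(2 * np.pi * 40 * t / n))

    theta = 0.6                                   # clipping level
    clipped = np.clip(clean, -theta, theta)
    reliable = np.abs(clipped) < theta            # samples that were not clipped
    pos = clipped >= theta                        # clipped from above
    neg = clipped <= -theta                       # clipped from below

    k = 8                                         # sparsity budget (rFFT coefficients)
    x = clipped.copy()
    for _ in range(200):
        # Sparse synthesis step: keep the k largest-magnitude rFFT coefficients.
        C = np.fft.rfft(x)
        C[np.argsort(np.abs(C))[:-k]] = 0.0
        x = np.fft.irfft(C, n)
        # Consistency projection: match reliable samples, respect the clip level.
        x[reliable] = clipped[reliable]
        x[pos] = np.maximum(x[pos], theta)
        x[neg] = np.minimum(x[neg], -theta)

    snr = 10 * np.log10(np.sum(clean**2) / np.sum((clean - x)**2))
    print(f"reconstruction SNR: {snr:.1f} dB")
    ```

    An analysis-model variant would instead penalize the sparsity of an analysis operator applied to the time-domain signal; the consistency projection is shared by both formulations.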