2,969 research outputs found

    A Statistical Perspective on Randomized Sketching for Ordinary Least-Squares

    We consider statistical as well as algorithmic aspects of solving large-scale least-squares (LS) problems using randomized sketching algorithms. For an LS problem with input data $(X, Y) \in \mathbb{R}^{n \times p} \times \mathbb{R}^n$, sketching algorithms use a sketching matrix $S \in \mathbb{R}^{r \times n}$ with $r \ll n$. Then, rather than solving the LS problem using the full data $(X, Y)$, sketching algorithms solve the LS problem using only the sketched data $(SX, SY)$. Prior work has typically adopted an algorithmic perspective, in that it has made no statistical assumptions on the input $X$ and $Y$; instead, the data $(X, Y)$ are assumed to be fixed and worst-case (WC). Prior results show that, when using sketching matrices such as random projections and leverage-score sampling algorithms with $p < r \ll n$, the WC error is the same as that of solving the original problem, up to a small constant. From a statistical perspective, we typically consider the mean-squared error performance of randomized sketching algorithms when the data $(X, Y)$ are generated according to a statistical model $Y = X\beta + \epsilon$, where $\epsilon$ is a noise process. We provide a rigorous comparison of both perspectives, leading to insights on how they differ. To do this, we first develop a framework for assessing algorithmic and statistical aspects of randomized sketching methods. We then consider the statistical prediction efficiency (PE) and the statistical residual efficiency (RE) of the sketched LS estimator, and we use our framework to provide upper bounds for several types of random projection and random sampling sketching algorithms. Among other results, we show that the RE can be upper bounded when $p < r \ll n$, while the PE typically requires the sample size $r$ to be substantially larger. Lower bounds developed in subsequent results show that our upper bounds on PE cannot be improved.
    Comment: 27 pages, 5 figures
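
    The sketch-and-solve mechanics are easy to illustrate. Below is a minimal NumPy sketch (not from the paper) that forms a dense Gaussian random projection $S$ and solves the LS problem on $(SX, SY)$; the function name `sketched_ols` and all dimensions are illustrative, and the data are generated from the abstract's statistical model $Y = X\beta + \epsilon$.

```python
import numpy as np

def sketched_ols(X, Y, r, seed=None):
    """Solve the LS problem on sketched data (SX, SY) instead of (X, Y).

    Uses a dense Gaussian random projection S in R^{r x n}, one of the
    sketching-matrix families mentioned in the abstract.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    assert p < r < n, "sketch size should satisfy p < r << n"
    # Entries are N(0, 1/r) so that S^T S is the identity in expectation.
    S = rng.normal(scale=1.0 / np.sqrt(r), size=(r, n))
    beta_hat, *_ = np.linalg.lstsq(S @ X, S @ Y, rcond=None)
    return beta_hat

# Statistical setting: data generated from Y = X beta + eps.
rng = np.random.default_rng(0)
n, p, r = 20_000, 10, 500
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
Y = X @ beta + rng.normal(size=n)

beta_full, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("full LS error:     ", np.linalg.norm(beta_full - beta))
print("sketched LS error: ", np.linalg.norm(sketched_ols(X, Y, r, seed=1) - beta))
```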

    Mining gold from implicit models to improve likelihood-free inference

    Simulators often provide the best description of real-world phenomena. However, they also lead to challenging inverse problems because the density they implicitly define is often intractable. We present a new suite of simulation-based inference techniques that go beyond the traditional Approximate Bayesian Computation approach, which struggles in a high-dimensional setting, and extend methods that use surrogate models based on neural networks. We show that additional information, such as the joint likelihood ratio and the joint score, can often be extracted from simulators and used to augment the training data for these surrogate models. Finally, we demonstrate that these new techniques are more sample efficient and provide higher-fidelity inference than traditional methods.
    Comment: Code available at https://github.com/johannbrehmer/simulator-mining-example . v2: Fixed typos. v3: Expanded discussion, added Lotka-Volterra example. v4: Improved clarity
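
    For intuition on what "mining" the joint quantities means, here is a toy NumPy sketch (not the paper's code; see the linked repository for that). It uses a simulator $x \sim \mathcal{N}(\theta, 1)$ whose density is actually tractable, so the joint score and joint likelihood ratio have closed forms; in a genuine implicit model these quantities would instead be accumulated along the simulator's latent trajectory. The names `simulate_with_gold` and `joint_log_ratio` are illustrative.

```python
import numpy as np

def simulate_with_gold(theta, n, rng):
    """Run the toy simulator x ~ N(theta, 1) and also 'mine' the joint score
    t(x | theta) = d/dtheta log p(x | theta), which here is just (x - theta)."""
    x = rng.normal(loc=theta, scale=1.0, size=n)
    joint_score = x - theta                     # exact for this toy model
    return x, joint_score

def joint_log_ratio(x, theta0, theta1):
    """log p(x | theta0) - log p(x | theta1) for the Gaussian toy model."""
    return -0.5 * ((x - theta0) ** 2 - (x - theta1) ** 2)

rng = np.random.default_rng(0)
theta0, theta1 = 0.0, 1.0
x, score = simulate_with_gold(theta0, 1000, rng)
log_r = joint_log_ratio(x, theta0, theta1)

# The triples (x, log_r, score) form the augmented training set: a neural
# surrogate for the likelihood ratio can be trained to match log_r and
# score in addition to classifying samples drawn at theta0 vs. theta1.
print(x[:3], log_r[:3], score[:3])
```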

    The Augmented Synthetic Control Method

    The synthetic control method (SCM) is a popular approach for estimating the impact of a treatment on a single unit in panel data settings. The "synthetic control" is a weighted average of control units that balances the treated unit's pre-treatment outcomes as closely as possible. A critical feature of the original proposal is to use SCM only when the fit on pre-treatment outcomes is excellent. We propose Augmented SCM as an extension of SCM to settings where such pre-treatment fit is infeasible. Analogous to bias correction for inexact matching, Augmented SCM uses an outcome model to estimate the bias due to imperfect pre-treatment fit and then de-biases the original SCM estimate. Our main proposal, which uses ridge regression as the outcome model, directly controls pre-treatment fit while minimizing extrapolation from the convex hull. This estimator can also be expressed as a solution to a modified synthetic controls problem that allows negative weights on some donor units. We bound the estimation error of this approach under different data generating processes, including a linear factor model, and show how regularization helps to avoid over-fitting to noise. We demonstrate gains from Augmented SCM with extensive simulation studies and apply this framework to estimate the impact of the 2012 Kansas tax cuts on economic growth. We implement the proposed method in the new augsynth R package.
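
    To make the de-biasing step concrete, here is a schematic NumPy sketch of the ridge-augmented estimator (an illustration of the idea, not the augsynth implementation). The SCM weights are found by projected gradient descent onto the simplex as a stand-in for the exact constrained quadratic program; `ridge_ascm`, `_project_simplex`, and the toy panel are all illustrative.

```python
import numpy as np

def _project_simplex(v):
    """Euclidean projection onto the simplex {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    tau = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + tau, 0.0)

def ridge_ascm(Y0_pre, Y0_post, Yt_pre, lam=1.0, iters=2000):
    """Ridge-augmented SCM estimate of the treated unit's counterfactual.

    Y0_pre  : (N0, T0) pre-treatment outcomes of the N0 control units
    Y0_post : (N0,)    a post-treatment outcome of the control units
    Yt_pre  : (T0,)    pre-treatment outcomes of the treated unit
    """
    N0, T0 = Y0_pre.shape
    # 1) SCM weights: minimize ||Y0_pre^T gamma - Yt_pre||^2 over the simplex.
    gamma = np.full(N0, 1.0 / N0)
    L = np.linalg.norm(Y0_pre, 2) ** 2          # gradient Lipschitz constant
    for _ in range(iters):
        grad = Y0_pre @ (Y0_pre.T @ gamma - Yt_pre)
        gamma = _project_simplex(gamma - grad / L)
    # 2) Outcome model: ridge regression of post-treatment on pre-treatment
    #    outcomes, fit on the control units only.
    eta = np.linalg.solve(Y0_pre.T @ Y0_pre + lam * np.eye(T0),
                          Y0_pre.T @ Y0_post)
    # 3) De-bias the SCM estimate by the remaining pre-treatment imbalance.
    scm_est = gamma @ Y0_post
    imbalance = Yt_pre - Y0_pre.T @ gamma
    return scm_est + imbalance @ eta

# Toy panel: 20 control units, 12 pre-treatment periods.
rng = np.random.default_rng(0)
Y0_pre = rng.normal(size=(20, 12))
Y0_post = Y0_pre.mean(axis=1) + rng.normal(scale=0.1, size=20)
Yt_pre = Y0_pre[:3].mean(axis=0) + 0.5       # imperfect pre-treatment fit
print(ridge_ascm(Y0_pre, Y0_post, Yt_pre))
```

    The final line of `ridge_ascm` can equivalently be read as solving a modified synthetic-controls problem whose implied weights may be negative, which is the extrapolation beyond the convex hull that the abstract refers to.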