2,969 research outputs found
A Statistical Perspective on Randomized Sketching for Ordinary Least-Squares
We consider statistical as well as algorithmic aspects of solving large-scale
least-squares (LS) problems using randomized sketching algorithms. For a LS
problem with input data (X, Y) ∈ ℝ^{n×p} × ℝ^n, sketching algorithms use a
sketching matrix S ∈ ℝ^{r×n} with r ≪ n. Then, rather than solving the LS
problem using the full data (X, Y), sketching algorithms solve the LS problem
using only the sketched data (SX, SY). Prior work has typically adopted an
algorithmic perspective, in that it has made no statistical assumptions on the
input X and Y, and instead it has been assumed that the data (X, Y) are fixed
and worst-case (WC). Prior results show that, when using sketching matrices
such as random projections and leverage-score sampling algorithms, with p < r ≪ n,
the WC error is the same as solving the original problem, up to a small
constant. From a statistical perspective, we typically consider the
mean-squared error performance of randomized sketching algorithms, when data
(X, Y) are generated according to a statistical model Y = Xβ + ε, where ε is a
noise process. We provide a rigorous
comparison of both perspectives leading to insights on how they differ. To do
this, we first develop a framework for assessing algorithmic and statistical
aspects of randomized sketching methods. We then consider the statistical
prediction efficiency (PE) and the statistical residual efficiency (RE) of the
sketched LS estimator; and we use our framework to provide upper bounds for
several types of random projection and random sampling sketching algorithms.
Among other results, we show that the RE can be upper bounded when p < r ≪ n,
while the PE typically requires the sample size r to be substantially
larger. Lower bounds developed in subsequent results show that our upper bounds
on PE cannot be improved. Comment: 27 pages, 5 figures
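The sketch-and-solve recipe described in the abstract above can be illustrated in a few lines of NumPy. This is a hypothetical toy, not code from the paper: the sizes n, p, r, the noise scale, and the choice of a Gaussian random projection for S are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n samples, p features, sketch size r with p < r << n.
n, p, r = 10_000, 5, 200

# Generate data from the statistical model y = X beta + eps.
X = rng.standard_normal((n, p))
beta = np.arange(1.0, p + 1.0)
y = X @ beta + 0.1 * rng.standard_normal(n)

# Full OLS solution, for reference.
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)

# Gaussian random projection sketch S (r x n), scaled so E[S'S] = I.
S = rng.standard_normal((r, n)) / np.sqrt(r)

# Solve the LS problem using only the sketched data (SX, Sy).
beta_sketch, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None)

print(np.linalg.norm(beta_sketch - beta_full))
```

With r well above p, the sketched estimate lands close to the full-data OLS solution, which is the worst-case guarantee the abstract refers to; the statistical PE/RE comparison concerns how it behaves relative to the true β under the noise model.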
Mining gold from implicit models to improve likelihood-free inference
Simulators often provide the best description of real-world phenomena.
However, they also lead to challenging inverse problems because the density
they implicitly define is often intractable. We present a new suite of
simulation-based inference techniques that go beyond the traditional
Approximate Bayesian Computation approach, which struggles in a
high-dimensional setting, and extend methods that use surrogate models based on
neural networks. We show that additional information, such as the joint
likelihood ratio and the joint score, can often be extracted from simulators
and used to augment the training data for these surrogate models. Finally, we
demonstrate that these new techniques are more sample efficient and provide
higher-fidelity inference than traditional methods. Comment: Code available at
https://github.com/johannbrehmer/simulator-mining-example . v2: Fixed typos.
v3: Expanded discussion, added Lotka-Volterra example. v4: Improved clarity.
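The key observation above — that the joint likelihood ratio and joint score are tractable per simulator run even when the marginal likelihood is not — can be sketched with a toy simulator. This is a hypothetical illustration (the simulator, parameter values, and function names are invented for this example, not taken from the paper or its repository):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "simulator": latent z ~ N(theta, 1), observable x ~ N(z^2, 1).
# The marginal p(x | theta) is intractable, but the joint density
# p(x, z | theta) = N(z; theta, 1) * N(x; z^2, 1) is known for each run.

def log_joint(x, z, theta):
    # log p(x, z | theta), up to additive constants that cancel below
    return -0.5 * (z - theta) ** 2 - 0.5 * (x - z ** 2) ** 2

def simulate(theta, size):
    z = rng.normal(theta, 1.0, size)
    x = rng.normal(z ** 2, 1.0)
    return x, z

theta0, theta1 = 0.5, 1.0
x, z = simulate(theta0, 10_000)

# Joint log likelihood ratio between the two parameter points, and the
# joint score d/dtheta log p(x, z | theta) at theta0 -- both come "for
# free" from quantities the simulator already evaluates internally.
log_r = log_joint(x, z, theta0) - log_joint(x, z, theta1)
score = z - theta0

print(log_r.mean(), score.mean())
```

In the techniques the abstract describes, per-sample quantities like `log_r` and `score` serve as extra regression targets when training a neural surrogate for the intractable marginal likelihood ratio (the surrogate training itself is not shown here).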
The Augmented Synthetic Control Method
The synthetic control method (SCM) is a popular approach for estimating the
impact of a treatment on a single unit in panel data settings. The "synthetic
control" is a weighted average of control units that balances the treated
unit's pre-treatment outcomes as closely as possible. A critical feature of the
original proposal is to use SCM only when the fit on pre-treatment outcomes is
excellent. We propose Augmented SCM as an extension of SCM to settings where
such pre-treatment fit is infeasible. Analogous to bias correction for inexact
matching, Augmented SCM uses an outcome model to estimate the bias due to
imperfect pre-treatment fit and then de-biases the original SCM estimate. Our
main proposal, which uses ridge regression as the outcome model, directly
controls pre-treatment fit while minimizing extrapolation from the convex hull.
This estimator can also be expressed as a solution to a modified synthetic
controls problem that allows negative weights on some donor units. We bound the
estimation error of this approach under different data generating processes,
including a linear factor model, and show how regularization helps to avoid
over-fitting to noise. We demonstrate gains from Augmented SCM with extensive
simulation studies and apply this framework to estimate the impact of the 2012
Kansas tax cuts on economic growth. We implement the proposed method in the new
augsynth R package.
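The three-step logic of the abstract above (fit SCM weights, fit a ridge outcome model on the donors, then de-bias by the pre-treatment imbalance) can be sketched numerically. This is a simplified hypothetical illustration, not the augsynth implementation: the data are synthetic, the penalty `lam` is arbitrary, and the simplex-constrained SCM program is replaced by a crude nonnegative-least-squares stand-in.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Hypothetical panel: 20 donor units, 10 pre-treatment periods, 1 post period.
J, T0 = 20, 10
X0 = rng.standard_normal((J, T0))                          # donors' pre-treatment outcomes
y0_post = X0.mean(axis=1) + 0.1 * rng.standard_normal(J)   # donors' post-period outcome
x1 = rng.standard_normal(T0) + 1.0                         # treated unit's pre-treatment outcomes

# Step 1: SCM weights -- nonnegative least squares, renormalized to sum
# to one (a simplified stand-in for the exact SCM simplex program).
w, _ = nnls(X0.T, x1)
w = w / w.sum()
y_scm = y0_post @ w

# Step 2: ridge outcome model on the donors: post outcome ~ pre outcomes.
lam = 1.0
eta = np.linalg.solve(X0.T @ X0 + lam * np.eye(T0), X0.T @ y0_post)

# Step 3: de-bias the SCM estimate by the remaining pre-treatment imbalance.
imbalance = x1 - X0.T @ w
y_augsynth = y_scm + imbalance @ eta

print(y_scm, y_augsynth)
```

When the pre-treatment fit is exact, `imbalance` is zero and the augmented estimate coincides with plain SCM; the correction only activates when, as in the settings the paper targets, the weights cannot balance the treated unit's pre-treatment path.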
- …