A Statistical Perspective on Randomized Sketching for Ordinary Least-Squares
We consider statistical as well as algorithmic aspects of solving large-scale
least-squares (LS) problems using randomized sketching algorithms. For a LS
problem with input data $(X, Y) \in \mathbb{R}^{n \times p} \times \mathbb{R}^n$,
sketching algorithms use a sketching matrix $S \in \mathbb{R}^{r \times n}$ with
$r \ll n$. Then, rather than solving the LS problem using the full data
$(X, Y)$, sketching algorithms solve the LS problem using only the sketched
data $(SX, SY)$. Prior work has typically adopted an algorithmic
perspective, in that it has made no statistical assumptions on the input
$X$ and $Y$, and instead it has been assumed that the data $(X, Y)$ are fixed and
worst-case (WC). Prior results show that, when using sketching matrices such as
random projections and leverage-score sampling algorithms, with $p < r \ll n$,
the WC error is the same as solving the original problem, up to a small
constant. From a statistical perspective, we typically consider the
mean-squared error performance of randomized sketching algorithms, when data
are generated according to a statistical model $Y = X\beta + \epsilon$, where
$\epsilon$ is a noise process.
comparison of both perspectives leading to insights on how they differ. To do
this, we first develop a framework for assessing algorithmic and statistical
aspects of randomized sketching methods. We then consider the statistical
prediction efficiency (PE) and the statistical residual efficiency (RE) of the
sketched LS estimator; and we use our framework to provide upper bounds for
several types of random projection and random sampling sketching algorithms.
Among other results, we show that the RE can be upper bounded when the sketch
size $r$ is of the order of $p$, while the PE typically requires the sample
size $r$ to be substantially larger. Lower bounds developed in subsequent
results show that our upper bounds on PE cannot be improved.
Comment: 27 pages, 5 figures
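As a concrete illustration of the sketch-and-solve scheme described above, here is a minimal numpy sketch using a Gaussian random projection for $S$; the function name sketched_ls, the toy dimensions, and the Gaussian noise model are our illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def sketched_ls(X, Y, r, seed=None):
    """Solve min_beta ||S X beta - S Y||_2 for a Gaussian sketch S with
    r rows, instead of the full n-row problem (p < r << n)."""
    n, p = X.shape
    rng = np.random.default_rng(seed)
    S = rng.normal(0.0, 1.0 / np.sqrt(r), size=(r, n))  # sketching matrix
    beta, *_ = np.linalg.lstsq(S @ X, S @ Y, rcond=None)
    return beta

# Toy comparison under the statistical model Y = X beta + eps.
rng = np.random.default_rng(0)
n, p, r = 20000, 50, 500
X = rng.normal(size=(n, p))
Y = X @ rng.normal(size=p) + rng.normal(size=n)  # eps: i.i.d. Gaussian noise
beta_full, *_ = np.linalg.lstsq(X, Y, rcond=None)
beta_sk = sketched_ls(X, Y, r, seed=1)
# The sketched residual norm should be within a small factor of the full one.
print(np.linalg.norm(Y - X @ beta_full), np.linalg.norm(Y - X @ beta_sk))
```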
Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates
The optimization of algorithm (hyper-)parameters is crucial for achieving
peak performance across a wide range of domains, ranging from deep neural
networks to solvers for hard combinatorial problems. The resulting algorithm
configuration (AC) problem has attracted much attention from the machine
learning community. However, the proper evaluation of new AC procedures is
hindered by two key hurdles. First, AC benchmarks are hard to set up. Second
and even more significantly, they are computationally expensive: a single run
of an AC procedure involves many costly runs of the target algorithm whose
performance is to be optimized in a given AC benchmark scenario. One common
workaround is to optimize cheap-to-evaluate artificial benchmark functions
(e.g., Branin) instead of actual algorithms; however, these have different
properties than realistic AC problems. Here, we propose an alternative
benchmarking approach that is similarly cheap to evaluate but much closer to
the original AC problem: replacing expensive benchmarks by surrogate benchmarks
constructed from AC benchmarks. These surrogate benchmarks approximate the
response surface corresponding to true target algorithm performance using a
regression model, and the original and surrogate benchmark share the same
(hyper-)parameter space. In our experiments, we construct and evaluate
surrogate benchmarks for hyperparameter optimization as well as for AC problems
that involve performance optimization of solvers for hard combinatorial
problems, drawing training data from the runs of existing AC procedures. We
show that our surrogate benchmarks capture the important characteristics of
the AC scenarios from which they were derived, such as high- and low-performing
regions, while being much easier to use and orders of magnitude cheaper to
evaluate.
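As a rough illustration of this construction, here is a hedged scikit-learn sketch: a random-forest regression model is fit on (configuration, performance) pairs and then queried as a cheap objective over the same parameter space. The data, model class, and names (e.g., surrogate_benchmark) are stand-ins, not the paper's actual benchmark scenarios.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data drawn from earlier runs of an AC procedure:
# each row is a configuration, perf is the measured target-algorithm
# performance (e.g., log runtime). All names and shapes are illustrative.
rng = np.random.default_rng(0)
configs = rng.random((5000, 4))   # 4 numeric (hyper-)parameters
perf = rng.random(5000)           # stand-in for measured performance

# Regression model approximating the true response surface.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(configs, perf)

def surrogate_benchmark(theta):
    """Cheap stand-in for one costly target-algorithm run: an AC procedure
    under evaluation queries this over the same (hyper-)parameter space."""
    return float(surrogate.predict(np.asarray(theta).reshape(1, -1))[0])

print(surrogate_benchmark([0.5, 0.1, 0.9, 0.3]))
```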
A Primer on Causality in Data Science
Many questions in Data Science are fundamentally causal in that our objective
is to learn the effect of some exposure, randomized or not, on an outcome
of interest. Even studies that are seemingly non-causal, such as those with the
goal of prediction or prevalence estimation, have causal elements, including
differential censoring or measurement. As a result, we, as Data Scientists,
need to consider the underlying causal mechanisms that gave rise to the data,
rather than simply the pattern or association observed in those data. In this
work, we review the 'Causal Roadmap' of Petersen and van der Laan (2014) to
provide an introduction to some key concepts in causal inference. Similar to
other causal frameworks, the steps of the Roadmap include clearly stating the
scientific question, defining the causal model, translating the scientific
question into a causal parameter, assessing the assumptions needed to express
the causal parameter as a statistical estimand, implementing statistical
estimators (including parametric and semi-parametric methods), and interpreting
our findings. We believe that using such a framework in Data Science will
help to ensure that our statistical analyses are guided by the scientific
question driving our research, while avoiding over-interpreting our results. We
focus on the effect of an exposure occurring at a single time point and
highlight the use of targeted maximum likelihood estimation (TMLE) with Super
Learner.
Comment: 26 pages (with references); 4 figures
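To make the estimation and targeting steps concrete, here is a minimal TMLE sketch for the average treatment effect of a binary exposure at a single time point on a binary outcome. Plain logistic regressions stand in for Super Learner, and the simulated data and all names are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def logit(p):
    return np.log(p / (1 - p))

def expit(x):
    return 1 / (1 + np.exp(-x))

def tmle_ate(X, A, Y, n_newton=25):
    """Minimal TMLE for the average treatment effect of a binary
    point-treatment A on a binary outcome Y, given covariates X."""
    ones, zeros = np.ones_like(A), np.zeros_like(A)
    clip = lambda p: np.clip(p, 1e-6, 1 - 1e-6)
    # Step 1: initial outcome regression Q(A, X) ~ P(Y = 1 | A, X).
    q_fit = LogisticRegression(max_iter=1000).fit(np.column_stack([X, A]), Y)
    Q  = clip(q_fit.predict_proba(np.column_stack([X, A]))[:, 1])
    Q1 = clip(q_fit.predict_proba(np.column_stack([X, ones]))[:, 1])
    Q0 = clip(q_fit.predict_proba(np.column_stack([X, zeros]))[:, 1])
    # Step 2: propensity score g(X) = P(A = 1 | X) and "clever covariate" H.
    g = np.clip(LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1],
                1e-3, 1 - 1e-3)
    H = A / g - (1 - A) / (1 - g)
    # Step 3: targeting step. Fit the fluctuation parameter eps by MLE in
    #   logit Q*(A, X) = logit Q(A, X) + eps * H   (Newton's method).
    eps = 0.0
    for _ in range(n_newton):
        p = expit(logit(Q) + eps * H)
        eps += np.sum(H * (Y - p)) / np.sum(H**2 * p * (1 - p))
    # Step 4: plug the targeted predictions into the causal parameter.
    Q1_star = expit(logit(Q1) + eps / g)        # H equals 1/g when A = 1
    Q0_star = expit(logit(Q0) - eps / (1 - g))  # H equals -1/(1-g) when A = 0
    return np.mean(Q1_star - Q0_star)

# Simulated single-time-point data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
A = rng.binomial(1, expit(X[:, 0]))
Y = rng.binomial(1, expit(0.5 * A + X[:, 1]))
print(tmle_ate(X, A, Y))  # estimated average treatment effect
```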
Parallelizing RRT on large-scale distributed-memory architectures
This paper addresses the problem of parallelizing the Rapidly-exploring Random Tree (RRT) algorithm on large-scale distributed-memory architectures, using the Message Passing Interface (MPI). We compare three parallel versions of RRT based on classical parallelization schemes. We evaluate them on different motion planning problems and analyze the various factors influencing their performance.
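Of the classical parallelization schemes, the simplest to illustrate is OR parallelism: each process grows an independent tree from its own random seed, and the fastest result wins. Below is a minimal mpi4py sketch under assumptions of our own (a 2D point robot, one circular obstacle, illustrative parameters), not the paper's benchmarks.

```python
import numpy as np
from mpi4py import MPI  # Message Passing Interface bindings for Python

def rrt(start, goal, rng, max_iters=5000, step=0.05, goal_tol=0.05):
    """Sequential RRT in the unit square with one circular obstacle."""
    def collision_free(q):
        return np.linalg.norm(q - np.array([0.5, 0.5])) > 0.2
    nodes = [np.asarray(start, dtype=float)]
    for it in range(max_iters):
        q_rand = rng.random(2)  # sample a random configuration
        q_near = min(nodes, key=lambda q: np.linalg.norm(q - q_rand))
        d = np.linalg.norm(q_rand - q_near) + 1e-12
        q_new = q_near + step * (q_rand - q_near) / d  # fixed-step extension
        if collision_free(q_new):
            nodes.append(q_new)
            if np.linalg.norm(q_new - goal) < goal_tol:
                return it + 1       # solved: report iterations used
    return max_iters                # failed within the budget

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
# OR parallelism: each process explores with a different random seed.
iters = rrt(start=(0.1, 0.1), goal=np.array([0.9, 0.9]),
            rng=np.random.default_rng(rank))
best = comm.allreduce(iters, op=MPI.MIN)  # fastest process' iteration count
if rank == 0:
    print("best iterations across processes:", best)
```

Run with, e.g., `mpiexec -n 8 python rrt_or.py`. Schemes that share a single tree across processes additionally require message passing for node insertion, which is where the distributed-memory factors studied in the paper come into play.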
Uplift Modeling with Multiple Treatments and General Response Types
Randomized experiments have been used to assist decision-making in many
areas. They help people select the optimal treatment for the test population
with certain statistical guarantees. However, subjects can show significant
heterogeneity in response to treatments. The problem of customizing treatment
assignment based on subject characteristics is known as uplift modeling,
differential response analysis, or personalized treatment learning in the
literature. A key feature of uplift modeling is that the data are unlabeled: it
is impossible to know whether the chosen treatment is optimal for an individual
subject because response under alternative treatments is unobserved. This
presents a challenge to both the training and the evaluation of uplift models.
In this paper we describe how to obtain an unbiased estimate of the key
performance metric of an uplift model, the expected response. We present a new
uplift algorithm which creates a forest of randomized trees. The trees are
built with a splitting criterion designed to directly optimize their uplift
performance based on the proposed evaluation method. Both the evaluation method
and the algorithm apply to an arbitrary number of treatments and general response
types. Experimental results on synthetic data and industry-provided data show
that our algorithm leads to significant performance improvement over other
applicable methods.
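The evaluation idea can be illustrated with the standard inverse-probability construction for randomized data: subjects whose randomized treatment matches the policy's recommendation are kept and weighted by the inverse of their assignment probability, which yields an unbiased estimate of the policy's expected response. Here is a minimal numpy sketch with illustrative names (the paper's estimator may differ in detail).

```python
import numpy as np

def expected_response(policy, T, Y, p_treat):
    """Unbiased IPW-style estimate of the expected response if treatments
    were assigned by `policy`, computed from a randomized experiment.

    policy : treatment the model recommends for each subject
    T      : treatment actually randomized in the experiment
    Y      : observed response
    p_treat: assignment probability of each subject's actual treatment
    Works for any number of treatments and general response types.
    """
    match = (policy == T)  # keep subjects where randomization agrees
    return np.sum(Y * match / p_treat) / len(Y)

# Toy example: 3 treatments assigned uniformly at random.
rng = np.random.default_rng(0)
n = 10000
T = rng.integers(0, 3, size=n)
Y = rng.normal(loc=T, size=n)  # treatment 2 is best for every subject
policy = np.full(n, 2)         # candidate uplift policy to evaluate
print(expected_response(policy, T, Y, p_treat=np.full(n, 1 / 3)))  # ~2.0
```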