Provable Deterministic Leverage Score Sampling
We explain theoretically a curious empirical phenomenon: "Approximating a
matrix by deterministically selecting a subset of its columns with the
corresponding largest leverage scores results in a good low-rank matrix
surrogate". To obtain provable guarantees, previous work requires randomized
sampling of the columns with probabilities proportional to their leverage
scores.
In this work, we provide a novel theoretical analysis of deterministic
leverage score sampling. We show that such deterministic sampling can be
provably as accurate as its randomized counterparts, if the leverage scores
follow a moderately steep power-law decay. We support this power-law assumption
by providing empirical evidence that such decay laws are abundant in real-world
data sets. We then demonstrate empirically the performance of deterministic
leverage score sampling, which often matches or outperforms state-of-the-art
techniques.
Comment: 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
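As a concrete illustration of the selection rule analyzed above, here is a minimal Python sketch, assuming NumPy: the rank-k column leverage scores are read off the top-k right singular vectors, and the c highest-scoring columns are kept deterministically. The toy matrix and the choices of k and c are illustrative assumptions, not taken from the paper.

    import numpy as np

    def deterministic_leverage_sampling(A, k, c):
        """Keep the c columns of A with the largest rank-k leverage scores.

        The rank-k leverage score of column j is the squared Euclidean
        norm of the j-th column of Vt[:k, :], where Vt holds the right
        singular vectors of A; the scores sum to k.
        """
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        scores = np.sum(Vt[:k, :] ** 2, axis=0)   # one score per column of A
        top = np.argsort(scores)[::-1][:c]        # deterministic: take the top c
        return A[:, top], top

    # Toy usage: a matrix whose column scales decay, so the scores decay too
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50)) * (1.0 / np.arange(1, 51))
    C, idx = deterministic_leverage_sampling(A, k=5, c=10)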
Selection Procedures for Order Statistics in Empirical Economic Studies
In a presentation to the American Economic Association, McCloskey (1998) argued that "statistical significance is bankrupt" and that economists' time would be "better spent on finding out How Big Is Big". This brief survey is devoted to methods of determining "How Big Is Big". It is concerned with a rich body of literature called selection procedures, which are statistical methods that allow inference on order statistics and which enable empiricists to attach confidence levels to statements about the relative magnitudes of population parameters (i.e., How Big Is Big). Despite their long history and common use in other fields, selection procedures have gone relatively unnoticed in economics, and, perhaps, their use is long overdue. The purpose of this paper is to provide a brief survey of selection procedures as an introduction for economists and econometricians, and to illustrate their use in economics by discussing a few potential applications. Both simulated and empirical examples are provided.
Keywords: ranking and selection, multiple comparisons, hypothesis testing
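Purely as an illustration of the flavor of such procedures (a generic bootstrap sketch, not one of the procedures surveyed in the paper), the following Python snippet attaches a confidence level to the statement that the group with the largest sample mean truly has the largest population mean; the groups and their means are hypothetical.

    import numpy as np

    def confidence_best_is_best(samples, n_boot=5000, seed=0):
        """Bootstrap estimate of the confidence that the population with the
        largest sample mean also has the largest true mean."""
        rng = np.random.default_rng(seed)
        observed_best = int(np.argmax([s.mean() for s in samples]))
        wins = 0
        for _ in range(n_boot):
            boot_means = [rng.choice(s, size=s.size, replace=True).mean()
                          for s in samples]
            wins += int(np.argmax(boot_means) == observed_best)
        return wins / n_boot

    # Hypothetical example: three treatment groups with close means
    rng = np.random.default_rng(1)
    groups = [rng.normal(mu, 1.0, size=200) for mu in (0.0, 0.1, 0.25)]
    print(confidence_best_is_best(groups))  # confidence that group 3 is best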
Non-uniform Feature Sampling for Decision Tree Ensembles
We study the effectiveness of non-uniform randomized feature selection in
decision tree classification. We experimentally evaluate two feature selection
methodologies, based on information extracted from the provided dataset:
\emph{leverage scores-based} and \emph{norm-based} feature selection.
Experimental evaluation of the proposed feature selection techniques indicates
that such approaches can be more effective than naive uniform feature
selection, while achieving performance comparable to the random forest
algorithm [3].
Comment: 7 pages, 7 figures, 1 table
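A minimal sketch, assuming NumPy and scikit-learn, of the norm-based variant (the leverage-score-based variant would substitute rank-k leverage scores of the columns of X for the squared column norms); the ensemble size, subset size, and classifier are illustrative assumptions, not the authors' experimental setup.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def norm_based_tree_ensemble(X, y, n_trees=50, n_feats=8, seed=0):
        """Train decision trees on random feature subsets drawn with
        probability proportional to squared column norms."""
        rng = np.random.default_rng(seed)
        norms = (X ** 2).sum(axis=0)
        probs = norms / norms.sum()              # non-uniform sampling weights
        ensemble = []
        for _ in range(n_trees):
            feats = rng.choice(X.shape[1], size=n_feats, replace=False, p=probs)
            ensemble.append((feats, DecisionTreeClassifier().fit(X[:, feats], y)))
        return ensemble

    def predict(ensemble, X):
        """Majority vote over the trees (assumes integer class labels)."""
        votes = np.stack([tree.predict(X[:, feats]) for feats, tree in ensemble])
        return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])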
Sharp analysis of low-rank kernel matrix approximations
We consider supervised learning problems within the positive-definite kernel
framework, such as kernel ridge regression, kernel logistic regression or the
support vector machine. With kernels leading to infinite-dimensional feature
spaces, a common practical limiting difficulty is the necessity of computing
the kernel matrix, which most frequently leads to algorithms with running time
at least quadratic in the number of observations n, i.e., O(n^2). Low-rank
approximations of the kernel matrix are often considered as they allow the
reduction of running time complexities to O(p^2 n), where p is the rank of the
approximation. The practicality of such methods thus depends on the required
rank p. In this paper, we show that in the context of kernel ridge regression,
for approximations based on a random subset of columns of the original kernel
matrix, the rank p may be chosen to be linear in the degrees of freedom
associated with the problem, a quantity which is classically used in the
statistical analysis of such methods, and is often seen as the implicit number
of parameters of non-parametric estimators. This result enables simple
algorithms that have sub-quadratic running time complexity, yet provably
exhibit the same predictive performance as existing algorithms, for any given
problem instance, and not only in worst-case situations.
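A minimal sketch, assuming NumPy, of such a column-subset (Nystrom) approximation applied to kernel ridge regression: p columns of the kernel matrix are sampled uniformly, a rank-p feature map is built from a Cholesky factor of the sampled block, and the ridge solve runs in the p-dimensional feature space in O(p^2 n) time. The RBF kernel and the values of p, gamma, and lam are illustrative assumptions.

    import numpy as np

    def rbf_kernel(X, Z, gamma=1.0):
        """Gaussian (RBF) kernel matrix between the rows of X and of Z."""
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def nystrom_krr(X, y, p=50, lam=1e-2, gamma=1.0, seed=0):
        """Kernel ridge regression with a rank-p Nystrom approximation built
        from p uniformly sampled columns; O(p^2 n) instead of O(n^2)."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(X.shape[0], size=p, replace=False)
        K_nm = rbf_kernel(X, X[idx], gamma)           # n x p
        K_mm = rbf_kernel(X[idx], X[idx], gamma)      # p x p
        # Feature map Phi with Phi @ Phi.T ~ K, via a jittered Cholesky factor
        L = np.linalg.cholesky(K_mm + 1e-8 * np.eye(p))
        Phi = np.linalg.solve(L, K_nm.T).T            # n x p
        # Ridge solve in the p-dimensional feature space: O(p^2 n)
        w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)
        return idx, L, w

    # Toy usage on hypothetical 1-D regression data
    rng = np.random.default_rng(2)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
    idx, L, w = nystrom_krr(X, y, p=50, lam=1e-2, gamma=0.5)
    # Predictions for new points X_new:
    # Phi_new = np.linalg.solve(L, rbf_kernel(X_new, X[idx], 0.5).T).T
    # y_hat = Phi_new @ w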