
    Algorithmic and Statistical Perspectives on Large-Scale Data Analysis

    In recent years, ideas from statistics and scientific computing have begun to interact in increasingly sophisticated and fruitful ways with ideas from computer science and the theory of algorithms to aid in the development of improved worst-case algorithms that are useful for large-scale scientific and Internet data analysis problems. In this chapter, I will describe two recent examples---one having to do with selecting good columns or features from a (DNA Single Nucleotide Polymorphism) data matrix, and the other having to do with selecting good clusters or communities from a data graph (representing a social or information network)---that drew on ideas from both areas and that may serve as a model for exploiting complementary algorithmic and statistical perspectives in order to solve applied large-scale data analysis problems.
    Comment: 33 pages. To appear in Uwe Naumann and Olaf Schenk, editors, "Combinatorial Scientific Computing," Chapman and Hall/CRC Press, 201
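    The column-selection problem mentioned in this abstract is commonly approached by sampling columns in proportion to their statistical leverage scores. The short Python sketch below is my own illustration under that assumption, not code from the chapter; the function name and the toy matrix standing in for a SNP data matrix are invented for the example.

```python
# Illustrative sketch: select columns of a data matrix by sampling
# proportionally to their rank-k statistical leverage scores.
import numpy as np

def leverage_score_column_sample(A, k, c, rng=None):
    """Sample c distinct columns of A with probabilities given by rank-k leverage scores."""
    rng = np.random.default_rng(rng)
    # Thin SVD; the top-k right singular vectors define the column leverage scores.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    lev = np.sum(Vt[:k, :] ** 2, axis=0)      # leverage score of each column
    probs = lev / lev.sum()                   # normalize to a sampling distribution
    idx = rng.choice(A.shape[1], size=c, replace=False, p=probs)
    return np.sort(idx)

# Toy usage: a 100 x 500 random matrix standing in for a SNP data matrix.
A = np.random.default_rng(0).standard_normal((100, 500))
cols = leverage_score_column_sample(A, k=10, c=20, rng=1)
print(cols)
```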

    Approximate Computation and Implicit Regularization for Very Large-scale Data Analysis

    Database theory and database practice are typically the domain of computer scientists who adopt what may be termed an algorithmic perspective on their data. This perspective is very different from the more statistical perspective adopted by statisticians, scientific computing researchers, machine learners, and others who work on what may be broadly termed statistical data analysis. In this article, I will address fundamental aspects of this algorithmic-statistical disconnect, with an eye to bridging the gap between these two very different approaches. A concept that lies at the heart of this disconnect is that of statistical regularization, a notion that has to do with how robust the output of an algorithm is to the noise properties of the input data. Although it is nearly completely absent from computer science, which historically has taken the input data as given and modeled algorithms discretely, regularization in one form or another is central to nearly every application domain that applies algorithms to noisy data. By using several case studies, I will illustrate, both theoretically and empirically, the nonobvious fact that approximate computation, in and of itself, can implicitly lead to statistical regularization. This and other recent work suggests that, by exploiting in a more principled way the statistical properties implicit in worst-case algorithms, one can in many cases satisfy the bicriteria of having algorithms that are scalable to very large-scale databases and that also have good inferential or predictive properties.
    Comment: To appear in the Proceedings of the 2012 ACM Symposium on Principles of Database Systems (PODS 2012)
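    One standard way to see how approximate computation can act as implicit regularization is early stopping of an iterative solver. The sketch below is my own illustration of that idea (it is not one of the article's case studies): an early-stopped gradient-descent fit of an unregularized least-squares objective ends up close to an explicitly regularized ridge solution. The dimensions, step size, iteration count, and ridge parameter are arbitrary choices for the demo.

```python
# Illustrative sketch: early-stopped gradient descent on plain least squares
# behaves like an explicitly regularized (ridge) solution -- i.e. approximate
# computation acting as implicit regularization.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# Gradient descent on the unregularized least-squares objective, stopped early.
step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1 / sigma_max(X)^2
beta_gd = np.zeros(p)
for _ in range(50):                           # few iterations = rough approximation
    beta_gd -= step * X.T @ (X @ beta_gd - y)

# Explicit ridge solution for comparison.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print("||gd - ridge||   =", np.linalg.norm(beta_gd - beta_ridge))
print("||gd - exact LS|| =",
      np.linalg.norm(beta_gd - np.linalg.lstsq(X, y, rcond=None)[0]))
```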

    A Man’s Right to Choose His Surname in Marriage: A Proposal

    [...] a brief history of marital and naming practices will outline how these two concepts have shifted to a primarily private issue today, as compared with the Middle Ages, when they were primarily public issues highly concerned with property matters. [...] naming involves important issues in the construction of one's identity

    EEOC and Jefferson v. Area Erectors, Inc.


    A Statistical Perspective on Randomized Sketching for Ordinary Least-Squares

    We consider statistical as well as algorithmic aspects of solving large-scale least-squares (LS) problems using randomized sketching algorithms. For an LS problem with input data $(X, Y) \in \mathbb{R}^{n \times p} \times \mathbb{R}^n$, sketching algorithms use a sketching matrix $S \in \mathbb{R}^{r \times n}$ with $r \ll n$. Then, rather than solving the LS problem using the full data $(X, Y)$, sketching algorithms solve the LS problem using only the sketched data $(SX, SY)$. Prior work has typically adopted an algorithmic perspective, in that it has made no statistical assumptions on the input $X$ and $Y$, and instead it has been assumed that the data $(X, Y)$ are fixed and worst-case (WC). Prior results show that, when using sketching matrices such as random projections and leverage-score sampling algorithms, with $p < r \ll n$, the WC error is the same as solving the original problem, up to a small constant. From a statistical perspective, we typically consider the mean-squared error performance of randomized sketching algorithms when the data $(X, Y)$ are generated according to a statistical model $Y = X\beta + \epsilon$, where $\epsilon$ is a noise process. We provide a rigorous comparison of both perspectives, leading to insights on how they differ. To do this, we first develop a framework for assessing algorithmic and statistical aspects of randomized sketching methods. We then consider the statistical prediction efficiency (PE) and the statistical residual efficiency (RE) of the sketched LS estimator, and we use our framework to provide upper bounds for several types of random projection and random sampling sketching algorithms. Among other results, we show that the RE can be upper bounded when $p < r \ll n$, while the PE typically requires the sample size $r$ to be substantially larger. Lower bounds developed in subsequent results show that our upper bounds on PE cannot be improved.
    Comment: 27 pages, 5 figures
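    As a rough illustration of the setup described in this abstract, the sketch below solves an LS problem on the sketched data $(SX, SY)$ using a Gaussian random projection for $S$ (one of several sketching matrices the paper considers). The dimensions, noise level, and the RE/PE-style ratios printed at the end are my own illustrative choices, not the paper's exact definitions or bounds.

```python
# Minimal sketch of sketched least squares: solve with (SX, SY) instead of (X, Y).
# A Gaussian random projection is used for S; dimensions and noise are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 10_000, 20, 500          # r chosen so that p < r << n
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
Y = X @ beta + rng.standard_normal(n)          # statistical model Y = X beta + eps

beta_full = np.linalg.lstsq(X, Y, rcond=None)[0]

S = rng.standard_normal((r, n)) / np.sqrt(r)   # Gaussian sketching matrix
beta_sketch = np.linalg.lstsq(S @ X, S @ Y, rcond=None)[0]

# Residual-efficiency-style ratio: residual norm of sketched vs. full solution.
re = np.linalg.norm(Y - X @ beta_sketch) / np.linalg.norm(Y - X @ beta_full)
# Prediction-efficiency-style ratio: error in recovering the noiseless mean X beta.
pe = np.linalg.norm(X @ (beta_sketch - beta)) / np.linalg.norm(X @ (beta_full - beta))
print(f"RE-style ratio ~ {re:.3f}, PE-style ratio ~ {pe:.3f}")
```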