6 research outputs found

    The xyz algorithm for fast interaction search in high-dimensional data

    No full text
    When performing regression on a data set with p variables, it is often of interest to go beyond using main linear effects and include interactions as products between individual variables. For small-scale problems, these interactions can be computed explicitly, but this leads to a computational complexity of at least O(p^2) if done naively. This cost can be prohibitive if p is very large. We introduce a new randomised algorithm that is able to discover interactions with high probability and under mild conditions has a runtime that is subquadratic in p. We show that strong interactions can be discovered in almost linear time, whilst finding weaker interactions requires O(p^α) operations for 1 < α < 2 depending on their strength. The underlying idea is to transform interaction search into a closest-pair problem which can be solved efficiently in subquadratic time. The algorithm is called xyz and is implemented in the language R. We demonstrate its efficiency for application to genome-wide association studies, where more than 10^11 interactions can be screened in under 280 seconds with a single-core 1.2 GHz CPU.
    ISSN: 1532-4435; ISSN: 1533-792
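    The reduction described above, turning interaction search into a closest-pair problem, can be illustrated with a short sketch. The following Python/NumPy snippet is an illustration of the idea rather than the authors' R implementation: for data with entries in {-1, +1}, a strong interaction between columns j and k means that X[:, j] and Y * X[:, k] agree (or disagree) on most rows, so hashing columns on small random subsets of rows surfaces candidate pairs without scanning all O(p^2) combinations. The function name xyz_sketch and all parameter defaults are illustrative assumptions.

        import numpy as np

        def xyz_sketch(X, Y, n_rounds=20, subsample=10, threshold=0.3, seed=0):
            # Illustrative sketch: interaction search as a closest-pair problem.
            # X: (n, p) array with entries in {-1, +1}; Y: (n,) array in {-1, +1}.
            # Returns candidate pairs (j, k), j < k, with |sum_i Y_i X_ij X_ik| / n > threshold.
            rng = np.random.default_rng(seed)
            X = np.asarray(X, dtype=np.int8)
            Y = np.asarray(Y, dtype=np.int8)
            n, p = X.shape
            W = X * Y[:, None]   # a strong pair (j, k) makes X[:, j] and W[:, k] (nearly) equal up to sign
            candidates = set()
            for _ in range(n_rounds):
                rows = rng.choice(n, size=min(subsample, n), replace=False)
                buckets = {}     # hash every column of X by its values on the random row subset
                for j in range(p):
                    buckets.setdefault(X[rows, j].tobytes(), []).append(j)
                for k in range(p):
                    # columns of W (or -W) landing in the same bucket are candidate partners of k
                    hits = buckets.get(W[rows, k].tobytes(), []) + buckets.get((-W[rows, k]).tobytes(), [])
                    for j in hits:
                        if j != k:
                            candidates.add((min(j, k), max(j, k)))
            # verify each candidate exactly; this costs only O(n) per surviving pair
            return {(j, k) for (j, k) in candidates
                    if abs(np.sum(Y * X[:, j] * X[:, k])) / n > threshold}

    Repeating the hashing over many small row subsets is what makes strong interactions easy to catch, while weaker ones need more rounds; this mirrors the abstract's trade-off between almost linear time for strong interactions and O(p^α) for weaker ones.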

    Right singular vector projection graphs: fast high dimensional covariance matrix estimation under latent confounding

    No full text
    We consider the problem of estimating a high dimensional p×p covariance matrix Σ, given n observations of confounded data with covariance Σ + ΓΓ^T, where Γ is an unknown p×q matrix of latent factor loadings. We propose a simple and scalable estimator based on the projection onto the right singular vectors of the observed data matrix, which we call right singular vector projection (RSVP). Our theoretical analysis of this method reveals that, in contrast with approaches based on the removal of principal components, RSVP can cope well with settings where the smallest eigenvalue of Γ^TΓ is relatively close to the largest eigenvalue of Σ, as well as when the eigenvalues of Γ^TΓ are diverging fast. RSVP does not require knowledge or estimation of the number of latent factors q, but it recovers Σ only up to an unknown positive scale factor. We argue that this suffices in many applications, e.g. if an estimate of the correlation matrix is desired. We also show that, by using subsampling, we can further improve the performance of the method. We demonstrate the favourable performance of RSVP through simulation experiments and an analysis of gene expression data sets collated by the GTEx consortium.
    ISSN: 1369-7412; ISSN: 0035-9246; ISSN: 1467-986
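    As a rough illustration of the estimator sketched in the abstract, the following Python/NumPy snippet assumes that "projection onto the right singular vectors" means the p×p projection V V^T (equivalently X^T (X X^T)^+ X) built from the centred data matrix; the function name rsvp_sketch, the centring step and the final rescaling to a correlation-type matrix (in the spirit of the remark that Σ is recovered only up to scale) are illustrative choices, not details taken from the paper.

        import numpy as np

        def rsvp_sketch(X):
            # Illustrative sketch: X is an n x p data matrix with n < p.
            # One plausible reading of the abstract: estimate Sigma, up to scale, by the
            # projection onto the span of the right singular vectors of the centred X.
            X = np.asarray(X, dtype=float)
            X = X - X.mean(axis=0)                            # centre the observations
            _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt: right singular vectors
            S = Vt.T @ Vt                                     # p x p projection onto their span
            d = np.sqrt(np.maximum(np.diag(S), 1e-12))        # guard against zero diagonal entries
            return S / np.outer(d, d)                         # scale-free, correlation-type estimate

    The subsampling refinement mentioned in the abstract is not reproduced here.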