18,635 research outputs found

    Accelerating Permutation Testing in Voxel-wise Analysis through Subspace Tracking: A new plugin for SnPM

    Permutation testing is a non-parametric method for obtaining the max null distribution used to compute corrected p-values that provide strong control of false positives. In neuroimaging, however, the computational burden of running such an algorithm can be significant. We find that by viewing the permutation testing procedure as the construction of a very large permutation testing matrix, T, one can exploit structural properties derived from the data and the test statistics to reduce the runtime under certain conditions. In particular, we see that T is low-rank plus a low-variance residual. This makes T a good candidate for low-rank matrix completion, where only a very small number of entries of T (~0.35% of all entries in our experiments) have to be computed to obtain a good estimate. Based on this observation, we present RapidPT, an algorithm that efficiently recovers the max null distribution commonly obtained through regular permutation testing in voxel-wise analysis. We present an extensive validation on a synthetic dataset and four varying-sized datasets against two baselines: Statistical NonParametric Mapping (SnPM13) and a standard permutation testing implementation (referred to as NaivePT). We find that RapidPT achieves its best runtime performance on medium-sized datasets (50 ≤ n ≤ 200), with speedups of 1.5x-38x (vs. SnPM13) and 20x-1000x (vs. NaivePT). For larger datasets (n ≥ 200), RapidPT outperforms NaivePT (6x-200x) on all datasets, and provides large speedups over SnPM13 when more than 10000 permutations are needed (2x-15x). The implementation is a standalone toolbox and is also integrated within SnPM13, able to leverage multi-core architectures when available. Comment: 36 pages, 16 figures
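
    For readers unfamiliar with the procedure being accelerated, below is a minimal sketch (not the RapidPT or SnPM13 implementation) of the standard max-statistic permutation test for a two-group voxel-wise comparison; each iteration fills one column of the permutation testing matrix T, and corrected p-values come from the max null distribution. Function and variable names are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def max_null_permutation_test(data, labels, n_perm=1000, seed=0):
    """Standard max-statistic permutation test for a two-group voxel-wise comparison.

    data   : (n_subjects, n_voxels) array of imaging measurements
    labels : boolean array of length n_subjects marking group membership
    Returns the max null distribution and FWER-corrected p-values.
    """
    rng = np.random.default_rng(seed)

    def t_map(lab):
        # Voxel-wise two-sample t statistics for a given labeling.
        return stats.ttest_ind(data[lab], data[~lab], axis=0).statistic

    observed = t_map(labels)
    max_null = np.empty(n_perm)
    for i in range(n_perm):                       # one column of T per permutation
        max_null[i] = np.max(t_map(rng.permutation(labels)))

    # Corrected p-value: how often the permutation maximum exceeds the observed statistic.
    p_corr = (1 + (max_null[None, :] >= observed[:, None]).sum(axis=1)) / (1 + n_perm)
    return max_null, p_corr
```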

    Speeding up Permutation Testing in Neuroimaging

    Multiple hypothesis testing is a significant problem in nearly all neuroimaging studies. In order to correct for this phenomenon, we require a reliable estimate of the Family-Wise Error Rate (FWER). The well-known Bonferroni correction method, while simple to implement, is quite conservative and can substantially under-power a study because it ignores dependencies between test statistics. Permutation testing, on the other hand, is an exact, non-parametric method of estimating the FWER for a given α-threshold, but for acceptably low thresholds the computational burden can be prohibitive. In this paper, we show that permutation testing in fact amounts to populating the columns of a very large matrix P. By analyzing the spectrum of this matrix, under certain conditions, we see that P has a low-rank plus a low-variance residual decomposition, which makes it suitable for highly sub-sampled matrix completion methods that need only on the order of 0.5% of the entries. Based on this observation, we propose a novel permutation testing methodology which offers a large speedup without sacrificing the fidelity of the estimated FWER. Our evaluations on four different neuroimaging datasets show that a computational speedup factor of roughly 50x can be achieved while recovering the FWER distribution up to very high accuracy. Further, we show that the estimated α-threshold is also recovered faithfully, and is stable. Comment: NIPS 1
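
    The following is a stripped-down sketch of the sub-sampled recovery idea described in these two abstracts: a few fully computed permutations provide a basis for the columns of P, and every remaining column is reconstructed from a small random subset of its entries by least squares. It omits the low-variance residual modeling the papers describe; all names and the stat_fn(perm, voxel_idx) interface are assumptions for illustration, not the authors' API.

```python
import numpy as np

def subsampled_max_null(stat_fn, perms, n_voxels, rank=30, sample_frac=0.005, seed=0):
    """Recover the max null distribution while computing only a small fraction of P.

    stat_fn(perm, voxel_idx) -> test statistics for the given permutation at voxel_idx.
    perms                    -> sequence of permutations (the columns of P).
    """
    rng = np.random.default_rng(seed)
    all_vox = np.arange(n_voxels)

    # Training phase: compute a few columns of P exactly and extract an orthonormal basis.
    train = np.column_stack([stat_fn(p, all_vox) for p in perms[:rank]])
    U, _, _ = np.linalg.svd(train, full_matrices=False)      # U: (n_voxels, rank)
    max_null = list(train.max(axis=0))

    # Recovery phase: for each remaining permutation, compute a small random subset of
    # entries, fit basis coefficients by least squares, and reconstruct the full column.
    n_samp = max(rank, int(sample_frac * n_voxels))
    for p in perms[rank:]:
        idx = rng.choice(n_voxels, size=n_samp, replace=False)
        coeff, *_ = np.linalg.lstsq(U[idx], stat_fn(p, idx), rcond=None)
        max_null.append(float((U @ coeff).max()))
    return np.asarray(max_null)
```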

    Dense Error Correction for Low-Rank Matrices via Principal Component Pursuit

    We consider the problem of recovering a low-rank matrix when some of its entries, whose locations are not known a priori, are corrupted by errors of arbitrarily large magnitude. It has recently been shown that this problem can be solved efficiently and effectively by a convex program named Principal Component Pursuit (PCP), provided that the fraction of corrupted entries and the rank of the matrix are both sufficiently small. In this paper, we extend that result to show that the same convex program, with a slightly improved weighting parameter, exactly recovers the low-rank matrix even if "almost all" of its entries are arbitrarily corrupted, provided the signs of the errors are random. We corroborate our result with simulations on randomly generated matrices and errors. Comment: Submitted to ISIT 201
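
    Principal Component Pursuit is the convex program min ||L||_* + λ||S||_1 subject to L + S = M. Below is a minimal sketch of the standard augmented-Lagrangian iteration for this program; it uses the common default weighting λ = 1/sqrt(max(m, n)) rather than the improved weighting studied in this paper, and the μ parameter and stopping rule are illustrative choices.

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def pcp(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose M into a low-rank L and a sparse S via Principal Component Pursuit."""
    M = np.asarray(M, dtype=float)
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)        # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)     # sparse (error) update
        R = M - L - S                            # residual of the constraint L + S = M
        Y += mu * R                              # dual ascent
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```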

    D3-instantons, Mock Theta Series and Twistors

    The D-instanton corrected hypermultiplet moduli space of type II string theory compactified on a Calabi-Yau threefold is known in the type IIA picture to be determined in terms of the generalized Donaldson-Thomas invariants, through a twistorial construction. At the same time, in the mirror type IIB picture, and in the limit where only D3-D1-D(-1)-instanton corrections are retained, it should carry an isometric action of the S-duality group SL(2,Z). We prove that this is the case in the one-instanton approximation, by constructing a holomorphic action of SL(2,Z) on the linearized twistor space. Using the modular invariance of the D4-D2-D0 black hole partition function, we show that the standard Darboux coordinates in twistor space have modular anomalies controlled by period integrals of a Siegel-Narain theta series, which can be canceled by a contact transformation generated by a holomorphic mock theta series. Comment: 42 pages; discussion of isometries is amended; misprints corrected

    Involution and Constrained Dynamics I: The Dirac Approach

    We study the theory of systems with constraints from the point of view of the formal theory of partial differential equations. For finite-dimensional systems we show that the Dirac algorithm completes the equations of motion to an involutive system. We discuss the implications of this identification for field theories and argue that the involution analysis is more general and flexible than the Dirac approach. We also derive intrinsic expressions for the number of degrees of freedom. Comment: 28 pages, latex, no figures
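
    For context, the textbook Dirac counting that such intrinsic expressions refine relates the number of physical degrees of freedom to the numbers of first- and second-class constraints; the formula below is the standard one, not the intrinsic expression derived in the paper.

```latex
% N : dimension of the phase space
% F : number of independent first-class constraints
% S : number of independent second-class constraints
\#\,\text{physical degrees of freedom} \;=\; \tfrac{1}{2}\bigl(N - 2F - S\bigr)
```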

    Rank Minimization over Finite Fields: Fundamental Limits and Coding-Theoretic Interpretations

    This paper establishes information-theoretic limits in estimating a finite field low-rank matrix given random linear measurements of it. These linear measurements are obtained by taking inner products of the low-rank matrix with random sensing matrices. Necessary and sufficient conditions on the number of measurements required are provided. It is shown that these conditions are sharp and the minimum-rank decoder is asymptotically optimal. The reliability function of this decoder is also derived by appealing to de Caen's lower bound on the probability of a union. The sufficient condition also holds when the sensing matrices are sparse - a scenario that may be amenable to efficient decoding. More precisely, it is shown that if the n × n sensing matrices contain, on average, Ω(n log n) nonzero entries, the number of measurements required is the same as that when the sensing matrices are dense and contain entries drawn uniformly at random from the field. Analogies are drawn between the above results and rank-metric codes in the coding theory literature. In fact, we are also strongly motivated by understanding when minimum rank distance decoding of random rank-metric codes succeeds. To this end, we derive distance properties of equiprobable and sparse rank-metric codes. These distance properties provide a precise geometric interpretation of the fact that the sparse ensemble requires as few measurements as the dense one. Finally, we provide a non-exhaustive procedure to search for the unknown low-rank matrix. Comment: Accepted to the IEEE Transactions on Information Theory; Presented at IEEE International Symposium on Information Theory (ISIT) 201
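
    As a toy-scale illustration of the measurement model y_k = <A_k, X> over a finite field and of minimum-rank decoding, the sketch below works over GF(2) with an exhaustive search; the dimensions, seed, and brute-force search are illustrative assumptions only and are unrelated to the non-exhaustive procedure proposed in the paper.

```python
import itertools
import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    A = A.copy() % 2
    rank = 0
    rows, cols = A.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]          # swap pivot row into place
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]                      # eliminate column c from row r
        rank += 1
    return rank

n, r, m = 3, 1, 7                                    # matrix size, true rank, # measurements
rng = np.random.default_rng(0)
X = (rng.integers(0, 2, (n, r)) @ rng.integers(0, 2, (r, n))) % 2   # low-rank matrix over GF(2)
A = rng.integers(0, 2, (m, n, n))                                   # dense binary sensing matrices
y = np.einsum('kij,ij->k', A, X) % 2                                # y_k = <A_k, X> over GF(2)

# Minimum-rank decoding by brute force: among all matrices consistent with the
# measurements, keep one of smallest GF(2) rank.
best = None
for bits in itertools.product((0, 1), repeat=n * n):
    Z = np.array(bits).reshape(n, n)
    if np.array_equal(np.einsum('kij,ij->k', A, Z) % 2, y):
        if best is None or gf2_rank(Z) < gf2_rank(best):
            best = Z
print(gf2_rank(best), np.array_equal(best, X))       # recovered rank, exact-recovery flag
```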