    Dimension Extractors and Optimal Decompression

    A *dimension extractor* is an algorithm designed to increase the effective dimension -- i.e., the amount of computational randomness -- of an infinite binary sequence, in order to turn a "partially random" sequence into a "more random" sequence. Extractors are exhibited for various effective dimensions, including constructive, computable, space-bounded, time-bounded, and finite-state dimension. Using similar techniques, the Kucera-Gacs theorem is examined from the perspective of decompression: every infinite sequence S is Turing reducible to a Martin-Loef random sequence R such that the asymptotic number of bits of R needed to compute n bits of S, divided by n, is precisely the constructive dimension of S, and this is shown to be the optimal ratio of query bits to computed bits achievable with Turing reductions. The extractors and decompressors developed here lead directly to new characterizations of some effective dimensions in terms of optimal decompression by Turing reductions.

    Comment: This report was combined with the conference paper "Every Sequence is Decompressible from a Random One" (cs.IT/0511074, at http://dx.doi.org/10.1007/11780342_17), and both titles were changed, with the conference paper incorporated as Section 5 of the new combined paper. The combined paper was accepted to the journal Theory of Computing Systems, as part of a special issue of invited papers from the second conference on Computability in Europe, 2006.
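
    In display form, the decompression ratio above can be sketched as follows, where u(n) is an illustrative name (not the paper's notation) for the number of bits of R the reduction queries in order to compute the first n bits of S:

```latex
% Sketch of the optimal-decompression ratio stated in the abstract.
% u(n) is an illustrative name for the use of the reduction: the number
% of bits of the Martin-Loef random sequence R queried to compute the
% first n bits of S.
\[
  \lim_{n \to \infty} \frac{u(n)}{n} \;=\; \dim(S),
\]
% where dim(S) is the constructive dimension of S; per the abstract, no
% Turing reduction from S to a random sequence achieves a smaller
% asymptotic ratio of query bits to computed bits.
```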

    Constructive Dimension and Turing Degrees

    This paper examines the constructive Hausdorff and packing dimensions of Turing degrees. The main result is that every infinite sequence S with constructive Hausdorff dimension dim_H(S) and constructive packing dimension dim_P(S) is Turing equivalent to a sequence R with dim_H(R) >= (dim_H(S) / dim_P(S)) - epsilon, for arbitrary epsilon > 0. Furthermore, if dim_P(S) > 0, then dim_P(R) >= 1 - epsilon. The reduction thus serves as a *randomness extractor* that increases the algorithmic randomness of S, as measured by constructive dimension. A number of applications of this result shed new light on the constructive dimensions of Turing degrees. A lower bound of dim_H(S) / dim_P(S) is shown to hold for the Turing degree of any sequence S. A new proof is given of a previously known zero-one law for the constructive packing dimension of Turing degrees. It is also shown that, for any regular sequence S (that is, one with dim_H(S) = dim_P(S)) such that dim_H(S) > 0, the Turing degree of S has constructive Hausdorff and packing dimension equal to 1. Finally, it is shown that no single Turing reduction can be a universal constructive Hausdorff dimension extractor, and that bounded Turing reductions cannot extract constructive Hausdorff dimension. We also exhibit sequences on which weak truth-table and bounded Turing reductions differ in their ability to extract dimension.

    Comment: The version of this paper appearing in Theory of Computing Systems, 45(4):740-755, 2009, had an error in the proof of Theorem 2.4, due to insufficient care with the choice of delta. This version modifies that proof to fix the error.
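
    Restated in display form (a sketch based directly on the statements above), the main result and a small worked instance:

```latex
% Main result, restated: for every epsilon > 0, S is Turing equivalent
% to some R whose constructive Hausdorff dimension nearly attains the
% ratio dim_H(S) / dim_P(S).
\[
  \dim_H(R) \;\ge\; \frac{\dim_H(S)}{\dim_P(S)} - \epsilon,
  \qquad
  \dim_P(R) \;\ge\; 1 - \epsilon \quad (\text{if } \dim_P(S) > 0).
\]
% Worked instance: if dim_H(S) = 0.3 and dim_P(S) = 0.5, then the Turing
% degree of S contains sequences of constructive Hausdorff dimension at
% least 0.3 / 0.5 - epsilon = 0.6 - epsilon, for every epsilon > 0.
```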

    Three Puzzles on Mathematics, Computation, and Games

    In this lecture I will talk about three mathematical puzzles involving mathematics and computation that have preoccupied me over the years. The first puzzle is to understand the amazing success of the simplex algorithm for linear programming. The second puzzle is about errors made when votes are counted during elections. The third puzzle is: are quantum computers possible?

    Comment: ICM 2018 plenary lecture, Rio de Janeiro, 36 pages, 7 figures.

    Enabling Factor Analysis on Thousand-Subject Neuroimaging Datasets

    The scale of functional magnetic resonance imaging data is rapidly increasing as large multi-subject datasets become widely available and high-resolution scanners are adopted. The inherent low-dimensionality of the information in this data has led neuroscientists to consider factor analysis methods to extract and analyze the underlying brain activity. In this work, we consider two recent multi-subject factor analysis methods: the Shared Response Model and Hierarchical Topographic Factor Analysis. We perform analytical, algorithmic, and code optimizations to enable multi-node parallel implementations to scale. Single-node improvements result in 99x and 1812x speedups on these two methods, enabling the processing of larger datasets. Our distributed implementations show strong scaling of 3.3x and 5.5x, respectively, with 20 nodes on real datasets. We also demonstrate weak scaling on a synthetic dataset with 1024 subjects, on up to 1024 nodes and 32,768 cores.
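
    The abstract does not spell out the Shared Response Model computation, so the following is a minimal NumPy sketch of the deterministic (orthogonal) SRM alternating update, under the standard formulation in which each subject i has data X_i (voxels x time) and an orthonormal basis W_i mapping a response S shared across subjects into that subject's voxel space. The names (X, W, srm_update) are illustrative, not the paper's API:

```python
# Minimal sketch of one alternating-minimization step of deterministic
# SRM. Assumes each W_i has orthonormal columns, so projecting with
# W_i.T and averaging recovers the shared response.
import numpy as np

def srm_update(X, W):
    """One alternating update: refit shared response S, then bases W."""
    # Shared response: average of each subject's data projected into the
    # shared feature space.
    S = np.mean([Wi.T @ Xi for Wi, Xi in zip(W, X)], axis=0)
    # Subject bases: orthogonal Procrustes solution W_i = U V^T from the
    # SVD of X_i S^T, the closest orthonormal map onto the shared response.
    W_new = []
    for Xi in X:
        U, _, Vt = np.linalg.svd(Xi @ S.T, full_matrices=False)
        W_new.append(U @ Vt)
    return W_new, S

# Toy usage: 3 subjects, 100 voxels, 200 time points, 10 shared features.
rng = np.random.default_rng(0)
X = [rng.standard_normal((100, 200)) for _ in range(3)]
W = [np.linalg.qr(rng.standard_normal((100, 10)))[0] for _ in X]
for _ in range(10):  # a few iterations suffice on toy data
    W, S = srm_update(X, W)
```

    The orthogonality of each W_i is what makes the update cheap: both the projection step and the Procrustes step are small dense linear-algebra kernels, which is why single-node optimization and multi-node distribution of this method pay off at scale.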

    Mutual Dimension

    We define the lower and upper mutual dimensions mdim(x:y) and Mdim(x:y) between any two points x and y in Euclidean space. Intuitively, these are the lower and upper densities of the algorithmic information shared by x and y. We show that these quantities satisfy the main desiderata for a satisfactory measure of mutual algorithmic information. Our main theorem, the data processing inequality for mutual dimension, says that, if f: R^m -> R^n is computable and Lipschitz, then the inequalities mdim(f(x):y) <= mdim(x:y) and Mdim(f(x):y) <= Mdim(x:y) hold for all x in R^m and y in R^t. We use this inequality, and related inequalities that we prove in like fashion, to establish conditions under which various classes of computable functions on Euclidean space preserve or otherwise transform mutual dimensions between points.

    Comment: This article is 29 pages and has been submitted to ACM Transactions on Computation Theory. A preliminary version of part of this material was reported at the 2013 Symposium on Theoretical Aspects of Computer Science in Kiel, Germany.
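
    In display form, the data processing inequality reads:

```latex
% Data processing inequality for mutual dimension: a computable
% Lipschitz map f : R^m -> R^n cannot increase the density of
% algorithmic information shared with any other point y.
\[
  \operatorname{mdim}(f(x):y) \;\le\; \operatorname{mdim}(x:y),
  \qquad
  \operatorname{Mdim}(f(x):y) \;\le\; \operatorname{Mdim}(x:y),
\]
% for all x in R^m and y in R^t.
```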