
    Mutual Dimension

    We define the lower and upper mutual dimensions $mdim(x:y)$ and $Mdim(x:y)$ between any two points $x$ and $y$ in Euclidean space. Intuitively, these are the lower and upper densities of the algorithmic information shared by $x$ and $y$. We show that these quantities satisfy the main desiderata for a satisfactory measure of mutual algorithmic information. Our main theorem, the data processing inequality for mutual dimension, says that, if $f:\mathbb{R}^m \rightarrow \mathbb{R}^n$ is computable and Lipschitz, then the inequalities $mdim(f(x):y) \leq mdim(x:y)$ and $Mdim(f(x):y) \leq Mdim(x:y)$ hold for all $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^t$. We use this inequality, and related inequalities that we prove in like fashion, to establish conditions under which various classes of computable functions on Euclidean space preserve or otherwise transform mutual dimensions between points.
    Comment: This article is 29 pages and has been submitted to ACM Transactions on Computation Theory. A preliminary version of part of this material was reported at the 2013 Symposium on Theoretical Aspects of Computer Science in Kiel, Germany.
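
    For orientation, the quantities can be sketched in the standard Kolmogorov-complexity notation; this formulation is assumed from the literature, not spelled out in the abstract. With $K_r(x)$ the complexity of $x$ at precision $r$:

```latex
% Sketch, assuming the standard precision-r complexity K_r(x);
% the first identity holds up to the usual O(log r) terms.
\[
  I_r(x:y) \;=\; K_r(x) + K_r(y) - K_r(x,y),
\]
\[
  mdim(x:y) \;=\; \liminf_{r\to\infty} \frac{I_r(x:y)}{r},
  \qquad
  Mdim(x:y) \;=\; \limsup_{r\to\infty} \frac{I_r(x:y)}{r}.
\]
```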

    Dimension Reduction by Mutual Information Discriminant Analysis

    In the past few decades, researchers have proposed many discriminant analysis (DA) algorithms for the study of high-dimensional data in a variety of problems. Most DA algorithms for feature extraction are based on transformations that simultaneously maximize the between-class scatter matrix and minimize the within-class scatter matrix. This paper presents a novel DA algorithm for feature extraction using mutual information (MI). Because it is not easy to obtain an accurate estimate of high-dimensional MI, we propose an efficient feature-extraction method based on one-dimensional MI estimates. We refer to this algorithm as mutual information discriminant analysis (MIDA). The performance of the proposed method was evaluated on UCI databases. The results indicate that MIDA provides robust performance across data sets with different characteristics, and that it always performs better than, or at least comparably to, the best-performing algorithms.
    Comment: 13 pages, 3 tables, International Journal of Artificial Intelligence & Applications
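
    The abstract leaves the algorithm itself to the paper, so the following is only a sketch of the underlying idea: score candidate projection directions by the one-dimensional MI between the projected feature and the class label, instead of estimating a high-dimensional MI. The function name, the random candidate search, and the deflation step are illustrative assumptions, not the published MIDA procedure.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def greedy_mi_projections(X, y, n_components=2, n_candidates=200, seed=0):
    """Illustrative sketch (not the published MIDA algorithm): pick
    projection directions whose 1-D outputs carry high mutual
    information with the class label y."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = []
    for _ in range(n_components):
        # Random candidate directions on the unit sphere.
        C = rng.normal(size=(n_candidates, d))
        C /= np.linalg.norm(C, axis=1, keepdims=True)
        # Score each candidate by the 1-D MI of its projection with y.
        scores = [mutual_info_classif(X @ w.reshape(-1, 1), y)[0] for w in C]
        w = C[int(np.argmax(scores))]
        W.append(w)
        # Deflate the data so later components capture new information.
        X = X - np.outer(X @ w, w)
    return np.array(W)  # rows are projection directions
```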

    Recurrence plot statistics and the effect of embedding

    Recurrence plots provide a graphical representation of the recurrent patterns in a time series, the quantification of which is a relatively new field. Here we derive analytical expressions which relate the values of key statistics, notably determinism and the entropy of the line length distribution, to the correlation sum as a function of embedding dimension. These expressions are obtained by deriving the transformation which generates an embedded recurrence plot from an unembedded plot. A single unembedded recurrence plot thus provides the statistics of all possible embedded recurrence plots. If the correlation sum scales exponentially with embedding dimension, we show that these statistics are determined entirely by the exponent of the exponential. This explains the results of Iwanski and Bradley (Chaos 8 [1998] 861-871), who found that certain recurrence plot statistics are apparently invariant to embedding dimension for certain low-dimensional systems. We also examine the relationship between the mutual information content of two time series and the common recurrent structure seen in their recurrence plots. This allows time-localized contributions to mutual information to be visualized. This technique is demonstrated using geomagnetic index data; we show that the AU and AL geomagnetic indices share half their information, and we find the timescale on which mutual features appear.
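
    A minimal sketch of the objects involved, under assumptions the abstract does not fix: scalar data, the maximum norm (under which a recurrence in embedding dimension m is the pointwise AND of m shifted copies of the unembedded plot), and the standard recurrence-quantification definition of determinism with minimum line length lmin.

```python
import numpy as np

def recurrence_plot(x, eps):
    """Unembedded recurrence plot: R[i, j] = 1 iff |x_i - x_j| < eps."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(np.uint8)

def embed_rp(R, m, tau=1):
    """Embedded plot from an unembedded one: with the max norm, a
    recurrence at embedding dimension m requires recurrence at all m
    delayed coordinates, i.e. the AND of shifted copies of R."""
    n = len(R) - (m - 1) * tau
    Rm = np.ones((n, n), dtype=np.uint8)
    for k in range(m):
        Rm &= R[k * tau:k * tau + n, k * tau:k * tau + n]
    return Rm

def determinism(R, lmin=2):
    """DET: fraction of recurrence points lying on diagonal lines of
    length >= lmin (main diagonal included here for simplicity)."""
    n = len(R)
    counts = {}                                   # line length -> count
    for d in range(-(n - 1), n):
        run = 0
        for v in list(np.diagonal(R, d)) + [0]:   # 0-sentinel ends runs
            if v:
                run += 1
            elif run:
                counts[run] = counts.get(run, 0) + 1
                run = 0
    total = sum(l * c for l, c in counts.items())
    longs = sum(l * c for l, c in counts.items() if l >= lmin)
    return longs / total if total else 0.0
```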

    A theoretical model of neuronal population coding of stimuli with both continuous and discrete dimensions

    In a recent study, the initial rise of the mutual information between the firing rates of N neurons and a set of p discrete stimuli has been analytically evaluated, under the assumption that neurons fire independently of one another to each stimulus and that each conditional distribution of firing rates is Gaussian. Yet real stimuli or behavioural correlates are high-dimensional, with both discrete and continuously varying features. Moreover, the Gaussian approximation implies negative firing rates, which is biologically implausible. Here, we generalize the analysis to the case where the stimulus or behavioural correlate has both a discrete and a continuous dimension. In the case of large noise we evaluate the mutual information up to the quadratic approximation as a function of population size. Then we consider a more realistic distribution of firing rates, truncated at zero, and we prove that the resulting correction, with respect to the Gaussian firing rates, can be expressed simply as a renormalization of the noise parameter. Finally, we demonstrate the effect of averaging the distribution across the discrete dimension, evaluating the mutual information only with respect to the continuously varying correlate.
    Comment: 20 pages, 10 figures
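
    The paper's calculation is analytical; as a purely numerical companion, one can check the setup it describes by Monte Carlo. The sketch below is an assumption-laden illustration, not the paper's method: p equiprobable discrete stimuli, N independent neurons with Gaussian conditional rate distributions, and truncation at zero modeled crudely as rectification.

```python
import numpy as np

def mi_discrete_gaussian(N=10, p=4, sigma=1.0, n_samples=20000,
                         truncate=False, seed=0):
    """Monte Carlo estimate of I(S; R) in nats for p equiprobable
    discrete stimuli and N neurons firing independently with Gaussian
    rate distributions (illustrative, not the analytical result)."""
    rng = np.random.default_rng(seed)
    means = rng.uniform(0.0, 3.0, size=(p, N))   # mean rate per stimulus
    s = rng.integers(p, size=n_samples)          # sampled stimuli
    r = means[s] + sigma * rng.normal(size=(n_samples, N))
    if truncate:
        r = np.maximum(r, 0.0)                   # crude rectification at zero
    # log p(r|s') for every sample against every stimulus; for
    # truncate=True this Gaussian likelihood is only approximate.
    d2 = ((r[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    logp = -d2 / (2.0 * sigma ** 2)
    logp -= logp.max(axis=1, keepdims=True)
    post = np.exp(logp)
    post /= post.sum(axis=1, keepdims=True)      # posterior p(s'|r), flat prior
    # I(S;R) = H(S) - <H(S|r)>, averaged over sampled responses.
    h_cond = -(post * np.log(np.clip(post, 1e-300, None))).sum(axis=1)
    return np.log(p) - h_cond.mean()
```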

    Entanglement enhanced classical capacity of quantum communication channels with correlated noise in arbitrary dimensions

    We study the capacity of d-dimensional quantum channels with memory modeled by correlated noise. We show that, in agreement with previous results on Pauli qubit channels, there are situations where maximally entangled input states achieve higher values of mutual information than product states. Moreover, we find a strong dependence of this effect on the nature of the noise correlations as well as on the parity of the space dimension. We conjecture that when entanglement gives an advantage in terms of mutual information, maximally entangled states saturate the channel capacity.
    Comment: 10 pages, 5 figures
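
    The abstract does not reproduce the noise model, but a standard way to write correlated Pauli noise over two channel uses is the qubit model of Macchiavello and Palma, which work of this kind generalizes to d dimensions (the d-dimensional details are assumed to follow the paper, not this sketch):

```latex
% Correlated-Pauli memory model, qubit version; mu interpolates
% between memoryless (mu = 0) and fully correlated (mu = 1) noise.
\[
  \mathcal{E}(\rho) \;=\; \sum_{i,j} p_{i,j}\,
      (\sigma_i \otimes \sigma_j)\, \rho\, (\sigma_i \otimes \sigma_j),
  \qquad
  p_{i,j} \;=\; (1-\mu)\, p_i\, p_j \;+\; \mu\, p_i\, \delta_{i,j}.
\]
```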