
    Learning and generation of long-range correlated sequences

    We study the capability of a fully connected asymmetric network to learn and to generate long-range, power-law correlated sequences. The focus is on the ability of neural networks to extract statistical features from a sequence. We demonstrate that the average power-law behavior is learnable, namely, that the sequence generated by the trained network obeys the same statistical behavior. The interplay between a correlated weight matrix and the sequence generated by such a network is explored. A weight matrix with a power-law correlation function along the vertical direction gives rise to a sequence with a similar statistical behavior.
    Comment: 5 pages, 3 figures, accepted for publication in Physical Review
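
    The generation side of this setup is easy to reproduce numerically. Below is a minimal sketch, using a standard Fourier-filtering construction rather than the paper's network, that produces a binary sequence with power-law spectral correlations and inspects its autocorrelation; the exponent `beta` and sequence length are illustrative assumptions.

```python
import numpy as np

def power_law_sequence(n, beta, seed=0):
    """Binary sequence with 1/f^beta spectral correlations via Fourier
    filtering (a standard construction, not the paper's network)."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                    # avoid division by zero at DC
    amplitude = freqs ** (-beta / 2.0)     # power spectrum S(f) ~ f^{-beta}
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    x = np.fft.irfft(amplitude * np.exp(1j * phases), n)
    return np.sign(x)                      # binarize to +-1 'neuron outputs'

seq = power_law_sequence(4096, beta=0.8)
s = seq - seq.mean()
corr = np.correlate(s, s, mode="full")[len(s) - 1:]
print(corr[:10] / corr[0])                 # should decay as a power law in the lag
```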

    Secure exchange of information by synchronization of neural networks

    A connection between the theory of neural networks and cryptography is presented. A new phenomenon, the synchronization of neural networks, leads to a new method of exchanging secret messages. Numerical simulations show that two artificial networks trained by the Hebbian learning rule on their mutual outputs develop an antiparallel state of their synaptic weights. The synchronized weights are used to construct an ephemeral key-exchange protocol for the secure transmission of secret data. It is shown that an opponent who knows the protocol and all details of every transmission has no chance to decrypt the secret message, since tracking the weights is a hard problem compared to synchronization. The complexity of generating the secure channel is linear in the size of the network.
    Comment: 11 pages, 5 figures
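
    As a rough illustration of the synchronization idea, here is a toy sketch with two single perceptrons with bounded integer weights that update on each other's public outputs until their weights become antiparallel. The paper's actual networks and update rule differ, so treat the rule below and all parameters (N, L, the step cap) as illustrative assumptions, not the published protocol.

```python
import numpy as np

N, L = 100, 5                          # weight dimension and bound (illustrative)
rng = np.random.default_rng(1)
wA = rng.integers(-L, L + 1, N)        # Alice's secret weight vector
wB = rng.integers(-L, L + 1, N)        # Bob's secret weight vector

def out(w, x):
    return 1 if w @ x >= 0 else -1

steps, max_steps = 0, 100_000
while not np.array_equal(wA, -wB) and steps < max_steps:
    x = rng.choice([-1, 1], N)         # public random input, seen by both parties
    sA, sB = out(wA, x), out(wB, x)    # publicly exchanged output bits
    # Hebbian-style update toward the *negative* of the partner's output,
    # driving the two bounded weight vectors into an antiparallel state
    wA = np.clip(wA - sB * x, -L, L)
    wB = np.clip(wB - sA * x, -L, L)
    steps += 1

print(f"antiparallel after {steps} steps; shared key from wA = {wA[:8]}...")
```

    An eavesdropper sees only x, sA and sB; recovering the weights from this trace is the hard tracking problem the abstract refers to.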

    Statistical Mechanics of Learning: A Variational Approach for Real Data

    Using a variational technique, we generalize the statistical physics approach of learning from random examples to make it applicable to real data. We demonstrate the validity and relevance of our method by computing approximate estimators for generalization errors that are based on training data alone.
    Comment: 4 pages, 2 figures

    An information theoretic approach to statistical dependence: copula information

    We discuss the connection between information and copula theories by showing that a copula can be employed to decompose the information content of a multivariate distribution into marginal and dependence components, with the latter quantified by the mutual information. We define the information excess as a measure of deviation from a maximum-entropy distribution. The idea of marginal-invariant dependence measures is also discussed and used to show that the empirical linear correlation underestimates the amplitude of the actual correlation in the case of non-Gaussian marginals. The mutual information is shown to provide an upper bound for the asymptotic empirical log-likelihood of a copula. An analytical expression for the information excess of T-copulas is provided, allowing for simple model identification within this family. We illustrate the framework on a financial data set.
    Comment: to appear in Europhysics Letters
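
    The claim that linear correlation understates dependence under non-Gaussian marginals is easy to check numerically. The sketch below is an illustration, not the paper's derivation: it samples a Gaussian copula, maps the marginals to heavy-tailed log-normals, and compares the Pearson correlation with the marginal-invariant rank correlation; it also prints the Gaussian-copula mutual information, -0.5*log(1 - rho^2).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho, n = 0.8, 100_000

# Sample from a Gaussian copula with latent correlation rho
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)                       # uniform marginals: the copula itself

# Push the uniform marginals through heavy-tailed (log-normal) inverse CDFs
x = stats.lognorm.ppf(u[:, 0], s=1.5)
y = stats.lognorm.ppf(u[:, 1], s=1.5)

print("latent (copula) correlation:    ", rho)
print("Pearson corr, log-normal margins:", np.corrcoef(x, y)[0, 1])  # < rho
print("Spearman rank correlation:       ", stats.spearmanr(x, y)[0])  # unchanged
print("mutual information (nats):       ", -0.5 * np.log(1 - rho**2))
```

    The rank correlation depends only on the copula, so it survives the marginal transformation; the Pearson coefficient does not.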

    Perceptron capacity revisited: classification ability for correlated patterns

    In this paper, we address the problem of how many randomly labeled patterns can be correctly classified by a single-layer perceptron when the patterns are correlated with each other. To solve this problem, two analytical schemes are developed based on the replica method and the Thouless-Anderson-Palmer (TAP) approach, utilizing an integral formula concerning random rectangular matrices. The validity and relevance of the developed methodologies are shown for one known result and two example problems. A message-passing algorithm to perform the TAP scheme is also presented.
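
    For intuition, the classical uncorrelated baseline can be checked by direct simulation: Cover's result gives a capacity of alpha = P/N = 2 for a perceptron with i.i.d. patterns. The sketch below illustrates that setting only (it does not implement the paper's replica/TAP analysis or correlated patterns), testing linear separability of randomly labeled Gaussian patterns via an LP feasibility problem.

```python
import numpy as np
from scipy.optimize import linprog

def separable(X, y):
    """Linear separability check: is y_i * (w . x_i) >= 1 feasible for some w?"""
    n, d = X.shape
    A_ub = -(y[:, None] * X)               # encodes -y_i x_i . w <= -1
    res = linprog(np.zeros(d), A_ub=A_ub, b_ub=-np.ones(n),
                  bounds=[(None, None)] * d, method="highs")
    return res.success

rng = np.random.default_rng(0)
N, trials = 50, 200
for alpha in (1.0, 1.5, 2.0, 2.5):
    P = int(alpha * N)
    ok = sum(
        separable(rng.standard_normal((P, N)), rng.choice([-1.0, 1.0], P))
        for _ in range(trials)
    )
    print(f"alpha = {alpha}: P(separable) ~ {ok / trials:.2f}")
```

    The separability probability drops sharply from 1 to 0 around alpha = 2, as Cover's counting argument predicts for uncorrelated patterns.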

    Statistical Mechanics of Soft Margin Classifiers

    We study the typical learning properties of the recently introduced Soft Margin Classifiers (SMCs), learning realizable and unrealizable tasks, with the tools of Statistical Mechanics. We derive analytically the behaviour of the learning curves in the regime of very large training sets. We obtain exponential and power laws for the decay of the generalization error towards the asymptotic value, depending on the task and on general characteristics of the distribution of stabilities of the patterns to be learned. The optimal learning curves of the SMCs, which give the minimal generalization error, are obtained by tuning the coefficient controlling the trade-off between the error and the regularization terms in the cost function. If the task is realizable by the SMC, the optimal performance is better than that of a hard margin Support Vector Machine and is very close to that of a Bayesian classifier.
    Comment: 26 pages, 12 figures, submitted to Physical Review
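
    The cost function at stake is E(w) = 0.5*||w||^2 + C * sum_i max(0, 1 - y_i w.x_i), where C is the error/regularization trade-off coefficient the abstract refers to. Below is a minimal subgradient-descent sketch in a teacher-student toy setup; the setup, learning rate and all parameters are illustrative assumptions, not the paper's analytical treatment.

```python
import numpy as np

def soft_margin_train(X, y, C, epochs=300, lr=0.001, seed=0):
    """Subgradient descent on E(w) = 0.5*||w||^2 + C * sum_i hinge(y_i w.x_i)."""
    rng = np.random.default_rng(seed)
    w = 0.01 * rng.standard_normal(X.shape[1])
    for _ in range(epochs):
        viol = y * (X @ w) < 1                    # patterns violating the margin
        grad = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        w -= lr * grad
    return w

# Teacher-student task: labels produced by a random 'teacher' perceptron
rng = np.random.default_rng(1)
N, P = 50, 500
teacher = rng.standard_normal(N)
X = rng.standard_normal((P, N))
y = np.sign(X @ teacher)

for C in (0.01, 0.1, 1.0, 10.0):
    w = soft_margin_train(X, y, C)
    X_test = rng.standard_normal((5000, N))
    err = np.mean(np.sign(X_test @ w) != np.sign(X_test @ teacher))
    print(f"C = {C}: generalization error ~ {err:.3f}")
```

    Sweeping C traces out the trade-off whose optimum the paper computes analytically.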

    Optimal Resource Allocation in Random Networks with Transportation Bandwidths

    We apply statistical physics to study the task of resource allocation in random sparse networks with limited bandwidths for the transportation of resources along the links. Useful algorithms are obtained from recursive relations. Bottlenecks emerge when the bandwidths are small, causing an increase in the fraction of idle links. For a given total bandwidth per node, the efficiency of allocation increases with the network connectivity. In the high-connectivity limit, we find a phase transition at a critical bandwidth, above which clusters of balanced nodes appear, characterised by a profile of homogenized resource allocation similar to Maxwell's construction.
    Comment: 28 pages, 11 figures
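
    The paper's algorithms are message-passing rules derived from recursive relations; as a rough stand-in, the allocation task itself can be posed as a small linear program, which the sketch below solves on a random regular graph to expose idle and saturated links. The graph size, bandwidth W, resource distribution and shortfall objective are all illustrative assumptions.

```python
import numpy as np
import networkx as nx
from scipy.optimize import linprog

rng = np.random.default_rng(0)
G = nx.random_regular_graph(3, 100, seed=0)
Lam = rng.normal(0.0, 1.0, G.number_of_nodes())      # node resources (surplus > 0)
W = 0.5                                              # link bandwidth

B = nx.incidence_matrix(G, oriented=True).toarray()  # node-edge incidence matrix
n, m = B.shape
eps = 1e-3                                           # small transport cost on currents

# Variables: split currents y = y_plus - y_minus (each in [0, W]) plus
# node shortfalls s >= 0.  Balance constraint: Lam + B @ y + s >= 0.
c = np.concatenate([eps * np.ones(2 * m), np.ones(n)])
A_ub = np.hstack([-B, B, -np.eye(n)])
res = linprog(c, A_ub=A_ub, b_ub=Lam,
              bounds=[(0, W)] * (2 * m) + [(0, None)] * n, method="highs")

y = res.x[:m] - res.x[m:2 * m]
print("total shortfall:            ", res.x[2 * m:].sum())
print("fraction of idle links:     ", np.mean(np.abs(y) < 1e-9))
print("fraction of saturated links:", np.mean(np.abs(y) > W - 1e-9))
```

    Shrinking W in this toy model increases the saturated fraction, the bottleneck effect described in the abstract.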

    Lef1 regulates caveolin expression and caveolin-dependent endocytosis, a process necessary for Wnt5a/Ror2 signaling during Xenopus gastrulation

    The activation of distinct branches of the Wnt signaling network is essential for regulating early vertebrate development. Activation of the canonical Wnt/β-catenin pathway stimulates expression of β-catenin-Lef/Tcf-regulated Wnt target genes and a regulatory network giving rise to the formation of the Spemann organizer. Non-canonical pathways, by contrast, mainly regulate cell polarization and migration, in particular the convergent extension movements of the trunk mesoderm during gastrulation. By transcriptome analyses, we found caveolin1, caveolin3 and cavin1 to be regulated by Lef1 in the involuting mesoderm of Xenopus embryos at gastrula stages. We show that caveolins and caveolin-dependent endocytosis are necessary for proper gastrulation, most likely by interfering with Wnt5a/Ror2 signaling. Wnt5a regulates the subcellular localization of receptor complexes, including Ror2 homodimers, Ror2/Fzd7 and Ror2/dsh heterodimers, in an endocytosis-dependent manner. Live-cell imaging revealed endocytosis of Ror2/caveolin1 complexes. In Xenopus explants, in the presence of Wnt5a, these receptor clusters remain stable exclusively at the basolateral side, suggesting that endocytosis of non-canonical Wnt/receptor complexes preferentially takes place at the apical membrane. In support of this, blocking endocytosis with inhibitors prevents the effects of Wnt5a. Thus, target genes of Lef1 interfere with Wnt5a/Ror2 signaling to coordinate gastrulation movements.

    Replica theory for learning curves for Gaussian processes on random graphs

    Statistical physics approaches can be used to derive accurate predictions for the performance of inference methods learning from potentially noisy data, as quantified by the learning curve, defined as the average error versus the number of training examples. We analyse a challenging problem in the area of non-parametric inference, where an effectively infinite number of parameters has to be learned: specifically, Gaussian process regression. When the inputs are vertices on a random graph and the outputs are noisy function values, we show that replica techniques can be used to obtain exact performance predictions in the limit of large graphs. The covariance of the Gaussian process prior is defined by a random walk kernel, the discrete analogue of squared-exponential kernels on continuous spaces. Conventionally this kernel is normalised only globally, so that the prior variance can differ between vertices; as a more principled alternative we consider local normalisation, where the prior variance is uniform.
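
    A concrete instance of this setup, with parameter choices (gamma, p, graph size, noise) that are illustrative assumptions: the sketch below builds a random regular graph, forms a random walk kernel K proportional to ((1-gamma)I + gamma*D^{-1/2}AD^{-1/2})^p, applies both global and local normalisation, and computes the mean GP posterior variance on unseen vertices, the quantity the learning curve tracks.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.random_regular_graph(3, 200, seed=0)
A = nx.to_numpy_array(G)
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))                # D^{-1/2} A D^{-1/2}

# Random walk kernel: a p-step lazy random walk, the graph analogue of a
# squared-exponential kernel; gamma and p are illustrative choices
gamma, p = 0.5, 10
K = np.linalg.matrix_power((1 - gamma) * np.eye(len(A)) + gamma * A_norm, p)

# Global normalisation fixes only the *average* prior variance, so it can
# differ between vertices; local normalisation makes it uniform everywhere.
K_global = K / np.mean(np.diag(K))
Dk = 1.0 / np.sqrt(np.diag(K))
K_local = K * np.outer(Dk, Dk)

def mean_posterior_var(K, train, noise=0.1):
    """Average GP posterior variance given noisy observations on 'train'."""
    Ktt = K[np.ix_(train, train)] + noise * np.eye(len(train))
    Kst = K[:, train]
    return np.mean(np.diag(K) - np.einsum("ij,jk,ik->i", Kst, np.linalg.inv(Ktt), Kst))

train = rng.choice(len(A), size=50, replace=False)
print("mean posterior variance, local norm: ", mean_posterior_var(K_local, train))
print("mean posterior variance, global norm:", mean_posterior_var(K_global, train))
```

    Averaging this quantity over many random graphs and training-set sizes yields the empirical learning curve against which the replica predictions can be compared.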