
    Secure exchange of information by synchronization of neural networks

    A connection between the theory of neural networks and cryptography is presented. A new phenomenon, the synchronization of neural networks, leads to a new method for the exchange of secret messages. Numerical simulations show that two artificial networks trained by the Hebbian learning rule on their mutual outputs develop an antiparallel state of their synaptic weights. The synchronized weights are used to construct an ephemeral key-exchange protocol for the secure transmission of secret data. It is shown that an opponent who knows the protocol and all details of every transmission of the data has no chance to decrypt the secret message, since tracking the weights is a hard problem compared with synchronization. The complexity of generating the secure channel is linear in the size of the network. Comment: 11 pages, 5 figures
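
    A minimal sketch of the mutual-learning dynamics behind such a protocol, with details assumed for illustration: bounded integer weights, a shared public random input each round, and a Hebbian-type step on the partner's output whenever the two outputs disagree. The paper's full key-exchange protocol is built on top of dynamics of this kind, not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        N, L = 50, 3                              # input size and weight bound (assumed)
        w_A = rng.integers(-L, L + 1, size=N)
        w_B = rng.integers(-L, L + 1, size=N)

        for step in range(1, 100_001):
            x = rng.choice([-1, 1], size=N)       # public random input seen by both
            h_A, h_B = int(w_A @ x), int(w_B @ x)
            if h_A == 0 or h_B == 0:
                continue                          # skip rare zero-field rounds
            s_A = 1 if h_A > 0 else -1
            s_B = 1 if h_B > 0 else -1
            if s_A != s_B:                        # learn the partner's output
                w_A = np.clip(w_A + s_B * x, -L, L)
                w_B = np.clip(w_B + s_A * x, -L, L)
            if np.array_equal(w_A, -w_B):         # antiparallel state is absorbing
                print(f"antiparallel after {step} steps; +/-w is the shared secret")
                break

    The clipping at the weight bound is what drives the two random walks together: a component pair sticks once it reaches the antiparallel configuration, so the full antiparallel state is eventually absorbed.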

    Learning and predicting time series by neural networks

    Artificial neural networks which are trained on a time series are supposed to achieve two abilities: first, to predict the series many time steps ahead, and second, to learn the rule which has produced the series. It is shown that prediction and learning are not necessarily related to each other: chaotic sequences can be learned but not predicted, while quasiperiodic sequences can be well predicted but not learned. Comment: 5 pages
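
    A sketch of the learned-versus-predicted distinction, taking a logistic-map sequence as the chaotic example (an assumed choice, not the paper's). The rule x_{t+1} = r*x_t*(1 - x_t) is quadratic, so a least-squares fit on the features (1, x, x^2) recovers it almost exactly, i.e. learning succeeds; yet iterating the fitted model fails many steps ahead, because tiny fitting errors grow at the Lyapunov rate.

        import numpy as np

        r = 3.9                                   # chaotic regime (assumed value)
        x = np.empty(1000)
        x[0] = 0.3
        for t in range(999):
            x[t + 1] = r * x[t] * (1 - x[t])

        X = np.column_stack([np.ones(999), x[:-1], x[:-1] ** 2])
        coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
        print("fitted rule:", coef)               # approx [0, r, -r]: rule learned

        y = x[500]                                # now try a 100-step-ahead forecast
        for _ in range(100):
            y = coef @ np.array([1.0, y, y * y])
        print("100-step error:", abs(y - x[600])) # order one: not predictable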

    Generation of unpredictable time series by a Neural Network

    A perceptron that learns the opposite of its own output is used to generate a time series. We analyse properties of the weight vector and of the generated sequence, such as the cycle length and the probability distribution of generated sequences. A remarkable suppression of the autocorrelation function is explained, and connections to the Bernasconi model are discussed. If a continuous transfer function is used, the system displays chaotic and intermittent behaviour, with the product of the learning rate and the amplification as a control parameter. Comment: 11 pages, 14 figures; slightly expanded and clarified, mistakes corrected; accepted for publication in PR
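
    A minimal sketch of such a generator, with the details assumed: the perceptron reads the last N bits of its own sequence, emits the opposite of its output as the next bit, and then takes a Hebbian step toward the bit it just emitted, i.e. it learns the opposite of its own output. The suppression of low-lag autocorrelations reported in the abstract can be checked empirically.

        import numpy as np

        rng = np.random.default_rng(1)
        N, eta, T = 20, 1.0, 20_000
        w = rng.normal(size=N)
        window = rng.choice([-1.0, 1.0], size=N)  # seed bits
        bits = []

        for _ in range(T):
            s = -1.0 if w @ window >= 0 else 1.0  # next bit: opposite of output
            w += (eta / N) * s * window           # learn the bit just emitted
            window = np.roll(window, 1)
            window[0] = s
            bits.append(s)

        b = np.array(bits[5000:])                 # drop the transient
        for lag in (1, 2, 5, 10):                 # autocorrelations are suppressed
            print(lag, round(float(np.mean(b[:-lag] * b[lag:])), 4))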

    Statistical mechanical aspects of joint source-channel coding

    An MN-Gallager code over Galois fields, $q$, based on the Dynamical Block Posterior probabilities (DBP) for messages with a given set of autocorrelations is presented, with the following main results: (a) for a binary symmetric channel the threshold, $f_c$, is extrapolated for infinite messages using the scaling relation for the median convergence time, $t_{med} \propto 1/(f_c - f)$; (b) a degradation of the threshold is observed as the correlations are enhanced; (c) for a given set of autocorrelations the performance is enhanced as $q$ is increased; (d) the efficiency of DBP joint source-channel coding is slightly better than the standard gzip compression method; (e) for a given entropy, the performance of the DBP algorithm is a function of the decay of the correlation function over large distances. Comment: 6 pages
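
    A sketch of the extrapolation in (a): the scaling relation $t_{med} \propto 1/(f_c - f)$ means $1/t_{med}$ is linear in the flip rate $f$, so a straight-line fit of $1/t_{med}$ versus $f$ crosses zero at the threshold $f_c$. The (f, t_med) pairs below are purely illustrative placeholders, not values from the paper.

        import numpy as np

        f = np.array([0.08, 0.09, 0.10, 0.11])        # flip rates (hypothetical)
        t_med = np.array([25.0, 33.0, 50.0, 100.0])   # median times (hypothetical)

        slope, intercept = np.polyfit(f, 1.0 / t_med, 1)
        f_c = -intercept / slope                      # zero crossing of 1/t_med
        print(f"extrapolated threshold f_c = {f_c:.3f}")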

    Synchronization of chaotic networks with time-delayed couplings: An analytic study

    Networks of nonlinear units with time-delayed couplings can synchronize to a common chaotic trajectory. Although the delay time may be very large, the units can synchronize completely without time shift. For networks of coupled Bernoulli maps, analytic results are derived for the stability of the chaotic synchronization manifold. For a single delay time, chaos synchronization is related to the spectral gap of the coupling matrix. For networks with multiple delay times, analytic results are obtained from the theory of polynomials. Finally, the analytic results are compared with networks of iterated tent maps and Lang-Kobayashi equations, which imitate the behaviour of networks of semiconductor lasers.
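
    A sketch of zero-lag chaos synchronization with a long delay, using assumed parameters: three Bernoulli maps f(x) = a*x mod 1, coupled through a row-stochastic all-to-all matrix G with strength eps and delay tau. The eigenvalues of G are 1 and -1/2, so the spectral gap that the analysis relates to stability is large here.

        import numpy as np

        rng = np.random.default_rng(2)
        a, eps, tau, T = 1.5, 0.7, 100, 20_000
        G = (np.ones((3, 3)) - np.eye(3)) / 2.0          # all-to-all, row sums 1

        f = lambda v: (a * v) % 1.0
        hist = [rng.random(3) for _ in range(tau + 1)]   # state history buffer

        for _ in range(T):
            new = ((1 - eps) * f(hist[-1]) + eps * G @ f(hist[0])) % 1.0
            hist.append(new)
            hist.pop(0)

        spread = max(np.ptp(h) for h in hist[-100:])     # distance from sync manifold
        print("max spread over last 100 steps:", spread) # ~0: identical, no time shift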

    Multilayer neural networks with extensively many hidden units

    The information processing abilities of a multilayer neural network with a number of hidden units scaling as the input dimension are studied using statistical mechanics methods. The mapping from the input layer to the hidden units is performed by general symmetric Boolean functions, whereas the hidden layer is connected to the output by either discrete or continuous couplings. Introducing an overlap in the space of Boolean functions as an order parameter, the storage capacity is found to scale with the logarithm of the number of implementable Boolean functions. The generalization behaviour is smooth for continuous couplings and shows a discontinuous transition to perfect generalization for discrete ones. Comment: 4 pages, 2 figures
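
    A quick numeric note on the scaling claim: a symmetric Boolean function of n inputs depends only on how many inputs are +1, so there are exactly 2^(n+1) such functions, and the logarithm of their number grows linearly with n. A capacity proportional to this logarithm is therefore extensive in the input size.

        import math

        for n in (8, 16, 32):
            n_funcs = 2 ** (n + 1)                   # symmetric Boolean fns of n inputs
            print(n, n_funcs, math.log2(n_funcs))    # log2 = n + 1, linear in n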

    Finite size effects and error-free communication in Gaussian channels

    The efficacy of a specially constructed Gallager-type error-correcting code for communication over a Gaussian channel is examined. The construction is based on the introduction of complex matrices, used in both encoding and decoding, which comprise sub-matrices of cascading connection values. The finite-size effects are estimated by comparing the results with the bounds set by Shannon. The critical noise level achieved for certain code rates and infinitely large systems nearly saturates the bounds set by Shannon, even when the connectivity used is low.
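
    A sketch of the Shannon benchmark such codes are compared against: for a code of rate R over the Gaussian channel, reliable communication is possible only while the capacity C = (1/2) log2(1 + SNR) stays above R, which fixes the critical noise level. This uses the unconstrained-input capacity formula; the paper's exact channel model may differ.

        import math

        def critical_sigma(R, signal_power=1.0):
            # C = 0.5*log2(1 + P/sigma^2) = R  =>  sigma^2 = P / (2**(2R) - 1)
            return math.sqrt(signal_power / (2 ** (2 * R) - 1))

        for R in (0.25, 0.5, 0.75):
            print(f"rate {R}: critical noise std = {critical_sigma(R):.3f}")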

    Nonlocal mechanism for cluster synchronization in neural circuits

    The interplay between the topology of cortical circuits and synchronized activity modes in distinct cortical areas is a key enigma in neuroscience. We present a new nonlocal mechanism governing the periodic activity mode: the greatest common divisor (GCD) of network loops. For a stimulus to one node, the network splits into GCD clusters in which cluster neurons are in zero-lag synchronization. For complex external stimuli, the number of clusters can be any common divisor. The synchronized mode and the transients to synchronization pinpoint the type of external stimulus. The findings, supported by an information-mixing argument and simulations of Hodgkin-Huxley population dynamic networks with unidirectional connectivity and synaptic noise, call for reexamining the sources of correlated activity in the cortex and shorter information processing time scales. Comment: 8 pages, 6 figures
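
    A sketch of the cluster prediction: the number of zero-lag clusters is the GCD of the network's directed loop lengths. For a strongly connected digraph this GCD equals the graph's period, computable from BFS levels as the gcd of level[u] + 1 - level[v] over all edges (u, v). The toy topology below (loops of length 3 and 6) is an assumed example, so the prediction is GCD = 3 clusters.

        import math
        from collections import deque

        edges = [(0, 1), (1, 2), (2, 0),          # loop of length 3
                 (2, 3), (3, 4), (4, 5), (5, 0)]  # closes a loop of length 6
        n = 6
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)

        level = {0: 0}                            # BFS distances from node 0
        q = deque([0])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in level:
                    level[v] = level[u] + 1
                    q.append(v)

        g = 0
        for u, v in edges:
            g = math.gcd(g, abs(level[u] + 1 - level[v]))
        print("GCD of loop lengths (predicted clusters):", g)   # -> 3

        clusters = {c: [u for u in range(n) if level[u] % g == c] for c in range(g)}
        print(clusters)   # nodes in the same residue class synchronize at zero lag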

    Storage capacity of correlated perceptrons

    We consider an ensemble of $K$ single-layer perceptrons exposed to random inputs and investigate the conditions under which the couplings of these perceptrons can be chosen such that prescribed correlations between the outputs occur. A general formalism is introduced, using a multi-perceptron cost function, that allows one to determine the maximal number of random inputs as a function of the desired values of the correlations. Replica-symmetric results for $K=2$ and $K=3$ are compared with the properties of two-layer networks of tree structure with a fixed Boolean function between hidden units and output. The results show which correlations in the hidden layer of multilayer neural networks are crucial for the value of the storage capacity. Comment: 16 pages, Latex2
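
    A baseline sanity check for the capacity question, covering only the trivial $K=1$ case of the formalism: Cover's counting formula gives the fraction of random dichotomies of p points in N dimensions that a single perceptron can store, which drops at alpha = p/N = 2 (the classical capacity). The multi-perceptron results with prescribed correlations generalize this number.

        from math import comb

        def realizable_fraction(p, N):
            # Cover: C(p, N) = 2 * sum_{k<N} C(p-1, k) realizable dichotomies
            return sum(comb(p - 1, k) for k in range(N)) / 2 ** (p - 1)

        N = 50
        for alpha in (1.0, 1.5, 2.0, 2.5, 3.0):
            p = int(alpha * N)
            print(f"alpha = {alpha}: fraction = {realizable_fraction(p, N):.3f}")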

    Mean Field Behavior of Cluster Dynamics

    The dynamic behavior of cluster algorithms is analyzed in the classical mean-field limit. Rigorous analytical results below $T_c$ establish that the dynamic exponent has the value $z_{sw}=1$ for the Swendsen-Wang algorithm and $z_{uw}=0$ for the Wolff algorithm. An efficient Monte Carlo implementation is introduced, adapted to using these algorithms on fully connected graphs. Extensive simulations both above and below $T_c$ demonstrate scaling and evaluate the finite-size scaling function by means of a rather impressive collapse of the data. Comment: Revtex, 9 pages with 7 figures
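
    A naive Wolff update for the fully connected (mean-field) Ising model, H = -(J/N) * sum_{i<j} s_i s_j, where every pair of parallel spins is bonded with probability p = 1 - exp(-2*beta*J/N). This direct O(N * cluster) scan is only a sketch; the paper's efficient implementation avoids touching all pairs (for instance, the number of activated bonds can be drawn at once, since each trial succeeds with the same small probability).

        import numpy as np

        rng = np.random.default_rng(4)
        N, J, beta = 400, 1.0, 1.2                # beta > beta_c = 1 (assumed values)
        p_add = 1.0 - np.exp(-2.0 * beta * J / N)
        s = rng.choice([-1, 1], size=N)

        def wolff_step(s):
            seed = int(rng.integers(N))
            cluster, frontier = {seed}, [seed]
            while frontier:
                i = frontier.pop()
                # spins parallel to i each pass an independent bond trial
                cand = np.where((s == s[i]) & (rng.random(N) < p_add))[0]
                for j in cand:
                    if j not in cluster:
                        cluster.add(j)
                        frontier.append(j)
            for j in cluster:                     # flip the whole cluster
                s[j] = -s[j]
            return len(cluster)

        for sweep in range(200):
            wolff_step(s)
        print("magnetization per spin:", abs(s.mean()))   # nonzero below T_c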