
    On the center of mass of Ising vectors

    We show that the center of mass of Ising vectors that obey some simple constraints is again an Ising vector.
    Comment: 8 pages, 3 figures, LaTeX; claims in connection with disordered systems have been withdrawn; more detailed description of the simulations; inset added to figure.
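
    The abstract does not state the constraints, but as a minimal illustration of the objects involved, here is a short Python sketch; it assumes an "Ising vector" means a vector with entries in {-1, +1} and that the "center of mass" of a set of such vectors is their component-wise average, which is not necessarily the paper's exact setting.

```python
import numpy as np

# Minimal illustration (not the paper's construction): an "Ising vector" is
# taken here to mean a vector with entries in {-1, +1}, and the "center of
# mass" of a set of such vectors is their component-wise average.
rng = np.random.default_rng(0)

N, p = 8, 5                              # vector dimension and number of vectors (arbitrary)
S = rng.choice([-1, 1], size=(p, N))     # p Ising vectors of length N

center_of_mass = S.mean(axis=0)          # component-wise average
is_ising = np.all(np.abs(center_of_mass) == 1)

print("center of mass:", center_of_mass)
print("is itself an Ising vector:", bool(is_ising))
```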

    Mutual learning in a tree parity machine and its application to cryptography

    Mutual learning of a pair of tree parity machines with continuous and discrete weight vectors is studied analytically. The analysis is based on a mapping procedure that maps mutual learning in tree parity machines onto mutual learning in noisy perceptrons. The stationary solution of mutual learning in the case of continuous tree parity machines depends on the learning rate, and a phase transition from partial to full synchronization is observed. In the discrete case the learning process is based on a finite increment, and a fully synchronized state is achieved in a finite number of steps. The synchronization of discrete parity machines is used to construct an ephemeral key-exchange protocol. The dynamic learning of a third tree parity machine (an attacker) that tries to imitate one of the two machines while the two still update their weight vectors is also analyzed. In particular, the synchronization times of the naive attacker and the flipping attacker recently introduced in [1] are analyzed. All analytical results are found to be in good agreement with simulation results.
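
    To make the synchronization protocol concrete, the following minimal Python sketch mutually trains two discrete tree parity machines until their weights coincide. The parameters K, N, L, the tie-breaking rule, and the Hebbian update are conventional choices for this kind of simulation and are not taken from the paper.

```python
import numpy as np

# Hedged sketch of mutual learning between two discrete tree parity machines
# (TPMs): K hidden units, N inputs per unit, integer weights bounded by L.
K, N, L = 3, 100, 3
rng = np.random.default_rng(1)

def tpm_output(w, x):
    """Hidden-unit signs and parity output of a TPM with weights w on input x."""
    sigma = np.sign(np.einsum("kn,kn->k", w, x))
    sigma[sigma == 0] = 1                    # break ties deterministically
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    """Update only the hidden units that agree with the output, then clip to [-L, L]."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] += tau * x[k]
    np.clip(w, -L, L, out=w)

wA = rng.integers(-L, L + 1, size=(K, N))
wB = rng.integers(-L, L + 1, size=(K, N))

steps, max_steps = 0, 100_000
while steps < max_steps and not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))     # public random input
    sA, tauA = tpm_output(wA, x)
    sB, tauB = tpm_output(wB, x)
    if tauA == tauB:                         # update only when the outputs agree
        hebbian_update(wA, x, sA, tauA)
        hebbian_update(wB, x, sB, tauB)
    steps += 1

print(f"synchronized after {steps} exchanged inputs")
```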

    Training a perceptron in a discrete weight space

    On-line and batch learning of a perceptron in a discrete weight space, where each weight can take $2L+1$ different values, are examined analytically and numerically. The learning algorithm is based on the training of the continuous perceptron and prediction following the clipped weights. The learning is described by a new set of order parameters, composed of the overlaps between the teacher and the continuous/clipped students. Different scenarios are examined, among them on-line learning with discrete/continuous transfer functions and off-line Hebb learning. The generalization error of the clipped weights decays asymptotically as $\exp(-K \alpha^2)$ / $\exp(-e^{|\lambda| \alpha})$ in the case of on-line learning with binary/continuous activation functions, respectively, where $\alpha$ is the number of examples divided by $N$, the size of the input vector, and $K$ is a positive constant that decays linearly with $1/L$. For finite $N$ and $L$, perfect agreement between the discrete student and the teacher is obtained for $\alpha \propto \sqrt{L \ln(NL)}$. A crossover to the generalization error $\propto 1/\alpha$, characterizing continuous weights with binary output, is obtained for synaptic depth $L > O(\sqrt{N})$.
    Comment: 10 pages, 5 figs., submitted to PR
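
    As a rough illustration of the clipped-weight idea (training a continuous perceptron while predicting with its weights mapped onto the $2L+1$ values $\{-L, \dots, L\}$), here is a minimal Python sketch. The on-line Hebb rule, the scaling inside the clipping map, and all parameter values are illustrative assumptions rather than the paper's exact prescription.

```python
import numpy as np

# Hedged sketch: a student with continuous weights J is trained on-line with a
# Hebb rule, while predictions use J clipped to the 2L+1 values {-L, ..., L}.
N, L = 500, 2
rng = np.random.default_rng(2)

teacher = rng.integers(-L, L + 1, size=N)           # discrete teacher weights

def clip_weights(J):
    """Map continuous weights onto {-L, ..., L} (illustrative scaling choice)."""
    scale = L / (np.abs(J).max() + 1e-12)
    return np.clip(np.rint(J * scale), -L, L).astype(int)

def agreement(w_student, w_teacher, samples=2000):
    """Fraction of random inputs on which student and teacher outputs agree."""
    X = rng.choice([-1, 1], size=(samples, N))
    return np.mean(np.sign(X @ w_student) == np.sign(X @ w_teacher))

J = np.zeros(N)                                      # continuous student weights
for t in range(1, 20 * N + 1):
    x = rng.choice([-1, 1], size=N)
    y = 1 if teacher @ x >= 0 else -1                # teacher label (ties -> +1)
    J += y * x / np.sqrt(N)                          # on-line Hebb update
    if t % (5 * N) == 0:
        alpha = t / N                                # examples per input dimension
        print(f"alpha = {alpha:4.1f}  agreement of clipped student: "
              f"{agreement(clip_weights(J), teacher):.3f}")
```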