
Feed-Forward Chains of Recurrent Attractor Neural Networks Near Saturation

Abstract

We perform a stationary-state replica analysis for a layered network of Ising spin neurons, with recurrent Hebbian interactions within each layer, in combination with strictly feed-forward Hebbian interactions between successive layers. This model interpolates between the fully recurrent and symmetric attractor network studied by Amit et al., and the strictly feed-forward attractor network studied by Domany et al. Due to the absence of detailed balance, it is as yet solvable only in the zero-temperature limit. The built-in competition between two qualitatively different modes of operation, feed-forward (ergodic within layers) versus recurrent (non-ergodic within layers), is found to induce interesting phase transitions.

Comment: 14 pages LaTeX with 4 PostScript figures; submitted to J. Phys.
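For concreteness, couplings in such a layered attractor model are typically of the Hebbian form sketched below; the specific notation (patterns \xi, layer index \ell, coupling strengths J_s and J_f) is a standard assumption for illustration and is not taken verbatim from the paper:

\[
J^{(\ell)}_{ij} \;=\; \frac{J_s}{N}\sum_{\mu=1}^{p}\xi^{\mu}_{i,\ell}\,\xi^{\mu}_{j,\ell}
\qquad \text{(symmetric recurrent coupling within layer } \ell\text{)},
\]
\[
W^{(\ell)}_{ij} \;=\; \frac{J_f}{N}\sum_{\mu=1}^{p}\xi^{\mu}_{i,\ell+1}\,\xi^{\mu}_{j,\ell}
\qquad \text{(strictly feed-forward coupling from layer } \ell \text{ to } \ell+1\text{)}.
\]

Here "near saturation" refers to the regime where the number of stored patterns scales extensively with the layer size, p = \alpha N; setting J_f = 0 recovers a fully recurrent (Hopfield-type) network per layer, while J_s = 0 recovers a strictly feed-forward layered network.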
