
    Storage Capacity Diverges with Synaptic Efficiency in an Associative Memory Model with Synaptic Delay and Pruning

    It is known that the storage capacity per synapse of a correlation-type associative memory model increases under synaptic pruning. However, the storage capacity of the entire network then decreases. To overcome this difficulty, we propose decreasing the connecting rate while keeping the total number of synapses constant by introducing delayed synapses. In this paper, a discrete synchronous-type model with both delayed synapses and pruning of those synapses is discussed as a concrete example of this proposal. First, we explain the Yanai-Kim theory by employing statistical neurodynamics. This theory provides macrodynamical equations for the dynamics of a network with serial delay elements. Next, using the translational symmetry of these equations, we re-derive the macroscopic steady-state equations of the model by means of the discrete Fourier transform. The storage capacities are analyzed quantitatively. Furthermore, two types of synaptic pruning are treated analytically: random pruning and systematic pruning. It becomes clear that for both types of pruning, the storage capacity increases as the length of delay increases and the connecting rate decreases, provided the total number of synapses is kept constant. Moreover, an interesting fact emerges: under random pruning the storage capacity asymptotically approaches 2/π, whereas under systematic pruning it diverges in proportion to the logarithm of the length of delay, with proportionality constant 4/π. These results theoretically support the significance of pruning following an overgrowth of synapses in the brain, and strongly suggest that the brain prefers to store dynamic attractors such as sequences and limit cycles rather than equilibrium states. (Comment: 27 pages, 14 figures)
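    The model described above can be illustrated with a small simulation. The sketch below is our own assumption-laden toy, not the authors' code: it stores one cyclic sequence of random patterns in a synchronous correlation-type network with L serial delay elements, applies random pruning with connecting rate c, and tests recall from a noisy initial history. N, L, P and c are illustrative parameter choices only.

```python
# Toy simulation (our own sketch, not the authors' code) of a synchronous
# correlation-type associative memory with L serial delay elements and
# random synaptic pruning, storing one cyclic pattern sequence.
import numpy as np

rng = np.random.default_rng(0)

N = 500   # number of units
L = 3     # length of delay (delay elements per unit)
P = 30    # number of patterns in the stored sequence (one limit cycle)
c = 0.5   # connecting rate kept after random pruning

# Random +/-1 patterns forming the cyclic sequence xi[0] -> xi[1] -> ... -> xi[0].
xi = rng.choice([-1.0, 1.0], size=(P, N))

# Correlation (Hebb-like) learning for each delay step l:
# J[l] maps the state l steps in the past to the next pattern in the sequence.
J = np.zeros((L, N, N))
for l in range(L):
    for mu in range(P):
        J[l] += np.outer(xi[(mu + 1) % P], xi[(mu - l) % P])
J /= N

# Random pruning: keep each synapse independently with probability c,
# rescaling so the mean synaptic strength is preserved.
mask = rng.random(size=J.shape) < c
J = J * mask / c

def recall(history, steps):
    """Synchronous recall; `history` holds the L most recent states, newest last."""
    history = [h.copy() for h in history]
    for _ in range(steps):
        field = sum(J[l] @ history[-1 - l] for l in range(L))
        history.append(np.where(field >= 0.0, 1.0, -1.0))
    return history[-1]

# Start from a noisy version of the sequence (each bit flipped with prob. 0.1).
init = [xi[l] * rng.choice([1.0, -1.0], N, p=[0.9, 0.1]) for l in range(L)]
final = recall(init, steps=P)                 # run one full period
overlap = np.abs(final @ xi.T).max() / N      # best overlap with a stored pattern
print(f"best overlap after recall: {overlap:.3f}")
```

    With the values chosen here the sequence is well below capacity, so the reported overlap should be close to 1; increasing P while varying L and c reproduces the qualitative trade-off discussed in the abstract.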

    A geometric learning algorithm for elementary perceptron and its convergence analysis

    In this paper, the geometric learning algorithm (GLA) is proposed for an elementary perceptron with a single output neuron. The GLA is a modified version of the affine projection algorithm (APA) for adaptive filters. The weight-update vector is determined geometrically, toward the intersection of the k hyperplanes that are perpendicular to the patterns to be classified, where k is the order of the GLA. In the case of the APA, the target of the coefficient update is a single point, which corresponds to the best identification of the unknown system. In the case of the GLA, by contrast, the target of the weight update is an area in which all the given patterns are classified correctly. Their convergence conditions therefore differ. In this paper, the convergence condition of the 1st-order GLA for 2 patterns is theoretically derived. A new concept, the 'angle of the solution area', is introduced. Computer simulation results confirm that this concept gives a good estimate of the convergence properties.
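    The update described above can be sketched as follows. This is our own hedged reconstruction of a k-th order geometric, affine-projection-style step, not the paper's code; the margin, step size and regularization parameters are illustrative assumptions.

```python
# Hedged sketch (our reconstruction, not the paper's code) of a k-th order
# geometric, affine-projection-style update for a single-output perceptron:
# the weights move toward the intersection of the k hyperplanes that are
# perpendicular to the selected patterns, offset by a small margin so that
# the target lies inside the solution area.
import numpy as np

def gla_update(w, X, y, margin=0.1, step=1.0, reg=1e-8):
    """One geometric update of order k = X.shape[0].

    w : (d,) weights; X : (k, d) patterns; y : (k,) labels in {-1, +1}.
    """
    e = margin * y - X @ w                 # desired change of the k outputs
    G = X @ X.T + reg * np.eye(len(y))     # (k, k) Gram matrix
    dw = X.T @ np.linalg.solve(G, e)       # minimum-norm correction with X @ dw = e
    return w + step * dw

# Toy usage: 2-D linearly separable data (points too close to the true
# boundary are discarded so a solution area with some margin exists).
rng = np.random.default_rng(1)
pts = rng.normal(size=(300, 2))
score = pts[:, 0] + 0.5 * pts[:, 1]
keep = np.abs(score) > 0.3
X_all = np.hstack([pts[keep], np.ones((keep.sum(), 1))])   # append bias input
y_all = np.sign(score[keep])

w, k = np.zeros(3), 2
for _ in range(100):                       # epochs
    miss = np.nonzero(y_all * (X_all @ w) <= 0)[0]
    if miss.size == 0:
        break
    sel = miss[:k]                         # up to k misclassified patterns
    w = gla_update(w, X_all[sel], y_all[sel])
print("misclassified after training:", int(np.sum(y_all * (X_all @ w) <= 0)))
```

    Note how the correction dw is a combination of the selected patterns, i.e. it points toward the intersection of the weight-space hyperplanes perpendicular to them, which is the geometric picture the abstract describes.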

    A recurrent neural network with serial delay elements for memorizing limit cycles

    A recurrent neural network (RNN) in which each unit has serial delay elements is proposed for memorizing limit cycles (LCs). This network is called the DRNN in this paper. An LC consists of several basic patterns. The hysteresis information of the LCs, realized on the connections from the delay elements to the units, is very effective for the following reasons. First, the same basic patterns can be shared by different LCs. This makes it possible to drastically increase the number of LCs, even when only a small number of basic patterns is used. Second, the noise performance, that is, the probability of recalling the exact LC when starting from a noisy LC, can be improved. The hysteresis information consists of two components: the order of the basic patterns included in an LC, and the cross-correlation among all the basic patterns. The former depends mainly on the number of LCs, and the latter on the total number of basic patterns. In order to achieve good noise performance, a small number of basic patterns is preferred. These properties of the DRNN are theoretically analyzed and confirmed through computer simulations. It is also confirmed that the DRNN is superior to an RNN without delay elements for memorizing LCs.
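    The pattern-sharing property can be made concrete with a minimal toy, given below. It is our own simplified construction under stated assumptions, not the paper's DRNN: a single delay element and plain correlation learning on the augmented state, with two limit cycles that visit the same three basic patterns in different orders.

```python
# Toy example (our own simplified construction, not the paper's DRNN code)
# showing how one serial delay element lets two limit cycles share the same
# basic patterns: the mapping (previous state, current state) -> next state
# is unambiguous even though current state -> next state is not.
import numpy as np

rng = np.random.default_rng(2)
N = 400
p = rng.choice([-1.0, 1.0], size=(3, N))   # three shared basic patterns

cycle_a = [0, 1, 2]                        # LC A: p0 -> p1 -> p2 -> p0
cycle_b = [0, 2, 1]                        # LC B: p0 -> p2 -> p1 -> p0

# Correlation learning on the augmented state [x(t); x(t-1)]; the second
# half of the weights (from the delay elements) carries the hysteresis.
W = np.zeros((N, 2 * N))
for cyc in (cycle_a, cycle_b):
    for t in range(len(cyc)):
        cur, prev, nxt = p[cyc[t]], p[cyc[t - 1]], p[cyc[(t + 1) % len(cyc)]]
        W += np.outer(nxt, np.concatenate([cur, prev]))
W /= 2 * N

def run_cycle(start_cur, start_prev, steps):
    """Synchronous recall; returns the index of the nearest basic pattern per step."""
    cur, prev, visited = start_cur.copy(), start_prev.copy(), []
    for _ in range(steps):
        nxt = np.where(W @ np.concatenate([cur, prev]) >= 0.0, 1.0, -1.0)
        prev, cur = cur, nxt
        visited.append(int(np.argmax(np.abs(nxt @ p.T))))
    return visited

# The same current pattern p0 with different delayed states selects different LCs.
print("start (cur=p0, prev=p2):", run_cycle(p[0], p[2], 6))  # expected order 1,2,0,1,2,0
print("start (cur=p0, prev=p1):", run_cycle(p[0], p[1], 6))  # expected order 2,1,0,2,1,0
```

    Without the delayed half of the weights, the state p0 alone cannot determine which cycle to continue; with it, both cycles are recalled in their own order from the same basic pattern.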

    Probabilistic memory capacity of recurrent neural networks

    Kanazawa University, Institute of Science and Engineering, Faculty of Electrical and Computer Engineering
    In this paper, the probabilistic memory capacity of recurrent neural networks (RNNs) is investigated. This probabilistic capacity is determined uniquely once the network architecture and the number of patterns to be memorized are fixed; it is independent of the learning method and of the network dynamics. It provides an upper bound on the memory capacity achievable by any learning algorithm when memorizing random patterns. The network is assumed to consist of N units, each taking one of two states, so the total number of possible patterns is 2^N. The probabilities are obtained by testing whether connection weights exist that can store M random patterns as equilibrium states. A theoretical method for this test is derived, and the actual calculation is carried out by the Monte Carlo method. The probabilistic memory capacity is very important when applying RNNs to practical problems and when evaluating the quality of learning algorithms. As an example of a learning algorithm, an improved error-correction learning rule is investigated, and its convergence probabilities are compared with the upper bound. A linear programming method can be effectively applied to this numerical analysis.
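    The existence test behind these probabilities can be sketched as follows. This is our own hedged reconstruction, not the authors' code: we assume ±1 units without self-connections and take "stored as an equilibrium state" to mean that each unit reproduces its own bit with a positive margin, which turns the existence of suitable weights into one linear-programming feasibility problem per unit; the solver and the (N, M) values are illustrative.

```python
# Hedged Monte Carlo sketch (our construction, not the authors' code). We
# assume +/-1 units with no self-connections, and take "stored as an
# equilibrium state" to mean that every unit reproduces its own bit with a
# positive margin; the existence of suitable weights then reduces to one
# linear-programming feasibility test per unit.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)

def patterns_storable(xi):
    """True if connection weights exist that make every row of xi a fixed point."""
    M, N = xi.shape
    for i in range(N):
        others = np.delete(np.arange(N), i)          # enforce w_ii = 0
        # Unit i needs w with xi[mu, i] * (w . xi[mu, others]) >= 1 for all mu
        # (margin 1 is without loss of generality, by rescaling w).
        A_ub = -(xi[:, [i]] * xi[:, others])         # (M, N-1)
        res = linprog(c=np.zeros(N - 1), A_ub=A_ub, b_ub=-np.ones(M),
                      bounds=(None, None), method="highs")
        if res.status != 0:                          # infeasible for this unit
            return False
    return True

def storable_probability(N, M, trials=20):
    """Monte Carlo estimate of P(M random patterns are storable in N units)."""
    hits = sum(patterns_storable(rng.choice([-1.0, 1.0], size=(M, N)))
               for _ in range(trials))
    return hits / trials

for M in (8, 16, 24, 32):
    print(f"N=16, M={M:2d}:  P(storable) ~ {storable_probability(16, M):.2f}")
```

    The print-out should show the probability dropping from near 1 toward 0 as M grows past roughly twice the number of inputs per unit, which is the kind of learning-rule-independent upper-bound curve the abstract compares learning algorithms against.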