
    Capacity of two-layer feedforward neural networks with binary weights

    Lower and upper bounds for the information capacity of two-layer feedforward neural networks with binary interconnections, integer thresholds for the hidden units, and zero threshold for the output unit are obtained in two steps. First, through a constructive approach based on statistical analysis, it is shown that a specifically constructed (N - 2L - 1) network with N input units, 2L hidden units, and one output unit is capable of implementing, with probability approaching one, any dichotomy of O(W/ln W) random samples drawn from some continuous distributions, where W is the total number of weights of the network. This quantity then serves as a lower bound for the information capacity C of all (N - 2L - 1) networks with binary weights. Second, an upper bound of O(W) is obtained by a simple counting argument. Therefore, Ω(W/ln W) ≤ C ≤ O(W).
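    The counting argument behind the upper bound is short enough to spell out. A minimal sketch, assuming the usual definition of capacity (the largest m for which almost every dichotomy of m random samples is implementable by some weight setting):

```latex
% Counting sketch for the upper bound.
\begin{align*}
  \#\{\text{functions realizable with $W$ binary weights}\} &\le 2^{W},\\
  \#\{\text{dichotomies of $m$ samples}\} &= 2^{m}.
\end{align*}
% Implementing (almost) all dichotomies therefore forces
% $2^{m} \le 2^{W}$, i.e. $m \le W$, so $C \le O(W)$.
```

    If each integer hidden threshold effectively ranges over O(N) values (larger magnitudes cannot change the sign of a ±1-weighted sum of N inputs), the thresholds contribute only O(L log N) further bits, which leaves the order unchanged.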

    The VC-Dimension versus the Statistical Capacity of Multilayer Networks

    A general relationship is developed between the VC-dimension and the statistical lower epsilon-capacity, showing that the VC-dimension can be lower bounded (in order) by the statistical lower epsilon-capacity of a network trained with random samples. This relationship explains quantitatively how generalization takes place after memorization, and relates the concept of generalization (consistency) to the capacity of the optimal classifier over a class of classifiers with the same structure and to the capacity of the Bayesian classifier. Furthermore, it provides a general methodology for evaluating a lower bound on the VC-dimension of feedforward multilayer neural networks. This methodology is applied to two types of networks that are important for hardware implementations: two-layer (N - 2L - 1) networks with binary weights, integer thresholds for the hidden units, and zero threshold for the output unit; and single neurons ((N - 1) networks) with binary weights and zero threshold. Specifically, we obtain Ω(W/ln L) ≤ d_2 ≤ O(W) and d_1 ~ O(N), where W is the total number of weights of the (N - 2L - 1) networks, and d_1 and d_2 denote the VC-dimensions of the (N - 1) and (N - 2L - 1) networks, respectively.
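    For very small instances, a VC-dimension lower bound of this kind can be checked by brute force: enumerate every binary weight assignment and test whether some sample set is shattered. The Python sketch below is an illustration, not the paper's methodology; it fixes the hidden thresholds to 0 instead of letting them range over the integers (so it can only under-count the realized dichotomies), and all names and sizes are assumptions for the example.

```python
import itertools

import numpy as np

def realized_dichotomies(X, L):
    """Set of dichotomies of the rows of X realized by (N - 2L - 1) nets
    with binary (+/-1) weights, zero output threshold, and (for brevity
    of the enumeration) zero hidden thresholds."""
    m, N = X.shape
    n_hidden = 2 * L
    realized = set()
    for bits in itertools.product((-1.0, 1.0), repeat=n_hidden * N + n_hidden):
        w = np.array(bits)
        Wh = w[: n_hidden * N].reshape(n_hidden, N)  # hidden-layer weights
        wo = w[n_hidden * N:]                        # output weights
        h = np.sign(X @ Wh.T)                        # hidden activations
        y = h @ wo                                   # output pre-activation
        realized.add(tuple((y > 0).astype(int)))     # ties go to class 0
    return realized

def is_shattered(X, L):
    """True iff every labeling of X is realized; any such X witnesses a
    VC-dimension lower bound of X.shape[0] for this (N, L)."""
    return len(realized_dichotomies(X, L)) == 2 ** X.shape[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, L = 3, 1
    for m in (2, 3, 4, 5):
        X = rng.standard_normal((m, N))
        print(m, is_shattered(X, L))
```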

    An analog feedback associative memory

    A method for the storage of analog vectors, i.e., vectors whose components are real-valued, is developed for the Hopfield continuous-time network. An important requirement is that each memory vector must be an asymptotically stable (i.e., attractive) equilibrium of the network. Some of the limitations imposed by the continuous Hopfield model on the set of vectors that can be stored are pointed out. These limitations can be relieved by choosing a network containing visible as well as hidden units. An architecture consisting of several hidden layers and a visible layer, connected in a circular fashion, is considered. It is proved that the two-layer case is guaranteed to store any set of analog vectors provided their number does not exceed one plus the number of neurons in the hidden layer. A learning algorithm that correctly adjusts the locations of the equilibria and guarantees their asymptotic stability is developed. Simulation results confirm the effectiveness of the approach.
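    A minimal numerical sketch of the equilibrium-placement idea is given below, assuming tanh units, a circular visible-hidden loop v = tanh(Ah), h = tanh(Bv), and memories scaled into (-1, 1). It is not the paper's learning algorithm: it only places the memories as fixed points by least squares and makes no attempt to enforce their asymptotic stability; all function and variable names are illustrative.

```python
import numpy as np

def place_equilibria(V, p, rng=np.random.default_rng(0)):
    """Make the columns of V (analog memories in (-1, 1)^n) fixed points
    of the circular two-layer network v = tanh(A h), h = tanh(B v).
    Requires the K memories and the K random hidden codes to have full
    column rank (K <= p; the abstract's guarantee allows K <= 1 + p).
    Stability of the equilibria is NOT enforced here."""
    n, K = V.shape
    H = rng.uniform(-0.9, 0.9, size=(p, K))  # one hidden code per memory
    # Fixed-point conditions tanh(A H) = V and tanh(B V) = H become the
    # linear systems A H = atanh(V) and B V = atanh(H); solve via pinv.
    A = np.arctanh(V) @ np.linalg.pinv(H)
    B = np.arctanh(H) @ np.linalg.pinv(V)
    return A, B

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    V = rng.uniform(-0.8, 0.8, size=(5, 3))  # three memories in R^5
    A, B = place_equilibria(V, p=4, rng=rng)
    v0 = V[:, [0]]
    # one pass around the loop should reproduce the stored memory
    print(np.allclose(np.tanh(A @ np.tanh(B @ v0)), v0, atol=1e-8))
```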