2,183 research outputs found

    Neural-network dedicated processor for solving competitive assignment problems

    A neural-network processor for solving first-order competitive assignment problems consists of an N × M matrix of processing units (PUs), each of which corresponds to the pairing of a first number of elements of {R_i} with a second number of elements of {C_j}. Limits on the first number are programmed into row control superneurons, and limits on the second number are programmed into column superneurons, as MIN and MAX values. The cost (weight) W_ij of each pairing is programmed separately into its PU. For each row and column of PUs, a dedicated constraint superneuron ensures that the number of active neurons within the associated row or column falls within the specified range. Annealing is provided by gradually increasing the PU gain for each row and column, by increasing the positive feedback to each PU (which increases its hysteresis), or by combining both techniques.
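    The row/column constraint scheme can be imitated in software. Below is a minimal NumPy sketch, not the hardware design: the function name, the penalty weight lam, and the annealing schedule are all assumptions, and the superneurons are reduced to soft penalties on the row and column activity counts.

    ```python
    import numpy as np

    def solve_assignment(W, row_lim, col_lim, steps=3000,
                         beta0=0.2, growth=1.002, lam=2.0, seed=0):
        """Anneal an N x M matrix of 'processing units' toward a pairing
        whose per-row and per-column activity falls in [MIN, MAX].
        A hedged sketch of the idea, not the patented circuit."""
        rng = np.random.default_rng(seed)
        u = 0.01 * rng.standard_normal(W.shape)    # PU internal states
        beta = beta0                               # PU gain, annealed upward
        for _ in range(steps):
            v = 1.0 / (1.0 + np.exp(-np.clip(beta * u, -50, 50)))
            row_cnt = v.sum(axis=1, keepdims=True)
            col_cnt = v.sum(axis=0, keepdims=True)
            # 'Superneuron' signals: push activity counts back into range.
            row_err = (np.maximum(row_lim[0] - row_cnt, 0.0)
                       - np.maximum(row_cnt - row_lim[1], 0.0))
            col_err = (np.maximum(col_lim[0] - col_cnt, 0.0)
                       - np.maximum(col_cnt - col_lim[1], 0.0))
            # Cheap pairings are favoured; constraint terms steer the counts.
            u += 0.1 * (-W + lam * (row_err + col_err) - u)
            beta *= growth                         # anneal by raising the gain
        return v > 0.5                             # active PUs = chosen pairings
    ```

    Calling solve_assignment(W, (1, 1), (1, 1)) on a square cost matrix, for instance, anneals toward a one-to-one assignment.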

    Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks

    Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint-satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules whose connectivity is similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and provide mathematical proofs guaranteeing that the graph coloring problems will converge to a solution. The network is composed of non-saturating linear threshold neurons. Their lack of right (upper) saturation allows the overall network to explore the problem space, driven by the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by non-linear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation, and also offer insight into the computational role of dual inhibitory mechanisms in neural circuits. (Accepted manuscript, in press, Neural Computation, 2018.)
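    As a rough illustration of the coupled-WTA idea (not the authors' network), the toy sketch below gives each graph node a small module of color units; the paper's non-linear multiplicative inhibition is stood in for by multiplicative suppression of conflicting colors, and divisive normalization plays the role of the within-module competition. All names and constants are assumptions.

    ```python
    import numpy as np

    def color_graph(adj, n_colors=4, steps=2000, seed=0):
        """One WTA module of n_colors units per graph node; a color in
        conflict with a neighbour's current winner is suppressed."""
        rng = np.random.default_rng(seed)
        n = adj.shape[0]
        x = rng.random((n, n_colors)) + 0.1       # non-negative activations
        for _ in range(steps):
            winners = np.eye(n_colors)[x.argmax(axis=1)]
            conflict = adj @ winners              # neighbours' color usage
            x = x * np.exp(-0.5 * conflict)       # multiplicative inhibition
            x += 0.01 * rng.random((n, n_colors)) # noise keeps exploring
            x /= x.sum(axis=1, keepdims=True)     # divisive competition
        return x.argmax(axis=1)

    # Usage: a 4-cycle is 2-colorable; conflicts should reach 0.
    adj = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])
    c = color_graph(adj)
    print(c, "conflicts:", sum(adj[i, j] and c[i] == c[j]
                               for i in range(4) for j in range(i)))
    ```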

    An analog feedback associative memory

    A method for the storage of analog vectors, i.e., vectors whose components are real-valued, is developed for the Hopfield continuous-time network. An important requirement is that each memory vector be an asymptotically stable (i.e., attractive) equilibrium of the network. Some of the limitations imposed by the continuous Hopfield model on the set of vectors that can be stored are pointed out. These limitations can be relieved by choosing a network containing visible as well as hidden units. An architecture consisting of several hidden layers and a visible layer, connected in a circular fashion, is considered. It is proved that the two-layer case is guaranteed to store any set of given analog vectors, provided their number does not exceed one plus the number of neurons in the hidden layer. A learning algorithm that correctly adjusts the locations of the equilibria and guarantees their asymptotic stability is developed. Simulation results confirm the effectiveness of the approach.
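    A minimal sketch of the equilibrium condition, assuming tanh units: choosing weights by least squares makes each analog pattern an equilibrium of du/dt = -u + T tanh(u) + b, but it does not enforce the asymptotic stability that the paper's learning algorithm guarantees. Function names are hypothetical.

    ```python
    import numpy as np

    def build_memory(patterns):
        """Pick weights T and bias b so each analog pattern v (components
        strictly in (-1, 1)) is an equilibrium, i.e. T v + b = arctanh(v).
        Stability is NOT enforced here."""
        V = np.asarray(patterns, float)             # K x n stored vectors
        A = np.hstack([V, np.ones((len(V), 1))])    # append bias column
        X, *_ = np.linalg.lstsq(A, np.arctanh(V), rcond=None)
        return X[:-1].T, X[-1]                      # T (n x n), b (n,)

    def settle(T, b, x0, steps=4000, dt=0.05):
        """Integrate the continuous-time Hopfield dynamics from x0."""
        u = np.arctanh(np.clip(x0, -0.999, 0.999))
        for _ in range(steps):
            u += dt * (-u + T @ np.tanh(u) + b)
        return np.tanh(u)
    ```

    With K at most n + 1 patterns the least-squares system typically has an exact solution, echoing the paper's capacity bound; whether settle actually returns to a stored pattern from nearby depends on the stability that this sketch leaves open.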

    Optimizing the energy consumption of spiking neural networks for neuromorphic applications

    In the last few years, spiking neural networks (SNNs) have been demonstrated to perform on par with regular convolutional neural networks (CNNs). Several works have proposed methods to convert a pre-trained CNN to a spiking CNN without a significant sacrifice of performance. We first demonstrate that quantization-aware training of CNNs leads to better accuracy in the resulting SNNs. One of the benefits of converting CNNs to spiking CNNs is to leverage the sparse computation of SNNs and consequently perform equivalent computation at a lower energy consumption. Here we propose an efficient optimization strategy to train spiking networks at lower energy consumption while maintaining similar accuracy levels. We demonstrate results on the MNIST-DVS and CIFAR-10 datasets.
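    One common way to express this accuracy-versus-energy trade-off (not necessarily the authors' exact objective) is to add a spike-count penalty to the task loss, since spike traffic dominates dynamic power in many neuromorphic designs. The sketch below is a hedged illustration; energy_aware_loss and lam are hypothetical names.

    ```python
    import numpy as np

    def energy_aware_loss(logits, labels, spike_counts, lam=1e-3):
        """Cross-entropy plus an activity penalty as an energy proxy.
        spike_counts holds total spikes per example; lam is a
        hypothetical knob trading accuracy against energy."""
        z = logits - logits.max(axis=1, keepdims=True)   # stable log-softmax
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        ce = -logp[np.arange(len(labels)), labels].mean()
        # Fewer spikes is a reasonable stand-in for lower dynamic energy.
        return ce + lam * spike_counts.mean()
    ```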

    A modified ART 1 algorithm more suitable for VLSI implementations

    This paper presents a modification to the original ART 1 algorithm (Carpenter and Grossberg, 1987a, A massively parallel architecture for a self-organizing neural pattern recognition machine, Computer Vision, Graphics, and Image Processing, 37, 54–115) that is conceptually similar, can be implemented in hardware with less sophisticated building blocks, and maintains the computational capabilities of the originally proposed algorithm. This modified ART 1 algorithm (which we will call ART 1m here) is the result of hardware-motivated simplifications investigated during the design of an actual ART 1 chip (Serrano-Gotarredona et al., 1994, Proc. 1994 IEEE Int. Conf. Neural Networks, Vol. 3, pp. 1912–1916; Serrano-Gotarredona and Linares-Barranco, 1996, IEEE Trans. VLSI Systems, in press). The purpose of this paper is to justify theoretically that the modified algorithm preserves the computational properties of the original one, and to study the difference in behavior between the two approaches.
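    For orientation, a generic fast-learning ART 1 loop (not the paper's ART 1m variant) looks roughly like the sketch below: a choice function ranks categories, a vigilance test accepts or rejects the best match, and accepted templates are updated by logical AND. All names and parameter values are assumptions.

    ```python
    import numpy as np

    def art1(inputs, rho=0.7, beta=1e-6):
        """Cluster binary vectors with fast learning. rho is the
        vigilance; each input must have at least one active bit."""
        categories, labels = [], []
        for pattern in inputs:
            I = np.asarray(pattern, bool)
            # F2 competition: rank categories by the choice function
            # |I AND w| / (beta + |w|).
            order = sorted(range(len(categories)),
                           key=lambda j: -(I & categories[j]).sum()
                                         / (beta + categories[j].sum()))
            for j in order:
                if (I & categories[j]).sum() / I.sum() >= rho:  # vigilance
                    categories[j] &= I        # fast learning: AND template
                    labels.append(j)
                    break
            else:                             # no resonance: new category
                categories.append(I.copy())
                labels.append(len(categories) - 1)
        return categories, labels
    ```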

    Episodic associative memories taught by evolutionary algorithm

    This paper analyses the capabilities of an episodic multi-winner multi-directional associative memory taught by an evolutionary algorithm. This kind of memory can store and recall more complex associations, such as episode-to-episode mappings.
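    As a toy stand-in for the paper's approach (the actual memory is multi-winner and multi-directional, and the actual optimizer is a full evolutionary algorithm), the sketch below evolves a single hetero-associative weight matrix with a simple hill climber; all names and constants are assumptions.

    ```python
    import numpy as np

    def recall(W, a):
        """One hetero-associative step: bipolar input -> bipolar output."""
        y = np.sign(W @ a)
        y[y == 0] = 1
        return y

    def evolve(pairs, n_in, n_out, gens=300, pop=20, sigma=0.1, seed=0):
        """(1+pop) hill climber: mutate the weight matrix, keep the
        candidate with the best recall accuracy over the stored pairs."""
        rng = np.random.default_rng(seed)
        fit = lambda W: np.mean([np.array_equal(recall(W, a), b)
                                 for a, b in pairs])
        best = 0.1 * rng.standard_normal((n_out, n_in))
        best_f = fit(best)
        for _ in range(gens):
            for _ in range(pop):
                cand = best + sigma * rng.standard_normal(best.shape)
                f = fit(cand)
                if f >= best_f:
                    best, best_f = cand, f
            if best_f == 1.0:                 # all pairs recalled
                break
        return best, best_f
    ```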

    Effect of Input Noise and Output Node Stochastic on Wang's kWTA

