
    Unipolar terminal-attractor-based neural associative memory with adaptive threshold and perfect convergence

    A perfectly convergent unipolar neural associative-memory system based on nonlinear dynamical terminal attractors is presented. With adaptive setting of the threshold values for the dynamic iteration of the unipolar binary neuron states with terminal attractors, perfect convergence is achieved. This achievement and correct retrieval are demonstrated by computer simulation. The simulations comprise (1) exhaustive tests with all possible combinations of stored and test vectors in small-scale networks and (2) Monte Carlo simulations with randomly generated stored and test vectors in large-scale networks with an M/N ratio of 4 (M is the number of stored vectors; N, the number of neurons, is less than 256). An experiment with exclusive-OR logic operations using liquid-crystal-television spatial light modulators shows the feasibility of an optoelectronic implementation of the model. The behavior of terminal attractors in basins of energy space is illustrated by examples.
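    As a rough illustration of the retrieval loop described above, here is a minimal sketch of a unipolar associative memory with a per-iteration adaptive threshold. The Hebbian outer-product storage and the midpoint threshold rule are assumptions for illustration, not the paper's exact construction, and the finite-time terminal-attractor dynamics are replaced by plain fixed-point iteration.

        import numpy as np

        def store(patterns):
            # Hebbian outer-product storage on bipolar copies of the
            # unipolar (0/1) patterns; the diagonal is zeroed as usual.
            B = 2.0 * np.asarray(patterns, dtype=float) - 1.0
            W = B.T @ B
            np.fill_diagonal(W, 0.0)
            return W

        def retrieve(W, probe, iters=50):
            s = np.asarray(probe, dtype=float).copy()
            for _ in range(iters):
                h = W @ s
                theta = 0.5 * (h.max() + h.min())   # adaptive threshold (assumed rule)
                s_new = (h > theta).astype(float)
                if np.array_equal(s_new, s):        # converged to a fixed point
                    break
                s = s_new
            return s

        patterns = np.array([[1, 0, 1, 0, 1, 0, 1, 0],
                             [1, 1, 0, 0, 1, 1, 0, 0]])
        W = store(patterns)
        probe = np.array([1, 0, 1, 0, 1, 0, 0, 0])  # stored vector with one bit flipped
        print(retrieve(W, probe))                   # recovers the first stored vector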

    Recurrent backpropagation and the dynamical approach to adaptive neural computation

    Error backpropagation in feedforward neural network models is a popular learning algorithm with roots in nonlinear estimation and optimization. It is routinely used to calculate error gradients in nonlinear systems with hundreds of thousands of parameters. However, the classical feedforward architecture places severe restrictions on backpropagation. The extension of backpropagation to networks with recurrent connections is reviewed. It is now possible to compute error gradients efficiently for networks that have temporal dynamics, which opens applications to a host of problems in systems identification and control.
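    The fixed-point form of this extension (recurrent backpropagation in the Almeida-Pineda sense) can be sketched in a few lines: relax the network to a fixed point, relax a linear adjoint system for the error signals, then read off the weight gradient. The quadratic loss and all parameter values below are assumptions for illustration.

        import numpy as np

        def sigmoid(u):
            return 1.0 / (1.0 + np.exp(-u))

        def recurrent_backprop(W, I, target, relax_iters=200):
            # Relax the recurrent dynamics x <- f(W x + I) to a fixed point x*.
            x = np.zeros(len(I))
            for _ in range(relax_iters):
                x = sigmoid(W @ x + I)
            d = x * (1.0 - x)                       # f'(u) at the fixed point
            # Relax the adjoint system z <- J^T z + e, with Jacobian J = diag(d) W.
            e = x - target                          # dE/dx for E = 0.5 ||x - target||^2
            z = np.zeros_like(x)
            for _ in range(relax_iters):
                z = W.T @ (d * z) + e
            # Weight gradient: dE/dW_ij = z_i f'(u_i) x_j
            return x, np.outer(z * d, x)

        rng = np.random.default_rng(0)
        W = 0.1 * rng.standard_normal((4, 4))
        I = rng.standard_normal(4)
        x, grad = recurrent_backprop(W, I, target=np.array([0.2, 0.8, 0.5, 0.3]))
        W -= 0.5 * grad                             # one gradient step on the weights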

    Computational neural learning formalisms for manipulator inverse kinematics

    An efficient, adaptive neural learning paradigm for addressing the inverse kinematics of redundant manipulators is presented. The proposed methodology exploits the infinite local stability of terminal attractors, a new class of mathematical constructs that provide unique information-processing capabilities to artificial neural systems. For robotic applications, the synaptic elements of such networks can rapidly acquire the kinematic invariances embedded in the presented samples. Subsequently, the joint-space configurations required to follow arbitrary end-effector trajectories can readily be computed. In a significant departure from prior neuromorphic learning algorithms, this methodology provides mechanisms for incorporating an in-training skew to handle kinematic and environmental constraints.
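    For context, the problem the network learns can be stated compactly for a planar two-link arm; the sketch below solves it with classical Jacobian-transpose iteration rather than the paper's terminal-attractor learning, and the link lengths, gain, and target are arbitrary assumptions.

        import numpy as np

        L1, L2 = 1.0, 0.8                           # link lengths (assumed)

        def forward(q):
            # End-effector position of a planar two-link arm.
            return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                             L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

        def jacobian(q):
            s1, c1 = np.sin(q[0]), np.cos(q[0])
            s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
            return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                             [ L1 * c1 + L2 * c12,  L2 * c12]])

        def solve_ik(target, q=None, steps=500, gain=0.2):
            # Jacobian-transpose iteration: q <- q + gain * J^T (target - x).
            q = np.zeros(2) if q is None else q
            for _ in range(steps):
                q = q + gain * (jacobian(q).T @ (target - forward(q)))
            return q

        q = solve_ik(np.array([1.2, 0.9]))
        print(q, forward(q))                        # end effector close to the target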

    On the effects of firing memory in the dynamics of conjunctive networks

    Boolean networks are one of the most studied discrete models in the context of gene expression. To define the dynamics associated with a Boolean network, there are several update schemes, ranging from parallel (synchronous) to asynchronous. However, studying each possible dynamics defined by different update schemes might not be efficient. In this context, considering some type of temporal delay in the dynamics of Boolean networks emerges as an alternative approach. In this paper, we focus on studying the effect of a particular type of delay, called firing memory, on the dynamics of Boolean networks. In particular, we focus on symmetric (non-directed) conjunctive networks, and we show that there exist examples that exhibit attractors of non-polynomial period. In addition, we study the prediction problem of determining whether some vertex will eventually change its state, given an initial condition. We prove that this problem is PSPACE-complete.
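    One concrete reading of firing memory (assumed here: a vertex that fires stays on for a per-vertex number of steps before it may switch off) can be simulated directly; the graph, memory values, and initial condition below are arbitrary.

        import numpy as np

        def step(counters, adj, tau):
            # A vertex is visibly 'on' while its memory counter is positive.
            state = (counters > 0).astype(int)
            new = counters.copy()
            for v in adj:
                if all(state[u] for u in adj[v]):    # conjunctive (AND) local rule
                    new[v] = tau[v]                  # firing refreshes the memory
                else:
                    new[v] = max(counters[v] - 1, 0) # otherwise the memory decays
            return new

        # A 4-cycle, i.e. a symmetric (non-directed) conjunctive network.
        adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
        tau = np.array([2, 1, 2, 1])                 # per-vertex firing memory
        counters = np.array([2, 0, 2, 0])
        for t in range(6):
            print(t, (counters > 0).astype(int))
            counters = step(counters, adj, tau)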

    Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a 'basin' of attraction comprises all initial states leading to a given attractor upon relaxation, which makes attractor dynamics suitable for implementing robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In previous work we demonstrated that a neuromorphic recurrent network of spiking neurons with suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity that supports stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and the ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
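    The software analogue of this on-chip process is easy to sketch: repeated stimulus presentations drive a Hebbian update of the recurrent couplings, after which corrupted stimuli relax back to the learned attractors. The rate-based Hebbian rule and all parameters below are assumptions standing in for the chip's spike-driven plasticity.

        import numpy as np

        rng = np.random.default_rng(1)
        N, n_stimuli, presentations = 64, 3, 30
        stimuli = (rng.random((n_stimuli, N)) < 0.5).astype(float)

        # Hebbian plasticity accumulated over repeated presentations;
        # there is no separate learning phase, just ongoing updates.
        W = np.zeros((N, N))
        for _ in range(presentations):
            for s in stimuli:
                b = 2 * s - 1                        # bipolar copy of the activity
                W += 0.1 * np.outer(b, b)
        np.fill_diagonal(W, 0.0)

        def relax(state, iters=20):
            # Deterministic relaxation toward the nearest attractor.
            for _ in range(iters):
                state = (W @ (2 * state - 1) > 0).astype(float)
            return state

        probe = stimuli[0].copy()
        flip = rng.choice(N, size=8, replace=False)
        probe[flip] = 1 - probe[flip]                # corrupt 8 of the 64 bits
        print(np.mean(relax(probe) == stimuli[0]))   # fraction recovered, ideally 1.0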

    Dynamics of Neural Networks with Continuous Attractors

    We investigate the dynamics of continuous attractor neural networks (CANNs). Due to the translational invariance of their neuronal interactions, CANNs can hold a continuous family of stationary states. We systematically explore how this neutral stability facilitates the tracking performance of a CANN, which is believed to have wide applications in brain functions. We develop a perturbative approach that utilizes the dominant movement of the network's stationary states in state space. We quantify the distortions of the bump shape during tracking and study their effects on tracking performance. Results are obtained on the maximum speed at which a moving stimulus remains trackable and on the reaction time to catch up with an abrupt change in the stimulus.
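    A minimal rate-based CANN of the kind analyzed here can be simulated directly: Gaussian translation-invariant coupling on a ring, divisive global inhibition, and an external bump input that jumps abruptly. All parameter values are assumptions chosen only to make the demo run.

        import numpy as np

        N = 128
        x = np.linspace(-np.pi, np.pi, N, endpoint=False)

        def gauss(d, a=0.5):
            d = (d + np.pi) % (2 * np.pi) - np.pi    # wrap distances onto the ring
            return np.exp(-d ** 2 / (2 * a ** 2))

        # Translation-invariant recurrent coupling J(x, x') = J0 * gauss(x - x').
        J = 0.5 * gauss(x[:, None] - x[None, :])

        def simulate(stimulus_pos, steps=300, dt=0.05, k=0.05):
            u = gauss(x - stimulus_pos(0.0))         # start with a bump at the stimulus
            for t in range(steps):
                r = np.maximum(u, 0.0) ** 2
                r = r / (1.0 + k * r.sum())          # divisive global inhibition
                I_ext = 0.5 * gauss(x - stimulus_pos(t * dt))
                u = u + dt * (-u + J @ r / N + I_ext)
            return u

        # The stimulus jumps abruptly; the bump relaxes toward the new position.
        u = simulate(lambda t: 0.0 if t < 7.5 else 1.0)
        print("bump peak at", x[np.argmax(u)])       # should end up near 1.0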