    Deep learning with asymmetric connections and Hebbian updates

    We show that deep networks can be trained using Hebbian updates, yielding performance similar to ordinary back-propagation on challenging image datasets. To overcome the unrealistic symmetry in connections between layers implicit in back-propagation, the feedback weights are kept separate from the feedforward weights. The feedback weights are also updated with the same local rule as the feedforward weights: a weight is updated solely based on the product of the activities of the units it connects. With fixed feedback weights, as proposed in Lillicrap et al. (2016), performance degrades quickly as the depth of the network increases. If the feedforward and feedback weights are initialized with the same values, as proposed in Zipser and Rumelhart (1990), they remain identical throughout training, thus precisely implementing back-propagation. We show that even when the weights are initialized differently and at random, so that the algorithm is no longer performing back-propagation, performance is comparable on challenging datasets. We also propose a cost function whose derivative can be represented as a local Hebbian update on the last layer. Convolutional layers are updated with weights tied across space, which is not biologically plausible. We show that similar performance is achieved with untied layers, also known as locally connected layers, which have the connectivity implied by the convolutional layers but whose weights are untied and updated separately. In the linear case we show theoretically that convergence of the error to zero is accelerated by the update of the feedback weights.
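
    The update scheme described above can be made concrete in a few lines. The following is a minimal sketch, assuming a one-hidden-layer network with a linear readout and squared-error loss; the layer sizes, learning rate, and all names are illustrative rather than taken from the paper. The error is routed backwards through separate feedback weights B2, and B2 receives the same local Hebbian update as the forward weights W2.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hid, n_out = 784, 256, 10
        lr = 0.01

        # Feedforward weights plus a separate, randomly initialized feedback path.
        W1 = rng.normal(0.0, 0.05, (n_hid, n_in))
        W2 = rng.normal(0.0, 0.05, (n_out, n_hid))
        B2 = rng.normal(0.0, 0.05, (n_out, n_hid))  # feedback weights, also trained

        def train_step(x, target):
            global W1, W2, B2
            h = np.tanh(W1 @ x)              # hidden activity
            y = W2 @ h                       # linear output
            e = target - y                   # error signal, local to the last layer
            d_h = (B2.T @ e) * (1.0 - h**2)  # error routed through B2, not W2.T
            # Local Hebbian updates: each is an outer product of the activity
            # (or error signal) of the two units a weight connects.
            W2 += lr * np.outer(e, h)
            B2 += lr * np.outer(e, h)        # same local rule on the feedback weights
            W1 += lr * np.outer(d_h, x)

    With B2 frozen, this reduces to the fixed random feedback of Lillicrap et al. (2016); with B2 initialized equal to W2, the identical updates keep the two matrices equal and each step is exact back-propagation.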

    Linear motor motion control using a learning feedforward controller

    The design and realization of an online learning motion controller for a linear motor are presented, and its usefulness is evaluated. The controller consists of two components: (1) a model-based feedback component and (2) a learning feedforward component. The feedback component is designed on the basis of a simple second-order linear model, which is known to have structural errors; in the design, the emphasis is placed on robustness. The learning feedforward component is a neural-network-based controller with a one-hidden-layer structure and second-order B-spline basis functions. Simulations and experimental evaluations show that, with little effort, a high-performance motion system can be obtained with this approach.
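
    The usual wiring of such a learning feedforward controller can be sketched as follows. This is a minimal illustration under common assumptions for this class of controller (a PD feedback law, quadratic B-spline features of the reference position, and an online update driven by the feedback signal so that the feedforward network gradually absorbs the repeatable control effort); all gains, widths, and names are hypothetical, not the values used in the paper.

        import numpy as np

        def bspline_features(r, centers, width):
            """Quadratic (second-order) B-spline basis, one function per center."""
            t = np.abs(r - centers) / width
            return np.where(t < 0.5, 0.75 - t**2,
                   np.where(t < 1.5, 0.5 * (1.5 - t)**2, 0.0))

        centers = np.linspace(-1.0, 1.0, 21)   # hidden-layer node positions
        w = np.zeros_like(centers)             # feedforward network weights
        kp, kv, eta = 50.0, 5.0, 0.1           # feedback gains and learning rate

        def control_step(r, rdot, x, xdot):
            """One sampling instant: robust feedback plus learned feedforward."""
            global w
            u_fb = kp * (r - x) + kv * (rdot - xdot)   # model-based feedback
            phi = bspline_features(r, centers, 0.2)
            u_ff = w @ phi                             # learned feedforward
            w += eta * u_fb * phi                      # learn until u_fb -> 0
            return u_fb + u_ff

    Because the feedforward weights are trained to null the feedback effort, over repeated motions the feedback component is left handling only non-repeatable disturbances, which is what makes the simple, robust feedback design sufficient.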

    The Spatial Structure of Stimuli Shapes the Timescale of Correlations in Population Spiking Activity

    Throughout the central nervous system, the timescale over which pairs of neural spike trains are correlated is shaped by stimulus structure and behavioral context. Such shaping is thought to underlie important changes in the neural code, but the neural circuitry responsible is largely unknown. In this study, we investigate a stimulus-induced shaping of pairwise spike train correlations in the electrosensory system of weakly electric fish. Simultaneous single-unit recordings of principal electrosensory cells show that an increase in the spatial extent of stimuli increases correlations at short (~10 ms) timescales while simultaneously reducing correlations at long (~100 ms) timescales. A spiking network model of the first two stages of electrosensory processing replicates this correlation shaping, under the assumptions that spatially broad stimuli both saturate feedforward afferent input and recruit an open-loop inhibitory feedback pathway. Our model predictions are experimentally verified using both the natural heterogeneity of the electrosensory system and pharmacological blockade of descending feedback projections. For weak stimuli, linear response analysis of the spiking network shows that the reduction of long-timescale correlation for spatially broad stimuli is similar to correlation cancellation mechanisms previously suggested to be operative in mammalian cortex. The mechanism for correlation shaping supports population-level filtering of irrelevant distractor stimuli, thereby enhancing the population response to relevant prey and conspecific communication inputs. © 2012 Litwin-Kumar et al.
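
    Timescale-dependent correlations of the kind reported here are commonly quantified by correlating spike counts at different bin widths. The sketch below shows that generic computation, assuming spike times given in seconds; it illustrates the measure only and is not the authors' analysis pipeline.

        import numpy as np

        def spike_count_corr(spikes_a, spikes_b, t_max, bin_width):
            """Pearson correlation of two spike trains' counts at one timescale."""
            edges = np.arange(0.0, t_max + bin_width, bin_width)
            counts_a, _ = np.histogram(spikes_a, edges)
            counts_b, _ = np.histogram(spikes_b, edges)
            return np.corrcoef(counts_a, counts_b)[0, 1]

        # Short (~10 ms) versus long (~100 ms) timescale correlation for one pair:
        # rho_short = spike_count_corr(ta, tb, t_max=100.0, bin_width=0.010)
        # rho_long  = spike_count_corr(ta, tb, t_max=100.0, bin_width=0.100)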

    Cortical region interactions and the functional role of apical dendrites

    The basal and distal apical dendrites of pyramidal cells occupy distinct cortical layers and are targeted by axons originating in different cortical regions. Hence, apical and basal dendrites receive information from distinct sources. Physiological evidence suggests that this anatomically observed segregation of input sources may have functional significance. This possibility has been explored in various connectionist models that employ neurons with functionally distinct apical and basal compartments. A neuron in which separate sets of inputs can be integrated independently has the potential to operate in a variety of ways that are not possible for the conventional model of a neuron, in which all inputs are treated equally. This article thus considers how functionally distinct apical and basal dendrites can contribute to the information-processing capacities of single neurons and, in particular, how information from different cortical regions could have disparate effects on neural activity and learning.
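
    Models of this kind replace the single weighted sum of a conventional unit with two separately integrated sums. Below is a minimal sketch of one such scheme, in which basal (feedforward) input determines whether the unit responds and apical (feedback) input modulates the gain of that response; this is one illustrative variant among the possibilities such models explore, and all names are hypothetical.

        import numpy as np

        def two_compartment_unit(x_basal, w_basal, x_apical, w_apical):
            """A unit that integrates apical and basal inputs separately."""
            basal = w_basal @ x_basal             # driving input (basal dendrites)
            apical = w_apical @ x_apical          # contextual input (apical tuft)
            gain = 1.0 + np.maximum(apical, 0.0)  # apical input amplifies only
            return gain * np.maximum(basal, 0.0)  # no response without basal drive

    A conventional unit would instead compute a single rectified sum over the concatenation of both input sets, treating feedforward and feedback inputs identically.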

    Neural networks for aircraft control

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning, and example histories using neighboring optimal control with a neural net are discussed.