
    A new functional role for lateral inhibition in the striatum: Pavlovian conditioning

    The striatum has long been implicated in reinforcement learning, and several neurophysiological studies have suggested it as the substrate for encoding the reward value of stimuli. Reward prediction error (RPE) has been used in several basal ganglia models as the underlying learning signal, giving them Pavlovian conditioning abilities of the kind captured by the Rescorla-Wagner model.
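
To make the learning signal concrete, the following is a minimal sketch of the Rescorla-Wagner update that such RPE-based models build on; the function name and parameter values are illustrative, not taken from the paper.

    # Minimal Rescorla-Wagner simulation of Pavlovian acquisition.
    # The prediction error (reward - prediction) plays the role of the
    # RPE learning signal described above. Names are illustrative.
    import numpy as np

    def rescorla_wagner(stimuli, rewards, alpha=0.1, beta=1.0):
        """stimuli: (trials, n_cues) binary array; rewards: (trials,) array."""
        n_trials, n_cues = stimuli.shape
        V = np.zeros(n_cues)                 # associative strength per cue
        history = np.zeros((n_trials, n_cues))
        for t in range(n_trials):
            x = stimuli[t]
            prediction = V @ x               # summed prediction over present cues
            rpe = rewards[t] - prediction
            V = V + alpha * beta * rpe * x   # update only the cues present this trial
            history[t] = V
        return history

    # Simple acquisition: a single CS paired with reward on every trial.
    V_hist = rescorla_wagner(np.ones((50, 1)), np.ones(50))
    print(V_hist[-1])                        # approaches the asymptote 1.0

Because the prediction is a linear sum over the cues that are present, this rule cannot capture nonlinear discriminations such as negative patterning, which is one motivation for the extension described below.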

Lateral inhibition between striatal projection neurons was once thought to implement a winner-take-all function, useful for selecting between possible actions. However, it has been noted that the reciprocal connections this interpretation requires are too few, and that the synaptic connections that do exist are relatively weak. Still, modeling studies show that lateral inhibition exerts an overall suppressive effect on striatal activity and may play an important role in striatal processing.

Neurophysiological recordings show task-relevant ensembles of neurons that respond at specific points in a behavioral paradigm (Barnes et al., 2005), and these ensembles appear to be induced by lateral inhibition (see Ponzi and Wickens, 2010). We have developed a similarly behaving, RPE-based model of the striatum by incorporating lateral inhibition. Model neurons are assigned to either the direct or the indirect pathway, but lateral connections occur both within and between these groups, leading to competition between individual neurons as well as between the pathways. We successfully applied this model to the simulation of Pavlovian phenomena beyond those of the Rescorla-Wagner model, including negative patterning, unovershadowing, and external inhibition.
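
A schematic sketch of the kind of competition this describes follows; the network size, connection probability, and weights are illustrative assumptions, not the paper's parameters.

    # Schematic sketch of sparse, weak lateral inhibition among striatal
    # projection neurons split into direct and indirect pathways.
    # Sizes, connectivity, and weights are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 40
    pathway = np.array([0] * 20 + [1] * 20)   # 0 = direct, 1 = indirect

    # Sparse inhibitory connections occur within and between the pathways.
    W = -0.1 * (rng.random((n, n)) < 0.15)
    np.fill_diagonal(W, 0.0)

    drive = rng.random(n)                     # excitatory (e.g. cortical) drive
    rate = np.zeros(n)
    for _ in range(200):                      # relax to a steady state
        rate = np.maximum(0.0, drive + W @ rate)

    # Inhibition suppresses overall activity and leaves an ensemble of the
    # most strongly driven neurons active, within and across the pathways.
    print("mean rate with inhibition:", rate.mean(), "without:", drive.mean())
    print("active direct-pathway neurons:", np.sum(rate[pathway == 0] > 0.5))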

    Aftereffects of Saccades Explored in a Dynamic Neural Field Model of the Superior Colliculus

    When viewing a scene or searching for a target, an observer usually makes a series of saccades that quickly shift the orientation of the eyes. The present study explored how one saccade affects subsequent saccades within a dynamic neural field model of the superior colliculus (SC). The SC contains an oculocentric motor map that encodes the vector of saccades and remaps to the new fixation location after each saccade. Our simulations demonstrated that this remapping process, together with the internal dynamics of the SC, can explain the observation that saccades which reverse the preceding vector are slower to initiate than saccades which repeat it. How this finding connects to the study of inhibition of return is discussed, and suggestions for future studies are presented.
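
For readers unfamiliar with dynamic neural field models, the following is a generic one-dimensional (Amari-style) sketch of the kind of field used to model the SC motor map; the kernel shape, time constant, and input are illustrative assumptions, not the parameters of the model in this study.

    # Generic 1-D dynamic neural field: local excitation plus broader
    # inhibition lets a self-sustaining peak form at a stimulated site.
    import numpy as np

    n, dt, tau, h = 101, 1.0, 10.0, -1.0      # grid size, step, time constant, resting level
    x = np.arange(n, dtype=float)
    u = np.full(n, h)                         # field potential, starts at rest

    def kernel(d):
        # "Mexican hat": narrow excitation minus wider inhibition.
        return 1.0 * np.exp(-d**2 / 18.0) - 0.5 * np.exp(-d**2 / 162.0)

    W = kernel(x[:, None] - x[None, :])

    def step(u, inp):
        f = 1.0 / (1.0 + np.exp(-4.0 * u))    # sigmoidal firing rate
        return u + (dt / tau) * (-u + h + W @ f + inp)

    target = 5.0 * np.exp(-(x - 30.0) ** 2 / 8.0)  # input at map site 30
    for _ in range(100):
        u = step(u, target)
    print("peak forms at site:", int(np.argmax(u)))  # ~30

In such a model, activity lingering from the previous saccade goal, together with the remapped location of that goal after the eye movement, biases how quickly a new peak can rise, which is the mechanism the abstract appeals to for vector-repetition versus vector-reversal effects.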

    Logical Activation Functions: Logit-space equivalents of Probabilistic Boolean Operators

    The choice of activation functions and their motivation is a long-standing issue within the neural network community. Neuronal representations within artificial neural networks are commonly understood as logits, representing the log-odds score of the presence of features within the stimulus. We derive logit-space operators equivalent to the probabilistic Boolean logic gates AND, OR, and XNOR for independent probabilities. Such theories are important for formalizing more complex dendritic operations in real neurons, and these operations can be used as activation functions within a neural network, introducing probabilistic Boolean logic as the core operation of the network. Since these functions involve taking multiple exponents and logarithms, they are computationally expensive and not well suited to direct use within neural networks. Consequently, we construct efficient approximations named AND_AIL (the AND operator Approximate for Independent Logits), OR_AIL, and XNOR_AIL, which utilize only comparison and addition operations, have well-behaved gradients, and can be deployed as activation functions in neural networks. Like MaxOut, AND_AIL and OR_AIL are generalizations of ReLU to two dimensions. While our primary aim is to formalize dendritic computations within a logit-space probabilistic-Boolean framework, we deploy these new activation functions, both in isolation and in conjunction, to demonstrate their effectiveness on a variety of tasks including image classification, transfer learning, abstract reasoning, and compositional zero-shot learning.
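
The exact logit-space operators follow directly from the definitions above (logits as log-odds of independent probabilities); below is a minimal sketch of those exact, expensive forms. The paper's efficient AND_AIL/OR_AIL/XNOR_AIL approximations are not reproduced here, since their definitions are not given in this abstract.

    # Exact logit-space equivalents of the probabilistic gates described
    # above, for independent probabilities p_i = sigmoid(x_i). Each call
    # needs sigmoids and logarithms, which is what makes the exact forms
    # expensive compared with the comparison-and-addition AIL variants.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def logit(p, eps=1e-12):
        p = np.clip(p, eps, 1.0 - eps)
        return np.log(p) - np.log1p(-p)       # log-odds of p

    def and_logit(x1, x2):
        # P(A and B) = p1 * p2 for independent events.
        return logit(sigmoid(x1) * sigmoid(x2))

    def or_logit(x1, x2):
        # P(A or B) = p1 + p2 - p1 * p2.
        p1, p2 = sigmoid(x1), sigmoid(x2)
        return logit(p1 + p2 - p1 * p2)

    def xnor_logit(x1, x2):
        # P(A xnor B) = p1 * p2 + (1 - p1) * (1 - p2).
        p1, p2 = sigmoid(x1), sigmoid(x2)
        return logit(p1 * p2 + (1 - p1) * (1 - p2))

    x1, x2 = np.array([2.0, -2.0]), np.array([2.0, 2.0])
    print(and_logit(x1, x2), or_logit(x1, x2), xnor_logit(x1, x2))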