82 research outputs found

    A Common Cortical Circuit Mechanism for Perceptual Categorical Discrimination and Veridical Judgment

    Perception involves two types of decisions about the sensory world: identification of stimulus features as analog quantities, or discrimination of the same stimulus features among a set of discrete alternatives. Veridical judgment and categorical discrimination have traditionally been conceptualized as two distinct computational problems. Here, we found that these two types of decision making can be subserved by a shared cortical circuit mechanism. We used a continuous recurrent network model to simulate two monkey experiments in which subjects were required to make either a two-alternative forced choice or a veridical judgment about the direction of random-dot motion. The model network is endowed with a continuum of bell-shaped population activity patterns, each representing a possible motion direction. Slow recurrent excitation underlies accumulation of sensory evidence, and its interplay with strong recurrent inhibition leads to decision behaviors. The model reproduced the monkeys' performance as well as single-neuron activity in the categorical discrimination task. Furthermore, we examined how direction identification is determined by a combination of sensory stimulation and microstimulation. Using a population-vector measure, we found that direction judgments instantiate winner-take-all (with the population vector coinciding with either the coherent motion direction or the electrically elicited motion direction) when the two stimuli are far apart, or vector averaging (with the population vector falling between the two directions) when the two stimuli are close to each other. Interestingly, for a broad range of intermediate angular distances between the two stimuli, the network displays a mixed strategy, in the sense that direction estimates are stochastically produced by winner-take-all on some trials and by vector averaging on others, a model prediction that is experimentally testable. This work thus lends support to a common neurodynamic framework for both veridical judgment and categorical discrimination in perceptual decision making.
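The population-vector readout used above is easy to sketch. The following is an illustrative reconstruction, not the paper's code: the bell-shaped activity profile, its width, and the relative amplitudes (standing in for the outcome of the network's competition) are assumed values.

```python
import numpy as np

angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)  # preferred directions

def bump(center, amp=1.0, width=0.3):
    # Bell-shaped (von Mises-like) population activity over preferred directions
    return amp * np.exp((np.cos(angles - center) - 1.0) / width)

def population_vector(rates):
    # Circular mean of preferred directions weighted by firing rate
    z = np.sum(rates * np.exp(1j * angles))
    return np.angle(z) % (2 * np.pi)

# Vector averaging: two nearby stimuli -> readout falls between them
close = bump(np.deg2rad(80)) + bump(np.deg2rad(100))
avg = np.rad2deg(population_vector(close))   # ~90 degrees

# Winner-take-all: once competition has suppressed one bump,
# the readout coincides with the surviving direction
winner = bump(np.deg2rad(30)) + 0.05 * bump(np.deg2rad(150))
wta = np.rad2deg(population_vector(winner))  # close to 30 degrees
```

In the model the two regimes arise from the same recurrent dynamics; here the suppression of the losing bump is simply imposed to show what the readout reports in each case.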

    Collective stability of networks of winner-take-all circuits

    The neocortex has a remarkably uniform neuronal organization, suggesting that common principles of processing are employed throughout its extent. In particular, the patterns of connectivity observed in the superficial layers of the visual cortex are consistent with the recurrent excitation and inhibitory feedback required for cooperative-competitive circuits such as the soft winner-take-all (WTA). WTA circuits offer interesting computational properties such as selective amplification, signal restoration, and decision making. But these properties depend on the signal gain derived from positive feedback, and so there is a critical trade-off between providing feedback strong enough to support sophisticated computations and maintaining overall circuit stability. We consider the question of how to reason about stability in very large distributed networks of such circuits. We approach this problem by approximating the regular cortical architecture as many interconnected cooperative-competitive modules. We demonstrate that by properly understanding the behavior of this small computational module, one can reason over the stability and convergence of very large networks composed of these modules. We obtain parameter ranges in which the WTA circuit operates in a high-gain regime, is stable, and can be aggregated arbitrarily to form large stable networks. We use nonlinear Contraction Theory to establish conditions for stability in the fully nonlinear case, and verify these solutions using numerical simulations. The derived bounds allow modes of operation in which the WTA network is multi-stable and exhibits state-dependent persistent activities. Our approach is sufficiently general to reason systematically about the stability of any network, biological or technological, composed of networks of small modules that express competition through shared inhibition.
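A minimal rate-based sketch of one such soft-WTA module: self-excitation with gain above one, competition through shared inhibition, and a threshold-linear nonlinearity. The gains and time step below are illustrative choices, not the stability bounds derived in the paper.

```python
import numpy as np

def soft_wta(inputs, alpha=1.2, beta=2.0, steps=2000, dt=0.01):
    """Soft winner-take-all via recurrent self-excitation (alpha) and
    shared inhibition (beta). Euler integration of
        dx_i/dt = -x_i + [input_i + alpha*x_i - beta*sum_j(x_j)]_+
    with [.]_+ a threshold-linear (rectifying) nonlinearity."""
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        drive = inputs + alpha * x - beta * x.sum()
        x += dt * (-x + np.maximum(drive, 0.0))
    return x

rates = soft_wta(np.array([1.0, 1.2, 0.9]))
# The unit with the largest input retains persistent activity
# (selective amplification); its competitors are suppressed to zero.
```

With alpha > 1 the positive feedback amplifies differences between units, while the shared inhibition keeps total activity bounded; this is the high-gain regime whose stability the paper characterizes with contraction analysis.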

    Coherence and recurrency: maintenance, control and integration in working memory

    Working memory (WM), including a ‘central executive’, is used to guide behavior by internal goals or intentions. We suggest that WM is best described as a set of three interdependent functions which are implemented in the prefrontal cortex (PFC). These functions are maintenance, control of attention and integration. A model for the maintenance function is presented, and we will argue that this model can be extended to incorporate the other functions as well. Maintenance is the capacity to briefly maintain information in the absence of corresponding input, and even in the face of distracting information. We will argue that maintenance is based on recurrent loops between PFC and posterior parts of the brain, and probably within PFC as well. In these loops information can be held temporarily in an active form. We show that a model based on these structural ideas is capable of maintaining a limited number of neural patterns. Not the size, but the coherence of patterns (i.e., a chunking principle based on synchronous firing of interconnected cell assemblies) determines the maintenance capacity. A mechanism that optimizes coherent pattern segregation also poses a limit to the number of assemblies (about four) that can concurrently reverberate. Top-down attentional control (in perception, action and memory retrieval) can be modelled by the modulation and re-entry of top-down information to posterior parts of the brain. Hierarchically organized modules in PFC create the possibility for information integration. We argue that large-scale multimodal integration of information creates an ‘episodic buffer’, and may even suffice for implementing a central executive.
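The core maintenance idea, activity sustained by a recurrent loop after the input is withdrawn, can be sketched very simply. This is an assumption-laden stand-in: it uses a Hebbian attractor network rather than the synchrony-based chunking mechanism the paper actually proposes, and the network size and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
pattern = rng.choice([-1.0, 1.0], size=n)   # one neural pattern to maintain

# Hebbian recurrent weights storing the pattern (no self-connections)
W = np.outer(pattern, pattern) / n
np.fill_diagonal(W, 0.0)

# Drive the loop briefly with a noisy cue, then remove the input:
# recurrent feedback alone keeps the pattern active ("reverberation")
state = np.sign(pattern + 0.8 * rng.standard_normal(n))
for _ in range(10):                  # free-running updates, no external input
    state = np.sign(W @ state)

overlap = np.mean(state * pattern)   # 1.0 = pattern perfectly maintained
```

The recurrent loop cleans up the degraded cue and then holds it indefinitely without input, which is the maintenance property the abstract describes; capacity limits in the paper come instead from how many coherent assemblies can reverberate concurrently.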

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.

    Bumps and rings in a two-dimensional neural field: splitting and rotational instabilities

    In this paper we consider instabilities of localised solutions in planar neural field firing rate models of Wilson-Cowan or Amari type. Importantly we show that angular perturbations can destabilise spatially localised solutions. For a scalar model with Heaviside firing rate function we calculate symmetric one-bump and ring solutions explicitly and use an Evans function approach to predict the point of instability and the shapes of the dominant growing modes. Our predictions are shown to be in excellent agreement with direct numerical simulations. Moreover, beyond the instability our simulations demonstrate the emergence of multi-bump and labyrinthine patterns. With the addition of spike-frequency adaptation, numerical simulations of the resulting vector model show that it is possible for structures without rotational symmetry, and in particular multi-bumps, to undergo an instability to a rotating wave. We use a general argument, valid for smooth firing rate functions, to establish the conditions necessary to generate such a rotational instability. Numerical continuation of the rotating wave is used to quantify the emergent angular velocity as a bifurcation parameter is varied. Wave stability is found via the numerical evaluation of an associated eigenvalue problem.
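A one-dimensional reduction of the Amari model with a Heaviside firing rate shows the basic bump solution the paper starts from (the paper itself treats the planar case, where the angular instabilities live). Kernel parameters, threshold, and discretization below are assumed illustrative values, not those of the paper.

```python
import numpy as np

# 1D Amari-type field:  du/dt = -u + w * H(u - theta),
# with H the Heaviside firing rate and w a Mexican-hat kernel
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = L / n

def mexican_hat(d, ae=1.0, se=1.0, ai=0.5, si=2.0):
    # Local excitation minus broader inhibition
    return ae * np.exp(-d**2 / (2 * se**2)) - ai * np.exp(-d**2 / (2 * si**2))

# Connectivity matrix on the periodic domain
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, L - d)
W = mexican_hat(d) * dx

theta, dt = 0.3, 0.05
u = np.exp(-x**2)                       # localized initial condition
for _ in range(1500):                   # Euler integration to steady state
    u += dt * (-u + W @ (u > theta))

bump_width = np.sum(u > theta) * dx     # width of the stationary one-bump
```

In the scalar Heaviside case the stationary bump width solves Amari's classical condition W(a) = theta for the integrated kernel, which is what makes the explicit one-bump and ring constructions in the paper possible; the 2D model then adds the angular perturbation modes that can split or rotate these solutions.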

    Associative memory of phase-coded spatiotemporal patterns in leaky Integrate and Fire networks

    We study the collective dynamics of a Leaky Integrate and Fire network in which precise relative phase relationships of spikes among neurons are stored, as attractors of the dynamics, and selectively replayed at different time scales. Using an STDP-based learning process, we store in the connectivity several phase-coded spike patterns, and we find that, depending on the excitability of the network, different working regimes are possible, with transient or persistent replay activity induced by a brief signal. We introduce an order parameter to evaluate the similarity between stored and recalled phase-coded patterns, and measure the storage capacity. Modulation of spiking thresholds during replay changes the frequency of the collective oscillation or the number of spikes per cycle, while preserving the phase relationships. This allows a coding scheme in which phase, rate and frequency are dissociable. Robustness with respect to noise and heterogeneity of neuron parameters is studied, showing that, since the dynamics is a retrieval process, neurons preserve stable, precise phase relationships among units, keeping a unique frequency of oscillation, even in noisy conditions and with heterogeneity of the internal parameters of the units.
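An order parameter of the kind described, measuring how well replayed spike times reproduce a stored phase pattern, can be sketched as a circular overlap. This is a common construction for phase-coded patterns; the paper's exact definition may differ, and the neuron count and period below are assumed values.

```python
import numpy as np

def phase_overlap(stored_phases, spike_times, period):
    """Order parameter comparing a stored phase pattern with replayed spike
    times: 1 when every neuron fires at its stored phase (up to a common
    rotation of all phases), near 0 for unrelated spiking."""
    recalled = 2 * np.pi * (np.asarray(spike_times) % period) / period
    return np.abs(np.mean(np.exp(1j * (recalled - stored_phases))))

rng = np.random.default_rng(1)
n, T = 50, 25.0                          # neurons, oscillation period (ms)
stored = rng.uniform(0, 2 * np.pi, n)    # stored phase-coded pattern

# Faithful replay at half the stored frequency: spike times stretch,
# but the relative phases are preserved, so the overlap stays ~1
replay = (stored / (2 * np.pi)) * (2 * T)
m_replay = phase_overlap(stored, replay, 2 * T)

# Unrelated spiking gives a small overlap (~1/sqrt(n))
m_random = phase_overlap(stored, rng.uniform(0, T, n), T)
```

Because the overlap depends only on phases relative to the collective cycle, it captures exactly the dissociation the abstract describes: replay at a different frequency or rate can still score as a perfect retrieval of the stored pattern.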

    Hebbian Reverberations in Emotional Memory Micro Circuits

    The study of memory in most behavioral paradigms, including emotional memory paradigms, has focused on the feedforward components that underlie Hebb's first postulate, associative synaptic plasticity. Hebb's second postulate argues that activated ensembles of neurons reverberate in order to provide temporal coordination of different neural signals, and thereby facilitate coincidence detection. Recent evidence from our groups has suggested that the lateral amygdala (LA) contains recurrent microcircuits and that these may reverberate. Additionally, this reverberant activity is precisely timed with latencies that would facilitate coincidence detection between cortical and subcortical afferents to the LA. Thus, recent data at the microcircuit level in the amygdala provide some physiological evidence in support of the second Hebbian postulate.

    Existence and Wandering of Bumps in a Spiking Neural Network Model
