
    Detecting Multiple Change Points Using Adaptive Regression Splines With Application to Neural Recordings

    Time series, as is frequently the case in neuroscience, are rarely stationary, but often exhibit abrupt changes due to attractor transitions or bifurcations in the dynamical systems producing them. A plethora of methods for detecting such change points in time series statistics have been developed over the years, in addition to test criteria to evaluate their significance. Issues to consider when developing change point analysis methods include computational demands, difficulties arising from either a limited amount of data or a large number of covariates, and arriving at statistical tests with sufficient power to detect as many changes as are contained in potentially high-dimensional time series. Here, a general method called Paired Adaptive Regressors for Cumulative Sum is developed for detecting multiple change points in the mean of multivariate time series. The method's advantages over alternative approaches are demonstrated through a series of simulation experiments. This is followed by a real data application to neural recordings from rat medial prefrontal cortex during learning. Finally, the method's flexibility to incorporate useful features from state-of-the-art change point detection techniques is discussed, along with potential drawbacks and suggestions to remedy them.
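The cumulative-sum statistic underlying this family of methods can be illustrated with a minimal single-change-point detector. This is a plain CUSUM sketch, not the paper's adaptive-spline procedure; the function name and the test signal are illustrative:

```python
import numpy as np

def cusum_change_point(x):
    """Estimate a single change point in the mean of a 1-D series.

    Returns the index that maximizes the absolute CUSUM of the
    mean-centered series; the mean is assumed to shift after it.
    """
    x = np.asarray(x, dtype=float)
    s = np.cumsum(x - x.mean())  # CUSUM of deviations from the global mean
    return int(np.argmax(np.abs(s))) + 1

# toy signal: mean shifts from 0 to 2 at index 100
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.0, 1.0, 100),
                         rng.normal(2.0, 1.0, 100)])
print(cusum_change_point(series))  # close to the true change at index 100
```

Detecting multiple change points, as the paper's method does, requires repeating such a test on subsegments (or fitting several regressors jointly) together with a significance criterion for each candidate.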

    Behavior control in the sensorimotor loop with short-term synaptic dynamics induced by self-regulating neurons

    The behavior and skills of living systems depend on the distributed control provided by specialized and highly recurrent neural networks. Learning and memory in these systems are mediated by a set of adaptation mechanisms, known collectively as neuronal plasticity. Translating principles of recurrent neural control and plasticity to artificial agents has seen major strides, but is usually hampered by the complex interactions between the agent's body and its environment. One important standing issue is for the agent to support multiple stable states of behavior, so that its behavioral repertoire matches the requirements imposed by these interactions. The agent must also have the capacity to switch between these states on time scales comparable to those on which sensory stimulation varies. Achieving this requires a mechanism of short-term memory that allows the neurocontroller to keep track of the recent history of its input, which finds its biological counterpart in short-term synaptic plasticity. This issue is approached here by deriving synaptic dynamics in recurrent neural networks. Neurons are introduced as self-regulating units with a rich repertoire of dynamics. They exhibit homeostatic properties for certain parameter domains, which result in a set of stable states and the required short-term memory. They can also operate as oscillators, which allows them to surpass the level of activity imposed by their homeostatic operating conditions. Neural systems endowed with the derived synaptic dynamics can be utilized for the neural behavior control of autonomous mobile agents. The resulting behavior also depends on the underlying network structure, which is either engineered or developed by evolutionary techniques. The effectiveness of these self-regulating units is demonstrated by controlling locomotion of a hexapod with 18 degrees of freedom, and obstacle avoidance of a wheel-driven robot. © 2014 Toutounji and Pasemann

    Spatiotemporal Computations of an Excitable and Plastic Brain: Neuronal Plasticity Leads to Noise-Robust and Noise-Constructive Computations

    It is a long-established fact that neuronal plasticity plays a central role in generating neural function and computation. Nevertheless, no unifying account exists of how neurons in a recurrent cortical network learn to compute on temporally and spatially extended stimuli, even though such stimuli constitute the norm, rather than the exception, of the brain's input. Here, we introduce a geometric theory of learning spatiotemporal computations through neuronal plasticity. To that end, we rigorously formulate the problem of neural representations as a relation in space between stimulus-induced neural activity and the asymptotic dynamics of excitable cortical networks. Backed by computer simulations and numerical analysis, we show that two canonical and widespread forms of neuronal plasticity, spike-timing-dependent synaptic plasticity and intrinsic plasticity, are both necessary for creating neural representations such that these computations become realizable. Interestingly, the effects of these forms of plasticity on the emerging neural code relate to properties necessary for both combating and utilizing noise. The neural dynamics also exhibits features of the most likely stimulus in the network's spontaneous activity. These properties of the spatiotemporal neural code resulting from plasticity, having their grounding in nature, further consolidate the biological relevance of our findings.
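Of the two plasticity mechanisms named above, the synaptic one can be sketched in isolation with a textbook pair-based STDP update (an exponential time window; this is a generic illustration, not the specific model used in the study, and all parameter values are made up):

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate the weight if the presynaptic spike
    precedes the postsynaptic spike, depress it otherwise. The change
    decays exponentially with the spike-time difference (time in ms)."""
    dt = t_post - t_pre
    if dt > 0:                                 # pre before post -> LTP
        return w + a_plus * np.exp(-dt / tau)
    return w - a_minus * np.exp(dt / tau)      # post before pre -> LTD

print(round(stdp_update(0.5, t_pre=10.0, t_post=15.0), 4))  # 0.5078 (potentiation)
print(round(stdp_update(0.5, t_pre=15.0, t_post=10.0), 4))  # 0.4907 (depression)
```

Intrinsic plasticity, the second mechanism, instead adjusts a neuron's own excitability parameters (e.g., threshold and gain) rather than its synapses.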

    Homeostatic plasticity for single node delay-coupled reservoir computing

    © 2015 Massachusetts Institute of Technology. Supplementing a differential equation with delays results in an infinite-dimensional dynamical system. This property provides the basis for a reservoir computing architecture, where the recurrent neural network is replaced by a single nonlinear node, delay-coupled to itself. Instead of the spatial topology of a network, subunits in the delay-coupled reservoir are multiplexed in time along one delay span of the system. The computational power of the reservoir is contingent on this temporal multiplexing. Here, we learn optimal temporal multiplexing by means of a biologically inspired homeostatic plasticity mechanism. Plasticity acts locally and changes the distances between the subunits along the delay, depending on how responsive these subunits are to the input. After analytically deriving the learning mechanism, we illustrate its role in improving the reservoir's computational power. To this end, we investigate, first, the increase of the reservoir's memory capacity. Second, we predict a NARMA-10 time series, showing that plasticity reduces the normalized root-mean-square error by more than 20%. Third, we discuss plasticity's influence on the reservoir's input-information capacity, the coupling strength between subunits, and the distribution of the readout coefficients.
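The temporal multiplexing described above can be sketched as follows. This is a coarse discrete approximation with fixed, equidistant subunits (the homeostatic rule that adapts the subunit spacings is not included), and all parameter names and values are illustrative:

```python
import numpy as np

def dcr_states(u, n_virtual=50, eta=0.8, gamma=0.5, seed=1):
    """Collect virtual-node states of a single-node delay-coupled reservoir.

    Each input sample is multiplexed in time across n_virtual subunits via
    a random binary mask; each subunit is driven by its own state one delay
    span (here, one input step) earlier through the delayed feedback."""
    rng = np.random.default_rng(seed)
    mask = rng.choice([-1.0, 1.0], size=n_virtual)   # input mask along the delay
    states = np.zeros((len(u), n_virtual))
    prev = np.zeros(n_virtual)                       # states one delay span ago
    for k, u_k in enumerate(u):
        prev = np.tanh(eta * prev + gamma * mask * u_k)
        states[k] = prev
    return states

# the state matrix plays the role of a recurrent network's activity;
# a trained linear readout on it performs the actual computation
X = dcr_states(np.sin(np.linspace(0.0, 10.0, 200)))
print(X.shape)  # (200, 50)
```

In the paper's architecture the subunit spacings along the delay are the quantities that homeostatic plasticity adapts; here they are simply uniform.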

    Coordinated prefrontal state transition leads extinction of reward-seeking behaviors

    Extinction learning suppresses conditioned reward responses and is thus fundamental to adapt to changing environmental demands and to control excessive reward seeking. The medial prefrontal cortex (mPFC) monitors and controls conditioned reward responses. Abrupt transitions in mPFC activity anticipate changes in conditioned responses to altered contingencies. It remains unknown, however, whether such transitions are driven by the extinction of old behavioral strategies or by the acquisition of new competing ones. Using in vivo multiple single-unit recordings of mPFC in male rats, we studied the relationship between single-unit and population dynamics during extinction learning, using alcohol as a positive reinforcer in an operant conditioning paradigm. To examine the fine temporal relation between neural activity and behavior, we developed a novel behavioral model that allowed us to identify the number, onset, and duration of extinction-learning episodes in the behavior of each animal. We found that single-unit responses to conditioned stimuli changed even under stable experimental conditions and behavior. However, when behavioral responses to task contingencies had to be updated, unit-specific modulations became coordinated across the whole population, pushing the network into a new stable attractor state. Thus, extinction learning is not associated with suppressed mPFC responses to conditioned stimuli, but is anticipated by single-unit coordination into population-wide transitions of the internal state of the animal.

    Species-conserved mechanisms of cognitive flexibility in complex environments

    Flexible decision making in complex environments is a hallmark of intelligent behavior, but the underlying learning mechanisms and neural computations remain elusive. Through a combination of behavioral, computational, and electrophysiological analyses of a novel multidimensional rule-learning paradigm, we show that both rats and humans sequentially probe different behavioral strategies to infer the task rule, rather than learning all possible mappings between environmental cues and actions, as current theoretical formulations suppose. This species-conserved process reduces task dimensionality and explains both the observed sudden behavioral transitions and positive transfer effects. Behavioral strategies are represented by rat prefrontal activity, and strategy-related variables can be decoded from magnetoencephalography signals in human prefrontal cortex. These mechanistic findings provide a foundation for the translational investigation of impaired cognitive flexibility. One-Sentence Summary: Both rats and humans use behavioral strategies to infer task rules during multidimensional rule-learning.

    26th Annual Computational Neuroscience Meeting (CNS*2017): Part 1


    Homeostatic Plasticity in Input-Driven Dynamical Systems

    The degree to which a species can adapt to the demands of its changing environment defines how well it can exploit the resources of new ecological niches. Since the nervous system is the seat of an organism's behavior, studying adaptation starts from there. The nervous system adapts through neuronal plasticity, which may be considered the brain's reaction to environmental perturbations. In a natural setting, these perturbations are always changing. As such, a full understanding of how the brain functions requires studying neuronal plasticity under temporally varying stimulation conditions, i.e., studying the role of plasticity in carrying out spatiotemporal computations. Only then can we exploit the full potential of neural information processing to build powerful brain-inspired adaptive technologies. Here, we focus on homeostatic plasticity, where certain properties of the neural machinery are regulated so that they remain within a functionally and metabolically desirable range. Our main goal is to illustrate how homeostatic plasticity interacting with associative mechanisms is functionally relevant for spatiotemporal computations. The thesis consists of three studies that share two features: (1) homeostatic and synaptic plasticity act on a dynamical system such as a recurrent neural network; (2) the dynamical system is nonautonomous, that is, it is subject to temporally varying stimulation. In the first study, we develop a rigorous theory of spatiotemporal representations and computations and of the role of plasticity therein. Within the developed theory, we show that homeostatic plasticity increases the capacity of the network to encode spatiotemporal patterns, and that synaptic plasticity associates these patterns to network states. The second study applies the insights from the first study to the single-node delay-coupled reservoir computing architecture, or DCR. The DCR's activity is sampled at several computational units. We derive a homeostatic plasticity rule acting on these units and analytically show that the rule balances the two processes necessary for spatiotemporal computations identified in the first study. As a result, we show that the computational power of the DCR significantly increases. The third study considers minimal neural control of robots. We show that recurrent neural control with homeostatic synaptic dynamics endows the robots with memory, and we demonstrate that this memory is necessary for generating behaviors such as obstacle avoidance in a wheel-driven robot and stable hexapod locomotion.
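The homeostatic regulation described above, keeping a neural property within a desirable range, can be illustrated with a simple multiplicative synaptic-scaling rule that drives a toy neuron's firing rate toward a set point. This is a generic textbook-style sketch, not one of the thesis's derived mechanisms, and all names and values are illustrative:

```python
import numpy as np

def synaptic_scaling(w, rate, target=5.0, lr=0.01):
    """Multiplicatively scale all incoming weights so the neuron's firing
    rate drifts toward the target set point. Multiplicative scaling
    preserves the relative strengths of the synapses."""
    return w * (1.0 + lr * (target - rate) / target)

w = np.array([0.2, 0.4])
for _ in range(2000):
    rate = 10.0 * w.sum()            # toy linear rate model: gain * total input
    w = synaptic_scaling(w, rate)
print(round(10.0 * w.sum(), 2))  # 5.0, the target rate
```

Because the scaling is multiplicative, the ratio between the two weights stays at 1:2 while their sum settles at the value that yields the target rate, which is the essence of a homeostatic set-point mechanism.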