
    The “tweaking principle” for task switching


    DynaSim: a MATLAB toolbox for neural modeling and simulation

    DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB, given the language's popularity and the growing interest in using it to model neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community.

    This material is based upon research supported by the U.S. Army Research Office under award number ARO W911NF-12-R-0012-02, the U.S. Office of Naval Research under award number ONR MURI N00014-16-1-2832, and the National Science Foundation under award number NSF DMS-1042134 (Cognitive Rhythms Collaborative: A Discovery Network).

    Sherfey, J. S., Soplata, A. E., Ardid-Ramírez, J. S., Roberts, E. A., Stanley, D. A., Pittman-Polletta, B. R., & Kopell, N. J. (2018). DynaSim: a MATLAB toolbox for neural modeling and simulation. Frontiers in Neuroinformatics, 12:1-15. https://doi.org/10.3389/fninf.2018.00010
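    The equation-based workflow that DynaSim automates for whole networks can be illustrated with a minimal hand-rolled integrator. The sketch below is in Python rather than MATLAB and does not use the DynaSim API; it integrates Izhikevich's simple spiking-neuron model with forward Euler, using his standard regular-spiking parameters, just to show the kind of model such toolboxes specify by equations:

    ```python
    def simulate_izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0,
                            I=10.0, dt=0.5, t_max=200.0):
        """Forward-Euler integration of the Izhikevich (2003) neuron model."""
        n_steps = int(t_max / dt)
        v, u = c, b * c          # membrane potential (mV) and recovery variable
        spike_times = []
        for step in range(n_steps):
            v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:        # spike: record time and reset per the model
                spike_times.append(step * dt)
                v, u = c, u + d
        return spike_times

    spikes = simulate_izhikevich()
    print(f"{len(spikes)} spikes in 200 ms of tonic drive")
    ```

    DynaSim's contribution is precisely that such equations can be declared once and then swept over parameter grids, batched onto clusters, and wired into populations without rewriting the integration loop by hand.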

    A Neural Circuit Model of the Striatum Resolves the Conflict between Context and Dominance Apparent in the Prefrontal Cortex

    Neurons in the prefrontal cortex (PFC) encode sensory and context information, as well as sensory dominance in context-dependent decision-making […]

    What can tracking fluctuations in dozens of sensory neurons tell about selective attention? (General Commentary)

    A commentary on "A neuronal population measure of attention predicts behavioral performance on individual trials" by Cohen and Maunsell (2010).

    Answering the question "How does attentional correlate 'X' subserve behavioral benefits?" requires reliable measurement of the attentional correlate on the timescale of behavioral changes. However, the high variability present in the activity of sensory neurons makes it difficult to observe a net effect of most attentional measures on a single-trial basis. Most of the experimental evidence is based on averaging activity across trials so that the signal-to-noise ratio increases and the attentional effect becomes significant. Such averaging has strong limitations: it cannot be used to evaluate whether the attentional state changes within a given attentional condition, nor to what extent attentional fluctuations, if they exist, covary with behavior on a trial-by-trial basis.

    A recent study by Cohen and Maunsell devised a trial-based measure that extended the typical single-neuron measure of attentional modulation. For each correct trial, the activity from all recorded neurons during the previous stimulus (i.e., the stimulus preceding the orientation change) was represented as a point in a multi-dimensional space, where each dimension encoded the activity of a single neuron. Next, an "attention axis" was defined as the line connecting the centroids of the two clusters, each associated with one attentional condition (e.g., attention to the left hemifield). The projection of each point onto the attention axis represented a measure of the attentional modulation in that trial. Projections from error trials tended toward the opposite attentional condition, indicating that the attentional modulation was less biased toward the cued location. These measurements showed that, within a trial, attention during one stimulus presentation correlated with attention during the subsequent presentation. This explained why performance, ranging from nearly 0 to 70% correct trials within an attentional condition, covaried on a single-trial basis with the attentional state during the previous stimulus. For example, a high attentional allocation during the previous stimulus was associated with good discrimination along the attention axis, which mostly persisted into the change stimulus and eventually facilitated change detection. The high predictive power of the attentional measure relied on only a few dozen neurons, which demonstrates the potential of the procedure.

    A remarkable and unanticipated finding of the Cohen and Maunsell (2010) study was that attentional fluctuations in the two hemispheres appear to be independent: when the population was divided by hemisphere, the projections for the two separate hemispheres were not significantly correlated. This result has potential implications for the nature of the neural circuits serving as the source of the attentional signal. In their view, this lack of correlation in V4 challenged the concept of a unified attentional "spotlight" that can only be directed to one location at a time, on the grounds that an attentional spotlight would induce an anticorrelation between the population projections of the two V4 hemispheres. Instead, they suggested that attention can be flexibly distributed across both hemifields and instantiated by separate neural populations with independent fluctuations. However, this conclusion does not necessarily follow from observing uncorrelated attentional fluctuations in V4. For concreteness, we present two alternative scenarios based on a single source of attention. Critically, the viability of each alternative depends on the magnitude of attentional modulation in the less-attended hemisphere, which the authors could not measure by task design. Since only the difference in attention between the two locations could be measured, it is not known whether the strength of the top-down attentional signal to the less-attended hemisphere was moderate […]
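    The attention-axis construction described in the commentary is straightforward to sketch. The following Python illustration uses synthetic data; the firing-rate distributions, neuron count, and effect size are invented for the example and are not taken from the Cohen and Maunsell recordings:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical population activity: trials x neurons, one matrix
    # per attentional condition (rates drawn from made-up distributions).
    n_neurons = 40
    attend_left = rng.normal(5.0, 1.0, size=(100, n_neurons))
    attend_right = rng.normal(5.5, 1.0, size=(100, n_neurons))

    # The "attention axis" is the unit vector joining the two condition centroids.
    axis = attend_right.mean(axis=0) - attend_left.mean(axis=0)
    axis /= np.linalg.norm(axis)

    # Projecting each single-trial population vector onto the axis yields
    # a per-trial scalar measure of attentional modulation.
    proj_left = attend_left @ axis
    proj_right = attend_right @ axis
    print(proj_left.mean() < proj_right.mean())  # → True
    ```

    The point of the construction is that the axis pools weak, noisy single-neuron modulations into one direction in population space, so individual trials become separable even when no single neuron is reliable on its own.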

    Attentional Selection Can Be Predicted by Reinforcement Learning of Task-relevant Stimulus Features Weighted by Value-independent Stickiness

    Attention includes processes that evaluate stimulus relevance, select the most relevant stimulus against less relevant stimuli, and bias choice behavior toward the selected information. It is not clear how these processes interact. Here, we captured these processes in a reinforcement learning framework applied to a feature-based attention task that required macaques to learn and update the value of stimulus features while ignoring nonrelevant sensory features, locations, and action plans. We found that value-based reinforcement learning mechanisms could account for feature-based attentional selection and choice behavior but required a value-independent stickiness selection process to explain selection errors at asymptotic behavior. By comparing different reinforcement learning schemes, we found that trial-by-trial selections were best predicted by a model that only represents expected values for the task-relevant feature dimension, with nonrelevant stimulus features and action plans having only a marginal influence on covert selections. These findings show that attentional control subprocesses can be described by (1) the reinforcement learning of feature values within a restricted feature space that excludes irrelevant feature dimensions, (2) a stochastic selection process on feature-specific value representations, and (3) value-independent stickiness toward previous feature selections akin to perseveration in the motor domain. We speculate that these three mechanisms are implemented by distinct but interacting brain circuits and that the proposed formal account of feature-based stimulus selection will be important to understand how attentional subprocesses are implemented in primate brain networks.
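    The three components named in the abstract (value learning restricted to the relevant feature dimension, stochastic selection, and value-independent stickiness) can be sketched as a simple model. This is an illustration, not the fitted model from the paper, and the parameter values (beta, stickiness, alpha) are placeholders:

    ```python
    import numpy as np

    def choice_probs(q_values, last_choice, beta=3.0, stickiness=0.5):
        """Softmax over feature values plus a value-independent
        stickiness bonus for the previously selected feature."""
        logits = beta * np.asarray(q_values, dtype=float)
        if last_choice is not None:
            logits[last_choice] += stickiness   # bonus independent of value
        logits -= logits.max()                  # numerical stability
        p = np.exp(logits)
        return p / p.sum()

    def update_q(q_values, choice, reward, alpha=0.2):
        """Rescorla-Wagner update applied only to the chosen feature,
        within the restricted (task-relevant) feature dimension."""
        q = np.array(q_values, dtype=float)
        q[choice] += alpha * (reward - q[choice])
        return q

    # With equal values, stickiness alone biases selection toward the
    # previously chosen feature, producing value-independent perseveration.
    p = choice_probs([0.5, 0.5], last_choice=0)
    print(p[0] > p[1])  # → True
    ```

    Separating the stickiness term from the value term is what lets such a model attribute asymptotic selection errors to perseveration rather than to residual uncertainty in the learned values.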