    Motor Cortical Networks for Skilled Movements Have Dynamic Properties That Are Related to Accurate Reaching

    Neurons in the Primary Motor Cortex (MI) are known to form functional ensembles with one another in order to produce voluntary movement. Neural network changes during skill learning are thought to be involved in the improved fluency and accuracy of motor tasks. Unforced errors during skilled tasks provide an avenue to study network connections related to motor learning. In order to investigate network activity in MI, microwires were implanted in the MI of cats trained to perform a reaching task. Spike trains from eight groups of simultaneously recorded cells (95 neurons in total) were acquired. A point process generalized linear model (GLM) was developed to assess simultaneously recorded cells for functional connectivity during reaching attempts in which unforced errors or no errors were made. Whilst the same groups of neurons were often functionally connected regardless of trial success, functional connectivity between neurons differed significantly at fine time scales when the outcome of task performance changed. Furthermore, connections were shown to be significantly more robust across multiple latencies during successful trials. The results of this study indicate that reach-related neurons in MI form dynamic spiking dependencies whose temporal features are highly sensitive to unforced movement errors. Funding: National Science Foundation (U.S.) (Grant DP1-OD003646); National Science Foundation (U.S.) (R01-DA015644); Australian Neuromuscular Research Institute.
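
    The abstract does not spell out the model, but the technique it names, a point-process GLM with spike-history and coupling filters fit to binned spike trains, can be sketched roughly as below. This is a minimal illustration on toy surrogate data, not the authors' implementation; the neuron count, lag window, and learning rate are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy surrogate data: binary spike trains for 3 neurons in 1 ms bins
        # (a stand-in for the simultaneously recorded MI cells).
        T, n_neurons, n_lags = 20000, 3, 5
        spikes = (rng.random((n_neurons, T)) < 0.02).astype(float)

        def design_matrix(spikes, n_lags):
            """Columns: bias, then lagged spikes of every neuron
            (the target's own history plus coupling terms), lags 1..n_lags."""
            T = spikes.shape[1]
            cols = [np.ones(T - n_lags)]
            for j in range(spikes.shape[0]):
                for lag in range(1, n_lags + 1):
                    cols.append(spikes[j, n_lags - lag:T - lag])
            return np.column_stack(cols)

        def fit_poisson_glm(X, y, lr=1e-3, n_iter=2000):
            """Maximum-likelihood fit of a Poisson GLM with exponential link
            by plain gradient ascent on the log-likelihood."""
            w = np.zeros(X.shape[1])
            for _ in range(n_iter):
                rate = np.exp(X @ w)
                w += lr * X.T @ (y - rate) / len(y)
            return w

        target = 0
        X = design_matrix(spikes, n_lags)
        y = spikes[target, n_lags:]
        w = fit_poisson_glm(X, y)

        # Coupling filters: weights grouped per source neuron; a clearly
        # nonzero filter suggests a functional connection onto the target.
        filters = w[1:].reshape(n_neurons, n_lags)
        print(filters)

    A clearly nonzero coupling filter from one neuron onto another is the kind of evidence used to infer a functional connection; fitting such filters separately to error and no-error trials would expose the outcome-dependent differences the abstract describes.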

    Probabilistic spiking neural networks: Supervised, unsupervised and adversarial trainings

    Spiking Neural Networks (SNNs), or third-generation neural networks, are networks of computational units, called neurons, in which each neuron with internal analogue dynamics receives as input, and produces as output, spiking (that is, binary sparse) signals. In contrast, second-generation neural networks, termed Artificial Neural Networks (ANNs), rely on simple static non-linear neurons that are known to be energy-intensive, hindering their implementation on energy-limited processors such as mobile devices. The sparse, event-based characteristics of SNNs for information transmission and encoding make them well suited to highly energy-efficient neuromorphic computing architectures. Most existing training algorithms for SNNs are based on deterministic spiking neurons, which limits their flexibility and expressive power. Moreover, SNNs are typically trained via back-propagation, which, unlike in ANNs, is challenging due to the non-differentiable nature of spike dynamics. Considering these two key issues, this dissertation is devoted to developing probabilistic frameworks for SNNs that are tailored to the solution of supervised and unsupervised cognitive tasks. The SNNs utilize the rich, flexible, and computationally tractable properties of the Generalized Linear Model (GLM) neuron, a probabilistic neural model previously considered within the computational neuroscience literature. A novel training method is proposed for classification with a first-to-spike decoding rule, whereby the SNN can make an early classification decision as soon as a spike is detected at an output neuron. This method contrasts with conventional classification rules for SNNs, which operate offline based on the number of output spikes at each output neuron. As a result, the proposed method improves the accuracy-inference complexity trade-off with respect to conventional decoding. For the first time in the field, the sensitivity of SNNs trained via Maximum Likelihood (ML) is studied under white-box adversarial attacks. Rate and time encoding, as well as rate and first-to-spike decoding, are considered. Furthermore, a robust training mechanism is proposed and demonstrated to enhance the resilience of SNNs to adversarial examples. Finally, an unsupervised training task for probabilistic SNNs is studied. Under a generative-model framework, multi-layer SNNs are designed for both the encoding and generative parts. To train the Variational Autoencoders (VAEs), the standard ML approach is considered. To tackle the intractable inference part, variational learning approaches are considered, including doubly stochastic gradient learning, a Maximum A Posteriori (MAP)-based scheme, and a Rao-Blackwellization (RB)-based scheme; the latter is referred to as the Hybrid Stochastic-MAP Variational Learning (HSM-VL) scheme. Numerical results show performance improvements using the HSM-VL method compared to the other two training schemes.
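
    The dissertation's training method is not reproduced here, but the first-to-spike decoding rule it describes can be sketched as follows: Bernoulli-GLM output neurons are run through time, and the classifier commits to a label as soon as any output neuron fires. All weights and dimensions below are illustrative stand-ins rather than trained parameters.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy GLM output layer: each class has one output neuron whose firing
        # probability at time t is a sigmoid of a filtered recent input window
        # (weights here are random stand-ins for trained parameters).
        T, n_in, n_classes, tau = 50, 20, 3, 4
        W = rng.normal(0, 0.5, (n_classes, n_in, tau))   # stimulus filters
        b = -2.0 * np.ones(n_classes)                     # biases

        def first_to_spike(x):
            """Run the GLM neurons through time; return the index of the
            first output neuron to spike (ties broken by index) and the
            decision time, or (None, T) if nothing fires."""
            for t in range(tau, T):
                window = x[:, t - tau:t]                  # recent input window
                u = np.einsum('cit,it->c', W, window) + b # membrane potentials
                p = 1.0 / (1.0 + np.exp(-u))              # spike probabilities
                spiked = rng.random(n_classes) < p
                if spiked.any():
                    return int(np.argmax(spiked)), t      # early decision
            return None, T

        x = (rng.random((n_in, T)) < 0.3).astype(float)   # rate-encoded input
        label, t_decision = first_to_spike(x)
        print(label, t_decision)

    Because the decision is taken at the first output spike rather than after counting spikes over the whole input, inference can terminate early, which is the accuracy-versus-inference-complexity trade-off the abstract refers to.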

    Coordinated neuronal ensembles in primary auditory cortical columns.

    The synchronous activity of groups of neurons is increasingly thought to be important in cortical information processing and transmission. However, most studies of processing in the primary auditory cortex (AI) have viewed neurons as independent filters; little is known about how coordinated AI neuronal activity is expressed throughout cortical columns and how it might enhance the processing of auditory information. To address this, we recorded from populations of neurons in AI cortical columns of anesthetized rats and, using dimensionality reduction techniques, identified multiple coordinated neuronal ensembles (cNEs), which are groups of neurons with reliable synchronous activity. We show that cNEs reflect local network configurations with enhanced information-encoding properties that cannot be accounted for by stimulus-driven synchronization alone. Furthermore, similar cNEs were identified in both spontaneous and evoked activity, indicating that columnar cNEs are stable functional constructs that may represent principal units of information processing in AI.
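
    The abstract does not specify the pipeline beyond "dimensionality reduction". A common way to detect such coordinated ensembles, sketched here as an assumption rather than the authors' exact method, is to eigendecompose the neuron-by-neuron correlation matrix of the binned spike raster and keep components whose eigenvalues exceed the Marchenko-Pastur bound for chance correlations.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy columnar recording: 30 neurons, 5000 time bins, with one
        # planted synchronous group (neurons 0-4) on top of background noise.
        n_neurons, T = 30, 5000
        spikes = (rng.random((n_neurons, T)) < 0.05).astype(float)
        events = rng.random(T) < 0.05
        spikes[:5, events] = 1.0                       # coordinated ensemble

        # z-score each neuron, then eigendecompose the correlation matrix.
        z = (spikes - spikes.mean(1, keepdims=True)) / spikes.std(1, keepdims=True)
        corr = z @ z.T / T
        eigvals, eigvecs = np.linalg.eigh(corr)

        # Marchenko-Pastur bound: eigenvalues above it indicate correlations
        # beyond chance, i.e. candidate coordinated ensembles.
        lam_max = (1 + np.sqrt(n_neurons / T)) ** 2
        for lam, vec in zip(eigvals[::-1], eigvecs.T[::-1]):
            if lam > lam_max:
                # heuristic membership threshold on component loadings
                members = np.where(np.abs(vec) > 1.5 / np.sqrt(n_neurons))[0]
                print(f"ensemble (lambda={lam:.2f}): neurons {members}")

    Running the same decomposition on spontaneous and evoked epochs separately, and comparing the recovered memberships, is the kind of test behind the abstract's claim that cNEs are stable functional constructs.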

    Cognition and internal representation of dynamic environments in the mammalian brain

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Ciencias Biológicas, defended 07/05/2021. Time is one of the most prominent dimensions that organize reality. Paradoxically, the temporal features of the natural world carry large amounts of redundant information, and yet internal coding of time in the brain seems to be crucial for anticipating time-changing, dynamic hazards. Allocating significant brain resources to processing the spatiotemporal aspects of complex environments should apparently be incompatible with survival, which requires fast and accurate responses. Nonetheless, animals make decisions under pressure and in narrow time windows. How does the brain achieve this? An effort to resolve this complexity-velocity trade-off led to a hypothesis called time compaction, which states that the brain does not encode time explicitly but embeds it into space. Theoretically, time compaction can significantly simplify internal representations of the environment and hence ease the brain workload devoted to planning and decision-making. Time compaction also provides an operational framework that aims to explain how perceived and produced dynamic situations are cognitively represented, in the form of spatial predictions or compact internal representations (CIRs) that can be stored in memory and used later on to guide behaviour and generate action. Although successfully implemented in robots, time compaction still lacked assessment of its biological soundness as an actual cognitive mechanism in the brain...

    Constructive spiking neural networks for simulations of neuroplasticity

    Artificial neural networks are important tools in machine learning and neuroscience; however, a difficult step in their implementation is the selection of the neural network size and structure. This thesis develops fundamental theory on algorithms for constructing neurons in spiking neural networks and simulations of neuroplasticity. This theory is applied in the development of a constructive algorithm based on spike-timing-dependent plasticity (STDP) that achieves continual one-shot learning of hidden spike patterns through neuron construction. The theoretical developments in this thesis begin with the proposal of a set of definitions of the fundamental components of constructive neural networks. Disagreement in terminology across the literature and a lack of clear definitions and requirements for constructive neural networks is a factor in the poor visibility and fragmentation of research. The proposed definitions are used as the basis for a generalised methodology for decomposing constructive neural networks into components for comparison, design, and analysis. Spiking neuron models are uncommon in the constructive neural network literature; however, spiking neurons are common in simulated studies in neuroscience. Spike-timing-dependent construction is proposed as a distinct class of constructive algorithm for spiking neural networks. Past algorithms that perform spike-timing-dependent construction are decomposed into the defined components for a detailed critical comparison and found to have limited applicability in simulations of biological neural networks. This thesis develops concepts and principles for designing constructive algorithms that are compatible with simulations of biological neural networks. Simulations often have orders of magnitude fewer neurons than the related biological neural systems; therefore, the neurons in a simulation may be assumed to be a selection or subset of a larger neural system in which many neurons are not simulated. Neuron construction and pruning may therefore be reinterpreted as the transfer of neurons between the set of simulated neurons and the set of hypothetical neurons in the neural system. Constructive algorithms with a functional equivalence to transferring neurons between sets allow simulated neural networks to maintain biological plausibility while changing size. The components of a novel constructive algorithm are incrementally developed from these principles for biological plausibility. First, processes for calculating new synapse weights from observed simulation activity and estimates of past STDP are developed and analysed. Second, a method is developed for predicting postsynaptic spike times for synapse weight calculations through the simulation of a proxy for hypothetical neurons. Finally, spike-dependent conditions for neuron construction and pruning are developed, and the processes are combined in a constructive algorithm for simulations of STDP. Repeating hidden spike patterns can be detected by neurons tuned through STDP; this result is reproduced in STDP simulations with neuron construction. Tuned neurons become unresponsive to other activity, preventing detuning but also preventing neurons from learning new spike patterns. Continual learning is demonstrated through neuron construction with immediate detection of new spike patterns from one-shot predictions of STDP convergence. Future research may investigate applications of the developed constructive algorithm in neuroscience and machine learning. The developed theory on constructive neural networks and the concept of selective simulation of neurons also provide new directions for future research. Thesis (Ph.D.) -- University of Adelaide, School of Mechanical Engineering, 201
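
    The thesis's construction algorithm is considerably more involved than can be shown here, but the two primitives it combines, pair-based STDP weight updates and a spike-dependent construction condition that adds a neuron with weights set as if STDP had already converged on the new pattern, can be caricatured as follows. All constants and the matching threshold are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        # Generic pair-based STDP with exponential windows; the constants
        # are arbitrary stand-ins, not the thesis's parameters.
        A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0  # ms

        def stdp_dw(t_pre, t_post):
            """Weight change for one pre/post spike pair."""
            dt = t_post - t_pre
            if dt >= 0:                             # pre before post: potentiate
                return A_PLUS * np.exp(-dt / TAU)
            return -A_MINUS * np.exp(dt / TAU)      # post before pre: depress

        def maybe_construct(input_spikes, neurons, threshold=0.3):
            """Construct a new neuron when an input spike volley is poorly
            matched by every existing neuron's weight vector (a crude
            stand-in for the thesis's one-shot construction condition)."""
            x = input_spikes / max(np.linalg.norm(input_spikes), 1e-9)
            if all(w @ x < threshold for w in neurons):
                neurons.append(x.copy())            # one-shot: weights set from
                return True                         # the pattern, as if STDP
            return False                            # had already converged

        print("dw for a +5 ms pre->post pairing:", stdp_dw(10.0, 15.0))
        neurons = []                                # start with no hidden neurons
        for _ in range(5):
            pattern = (rng.random(50) < 0.2).astype(float)
            maybe_construct(pattern, neurons)
        print(len(neurons), "neurons constructed")

    The one-shot weight assignment in maybe_construct is what lets a newly constructed neuron respond to a repeating pattern immediately, rather than waiting for STDP to converge over many repetitions.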

    Training deep neural density estimators to identify mechanistic models of neural dynamics

    Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool which uses deep neural density estimators—trained using model simulations—to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin–Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
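
    The paper's estimator is a deep neural density network, which is beyond a short sketch, but the overall simulation-based inference recipe it follows — sample parameters from the prior, simulate, fit a conditional density estimator for parameters given data, then evaluate it at the observed data — can be illustrated with a deliberately simple linear-Gaussian estimator standing in for the network. The toy simulator and all sizes below are assumptions.

        import numpy as np

        rng = np.random.default_rng(4)

        def simulator(theta):
            """Toy mechanistic model: 2 parameters -> noisy 5-d summary features."""
            t = np.linspace(0, 1, 5)
            return theta[0] * np.sin(6 * t) + theta[1] * t + 0.05 * rng.normal(size=5)

        # 1) Draw parameters from the prior and simulate.
        n_sims = 5000
        thetas = rng.uniform(-1, 1, (n_sims, 2))
        xs = np.array([simulator(th) for th in thetas])

        # 2) "Train" the conditional estimator: here, linear regression for
        # the posterior mean plus a residual covariance for its spread.
        X = np.column_stack([np.ones(n_sims), xs])
        B, *_ = np.linalg.lstsq(X, thetas, rcond=None)
        resid = thetas - X @ B
        Sigma = resid.T @ resid / n_sims

        # 3) Amortized inference: one cheap evaluation per new observation.
        theta_true = np.array([0.4, -0.7])
        x_obs = simulator(theta_true)
        mu = np.concatenate([[1.0], x_obs]) @ B
        print("posterior mean:", mu, "+/-", np.sqrt(np.diag(Sigma)))

    Step 3 is what "rapidly analyze new data after initial training" means in practice: after the one-time simulation and training cost, each new recording is analyzed with a single cheap evaluation instead of a fresh round of simulations.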