
    Fine-Tuning and the Stability of Recurrent Neural Networks

    A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility. Hence it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule is able to achieve stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune a wide variety of stable neural systems.
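    The fine-tuning problem described above can be illustrated with a minimal rate model (an illustrative sketch only, not the paper's learning rule; the function name and parameters are hypothetical): a single recurrent unit holds a persistent signal only when its feedback weight is tuned exactly to one.

```python
def simulate_integrator(w, r0=1.0, tau=0.1, dt=0.001, t_max=2.0):
    """Single-unit rate model: tau * dr/dt = -r + w * r.
    Perfect integration (e.g. holding an eye-position signal) requires
    w == 1; any mistuning makes the stored activity decay or grow."""
    r = r0
    for _ in range(int(t_max / dt)):
        r += dt / tau * (-r + w * r)  # forward-Euler step
    return r

print(simulate_integrator(1.00))  # tuned: activity persists near 1.0
print(simulate_integrator(0.95))  # mistuned low: memory decays
print(simulate_integrator(1.05))  # mistuned high: activity drifts upward
```

    Even a 5% mistuning destroys the stored value within a couple of seconds, which is why a continuously acting, biologically plausible tuning rule matters.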

    Programming the cerebellum

    It is argued that large-scale neural network simulations of cerebellar cortex and nuclei, based on realistic compartmental models of the major cell populations, are necessary before the problem of motor learning in the cerebellum can be solved [HOUK et al.; SIMPSON et al.].

    A MODELING PERSPECTIVE ON DEVELOPING NATURALISTIC NEUROPROSTHETICS USING ELECTRICAL STIMULATION

    Direct electrical stimulation of neurons has been an important tool for understanding the brain since the field of neuroscience began. Electrical stimulation was first used to study sensation and to map the brain, and more recently to probe function; as our understanding of neurological disorders has advanced, it has become an increasingly important tool for interacting with neurons to design and carry out treatments. The hardware for electrical stimulation has greatly improved during the last century, allowing smaller-scale, implantable treatments for a variety of disorders, from loss of sensations (hearing, vision, balance) to Parkinson’s disease and depression. Because of the clinical success of these treatments, there are millions of neural implant users around the globe today, and interest in medical implants and in implants for human enhancement is only growing. However, present neural implant treatments restore only limited function compared to natural systems. A limiting factor in the advancement of electrical stimulation-based treatments has been the restriction to charge-balanced, typically sub-millisecond pulses in order to interact safely with the brain, a restriction imposed by the reliance on durable metal electrodes. Material science developments have led to more flexible electrodes capable of delivering more charge safely, but the focus has been on the density of implanted electrodes rather than on changing the waveform of electrical stimulation delivery. Recently, the Fridman lab at Johns Hopkins University developed Freeform Stimulation (FS), an implantable device that uses a microfluidic H-bridge architecture to deliver current safely for prolonged periods of time and that is not restricted to charge-balanced waveforms.
In this work, we refer to these non-restricted waveforms as galvanic stimulation, an umbrella term that encompasses direct current, sinusoidal current, and other forms of non-charge-balanced current. The invention of the FS has opened the door to the use of galvanic stimulation in neural implants, inviting an exploration of the effects of local galvanic stimulation on neural function. Galvanic stimulation was used in neuroscience before concerns arose about safe long-term interaction with neurons. Unlike in most systems, it has historically been applied internally in the vestibular system, and it is still applied there transcutaneously today. Historic and recent studies confirm that galvanic stimulation of the vestibular system has more naturalistic effects on neural spike timing and on induced behavior (eye velocities) than pulsatile stimulation, the current standard in neural implants. Recent vestibular stimulation studies with pulses also show evidence of suboptimal responses to pulsatile stimulation, in which suprathreshold pulses induce only about half as many action potentials as there are pulses. This combination of results prompted an investigation of the differences between galvanic and pulsatile electrical stimulation in the vestibular system. The research in this dissertation uses detailed biophysical modeling of single vestibular neurons to investigate the differences in the biophysical mechanisms of galvanic and pulsatile stimulation. In Chapter 2, a more accurate model of a vestibular afferent is constructed from an existing model and used to provide a theory for how galvanic stimulation produces a number of known effects on vestibular afferents.
In Chapter 3, the same model is used to explain why pulsatile stimulation produces fewer action potentials than expected; the results show that pulse amplitude, pulse rate, and the spontaneous activity of neurons at the axon interact in ways that lead to several non-monotonic relationships between pulse parameters and induced firing rate. Equations are created to correct for these non-monotonic relationships and produce intended firing rates. Chapter 4 focuses on how to create a neural implant that induces more naturalistic firing, using the scientific understanding from Chapters 2 and 3 together with machine learning. The work concludes by describing the implications of these findings for interacting with neurons at population and network scales, and how this may make electrical stimulation increasingly suited for treating complex network-level and psychiatric disorders.
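    The mismatch between pulse count and evoked spikes has a simple intuition that can be sketched with a toy refractory neuron (a deliberately minimal stand-in, not the dissertation's biophysical afferent model; names and parameters are hypothetical): at high pulse rates, many suprathreshold pulses arrive inside the refractory period and fail to evoke spikes.

```python
def induced_spikes(pulse_rate_hz, t_ref_us=5000, duration_s=1):
    """Count spikes evoked by a regular train of suprathreshold pulses
    delivered to a neuron with an absolute refractory period t_ref_us
    (integer microseconds, so the arithmetic is exact)."""
    interval_us = 1_000_000 // pulse_rate_hz
    last_spike_us = -10**9            # no prior spike
    spikes = 0
    for k in range(pulse_rate_hz * duration_s):
        t_us = k * interval_us
        if t_us - last_spike_us >= t_ref_us:  # outside refractory period
            spikes += 1
            last_spike_us = t_us
    return spikes

for rate in (50, 100, 200, 400):
    print(rate, induced_spikes(rate))  # 400 Hz evokes only 200 spikes/s
```

    Above 200 Hz in this toy model, every other pulse falls inside the refractory window, so the neuron fires roughly half as often as it is pulsed, echoing the "half as many action potentials" observation; the real afferent dynamics add the further non-monotonic structure the dissertation characterizes.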

    Learned Feedback & Feedforward Perception & Control

    The notions of feedback and feedforward information processing gained prominence under cybernetics, an early movement at the dawn of computer science and theoretical neuroscience. Negative feedback processing corrects errors, whereas feedforward processing makes predictions, thereby preemptively reducing errors. A key insight of cybernetics was that such processes can be applied to both perception, or state estimation, and control, or action selection. The remnants of this insight are found in many modern areas, including predictive coding in neuroscience and deep latent variable models in machine learning. This thesis draws on feedback and feedforward ideas developed within predictive coding, adapting them to improve machine learning techniques for perception (Part II) and control (Part III). Upon establishing these conceptual connections, in Part IV, we traverse this bridge, from machine learning back to neuroscience, arriving at new perspectives on the correspondences between these fields.
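    The feedback/feedforward interplay described above can be sketched with a one-variable predictive-coding estimator (a generic textbook-style sketch, not the thesis's models; all names are hypothetical): a feedforward prior predicts the state, and feedback prediction errors iteratively correct it.

```python
def infer(y, mu, sigma_y=1.0, sigma_x=1.0, lr=0.1, steps=200):
    """Estimate a latent state x from observation y and prior mean mu
    by gradient descent on precision-weighted squared prediction errors."""
    x = mu                             # start from the top-down prediction
    for _ in range(steps):
        eps_y = (y - x) / sigma_y**2   # bottom-up sensory error
        eps_x = (x - mu) / sigma_x**2  # top-down prior error
        x += lr * (eps_y - eps_x)      # feedback correction of the estimate
    return x

# With equal precisions the estimate settles halfway between data and prior.
print(infer(2.0, 0.0))  # -> 1.0 (precision-weighted average)
```

    The same error-correcting loop, stacked hierarchically, is the template predictive coding proposes for both state estimation and action selection.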

    25th Annual Computational Neuroscience Meeting: CNS-2016

    Abstracts of the 25th Annual Computational Neuroscience Meeting: CNS-2016. Seogwipo City, Jeju-do, South Korea. 2–7 July 2016.

    25th Annual Computational Neuroscience Meeting: CNS-2016

    The same neuron may play different functional roles in the neural circuits to which it belongs. For example, neurons in the Tritonia pedal ganglia may participate in variable phases of the swim motor rhythms [1]. While such neuronal functional variability is likely to play a major role in the delivery of the functionality of neural systems, it is difficult to study in most nervous systems. We work on the pyloric rhythm network of the crustacean stomatogastric ganglion (STG) [2]. Typically, network models of the STG treat neurons of the same functional type as a single model neuron (e.g. PD neurons), assuming the same conductance parameters for these neurons and implying their synchronous firing [3, 4]. However, simultaneous recordings of PD neurons show differences between the timings of spikes of these neurons. This may indicate functional variability of these neurons. Here we modelled the two PD neurons of the STG separately in a multi-neuron model of the pyloric network. Our neuron models comply with known correlations between conductance parameters of ionic currents. Our results reproduce the experimental finding of increasing spike time distance between spikes originating from the two model PD neurons during their synchronised burst phase. The PD neuron with the larger calcium conductance generates its spikes before the other PD neuron. Larger potassium conductance values in the follower neuron imply longer delays between spikes (see Fig. 17). Neuromodulators change the conductance parameters of neurons and maintain the ratios of these parameters [5]. Our results show that such changes may shift the individual contributions of the two PD neurons to the PD phase of the pyloric rhythm, altering their functionality within this rhythm. Our work paves the way towards an accessible experimental and computational framework for the analysis of the mechanisms and impact of functional variability of neurons within the neural circuits to which they belong.
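    The observation that the neuron with the larger calcium conductance spikes first has a simple qualitative analogue in a toy integrate-and-fire unit with static depolarizing ("calcium-like") and hyperpolarizing ("potassium-like") conductances (an illustrative sketch, not the conductance-based STG models of the abstract; all parameters are hypothetical):

```python
def first_spike_time(g_ca, g_k=0.5, i_ext=1.5, tau=0.02, dt=0.0001,
                     v_th=1.0, e_ca=3.0, e_k=-0.5):
    """Time for a leaky unit to reach threshold, with a depolarizing
    'calcium' conductance g_ca and a hyperpolarizing 'potassium'
    conductance g_k (both held static for simplicity)."""
    v, t = 0.0, 0.0
    while v < v_th:
        dv = (-v + i_ext + g_ca * (e_ca - v) + g_k * (e_k - v)) / tau
        v += dt * dv   # forward-Euler integration to threshold
        t += dt
    return t

# A larger calcium conductance drives the unit to threshold sooner.
print(first_spike_time(g_ca=0.6) < first_spike_time(g_ca=0.4))  # True
```

    Larger g_k likewise delays the threshold crossing, consistent in spirit with the longer spike delays reported for the follower neuron.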

    Brain Computations and Connectivity [2nd edition]

    This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations. Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems and how the computations are performed. The aim of this book is to elucidate what is computed in different brain systems, and to describe current biologically plausible computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease, and to artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. This book is pioneering in taking this approach to brain function: considering what is computed by many of our brain systems, and how it is computed. It updates, with much new evidence including the connectivity of the human brain, the earlier book, Rolls (2021) Brain Computations: What and How, Oxford University Press. Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience, from medical sciences including neurology and psychiatry, from computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.