
    Optimal Subharmonic Entrainment

    For many natural and engineered systems, a central function or design goal is the synchronization of one or more rhythmic or oscillating processes to an external forcing signal, which may be periodic on a different time-scale from the actuated process. Such subharmonic synchrony, which is dynamically established when N control cycles occur for every M cycles of a forced oscillator, is referred to as N:M entrainment. In many applications, entrainment must be established in an optimal manner, for example by minimizing control energy or the transient time to phase locking. We present a theory for deriving inputs that establish subharmonic N:M entrainment of general nonlinear oscillators, or of collections of rhythmic dynamical units, while optimizing such objectives. Ordinary differential equation models of oscillating systems are reduced to phase variable representations, each of which consists of a natural frequency and a phase response curve. Formal averaging and the calculus of variations are then applied to these reduced models in order to derive optimal subharmonic entrainment waveforms. The optimal entrainment of a canonical model for a spiking neuron is used to illustrate this approach, which is readily extended to arbitrary oscillating systems.
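    The phase reduction described above can be illustrated with a minimal sketch (illustrative parameters, not the paper's optimal waveforms): a phase-reduced oscillator dφ/dt = ω + Z(φ)u(t) driven by a fixed sinusoid near twice its natural frequency. The phase response curve Z here is a hypothetical choice; its second harmonic is what allows a first-harmonic forcing to produce 2:1 subharmonic locking at leading order.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's optimal waveforms): a
# phase-reduced oscillator dphi/dt = omega + Z(phi) * u(t) driven by a
# sinusoid near twice its natural frequency, showing 2:1 locking.
# Z is a hypothetical PRC; its second harmonic is what lets a
# first-harmonic forcing establish subharmonic (2:1) entrainment.

def simulate_phase(omega, Z, u, T, dt=1e-3):
    """Euler-integrate dphi/dt = omega + Z(phi) * u(t); return phi(t)."""
    n = int(T / dt)
    phi = np.zeros(n)
    for k in range(1, n):
        phi[k] = phi[k - 1] + dt * (omega + Z(phi[k - 1]) * u(k * dt))
    return phi

omega = 2 * np.pi * 1.0                  # natural frequency: 1 Hz
Z = lambda p: np.sin(2 * p)              # assumed PRC with a second harmonic
N, M = 2, 1                              # N forcing cycles per M oscillator cycles
omega_f = (N / M) * 2 * np.pi * 0.95     # forcing slightly detuned from 2:1
u = lambda t: 2.0 * np.cos(omega_f * t)

phi = simulate_phase(omega, Z, u, T=50.0)
# Once locked, the mean phase velocity approaches omega_f / 2, i.e. the
# oscillator completes one cycle for every two forcing cycles.
```

Averaging the coupling term over one forcing period reduces the dynamics to a slow phase equation of Adler type, whose fixed point gives the locked state; the paper's contribution is choosing u(t) optimally rather than fixing the sinusoid assumed here.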

    Shape Representation in Primate Visual Area 4 and Inferotemporal Cortex

    The representation of contour shape is an essential component of object recognition, but the cortical mechanisms underlying it are incompletely understood, leaving it a fundamental open question in neuroscience. Such an understanding would be useful theoretically as well as in developing computer vision and Brain-Computer Interface applications. We ask two fundamental questions: “How is contour shape represented in cortex and how can neural models and computer vision algorithms more closely approximate this?” We begin by analyzing the statistics of contour curvature variation and develop a measure of salience based upon the arc length over which it remains within a constrained range. We create a population of V4-like cells – responsive to a particular local contour conformation located at a specific position on an object’s boundary – and demonstrate high recognition accuracies classifying handwritten digits in the MNIST database and objects in the MPEG-7 Shape Silhouette database. We compare the performance of the cells to the “shape-context” representation (Belongie et al., 2002) and achieve roughly comparable recognition accuracies using a small test set. We analyze the relative contributions of various feature sensitivities to recognition accuracy and robustness to noise. Local curvature appears to be the most informative for shape recognition. We create a population of IT-like cells, which integrate specific information about the 2-D boundary shapes of multiple contour fragments, and evaluate its performance on a set of real images as a function of the V4 cell inputs. We determine the sub-population of cells that are most effective at identifying a particular category. We classify based upon cell population response and obtain very good results. We use the Morris-Lecar neuronal model to more realistically illustrate the previously explored shape representation pathway in V4 – IT. 
We demonstrate recognition using spatiotemporal patterns within a winnerless-competition network of FitzHugh-Nagumo model neurons. Finally, we use the Izhikevich neuronal model to produce an enhanced response in IT, correlated with recognition, via gamma synchronization in V4. Our results support the hypothesis that the response properties of V4 and IT cells, as well as our computer models of them, function as robust shape descriptors in the object recognition process.
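One plausible reading of the curvature-band salience measure can be sketched as follows (a hypothetical reconstruction, not the authors' exact formulation): estimate signed curvature along a closed contour with finite differences, then sum the arc length over which curvature stays inside a chosen band.

```python
import numpy as np

# Hypothetical sketch of a curvature-band salience measure (one plausible
# reading of the abstract, not the authors' exact formulation).

def discrete_curvature(pts):
    """Signed curvature at each vertex of a closed 2-D contour,
    from first and second finite differences of the coordinates."""
    dx, dy = np.gradient(pts[:, 0]), np.gradient(pts[:, 1])
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

def salience(pts, lo, hi):
    """Total arc length over which curvature remains within [lo, hi].
    Segment i (from vertex i to i+1) is attributed to vertex i."""
    k = discrete_curvature(pts)
    seg = np.linalg.norm(np.diff(pts, axis=0, append=pts[:1]), axis=1)
    return seg[(k >= lo) & (k <= hi)].sum()

# Sanity check: a unit circle has constant curvature 1, so nearly the whole
# perimeter (about 2*pi) falls inside a band around 1.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
s = salience(circle, 0.5, 1.5)
```

Because the curvature estimate is parametrization-invariant (the sampling step cancels between numerator and denominator), the same code applies to contours extracted from silhouettes at any resolution.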

    Extending Transfer Entropy Improves Identification of Effective Connectivity in a Spiking Cortical Network Model

    Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, and has caused most investigators to probe neural interactions at only a single time delay and at a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE, at a message length of one bin (D1TE), and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For extended versions of TE, this dramatically improved to 73% of true connections. In addition, the connections correctly identified by extended versions of TE accounted for 85% of the total synaptic weight in the network. Cross-correlation methods generally performed more poorly than extended TE, but were useful when data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, when used on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE soon could become practical tools for experimentalists who record hundreds of spiking neurons.
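    The key extension — scanning TE over candidate delays — can be sketched for binary spike trains. This toy implementation (one-bin histories, joint-histogram estimation; not the released software package) recovers a known 3-bin lag that a single-delay analysis would miss.

```python
import numpy as np
from collections import Counter

# Toy sketch of multi-delay transfer entropy for binary spike trains
# (one-bin histories; not the paper's released toolbox).
# TE_d = I(y_t ; x_{t-d} | y_{t-1}), estimated from joint histograms.

def transfer_entropy(x, y, delay):
    """Delay-d transfer entropy (bits) from spike train x to spike train y."""
    x, y = np.asarray(x, int), np.asarray(y, int)
    trip = Counter()
    for t in range(max(delay, 1), len(y)):
        trip[(y[t], y[t - 1], x[t - delay])] += 1
    total = sum(trip.values())
    te = 0.0
    for (yt, yp, xp), c in trip.items():
        p_all = c / total                       # p(y_t, y_{t-1}, x_{t-d})
        p_ypxp = sum(v for (a, b, g), v in trip.items()
                     if b == yp and g == xp) / total
        p_yp = sum(v for (a, b, g), v in trip.items() if b == yp) / total
        p_ytyp = sum(v for (a, b, g), v in trip.items()
                     if a == yt and b == yp) / total
        te += p_all * np.log2((p_all / p_ypxp) / (p_ytyp / p_yp))
    return te

# y copies x with a 3-bin lag, so TE should peak at delay 3 and be near
# zero at the other delays -- exactly what a single-delay scan would miss.
rng = np.random.default_rng(0)
x = (rng.random(5000) < 0.2).astype(int)
y = np.roll(x, 3)
te_by_delay = [transfer_entropy(x, y, d) for d in (1, 2, 3, 4)]
```

At the correct delay the TE approaches the marginal entropy of the target train (the copy is deterministic), while mismatched delays contribute only a small histogram bias.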

    Capturing Dopaminergic Modulation and Bimodal Membrane Behaviour of Striatal Medium Spiny Neurons in Accurate, Reduced Models

    Loss of dopamine from the striatum can cause both profound motor deficits, as in Parkinson's disease, and disrupted learning. Yet the effect of dopamine on striatal neurons remains a complex and controversial topic, and is in need of a comprehensive framework. We extend a reduced model of the striatal medium spiny neuron (MSN) to account for dopaminergic modulation of its intrinsic ion channels and synaptic inputs. We tune our D1 and D2 receptor MSN models using data from a recent large-scale compartmental model. The new models capture the input–output relationships for both current injection and spiking input with remarkable accuracy, despite the order of magnitude decrease in system size. They also capture the paired-pulse facilitation shown by MSNs. Our dopamine models predict that synaptic effects dominate intrinsic effects for all levels of D1 and D2 receptor activation. We analytically derive a full set of equilibrium points and their stability for the original and dopamine-modulated forms of the MSN model. We find that the stability types are not changed by dopamine activation, and our models predict that the MSN is never bistable. Nonetheless, the MSN models can produce a spontaneously bimodal membrane potential similar to that recently observed in vitro following application of NMDA agonists. We demonstrate that this bimodality is created by modelling the agonist effects as slow, irregular and massive jumps in NMDA conductance and, rather than a form of bistability, is due to the voltage-dependent blockade of NMDA receptors. Our models also predict a more pronounced membrane potential bimodality following D1 receptor activation. This work thus establishes reduced yet accurate dopamine-modulated models of MSNs, suitable for use in large-scale models of the striatum. More importantly, they provide a tractable framework for further study of dopamine's effects on computation by individual neurons.

    Oscillator-based neuronal modeling for seizure progression investigation and seizure control strategy

    The coupled oscillator model has previously been used for the simulation of neuronal activities in in vitro rat hippocampal slice seizure data and the evaluation of seizure suppression algorithms. Each model unit can act either as an oscillator, which generates action potential spike trains without input, or as a threshold-based spiking unit; changing a single parameter switches a unit between the two types, eliminating the need for a separate set of equations for each. Previous analysis has suggested that long kernel duration and an imbalance of inhibitory feedback can cause the system to transition intermittently into and out of ictal activity. The state transitions of seizure-like events were investigated here; specifically, how the system's excitability may change as it undergoes the preictal and postictal transitions. Analysis showed that the area of the excitation kernel is positively correlated with the mean firing rate of the ictal activity. The kernel duration is also correlated with the amount of ictal activity. The transition into ictal activity involves an escape from the saddle-point foci in the state-space trajectory, identified using Newton's method. The ability to accurately anticipate and suppress seizures is an important endeavor with tremendous impact on the quality of life of epileptic patients. Stimulation studies have suggested that an electrical stimulation strategy exploiting the intrinsically high-complexity dynamics of the biological system may be more effective in reducing the duration of seizure-like activities in the computer model. In this research, we evaluate this strategy on an in vitro rat hippocampal slice magnesium-free model.
Simulated postictal field potential data generated by an oscillator-based hippocampal network model were applied to the CA1 region of rat hippocampal slices through a multi-electrode array (MEA) system, and were found to temporarily suppress and delay the onset of subsequent seizures. The average inter-seizure interval was significantly prolonged after postictal stimulation compared with negative control trials and bipolar square-wave signals. These results suggest that resetting-related, neural-signal-based stimulation may be suitable for seizure control in the clinical environment.

    A computational framework for similarity estimation and stimulus reconstruction of Hodgkin-Huxley neural responses

    Periodic stimuli are known to induce chaotic oscillations in the squid giant axon for a certain range of frequencies, a behaviour modelled by the Hodgkin-Huxley equations. In the presence of chaotic oscillations, similarity between neural responses depends on their temporal nature, as firing times and amplitudes together reflect the true dynamics of the neuron. This thesis presents a method to estimate similarity between neural responses exhibiting chaotic oscillations by using both amplitude fluctuations and firing times. It is observed that identical stimuli have a similar effect on the neural dynamics; therefore, when the temporal inputs to the neuron are identical, the occurrence of similar dynamical patterns results in a high estimate of similarity, which correlates with the observed temporal similarity. The information about a neural activity is encoded in a neural response, and usually the underlying stimulus that triggers the activity is unknown. Thus, this thesis also presents a numerical solution for reconstructing stimuli from Hodgkin-Huxley neural responses while retrieving the neural dynamics. The stimulus is reconstructed by first retrieving the maximal conductances of the ion channels and then solving the Hodgkin-Huxley equations for the stimulus. The results show that the reconstructed stimulus is a good approximation of the original stimulus, while the retrieved neural dynamics, which represent the voltage-dependent changes in the ion channels, help to understand the changes in neural biochemistry. As the high non-linearity of neural dynamics renders analytical inversion of a neuron an arduous task, a numerical approach provides a local solution to the problem of stimulus reconstruction and neural-dynamics retrieval.
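    The forward model being inverted is the classic Hodgkin-Huxley system. A minimal Euler integration with the standard 1952 squid-axon parameters (the thesis's inversion procedure is not reproduced here) looks like:

```python
import numpy as np

# Classic Hodgkin-Huxley squid-axon model (standard 1952 parameters,
# voltages in mV, time in ms), Euler-integrated: a sketch of the forward
# model whose numerical inversion the thesis addresses.

def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

def simulate_hh(I, T=100.0, dt=0.01, gNa=120.0, gK=36.0, gL=0.3):
    """Membrane voltage trace (mV) under constant current I (uA/cm^2)."""
    ENa, EK, EL, Cm = 50.0, -77.0, -54.4, 1.0
    V, m, h, n = -65.0, 0.05, 0.6, 0.32      # approximate resting state
    trace = np.empty(int(T / dt))
    for k in range(len(trace)):
        INa = gNa * m ** 3 * h * (V - ENa)   # sodium current
        IK = gK * n ** 4 * (V - EK)          # potassium current
        IL = gL * (V - EL)                   # leak current
        V += dt * (I - INa - IK - IL) / Cm
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace[k] = V
    return trace

v = simulate_hh(I=10.0)   # suprathreshold drive gives repetitive spiking
```

Retrieving the maximal conductances (gNa, gK, gL) from a recorded trace and then solving these equations for I is exactly the inversion the thesis performs numerically.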

    Simple and complex spiking neurons: perspectives and analysis in a simple STDP scenario

    Spiking neural networks (SNNs) are largely inspired by biology and neuroscience and leverage ideas and theories to create fast and efficient learning systems. Spiking neuron models are adopted as core processing units in neuromorphic systems because they enable event-based processing. The integrate-and-fire (I&F) models are often adopted, with the simple Leaky I&F (LIF) being the most used. The reason for adopting such models is their efficiency and/or biological plausibility. Nevertheless, a rigorous justification for adopting the LIF over other neuron models in artificial learning systems has not yet been provided. This work surveys neuron models in the literature and selects computational neuron models that are single-variable, efficient, and display different types of complexity. From this selection, we make a comparative study of three simple I&F neuron models, namely the LIF, the Quadratic I&F (QIF) and the Exponential I&F (EIF), to understand whether the use of more complex models increases the performance of the system and whether the choice of a neuron model can be directed by the task to be completed. Neuron models are tested within an SNN trained with Spike-Timing Dependent Plasticity (STDP) on a classification task on the N-MNIST and DVS Gestures datasets. Experimental results reveal that more complex neurons manifest the same ability as simpler ones to achieve high levels of accuracy on a simple dataset (N-MNIST), albeit requiring comparably more hyper-parameter tuning. However, when the data possess richer spatio-temporal features, the QIF and EIF neuron models steadily achieve better results. This suggests that selecting the model based on the richness of the feature spectrum of the data could improve the whole system's performance. Finally, the code implementing the spiking neurons in the SpykeTorch framework is made publicly available.
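    The three models differ only in the membrane nonlinearity f(v) of dv/dt = f(v) + I together with a reset rule. A minimal sketch with illustrative parameters (not those used in the study):

```python
import numpy as np

# Minimal sketch of the three single-variable I&F neurons compared in the
# paper: each integrates dv/dt = f(v) + I and resets after a spike; only
# the nonlinearity f differs.  Parameters are illustrative, not the study's.

def simulate_if(f, I=2.0, v_th=1.0, v_reset=0.0, T=100.0, dt=0.01):
    """Return spike times (ms) for a generic one-variable I&F neuron."""
    v, spikes = v_reset, []
    for k in range(int(T / dt)):
        v += dt * (f(v) + I)
        if v >= v_th:                 # threshold crossing: record and reset
            spikes.append(k * dt)
            v = v_reset
    return spikes

tau, delta_T, v_T = 10.0, 0.2, 0.8
lif = lambda v: -v / tau                                            # leaky
qif = lambda v: v * (v - 0.5) / tau                                 # quadratic
eif = lambda v: (-v + delta_T * np.exp((v - v_T) / delta_T)) / tau  # exponential

rates = {name: len(simulate_if(f)) * 1000.0 / 100.0   # spikes per second
         for name, f in [("LIF", lif), ("QIF", qif), ("EIF", eif)]}
```

Because all three share the same single-variable interface, swapping models inside an STDP-trained SNN is a one-line change, which is what makes the paper's controlled comparison possible.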

    Brain Disease Detection From EEGs: Comparing Spiking and Recurrent Neural Networks for Non-stationary Time Series Classification

    Modeling non-stationary time series data is a difficult problem in AI, because the statistical properties of the data change as the time series progresses. This complicates the classification of non-stationary time series, a method used in the detection of brain diseases from EEGs. Various deep learning techniques have been developed to tackle this problem, with recurrent neural network (RNN) approaches utilising long short-term memory (LSTM) architectures achieving a high degree of success. This study implements a new, spiking neural network-based approach to time series classification for detecting three brain diseases from EEG datasets: epilepsy, alcoholism, and schizophrenia. The performance and training time of the spiking neural network classifier are compared with those of both a baseline RNN-LSTM EEG classifier and the current state-of-the-art RNN-LSTM EEG classifier architecture from the relevant literature. The SNN EEG classifier developed in this study outperforms both the baseline and state-of-the-art RNN models in terms of accuracy, detecting all three brain diseases with an accuracy of 100% while requiring far fewer training data samples than recurrent neural network approaches. This represents the best performance reported in the literature for the task of EEG classification.