11 research outputs found

    Multivariate Multiscale Analysis of Neural Spike Trains

    This dissertation introduces new methodologies for the analysis of neural spike trains. Biological properties of the nervous system, and how they are reflected in neural data, can motivate specific analytic tools. Some of these biological aspects motivate multiscale frameworks, which allow simultaneous modelling of the local and global behaviour of neurons. Chapter 1 provides preliminary background on the biology of the nervous system and introduces the concepts of information and randomness in the analysis of neural spike trains. It also provides the reader with a thorough literature review of current statistical models for the analysis of neural spike trains. The material presented in the next six chapters (2-7) has been the focus of three papers, which have either already been published or are being prepared for publication. Chapters 2 and 3 demonstrate that the multiscale complexity-penalized likelihood method, introduced in Kolaczyk and Nowak (2004), is a powerful model for the simultaneous modelling of spike trains with biological properties at different time scales. To detect the periodic spiking activities of neurons, two periodic models from the literature, Bickel et al. (2007, 2008) and Shao and Li (2011), were combined and modified in a multiscale penalized likelihood model. The contributions of these chapters are (1) employing a powerful visualization tool, the inter-spike interval (ISI) plot; (2) combining the multiscale method of Kolaczyk and Nowak (2004) with the periodic models of Bickel et al. (2007, 2008) and Shao and Li (2011) to introduce the so-called additive and multiplicative models for the intensity function of neural spike trains, along with a cross-validation scheme to estimate their tuning parameters; (3) providing numerical bootstrap confidence bands for the multiscale estimate of the intensity function; and (4) studying the effect of time scale on the statistical properties of spike counts.
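The raw material of the ISI plot in contribution (1) is simple to state: the inter-spike intervals are the gaps between consecutive spike times. A minimal sketch (the function name is illustrative, not code from the dissertation):

```python
def interspike_intervals(spike_times):
    """Return the inter-spike intervals (ISIs): gaps between consecutive
    spike times, in the same time units as the input sequence."""
    return [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]
```

An ISI plot is then a scatter or histogram of these values, which makes refractory periods and periodic firing visually apparent.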
Motivated by neural integration phenomena, as well as adjustments for the neural refractory period, Chapters 4 and 5 study the Skellam process and introduce the Skellam Process with Resetting (SPR). Introducing SPR and its application in the analysis of neural spike trains is one of the major contributions of this dissertation. This stochastic process is biologically plausible and, unlike the Poisson process, does not suffer from a limited dependency structure. It also has multivariate generalizations for the simultaneous analysis of multiple spike trains. A computationally efficient recursive algorithm for the estimation of the parameters of SPR is introduced in Chapter 5. Except for the literature review at the beginning of Chapter 4, the material within these two chapters is original. The specific contributions of Chapters 4 and 5 are (1) introducing the Skellam Process with Resetting as a statistical tool to analyze neural spike trains and studying its properties, including all theorems and lemmas provided in Chapter 4; (2) the two fairly standard definitions of the Skellam process (homogeneous and inhomogeneous) and the proof of their equivalence; (3) deriving the likelihood function based on the observable data (spike trains) and developing a computationally efficient recursive algorithm for parameter estimation; and (4) studying the effect of time scales on the SPR model. The challenging problem of multivariate analysis of neural spike trains is addressed in Chapter 6. To the best of our knowledge, the multivariate models available in the literature suffer from limited dependency structures; in particular, modelling negative correlation among spike trains is a challenging problem. To address this issue, the multivariate Skellam distribution, as well as the multivariate Skellam process, both of which have flexible dependency structures, are developed.
Chapter 6 also introduces a multivariate version of the Skellam Process with Resetting (MSPR) and a so-called profile-moment likelihood estimation of its parameters. This chapter generalizes the results of Chapters 4 and 5; therefore, except for the brief literature review provided at the beginning of the chapter, the remainder of the material is original work. In particular, the contributions of this chapter are (1) introducing the multivariate Skellam distribution; (2) introducing two definitions of the multivariate Skellam process, in both homogeneous and inhomogeneous cases, and proving their equivalence; (3) introducing the Multivariate Skellam Process with Resetting (MSPR) to simultaneously model spike trains from an ensemble of neurons; and (4) utilizing the so-called profile-moment likelihood method to compute estimates of the parameters of MSPR. The discussion of the developed methodologies, as well as the "next steps", is outlined in Chapter 7.
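The resetting idea behind SPR can be illustrated with a small simulation sketch: the membrane variable is the difference of two Poisson counting processes (a Skellam process), and a spike is emitted, with a reset, when the variable reaches a threshold. The parameter names and the reset-to-zero rule below are illustrative assumptions, not the dissertation's exact formulation:

```python
import random

def simulate_spr(rate_up, rate_down, threshold, t_max, seed=0):
    """Sketch of a Skellam process with resetting (assumed formulation).

    The membrane variable jumps +1 with intensity `rate_up` (excitatory
    input) and -1 with intensity `rate_down` (inhibitory input); when it
    reaches `threshold`, a spike time is recorded and the variable is
    reset to zero. Returns the list of spike times in [0, t_max].
    """
    rng = random.Random(seed)
    t, v, spikes = 0.0, 0, []
    total = rate_up + rate_down
    while True:
        t += rng.expovariate(total)          # waiting time to the next jump
        if t > t_max:
            break
        v += 1 if rng.random() < rate_up / total else -1
        if v >= threshold:                   # threshold crossing: spike + reset
            spikes.append(t)
            v = 0
    return spikes
```

Unlike a Poisson process, the resulting spike train has history dependence: each inter-spike interval is a first-passage time of the integrated difference process, which mimics neural integration and an effective refractory period.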

    Model-based analysis of stability in networks of neurons

    Neurons, the building blocks of the brain, are an astonishingly capable type of cell. Collectively they can store, manipulate and retrieve biologically important information, allowing animals to learn and adapt to environmental changes. This universal adaptability is widely believed to be due to plasticity: the readiness of neurons to manipulate and adjust their intrinsic properties and the strengths of connections to other cells. It is through such modifications that associations between neurons can be made, giving rise to memory representations; for example, linking a neuron responding to the smell of pancakes with neurons encoding sweet taste and general gustatory pleasure. However, this malleability inherent to neuronal cells poses a dilemma from the point of view of stability: how is the brain able to maintain stable operation while in a state of constant flux? First, won't purely technical problems arise, akin to short-circuiting or runaway activity? And second, if neurons are so easily plastic and changeable, how can they provide a reliable description of the environment? Of course, evidence abounds to testify to the robustness of brains, both from everyday experience and scientific experiments. How does this robustness come about? Firstly, many feedback control mechanisms are in place to ensure that neurons do not enter wild regimes of behaviour. These mechanisms are collectively known as homeostatic plasticity, since they ensure functional homeostasis through plastic changes. One well-known example is synaptic scaling, a type of plasticity ensuring that a single neuron does not get overexcited by its inputs: whenever learning occurs and connections between cells get strengthened, all of the neuron's inputs subsequently get downscaled to maintain a stable level of net incoming signals.
And secondly, as hinted by other researchers and directly explored in this work, networks of neurons exhibit a property present in many complex systems called sloppiness. That is, they produce very similar behaviour under a wide range of parameters. This principle appears to operate on many scales and is highly useful (perhaps even unavoidable), as it permits variation between individuals and robustness to mutations and developmental perturbations: since there are many combinations of parameters resulting in similar operational behaviour, a disturbance of a single parameter, or even several, need not lead to dysfunction. It is also that same property that permits networks of neurons to flexibly reorganize and learn without becoming unstable. As an illustrative example, consider encountering maple syrup for the first time and associating it with pancakes; thanks to sloppiness, this new link can be added without causing the network to fire excessively. As previous experimental studies have found, consistent multi-neuron activity patterns arise across organisms, despite interindividual differences in the firing profiles of single cells and in the precise values of connection strengths. Such activity patterns, as has furthermore been shown, can be maintained despite pharmacological perturbation, as neurons compensate for the perturbed parameters by adjusting others; however, not all pharmacological perturbations can be thus amended. In the present work, it is for the first time directly demonstrated that groups of neurons are, as a rule, sloppy; their collective parameter space is mapped to reveal which parameter combinations are sensitive and which are insensitive; and it is shown that the majority of spontaneous fluctuations over time primarily affect the insensitive parameters. In order to demonstrate the above, hippocampal neurons of the rat were grown in culture over multi-electrode arrays and recorded from for several days.
Subsequently, statistical models were fit to the activity patterns of groups of neurons to obtain a mathematically tractable description of their collective behaviour at each time point. These models provide robust fits to the data and allow for a principled sensitivity analysis with the use of information-theoretic tools. This analysis revealed that groups of neurons tend to be governed by a few leader units. Furthermore, it appears that it was the stability of these key neurons and their connections that ensured the stability of collective firing patterns across time. The remaining units, in turn, were free to undergo plastic changes without risking destabilizing the collective behaviour. Together with what has been observed by other researchers, the findings of the present work suggest that the impressively adaptable yet robust functioning of the brain is made possible by the interplay of feedback control of a few crucial properties of neurons and the generally sloppy design of networks. It has, in fact, been hypothesised that any complex system subject to evolution is bound to rely on such a design: in order to cope with natural selection under changing environmental circumstances, it would be difficult for a system to rely on tightly controlled parameters. It might be, therefore, that all life is just, by nature, sloppy.
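Sloppiness of the kind described above is commonly quantified through the eigenvalue spectrum of an (approximate) Fisher information matrix: stiff parameter combinations have large eigenvalues, sloppy ones have tiny eigenvalues, and the spectrum spans many orders of magnitude. A minimal sketch, with an illustrative function name (not the thesis's analysis code):

```python
import numpy as np

def sloppiness_spectrum(jacobian):
    """Eigenvalues of the approximate Fisher information matrix J^T J,
    sorted from largest to smallest.

    `jacobian` holds the sensitivities of model outputs (rows) to
    parameters (columns). In a sloppy model the returned eigenvalues
    spread over many decades: stiff directions (large eigenvalues)
    control behaviour, sloppy directions (tiny eigenvalues) barely
    matter and are free to fluctuate.
    """
    fim = jacobian.T @ jacobian
    return np.sort(np.linalg.eigvalsh(fim))[::-1]
```

Plotting these eigenvalues on a log scale is the standard diagnostic: a roughly geometric decay over several decades is the signature of a sloppy system.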

    Neuronal oscillations, information dynamics, and behaviour: an evolutionary robotics study

    Oscillatory neural activity is closely related to cognition and behaviour, with synchronisation mechanisms playing a key role in the integration and functional organisation of different cortical areas. Nevertheless, its informational content and relationship with behaviour - and hence cognition - are still to be fully understood. This thesis is concerned with better understanding the role of neuronal oscillations and information dynamics in the generation of embodied cognitive behaviours, and with investigating the efficacy of such systems as practical robot controllers. To this end, we develop a novel model based on the Kuramoto model of coupled phase oscillators and perform three minimally cognitive evolutionary robotics experiments. The analyses focus both on a behavioural-level description, investigating the robot's trajectories, and on a mechanism-level description, exploring the variables' dynamics and the information transfer properties within and between the agent's body and the environment. The first experiment demonstrates that, in an active categorical perception task under normal and inverted vision, networks with a definite, but not too strong, propensity for synchronisation are better able to reconfigure, to organise themselves functionally, and to adapt to different behavioural conditions. The second experiment relates assembly constitution and phase reorganisation dynamics to performance in supervised and unsupervised learning tasks. We demonstrate that assembly dynamics facilitate the evolutionary process, can account for varying degrees of stimulus modulation of the sensorimotor interactions, and can contribute to solving different tasks without recourse to other plasticity mechanisms. The third experiment explores an associative learning task considering a more realistic connectivity pattern between neurons.
We demonstrate that networks with travelling waves as a default solution perform poorly compared to networks that are normally synchronised in the absence of stimuli. Overall, this thesis shows that neural synchronisation dynamics, when suitably flexible and reconfigurable, produce an asymmetric flow of information and can generate minimally cognitive embodied behaviours.
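The Kuramoto model underlying these experiments has a standard form, dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i), with the degree of synchrony measured by the order parameter r = |⟨e^{iθ}⟩| ∈ [0, 1]. A minimal sketch using plain Euler integration (not the thesis's controller code):

```python
import math

def kuramoto_step(phases, omegas, K, dt):
    """One Euler step of the globally coupled Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(phases)
    new_phases = []
    for th, om in zip(phases, omegas):
        coupling = K / n * sum(math.sin(other - th) for other in phases)
        new_phases.append((th + (om + coupling) * dt) % (2 * math.pi))
    return new_phases

def order_parameter(phases):
    """Kuramoto order parameter r = |mean of e^{i*theta}|:
    r ~ 0 for incoherent phases, r ~ 1 for full synchronisation."""
    n = len(phases)
    re = sum(math.cos(th) for th in phases) / n
    im = sum(math.sin(th) for th in phases) / n
    return math.hypot(re, im)
```

With coupling strength K well above the spread of natural frequencies, random initial phases lock and r approaches 1; weak coupling leaves the oscillators incoherent. This single knob is what makes the model a convenient substrate for studying "definite, but not too strong" propensities for synchronisation.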

    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One


    29th Annual Computational Neuroscience Meeting: CNS*2020

    Meeting abstracts. This publication was funded by OCNS. The Supplement Editors declare that they have no competing interests. Virtual | 18-22 July 202

    Study of sequential information processing in electroreception through modelling and closed-loop stimulation techniques

    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Defence date: 20-01-202

    25th Annual Computational Neuroscience Meeting: CNS-2016

    Abstracts of the 25th Annual Computational Neuroscience Meeting: CNS-2016, Seogwipo City, Jeju-do, South Korea, 2–7 July 201

    25th Annual Computational Neuroscience Meeting: CNS-2016

    The same neuron may play different functional roles in the neural circuits to which it belongs. For example, neurons in the Tritonia pedal ganglia may participate in variable phases of the swim motor rhythms [1]. While such neuronal functional variability is likely to play a major role in the delivery of the functionality of neural systems, it is difficult to study in most nervous systems. We work on the pyloric rhythm network of the crustacean stomatogastric ganglion (STG) [2]. Typically, network models of the STG treat neurons of the same functional type as a single model neuron (e.g., PD neurons), assuming the same conductance parameters for these neurons and implying their synchronous firing [3, 4]. However, simultaneous recording of PD neurons shows differences between the timings of spikes of these neurons. This may indicate functional variability of these neurons. Here we modelled the two PD neurons of the STG separately in a multi-neuron model of the pyloric network. Our neuron models comply with known correlations between conductance parameters of ionic currents. Our results reproduce the experimental finding of increasing spike-time distance between spikes originating from the two model PD neurons during their synchronised burst phase. The PD neuron with the larger calcium conductance generates its spikes before the other PD neuron. Larger potassium conductance values in the follower neuron imply longer delays between spikes; see Fig. 17. Neuromodulators change the conductance parameters of neurons and maintain the ratios of these parameters [5]. Our results show that such changes may shift the individual contributions of the two PD neurons to the PD phase of the pyloric rhythm, altering their functionality within this rhythm. Our work paves the way towards an accessible experimental and computational framework for the analysis of the mechanisms and impact of functional variability of neurons within the neural circuits to which they belong.