
    Effective influences in neuronal networks : attentional modulation of effective influences underlying flexible processing and how to measure them

    Selective routing of information between brain areas is a key prerequisite for flexible adaptive behaviour. It allows the brain to focus on relevant information and to ignore potentially distracting influences. Selective attention is the psychological process which controls this preferential processing of relevant information. The neuronal network structures and dynamics, and the attentional mechanisms by which this routing is enabled, are not fully clarified. Based on previous experimental findings and theories, a network model is proposed which reproduces a range of results from the attention literature. It relies on shifting the phase relations between oscillating neuronal populations to modulate the effective influence of synapses. This network model might serve as a generic routing motif throughout the brain. The attentional modifications of activity in this network are investigated experimentally and found to employ two distinct channels to influence processing: facilitation of relevant information and independent suppression of distracting information. These findings are in agreement with the model and previously unreported at the level of neuronal populations. Furthermore, effective influence in dynamical systems is investigated more closely. Because measurements of influence in non-linear dynamical systems such as neuronal networks lack a theoretical underpinning, unsuited measures are often applied to experimental data, which can lead to erroneous conclusions. Based on a central theorem in dynamical systems, a novel theory of effective influence is developed. Measures derived from this theory are demonstrated to capture the time-dependent effective influence and the asymmetry of influences in model systems and experimental data. This new theory holds the potential to uncover previously concealed interactions in generic non-linear systems studied in a range of disciplines, such as neuroscience, ecology, economics, and climatology.
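The core idea — that the effective influence of a synapse depends on the phase relation between oscillating sender and receiver populations — can be illustrated with a toy calculation. This is a minimal sketch with half-wave-rectified sinusoids standing in for rhythmic firing and excitability, not the thesis's actual network model:

```python
import numpy as np

def effective_gain(phase_shift, n_steps=10000):
    """Average transmission of a rhythmic input through a receiver whose
    excitability oscillates at the same frequency (half-wave rectified
    sinusoids), as a function of the sender-receiver phase relation."""
    t = np.linspace(0, 2 * np.pi, n_steps, endpoint=False)
    presyn_rate = np.maximum(0.0, np.sin(t))                 # rhythmic input volleys
    excitability = np.maximum(0.0, np.sin(t + phase_shift))  # receiver gain
    return float(np.mean(presyn_rate * excitability))

aligned = effective_gain(0.0)      # input arrives at peak excitability
antiphase = effective_gain(np.pi)  # input arrives at the excitability trough
```

Shifting the phase relation from aligned to antiphase drives the transmitted signal from its maximum to zero, which is the sense in which phase shifting can gate (route) information without changing anatomical connectivity.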


    Information-theoretic Reasoning in Distributed and Autonomous Systems

    The increasing prevalence of distributed and autonomous systems is transforming decision making in industries as diverse as agriculture, environmental monitoring, and healthcare. Despite significant efforts, challenges remain in planning robustly under uncertainty. In this thesis, we present a number of information-theoretic decision rules for improving the analysis and control of complex adaptive systems. We begin with the problem of quantifying the data storage (memory) and transfer (communication) within information processing systems. We develop an information-theoretic framework to study nonlinear interactions within cooperative and adversarial scenarios, solely from observations of each agent's dynamics. This framework is applied to simulations of robotic soccer games, where the measures reveal insights into team performance, including correlations of the information dynamics with the scoreline. We then study the communication between processes with latent nonlinear dynamics that are observed only through a filter. Using methods from differential topology, we show that the information-theoretic measures commonly used to infer communication in observed systems can also be used in certain partially observed systems. For robotic environmental monitoring, the quality of data depends on the placement of sensors. These locations can be improved either by better estimating the quality of future viewpoints or by a team of robots operating concurrently. By robustly handling the uncertainty of sensor-model measurements, we are able to present the first end-to-end robotic system for autonomously tracking small dynamic animals, with a performance comparable to human trackers. We then address the problem of coordinating multi-robot systems through distributed optimisation techniques, which allow us to develop non-myopic robot trajectories for these tasks and, importantly, to show that these algorithms provide guarantees on convergence rates to the optimal payoff sequence.
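The information-transfer quantity underlying this kind of analysis is typically formalised as transfer entropy. The following is a minimal plug-in estimator for binary time series with history length 1 — a simplification for illustration, not the estimator used in the thesis:

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target):
    """Plug-in estimate of transfer entropy (history length 1, in bits):
    T_{S->T} = sum over (t1, t0, s0) of
               p(t1, t0, s0) * log2[ p(t1 | t0, s0) / p(t1 | t0) ]."""
    s0, t0, t1 = source[:-1], target[:-1], target[1:]
    n = len(t1)
    joint = Counter(zip(t1, t0, s0))     # counts of (next, past, source past)
    pair_ts = Counter(zip(t0, s0))
    pair_tt = Counter(zip(t1, t0))
    marg_t0 = Counter(t0)
    te = 0.0
    for (x1, x0, y0), c in joint.items():
        p = c / n
        p_cond_full = c / pair_ts[(x0, y0)]
        p_cond_self = pair_tt[(x1, x0)] / marg_t0[x0]
        te += p * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(0)
src = rng.integers(0, 2, 5000)
tgt = np.roll(src, 1)      # target copies the source with a one-step lag
tgt[0] = 0
te_fwd = transfer_entropy(src, tgt)  # large: source fully determines target
te_bwd = transfer_entropy(tgt, src)  # near zero: no information flows back
```

The asymmetry between the forward and backward estimates is what makes such measures usable for inferring directed communication between agents from observations alone.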

    Scalable Tools for Information Extraction and Causal Modeling of Neural Data

    Systems neuroscience has, over the past 20 years, entered an era that one might call "large-scale systems neuroscience". From tuning curves and single-neuron recordings there has been a conceptual shift towards a more holistic understanding of how neural circuits work and, as a result, how their representations produce neural tunings. With the introduction of a plethora of datasets across scales, modalities, animals, and systems, we as a community have witnessed invaluable insights that can be gained from the collective view of a neural circuit, which was not possible with small-scale experimentation. The concurrency of advances in neural recording, such as wide-field imaging technologies and Neuropixels probes, with developments in statistical machine learning, specifically deep learning, has brought systems neuroscience one step closer to data science. With this abundance of data, the need for computational models has become crucial. We need to make sense of the data, and thus we need to build models that are constrained with an acceptable amount of biological detail and probe those models in search of neural mechanisms. This thesis consists of sections covering a wide range of ideas from computer vision, statistics, machine learning, and dynamical systems. All of these ideas share a common purpose: to help automate the neuroscientific experimentation process at different levels. In chapters 1, 2, and 3, I develop tools that automate the extraction of useful information from raw neuroscience data in the model organism C. elegans. The goal is to avoid manual labor and pave the way for high-throughput data collection, aiming at better quantification of variability across the population of worms. Due to its high level of structural and functional stereotypy, and its relative simplicity, the nematode C. elegans has been an attractive model organism for systems and developmental research.
With 383 neurons in males and 302 neurons in hermaphrodites, the positions and functions of neurons are remarkably conserved across individuals. Furthermore, C. elegans remains the only organism for which a complete cellular, lineage, and anatomical map of the entire nervous system has been described for both sexes. Here, I describe the analysis pipeline that we developed for the recently proposed NeuroPAL technique in C. elegans. Our pipeline consists of atlas building (chapter 1), registration, segmentation, and neural tracking (chapter 2), and signal extraction (chapter 3). I emphasize that categorizing the analysis techniques as a pipeline consisting of the above steps is general and can be applied to virtually every animal model and emerging imaging modality. I use the language of probabilistic generative modeling and graphical models to communicate the ideas in a rigorous form, so some familiarity with those concepts will help the reader navigate the chapters of this thesis more easily. In chapters 4 and 5, I build models that aim to automate hypothesis testing and causal interrogation of neural circuits. The notion of functional connectivity (FC) has been instrumental in our understanding of how information propagates in a neural circuit. An important limitation, however, is that current techniques do not dissociate causal connections from purely functional connections with no mechanistic correspondence. I start chapter 4 by introducing causal inference as a unifying language for the following chapters. In chapter 4 I define the notion of interventional connectivity (IC) as a way to summarize the effect of stimulation in a neural circuit, providing a more mechanistic description of the information flow. I then investigate which functional connectivity metrics are most predictive of IC in simulations and real data.
Following this framework, I discuss how stimulations and interventions can be used to improve the fitting and generalization properties of time-series models. Building on the literature on model identification and active causal discovery, I develop a switching time-series model and a method for finding stimulation patterns that help the model generalize to the vicinity of the observed neural trajectories. Finally, in chapter 5, I develop a new FC metric that separates the information transferred from one variable to another into unique and synergistic sources. In all projects, I have abstracted out concepts that are specific to the datasets at hand and developed the methods in their most general form. This makes the presented methods applicable to a broad range of datasets, potentially leading to new findings. In addition, all projects are accompanied by extensible, documented code packages, allowing theorists to repurpose the modules for novel applications and experimentalists to run analyses on their datasets efficiently and scalably. In summary, my main contributions in this thesis are the following: 1) building the first atlases of hermaphrodite and male C. elegans and developing a generic statistical framework for constructing atlases for a broad range of datasets; 2) developing a semi-automated analysis pipeline for neural registration, segmentation, and tracking in C. elegans; 3) extending the framework of non-negative matrix factorization to datasets with deformable motion and developing algorithms for joint tracking and signal demixing from videos of semi-immobilized C. elegans; 4) defining the notion of interventional connectivity (IC) as a way to summarize the effect of stimulation in a neural circuit and investigating which functional connectivity metrics are most predictive of IC in simulations and real data; 5) developing a switching time-series model and a method for finding stimulation patterns that help the model generalize to the vicinity of the observed neural trajectories; 6) developing a new functional connectivity metric that separates the information transferred from one variable to another into unique and synergistic sources; and 7) implementing extensible, well-documented, open-source code packages for each of the above contributions.
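The gap between functional and interventional connectivity can be sketched with a toy linear network (hypothetical parameters, not the thesis's circuits or metrics): observational correlation links two indirectly coupled nodes regardless of direction, whereas driving a node reveals which influences are actually causal.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(T=4000, stim_node=None, stim=None):
    """Linear chain x -> y -> z with additive noise; optionally clamp one
    node to an external stimulation sequence (a toy 'intervention')."""
    x = np.zeros((T, 3))
    W = np.array([[0.0, 0.0, 0.0],
                  [0.8, 0.0, 0.0],
                  [0.0, 0.8, 0.0]])  # entry (i, j): node i receives from node j
    for t in range(1, T):
        x[t] = W @ x[t - 1] + 0.5 * rng.standard_normal(3)
        if stim_node is not None:
            x[t, stim_node] = stim[t]
    return x

# Functional connectivity: observational lag-2 correlation links x and z ...
obs = simulate()
fc_xz = abs(np.corrcoef(obs[:-2, 0], obs[2:, 2])[0, 1])

# ... but interventional connectivity is directed: driving z does not move x.
stim = rng.standard_normal(4000)
driven = simulate(stim_node=2, stim=stim)
ic_zx = abs(np.corrcoef(driven[:-2, 2], driven[2:, 0])[0, 1])
```

Here the observational correlation between x and z is substantial, while the response of x to stimulation of z is indistinguishable from noise — the kind of dissociation that motivates defining IC alongside FC.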

    Understanding spiking and bursting electrical activity through piece-wise linear systems

    In recent years there has been increased interest in working with piece-wise linear caricatures of nonlinear models. Such models are often preferred over more detailed conductance-based models for their small number of parameters and low computational overhead. Moreover, their piece-wise linear (PWL) form allows the construction of action potential shapes in closed form, as well as the calculation of phase response curves (PRCs). With the inclusion of PWL adaptive currents they can also support bursting behaviour, yet remain amenable to mathematical analysis at both the single-neuron and network level. Indeed, PWL models caricaturing conductance-based models, such as those of Morris-Lecar or McKean, have been studied for some time and are known to be mathematically tractable at the network level. In this work we analyse PWL neuron models of conductance type. In particular we focus on PWL models of the FitzHugh-Nagumo type and describe in detail the mechanism for a canard explosion. This model is further explored at the network level in the presence of gap-junction coupling. The study then moves to a different area, where excitable cells (pancreatic beta-cells) are used to explain insulin secretion phenomena. Here, Ca2+ signals obtained from pancreatic beta-cells of mice are extracted from image data and analysed using signal processing techniques; both synchrony and functional connectivity analyses are performed. As regards PWL bursting models, we focus on a variant of the adaptive absolute integrate-and-fire model that can support bursting. We investigate the bursting electrical activity of such models with an emphasis on pancreatic beta-cells.
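The flavour of such PWL caricatures can be shown in a few lines: McKean's piece-wise linear replacement for the FitzHugh-Nagumo cubic nullcline, integrated with forward Euler. Parameter values here are chosen purely for illustration (so that the unique fixed point sits on the unstable middle branch and a relaxation oscillation results), not taken from the thesis:

```python
import numpy as np

def mckean_f(v, a=0.25):
    """McKean's piece-wise linear caricature of the FitzHugh-Nagumo
    cubic nullcline: three linear branches."""
    if v < a / 2:
        return -v
    if v <= (1 + a) / 2:
        return v - a
    return 1.0 - v

def simulate(T=600.0, dt=0.01, I=0.5, eps=0.02, b=2.0):
    """Forward-Euler integration of v' = f(v) - w + I, w' = eps*(b*v - w)."""
    n = int(T / dt)
    v, w = 0.0, 0.0
    trace = np.empty(n)
    for k in range(n):
        dv = mckean_f(v) - w + I
        dw = eps * (b * v - w)
        v, w = v + dt * dv, w + dt * dw
        trace[k] = v
    return trace

v = simulate()
# upward threshold crossings mark the spikes of the relaxation oscillation
spikes = int(np.sum((v[1:] > 0.75) & (v[:-1] <= 0.75)))
```

Because each branch is linear, the trajectory on each branch is available in closed form — which is precisely what makes action potential shapes and phase response curves computable analytically for these models.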

    A Mathematical Framework on Machine Learning: Theory and Application

    The dissertation addresses the research topics of machine learning outlined below. We develop theory for traditional first-order algorithms from convex optimization and provide new insights into nonconvex objective functions from machine learning. Based on this theoretical analysis, we design new algorithms that overcome the difficulty of nonconvex objectives and accelerate convergence to the desired result. In this thesis, we answer two questions: (1) How should a step size be designed for gradient descent with random initialization? (2) Can we accelerate current convex optimization algorithms and extend them to nonconvex objectives? On the application side, we apply the optimization algorithms to sparse subspace clustering. A new algorithm, CoCoSSC, is proposed to improve the current sample complexity in the presence of noise and missing entries. Gradient-based optimization methods have been increasingly modeled and interpreted by ordinary differential equations (ODEs). Existing ODEs in the literature are, however, inadequate to distinguish between two fundamentally different methods: Nesterov's accelerated gradient method for strongly convex functions (NAG-SC) and Polyak's heavy-ball method. Here, we derive high-resolution ODEs as more accurate surrogates for the two methods, in addition to Nesterov's accelerated gradient method for general convex functions (NAG-C). These novel ODEs can be integrated into a general framework that allows for a fine-grained analysis of the discrete optimization algorithms by translating properties of the amenable ODEs into those of their discrete counterparts. As a first application of this framework, we identify the effect of a term, referred to as the gradient correction, that is present in NAG-SC but not in the heavy-ball method, shedding light on why the former achieves acceleration while the latter does not.
Moreover, in this high-resolution ODE framework, NAG-C is shown to minimize the squared gradient norm at an inverse cubic rate, which is the sharpest known rate for NAG-C. Finally, by modifying the high-resolution ODE of NAG-C, we obtain a family of new optimization methods that are shown to maintain the same accelerated convergence rates as NAG-C for minimizing convex functions.
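The structural difference that gives rise to the gradient correction is visible already in the discrete iterations: NAG-SC evaluates the gradient at the extrapolated point, the heavy-ball method at the current iterate. A toy side-by-side on a strongly convex quadratic (illustrative problem and parameter choices, not from the dissertation):

```python
import numpy as np

# strongly convex quadratic f(x) = 0.5 * x^T A x, condition number L/mu = 100
A = np.diag([1.0, 100.0])
L, mu = 100.0, 1.0
s = 1.0 / L                                               # step size
momentum = (1 - np.sqrt(mu * s)) / (1 + np.sqrt(mu * s))  # standard NAG-SC choice

def grad(x):
    return A @ x

def heavy_ball(x0, n=300):
    """Polyak momentum: gradient evaluated at the current iterate."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n):
        x_next = x + momentum * (x - x_prev) - s * grad(x)
        x_prev, x = x, x_next
    return x

def nag_sc(x0, n=300):
    """Nesterov (strongly convex case): identical except the gradient is
    taken at the extrapolated point y -- the source of the extra
    'gradient correction' term in the high-resolution ODE."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n):
        y = x + momentum * (x - x_prev)
        x_next = y - s * grad(y)
        x_prev, x = x, x_next
    return x

x0 = np.array([1.0, 1.0])
err_hb = float(np.linalg.norm(heavy_ball(x0)))
err_nag = float(np.linalg.norm(nag_sc(x0)))
```

On a quadratic both methods happen to converge; the point of the high-resolution ODE analysis is that only the NAG-SC update carries the gradient-correction term (grad(y) vs. grad(x)) that guarantees acceleration for general strongly convex functions, where the heavy-ball method can fail.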