Minimum Number of Probes for Brain Dynamics Observability
In this paper, we address the problem of placing sensor probes in the brain
such that the system dynamics are generically observable. The states of the
system dynamics can encode, for instance, the firing rates of individual
neurons or of neuronal ensembles, following a neural-topological (structural)
approach, and the sensors are assumed to be dedicated, i.e., each can measure
only a single state variable at a time. Even though the mathematical
description of brain dynamics is yet to be discovered, we build on its
observed fractal characteristics and assume that the model of brain activity
satisfies fractional-order dynamics.
Although the sensor placement explored in this paper specifically targets the
observability of brain dynamics, the proposed methodology applies to any
fractional-order linear system. Thus, the main contribution of this paper is
to show how to place the minimum number of dedicated sensors, i.e., sensors
each measuring only a single state variable, to ensure generic observability
in discrete-time fractional-order systems over a specified finite interval of
time. Finally, an illustrative example of the main results is provided using
electroencephalogram (EEG) data.
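As a minimal sketch of the finite-horizon observability check the abstract describes (not the authors' code; the system matrix, sensor choice, and horizon below are illustrative assumptions), one can propagate the Grünwald–Letnikov recursion of a discrete-time fractional-order linear system and test the rank of the stacked output map:

```python
import numpy as np

def gl_coeffs(alpha, K):
    """Grunwald-Letnikov binomial coefficients binom(alpha, j), j = 0..K."""
    c = [1.0]
    for j in range(1, K + 1):
        c.append(c[-1] * (alpha - j + 1) / j)
    return c

def observability_matrix(A, C, alpha, K):
    """Stack C @ G_k over a finite horizon, where x_k = G_k @ x_0 follows the
    discrete-time fractional-order recursion
    x_{k+1} = A x_k - sum_{j=1}^{k+1} (-1)^j binom(alpha, j) x_{k+1-j}."""
    n = A.shape[0]
    c = gl_coeffs(alpha, K + 1)
    G = [np.eye(n)]
    for k in range(K):
        nxt = A @ G[k]
        for j in range(1, k + 2):
            nxt -= (-1) ** j * c[j] * G[k + 1 - j]
        G.append(nxt)
    return np.vstack([C @ Gk for Gk in G])

# Tridiagonal (path-graph) coupling; one dedicated sensor reading state 1 only.
A = np.array([[0.5, 0.2, 0.0],
              [0.2, 0.5, 0.2],
              [0.0, 0.2, 0.5]])
C = np.array([[1.0, 0.0, 0.0]])   # dedicated sensor: a single state variable
O = observability_matrix(A, C, alpha=0.5, K=4)
print(np.linalg.matrix_rank(O))   # 3 -> observable over this horizon
```

A single dedicated sensor at one end of the chain suffices here because the fractional memory terms only add scaled copies of past transition matrices, so the rank condition reduces to one on the stacked maps G_k.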
Modeled channel distributions explain extracellular recordings from cultured neurons sealed to microelectrodes
Amplitudes and shapes of extracellular recordings from single neurons cultured on a substrate-embedded microelectrode depend not only on the volume-conducting properties of the neuron-electrode interface, but may also depend on the distribution of voltage-sensitive channels over the neuronal membrane. In this paper, finite-element modeling is used to quantify the effect of these channel distributions on the neuron-electrode contact. Slight accumulation or depletion of voltage-sensitive channels in the sealing membrane of the neuron results in various shapes and amplitudes of simulated extracellular recordings. However, estimation of channel-specific accumulation factors from extracellular recordings can be obstructed by co-occurring ion currents and defective sealing. Experimental data from cultured neuron-electrode interfaces suggest depletion of sodium channels and accumulation of potassium channels.
Observing and tracking bandlimited graph processes from sampled measurements
A critical challenge in graph signal processing is the sampling of bandlimited graph signals, i.e., signals that are sparse in a well-defined graph Fourier domain. Prior work has focused on sampling time-invariant graph signals and ignored their temporal evolution. However, time can bring new insights to sampling, since sensor, biological, and financial network signals are correlated in both domains. Hence, in this work, we develop a sampling theory for time-varying graph signals, named graph processes, to observe and track a process described by a linear state-space model. We provide a mathematical analysis to highlight the roles of the graph, the process bandwidth, and the sample locations. We also propose sampling strategies that exploit the coupling between the topology and the corresponding process. Numerical experiments corroborate our theory and show that the proposed methods trade off the number of samples against accuracy well.
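The core sampling idea can be illustrated in a few lines (a toy sketch under assumed sizes and sample locations, not the paper's tracking algorithm): a K-bandlimited signal lives in the span of the first K Laplacian eigenvectors, so M >= K well-chosen node samples recover it exactly by least squares.

```python
import numpy as np

# Bandlimited graph signal on a 6-node path graph: x = U_K z, where U_K
# holds the first K Laplacian eigenvectors (the graph Fourier basis).
n, K = 6, 2
L = np.diag([1.0, 2.0, 2.0, 2.0, 2.0, 1.0])
for i in range(n - 1):
    L[i, i + 1] = L[i + 1, i] = -1.0
eigvals, U = np.linalg.eigh(L)
U_K = U[:, :K]

rng = np.random.default_rng(0)
z = rng.standard_normal(K)
x = U_K @ z                        # bandlimited graph signal

# Sample M >= K nodes and recover the spectral coefficients by least squares.
sample_nodes = [0, 3, 5]
z_hat, *_ = np.linalg.lstsq(U_K[sample_nodes], x[sample_nodes], rcond=None)
x_hat = U_K @ z_hat                # exact recovery from 3 of 6 node samples
print(np.allclose(x, x_hat))       # True
```

For time-varying processes the same selection matrix is applied at every step and the samples feed a state-space observer, which is where the coupling between topology and process dynamics that the abstract mentions enters.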
Optimal anticipatory control as a theory of motor preparation
Supported by a decade of primate electrophysiological experiments, the prevailing theory of neural motor control holds that movement generation is accomplished by a preparatory process that progressively steers the state of the motor cortex into a movement-specific optimal subspace prior to movement onset. The state of the cortex then evolves from these optimal subspaces, producing patterns of neural activity that serve as control inputs to the musculature. This theory, however, does not address the following questions: what characterizes the optimal subspace, and what are the neural mechanisms that underlie the preparatory process? We address these questions with a circuit model of movement preparation and control. Specifically, we propose that preparation can be achieved by optimal feedback control (OFC) of the cortical state via a thalamo-cortical loop. Under OFC, the state of the cortex is selectively controlled along state-space directions that have future motor consequences, but not along inconsequential ones. We show that OFC enables fast movement preparation and explains the observed orthogonality between preparatory and movement-related monkey motor cortex activity. This illustrates the importance of constraining new theories of neural function with experimental data. However, as recording technologies continue to improve, a key challenge is to extract meaningful insights from increasingly large-scale neural recordings. Latent variable models (LVMs) are powerful tools for addressing this challenge due to their ability to identify the low-dimensional latent variables that best explain these large data sets. One shortcoming of most LVMs, however, is that they assume a Euclidean latent space, while many kinematic variables, such as head rotations and the configuration of an arm, are naturally described by variables that live on non-Euclidean latent spaces (e.g., SO(3) and tori).
To address this shortcoming, we propose the Manifold Gaussian Process Latent Variable Model, a method for simultaneously inferring nonparametric tuning curves and latent variables on non-Euclidean latent spaces. We show that our method is able to correctly infer the latent ring topology of the fly and mouse head direction circuits. This work was supported by a Trinity-Henry Barlow scholarship and a scholarship from the Ministry of Education, ROC Taiwan.
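The full Manifold GPLVM is involved, but one ingredient — nonparametric tuning curves over a ring-shaped (circular) latent space — can be illustrated with kernel ridge regression using a kernel defined on the circle. Everything below (angles, tuning shape, kernel length-scale) is an illustrative assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Head-direction angles (the ring latent) and a smooth ground-truth tuning curve.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
f_true = np.exp(np.cos(theta - 1.0))           # von-Mises-like tuning
y = f_true + 0.05 * rng.standard_normal(60)    # noisy "firing rates"

def ring_kernel(a, b, ell=0.5):
    """Squared-exponential-style kernel on the circle: it depends on the
    angles only through cos(a - b), so it is automatically 2*pi-periodic."""
    return np.exp((np.cos(a[:, None] - b[None, :]) - 1.0) / ell**2)

# Kernel ridge regression = GP posterior mean with noise variance lam.
Kmat = ring_kernel(theta, theta)
lam = 0.05**2
alpha = np.linalg.solve(Kmat + lam * np.eye(60), y)
f_hat = Kmat @ alpha                            # smoothed, periodic tuning curve

print(np.mean((f_hat - f_true)**2) < np.mean((y - f_true)**2))
```

The design choice is the kernel: building it from cos(a - b) respects the ring topology, so the fitted tuning curve wraps around smoothly rather than tearing at 0 and 2*pi as a Euclidean kernel would.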
Understanding the Role of Dynamics in Brain Networks: Methods, Theory and Application
The brain is inherently a dynamical system whose networks interact at multiple spatial and temporal scales. Understanding the functional role of these dynamic interactions is a fundamental question in neuroscience. In this research, we approach this question through the development of new methods for characterizing brain dynamics from real data and new theories for linking dynamics to function. We perform our study at two scales: macro (at the level of brain regions) and micro (at the level of individual neurons).
In the first part of this dissertation, we develop methods to identify the underlying dynamics at the macro-scale that govern brain networks during states of health and disease in humans. First, we establish an optimization framework to actively probe connections in brain networks when the underlying network dynamics are changing over time. Then, we extend this framework to develop a data-driven approach for analyzing neurophysiological recordings without active stimulation, to describe the spatiotemporal structure of neural activity at different timescales. The overall goal is to detect how the dynamics of brain networks may change within and between particular cognitive states. We demonstrate the efficacy of this approach in characterizing spatiotemporal motifs of correlated neural activity during the transition from wakefulness to general anesthesia in functional magnetic resonance imaging (fMRI) data. Moreover, we demonstrate how such an approach can be utilized to construct an automatic classifier for detecting different levels of coma in electroencephalogram (EEG) data.
In the second part, we study how ongoing function can constrain dynamics at the micro-scale in recurrent neural networks, with particular application to sensory systems. Specifically, we develop theoretical conditions in a linear recurrent network, in the presence of both disturbance and noise, for exact and stable recovery of dynamic sparse stimuli applied to the network. We show how network dynamics can affect the decoding performance in such systems. Moreover, we formulate the problem of efficiently encoding an afferent input and its history in a nonlinear recurrent network. We show that a linear neural network architecture with a thresholding activation function emerges if we assume that neurons optimize their activity based on a particular cost function. Such an architecture can enable the production of lightweight, history-sensitive encoding schemes.
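A linear recurrent network with a thresholding activation that computes a sparse code, as the abstract describes, resembles locally-competitive sparse-coding dynamics. The sketch below is illustrative (the dictionary, thresholds, and time constants are assumptions, not the dissertation's model):

```python
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def recurrent_sparse_encode(Phi, s, lam=0.1, tau=10.0, steps=800):
    """Leaky linear recurrent dynamics with a thresholding activation:
    du/dt = (Phi^T s - u - (Phi^T Phi - I) a) / tau,  a = T_lam(u).
    The fixed point is a sparse code of the input s."""
    n = Phi.shape[1]
    u = np.zeros(n)
    G = Phi.T @ Phi - np.eye(n)   # recurrent (lateral inhibition) weights
    b = Phi.T @ s                 # feedforward drive
    for _ in range(steps):
        a = soft_threshold(u, lam)
        u += (b - u - G @ a) / tau
    return soft_threshold(u, lam)

rng = np.random.default_rng(2)
Phi = rng.standard_normal((20, 50))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm dictionary columns
a_true = np.zeros(50)
a_true[[3, 17, 41]] = 1.0                   # 3-sparse stimulus
s = Phi @ a_true

a = recurrent_sparse_encode(Phi, s)
print(np.count_nonzero(a), np.linalg.norm(Phi @ a - s))
```

Note how the nonlinearity is confined to the pointwise threshold: the rest of the circuit is linear, which is the architectural feature the abstract says emerges from the neurons' cost function.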
Distributed and Decentralized Kalman Filtering for Cascaded Fractional Order Systems
This paper presents a distributed Kalman filter algorithm for cascaded systems of fractional order. Certain conditions are introduced under which a division of a fractional system into cascaded subsystems is possible. A functional distribution of a large-scale system and of the state estimation algorithm leads to smaller and scalable nodes with reduced memory and computational effort. Since each subsystem performs its calculations locally, a central processing node is not needed. All data required by subsequent nodes are communicated to them unidirectionally. Finally, a comparison between the Fractional Kalman Filter (FKF) and the Cascaded Fractional Kalman Filter (CFKF) is given via an example.
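The cascaded variant decomposes the standard (centralized) Fractional Kalman Filter, which can be sketched for a scalar system. This is a simplified illustration with assumed parameters and a simplified covariance recursion, not the paper's distributed algorithm:

```python
import numpy as np

def gl_coeffs(alpha, K):
    """gamma_j = binom(alpha, j) for j = 0..K (Grunwald-Letnikov weights)."""
    c = [1.0]
    for j in range(1, K + 1):
        c.append(c[-1] * (alpha - j + 1) / j)
    return np.array(c)

# Scalar fractional-order system: Delta^alpha x_{k+1} = A x_k + w_k, y = x + v.
alpha, A, Q, R, N = 0.5, -0.5, 0.01, 1.0, 200
g = gl_coeffs(alpha, N + 1)
rng = np.random.default_rng(3)

x = np.zeros(N + 1); x[0] = 1.0
for k in range(N):
    hist = sum((-1) ** j * g[j] * x[k + 1 - j] for j in range(1, k + 2))
    x[k + 1] = A * x[k] - hist + np.sqrt(Q) * rng.standard_normal()
y = x + np.sqrt(R) * rng.standard_normal(N + 1)

# Fractional Kalman filter (simplified scalar covariance recursion).
xh = np.zeros(N + 1); P = np.ones(N + 1); xh[0] = y[0]
for k in range(N):
    hist = sum((-1) ** j * g[j] * xh[k + 1 - j] for j in range(1, k + 2))
    x_pred = A * xh[k] - hist                       # prediction uses full history
    P_pred = (A + alpha) ** 2 * P[k] + Q + sum(g[j] ** 2 * P[k + 1 - j]
                                               for j in range(2, k + 2))
    K_gain = P_pred / (P_pred + R)
    xh[k + 1] = x_pred + K_gain * (y[k + 1] - x_pred)
    P[k + 1] = (1 - K_gain) * P_pred

print(np.mean((xh - x) ** 2) < np.mean((y - x) ** 2))   # filter beats raw data
```

The growing history sums are exactly the memory and computational burden the paper's cascaded decomposition distributes across nodes, each handling a subsystem and passing its estimates downstream unidirectionally.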