Optimal anticipatory control as a theory of motor preparation
Supported by a decade of primate electrophysiological experiments, the prevailing theory of neural motor control holds that movement generation is accomplished by a preparatory process that progressively steers the state of the motor cortex into a movement-specific optimal subspace prior to movement onset. The state of the cortex then evolves from these optimal subspaces, producing patterns of neural activity that serve as control inputs to the musculature. This theory, however, does not address the following questions: what characterizes the optimal subspace, and what neural mechanisms underlie the preparatory process? We address these questions with a circuit model of movement preparation and control. Specifically, we propose that preparation can be achieved by optimal feedback control (OFC) of the cortical state via a thalamo-cortical loop. Under OFC, the state of the cortex is selectively controlled along state-space directions that have future motor consequences, but not along inconsequential ones. We show that OFC enables fast movement preparation and explains the observed orthogonality between preparatory and movement-related monkey motor cortex activity. This illustrates the importance of constraining new theories of neural function with experimental data. However, as recording technologies continue to improve, a key challenge is to extract meaningful insights from increasingly large-scale neural recordings. Latent variable models (LVMs) are powerful tools for addressing this challenge due to their ability to identify the low-dimensional latent variables that best explain these large data sets. One shortcoming of most LVMs, however, is that they assume a Euclidean latent space, while many kinematic variables, such as head rotations and the configuration of an arm, are naturally described by variables that live on non-Euclidean spaces (e.g., SO(3) and tori).
To address this shortcoming, we propose the Manifold Gaussian Process Latent Variable Model, a method for simultaneously inferring nonparametric tuning curves and latent variables on non-Euclidean latent spaces. We show that our method is able to correctly infer the latent ring topology of the fly and mouse head direction circuits. This work was supported by a Trinity-Henry Barlow scholarship and a scholarship from the Ministry of Education, ROC Taiwan.
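The selective-control principle from the first part of this abstract can be illustrated with a standard finite-horizon LQR sketch. Everything below (the dynamics, dimensions, and cost matrices) is an illustrative assumption, not the paper's model: a state cost that penalizes only the "consequential" directions produces feedback gains that steer those directions toward the target and leave the others entirely uncontrolled.

```python
import numpy as np

# Illustrative sketch (not the paper's circuit model): finite-horizon LQR on a
# "cortical" state, where the cost penalizes only the two state-space
# directions that are assumed to have motor consequences.
n, T = 4, 50
A = 1.02 * np.eye(n)                # mildly unstable autonomous dynamics (assumed)
B = np.eye(n)                       # control input can drive every dimension
Q = np.diag([1.0, 1.0, 0.0, 0.0])   # cost only on consequential dims 0 and 1
R = 0.1 * np.eye(n)                 # control-effort cost

# Backward Riccati recursion for the time-varying LQR gains.
P = Q.copy()
gains = []
for _ in range(T):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + (A - B @ K).T @ P @ (A - B @ K) + K.T @ R @ K
    gains.append(K)
gains = gains[::-1]

x = np.array([1.0, -1.0, 1.0, -1.0])
for K in gains:
    x = (A - B @ K) @ x

print(np.round(np.abs(x), 3))  # dims 0,1 driven near zero; dims 2,3 drift freely
```

Because the cost is rank-deficient, the gain matrices are exactly zero on the inconsequential dimensions: the controller spends no effort along directions without motor consequences.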
Revealing networks from dynamics: an introduction
What can we learn from the collective dynamics of a complex network about its
interaction topology? Taking the perspective from nonlinear dynamics, we
briefly review recent progress on how to infer structural connectivity (direct
interactions) from accessing the dynamics of the units. Potential applications
range from interaction networks in physics, to chemical and metabolic
reactions, protein and gene regulatory networks as well as neural circuits in
biology and electric power grids or wireless sensor networks in engineering.
Moreover, we briefly mention some standard ways of inferring effective or
functional connectivity. Comment: Topical review, 48 pages, 7 figures.
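As a toy illustration of the reconstruction problem (not a specific method from the review): for linear dynamics, the interaction matrix can be recovered from an observed trajectory by least-squares regression of each state onto the previous one. The network size, noise level, and sparsity below are assumptions chosen for the sketch.

```python
import numpy as np

# Toy sketch of dynamics-based network reconstruction: simulate a sparse
# linear network x_{t+1} = A x_t + noise, then recover A by least squares.
rng = np.random.default_rng(1)
n, T = 5, 20000
A_true = np.where(rng.random((n, n)) < 0.3, rng.standard_normal((n, n)), 0.0)
A_true *= 0.9 / max(1.0, np.linalg.norm(A_true, 2))   # keep the dynamics stable

X = np.zeros((T, n))
X[0] = rng.standard_normal(n)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.05 * rng.standard_normal(n)

# Regress X[t+1] on X[t]: solve X[:-1] @ A^T ≈ X[1:] for A.
A_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

print(np.max(np.abs(A_hat - A_true)))  # small reconstruction error
```

For nonlinear units, delayed couplings, or partial observations this direct regression fails, which is precisely where the methods surveyed in the review come in.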
Neural Networks: Training and Application to Nonlinear System Identification and Control
This dissertation investigates training neural networks for system identification and classification. The research contains two main contributions, as follows:
1. Reducing the number of hidden-layer nodes using a feedforward component. This research reduces the number of hidden-layer nodes and the training time of neural networks, making them better suited to online identification and control applications, by adding a parallel feedforward component. Implementing the feedforward component with a wavelet neural network or an echo state network provides good models for nonlinear systems. The wavelet neural network with a feedforward component, together with a model predictive controller, can reliably identify and control a seismically isolated structure during an earthquake. The network model provides the predictions for model predictive control. Simulations of a 5-story seismically isolated structure with conventional lead-rubber bearings showed significant reductions of all response amplitudes for both near-field (pulse) and far-field ground motions, including reduced deformations along with a corresponding reduction in acceleration response. The controller effectively regulated the apparent stiffness at the isolation level. The approach is also applied to the online identification and control of an unmanned vehicle. Lyapunov theory is used to prove the stability of the wavelet neural network and the model predictive controller.
2. Training neural networks using trajectory-based optimization approaches. Training a neural network is a nonlinear, non-convex optimization problem for determining the network's weights. Traditional training algorithms can be inefficient and can get trapped in local minima. Two global optimization approaches are adapted to train neural networks and avoid the local-minima problem. Lyapunov theory is used to prove the stability of the proposed methodology and its convergence in the presence of measurement errors.
The first approach transforms the constraint satisfaction problem into unconstrained optimization. The constraints define a quotient gradient system (QGS) whose stable equilibrium points are local minima of the unconstrained optimization. The QGS is integrated to determine local minima, and the local minimum with the best generalization performance is chosen as the optimal solution. The second approach uses the QGS together with a projected gradient system (PGS). The PGS is a nonlinear dynamical system, defined based on the optimization problem, that searches the components of the feasible region for solutions. Lyapunov theory is used to prove the stability of the PGS and QGS, including their stability in the presence of measurement noise.
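A minimal sketch of the projected-gradient idea behind the PGS (the loss, feasible region, and step size below are assumed for illustration and are not the dissertation's formulation): integrate the gradient flow of the objective while projecting each Euler step back onto the feasible region.

```python
import numpy as np

# Projected gradient descent on a toy problem: minimize ||w - target||^2
# subject to a box constraint on the weights (all choices illustrative).
def loss_grad(w):
    target = np.array([2.0, -2.0, 0.5])   # hypothetical unconstrained optimum
    return w - target

lo, hi = -1.0, 1.0                        # feasible region: the box [-1, 1]^3
w = np.zeros(3)
for _ in range(200):
    w = w - 0.1 * loss_grad(w)            # Euler step of the gradient system
    w = np.clip(w, lo, hi)                # projection onto the feasible region

print(w)  # → converges to the projection of the target onto the box
```

The trajectory of `w` under this iteration is a discrete analogue of the PGS flow: where the unconstrained optimum lies outside the feasible set, the iterate settles on the boundary point closest to it.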
An event-based architecture for solving constraint satisfaction problems
Constraint satisfaction problems (CSPs) are typically solved using
conventional von Neumann computing architectures. However, these architectures
do not reflect the distributed nature of many of these problems and are thus
ill-suited to solving them. In this paper we present a hybrid analog/digital
hardware architecture specifically designed to solve such problems. We cast
CSPs as networks of stereotyped multi-stable oscillatory elements that
communicate using digital pulses, or events. The oscillatory elements are
implemented using analog non-stochastic circuits. The non-repeating phase
relations among the oscillatory elements drive the exploration of the solution
space. We show that this hardware architecture can yield state-of-the-art
performance on a number of CSPs under reasonable assumptions on the
implementation. We present measurements from a prototype electronic chip to
demonstrate that a physical implementation of the proposed architecture is
robust to practical non-idealities and to validate the proposed theory. Comment: The first two authors contributed equally to this work.
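As a rough software analogue of this exploration principle (illustrative only; the architecture described above is analog hardware, not this algorithm), a small graph-colouring CSP can be solved by stochastic local search, with random moves playing the role that the non-repeating oscillator phases play in driving exploration of the solution space.

```python
import random

# Min-conflicts local search on a toy graph-colouring CSP (example instance
# assumed; not from the paper). Random moves stand in for phase-driven
# exploration in the hardware.
random.seed(0)
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 0)]
n_vars, n_colors = 5, 3

def conflicts(assign, v, c):
    """Number of violated constraints if variable v takes colour c."""
    return sum(1 for a, b in edges
               if (a == v and assign[b] == c) or (b == v and assign[a] == c))

assign = [random.randrange(n_colors) for _ in range(n_vars)]
for _ in range(10000):
    bad = [v for v in range(n_vars) if conflicts(assign, v, assign[v]) > 0]
    if not bad:
        break                                 # all constraints satisfied
    v = random.choice(bad)                    # pick a conflicted variable
    if random.random() < 0.05:
        assign[v] = random.randrange(n_colors)  # occasional random kick
    else:
        best = min(conflicts(assign, v, c) for c in range(n_colors))
        assign[v] = random.choice([c for c in range(n_colors)
                                   if conflicts(assign, v, c) == best])

total = sum(conflicts(assign, v, assign[v]) for v in range(n_vars))
print(assign, "conflicts:", total)
```

The occasional random kick keeps the search from cycling on plateaus, loosely mirroring how non-repeating phase relations prevent the hardware from revisiting the same candidate states.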
A view of Neural Networks as dynamical systems
We consider neural networks from the point of view of dynamical systems
theory. In this spirit we review recent results dealing with the following
questions, addressed in the context of specific models:
1. Characterizing the collective dynamics; 2. Statistical analysis of spike
trains; 3. Interplay between dynamics and network structure; 4. Effects of
synaptic plasticity. Comment: Review paper, 51 pages, 10 figures. Submitted.
Electricity Price Time Series Forecasting in Deregulated Markets Using Recurrent Neural Network Based Approaches
Ph.D. thesis.
Dynamical structure in neural population activity
The question of how the collective activity of neural populations in the brain gives rise to complex behaviour is fundamental to neuroscience. At the core of this question lie considerations about how neural circuits can perform computations that enable sensory perception, motor control, and decision making. It is thought that such computations are implemented by the dynamical evolution of distributed activity in recurrent circuits. Thus, identifying and interpreting dynamical structure in neural population activity is a key challenge towards a better understanding of neural computation. In this thesis, I make several contributions in addressing this challenge. First, I develop two novel methods for neural data analysis. Both methods aim to extract trajectories of low-dimensional computational state variables directly from the unbinned spike-times of simultaneously recorded neurons on single trials. The first method separates inter-trial variability in the low-dimensional trajectory from variability in the timing of progression along its path, and thus offers a quantification of inter-trial variability in the underlying computational process. The second method simultaneously learns a low-dimensional portrait of the underlying nonlinear dynamics of the circuit, as well as the system's fixed points and locally linearised dynamics around them. This approach facilitates extracting interpretable low-dimensional hypotheses about computation directly from data. Second, I turn to the question of how low-dimensional dynamical structure may be embedded within a high-dimensional neurobiological circuit with excitatory and inhibitory cell-types. I analyse how such circuit-level features shape population activity, with particular focus on responses to targeted optogenetic perturbations of the circuit. Third, I consider the problem of implementing multiple computations in a single dynamical system. 
I address this in the framework of multi-task learning in recurrently connected networks and demonstrate that a careful organisation of low-dimensional, activity-defined subspaces within the network can help to avoid interference across tasks.
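The core idea of the second analysis method, locating fixed points of the dynamics and linearising around them, can be sketched on a toy RNN. The network, weights, and optimiser below are assumptions for illustration, not the thesis's actual method.

```python
import numpy as np

# Fixed-point search for a toy RNN x_{t+1} = tanh(W x_t): minimise
# q(x) = 0.5 * ||tanh(W x) - x||^2 by gradient descent, then the Jacobian at
# the optimum gives the locally linearised dynamics.
W = np.array([[0.20, -0.30,  0.10],
              [0.00,  0.25, -0.20],
              [0.15,  0.10, -0.30]])   # weak coupling (assumed), so x* = 0

def F(x):
    return np.tanh(W @ x)

rng = np.random.default_rng(2)
x = rng.standard_normal(3)             # random initial guess
for _ in range(2000):
    r = F(x) - x                       # residual of the fixed-point condition
    J = (1.0 - F(x) ** 2)[:, None] * W # Jacobian of F at x: diag(1 - F^2) @ W
    x = x - 0.1 * (J.T @ r - r)        # exact gradient of q(x)

print(np.linalg.norm(F(x) - x))        # residual ~0 at the recovered fixed point
```

With the fixed point in hand, the eigenvalues of the Jacobian there classify it (stable node, saddle, spiral), which is the kind of interpretable low-dimensional portrait the method aims to extract from data.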
Synchrony and bifurcations in coupled dynamical systems and effects of time delay
Over the last few decades, dynamics on networks has become a rapidly growing branch of mathematics, with applications in various disciplines such as physics, biology, and sociology. The functioning of many networks relies heavily on the ability to synchronize the network's nodes. More precisely, the existence and the transverse stability of the synchronous manifold are essential properties. Only in the last few years have researchers tried to understand the entangled relation between the coupling structure of a network, given by a (di-)graph, and the stability properties of synchronous states. This is the central theme of this dissertation. I first present results towards a classification of the links in a directed, diffusive network according to their impact on the stability of synchronization. Then I investigate a complex bifurcation scenario observed in a directed ring of Stuart-Landau oscillators and show that this scenario persists under the addition of a single weak link. Subsequently, I investigate synchronous patterns in a directed ring of phase oscillators coupled with time delay. I discuss the coexistence of multiple synchronous solutions and investigate their stability and bifurcations. I apply these results by showing that a certain time-shift transformation can be used to employ the ring as a pattern-recognition device. Next, I investigate the same time-shift transformation for arbitrary coupling structures in a very general setting. I show that invariant manifolds of the flow, together with their stability properties, are conserved under the time-shift transformation. Furthermore, I determine the minimal number of delays needed to describe the system's dynamics equivalently.
Finally, I investigate a peculiar phenomenon of non-continuous transition to synchrony observed in certain classes of large random networks, generalizing a recently introduced approach for the description of large random networks to the case of delayed couplings.
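The delayed-ring setting studied in the middle chapters can be illustrated with a toy simulation (all parameters and the integration scheme are assumptions, not taken from the dissertation): a directed ring of identical phase oscillators with delayed coupling settles into a frequency-locked state whose common frequency Omega solves the self-consistency relation Omega = omega + k*sin(-Omega*tau).

```python
import numpy as np

# Directed ring of N identical phase oscillators with delayed coupling,
#     dtheta_i/dt = omega + k * sin(theta_{i-1}(t - tau) - theta_i(t)),
# integrated with an explicit Euler scheme and a history buffer for the delay.
N, omega, k, tau, dt = 10, 1.0, 0.5, 0.5, 0.01
delay_steps = int(round(tau / dt))

rng = np.random.default_rng(3)
theta = 0.1 * rng.standard_normal(N)              # small spread around in-phase
buf = [theta.copy() for _ in range(delay_steps + 1)]  # constant initial history

for _ in range(20000):
    delayed = buf[0]                              # theta(t - tau)
    theta = theta + dt * (omega + k * np.sin(np.roll(delayed, 1) - theta))
    buf.append(theta.copy())
    buf.pop(0)

# In the locked state all oscillators share one phase velocity.
vel = (buf[-1] - buf[-2]) / dt
print(np.ptp(vel))   # spread of instantaneous frequencies, ~0 when locked
```

Starting instead from widely scattered phases, the same ring can settle onto different rotating-wave solutions, a simple instance of the coexistence of synchronous solutions discussed in the abstract.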