Model-Based and Machine Learning-Based Control of Biological Oscillators
Nonlinear oscillators (dynamical systems with stable periodic orbits) arise in many systems of physical, technological, and biological interest. This dissertation investigates the dynamics of such oscillators arising in biology and develops several control algorithms to modify their collective behavior. We demonstrate that these control algorithms have potential in devising treatments for Parkinson's disease, cardiac alternans, and jet lag. Phase reduction, a classical reduction technique, has been instrumental in understanding such biological oscillators. In this dissertation, we investigate a newer reduction technique called augmented phase reduction and calculate its associated analytical expressions for six dynamically different planar systems. This helps us understand the dynamical regimes in which augmented phase reduction is advantageous over standard phase reduction. We further this study by developing a novel optimal control algorithm, based on the augmented phase reduction, to change the phase of a single oscillator using a minimum-energy input. We show that our control algorithm is effective even when a large phase change is required or when the nontrivial Floquet multiplier of the oscillator is close to unity; in such cases, the previously proposed control algorithm based on the standard phase reduction fails.
We then devise a novel framework to control a population of biological oscillators as a whole and change their collective behavior. Our first two control algorithms are Lyapunov-based, and our third is an optimal control algorithm that minimizes the control energy consumption while achieving the desired collective behavior of an oscillator population. We show that the developed control algorithms can synchronize, desynchronize, cluster, and phase-shift the population. We continue this investigation by developing two novel machine learning control algorithms, which have a simple and intelligent structure that makes them effective even with a sparse data set. We show that these algorithms are powerful enough to control a wide variety of dynamical systems, not just biological oscillators. We conclude this study by examining how the developed machine learning algorithms work in terms of phase reduction. Throughout this dissertation, we have developed these algorithms with ease of experimental implementation in mind, so that the model parameters and training data can be measured experimentally. We close the loop by carrying out robustness analysis for the developed algorithms, demonstrating their resilience to noise and thus their suitability for controlling living biological tissue. These algorithms hold great potential in devising treatments for Parkinson's disease, cardiac alternans, and jet lag.
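The synchronization and desynchronization targeted above can be illustrated with a standard textbook model rather than the dissertation's own algorithms: in the Kuramoto model, the order parameter r (near 0 for incoherence, near 1 for synchrony) rises once the coupling strength K exceeds a critical value. All parameter values below are illustrative assumptions.

```python
import numpy as np

def kuramoto_order(K, N=100, dt=0.01, steps=20000, seed=0):
    """Simulate N Kuramoto phase oscillators with coupling K;
    return the final order parameter r = |mean(exp(i*theta))|."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, N)         # heterogeneous natural frequencies
    theta = rng.uniform(0.0, 2*np.pi, N)    # random initial phases
    for _ in range(steps):
        z = np.exp(1j * theta).mean()       # complex order parameter r*e^{i*psi}
        r, psi = np.abs(z), np.angle(z)
        # mean-field form: dtheta_i/dt = omega_i + K * r * sin(psi - theta_i)
        theta += dt * (omega + K * r * np.sin(psi - theta))
    return np.abs(np.exp(1j * theta).mean())

r_weak = kuramoto_order(K=0.1)    # below critical coupling: incoherent
r_strong = kuramoto_order(K=2.0)  # above critical coupling: synchronized
```

A control input can be read as modulating K: pushing the effective coupling above or below threshold switches the population between the synchronized and desynchronized regimes.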
Global computation of phase-amplitude reduction for limit-cycle dynamics
Recent years have witnessed increasing interest in phase-amplitude reduction
of limit-cycle dynamics. Adding an amplitude coordinate to the phase coordinate
makes it possible to account for the dynamics transversal to the limit cycle
and thereby overcomes the main limitations of classic phase reduction (the
assumptions of strong convergence to the limit cycle and weak inputs). While previous studies mostly
focus on local quantities such as infinitesimal responses, a major and limiting
challenge of phase-amplitude reduction is to compute amplitude coordinates
globally, in the basin of attraction of the limit cycle.
In this paper, we propose a method to compute the full set of phase-amplitude
coordinates in the large. Our method is based on the so-called Koopman
(composition) operator and aims at computing the eigenfunctions of the operator
through Laplace averages (in combination with the harmonic balance method).
This yields a forward integration method that is not limited to two-dimensional
systems. We illustrate the method by computing the so-called isostables of
limit cycles in two, three, and four-dimensional state spaces, as well as their
responses to strong external inputs.
Comment: 26 pages
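A minimal sketch of the Laplace-average idea, using the radial part of a Stuart-Landau-type oscillator, where the isostable (Koopman) eigenfunction is known in closed form so the numerical estimate can be checked. The observable g and all numerical parameters are illustrative choices, not the paper's method:

```python
import math

# Radial dynamics of a Stuart-Landau-type oscillator: dr/dt = r*(1 - r^2).
# The limit cycle is r = 1 with Floquet exponent kappa = -2, and the Koopman
# eigenfunction (isostable coordinate) is psi(r) = (1 - r^2) / r^2 exactly.
def f(r):
    return r * (1.0 - r * r)

def rk4_step(r, dt):
    k1 = f(r); k2 = f(r + 0.5*dt*k1); k3 = f(r + 0.5*dt*k2); k4 = f(r + dt*k3)
    return r + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0

def laplace_average(r0, T=6.0, dt=1e-3):
    """Estimate psi(r0) by forward integration and the Laplace average
    e^{-kappa*t} * g(r(t)) for large t, with observable g(r) = 2*(1 - r),
    which agrees with psi to leading order near the cycle."""
    kappa = -2.0
    r, t = r0, 0.0
    while t < T:
        r = rk4_step(r, dt)
        t += dt
    return math.exp(-kappa * t) * 2.0 * (1.0 - r)

psi_exact = lambda r: (1.0 - r * r) / (r * r)
est = laplace_average(0.5)   # should approach psi_exact(0.5) = 3.0
```

The same forward-integration recipe extends to higher dimensions, which is the point of the paper's approach; here only the scalar radial equation is shown so the answer can be verified analytically.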
Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling
Identifying a coupled dynamical system out of many plausible candidates, each
of which could serve as the underlying generator of some observed measurements,
is a profoundly ill posed problem that commonly arises when modelling real
world phenomena. In this review, we detail a set of statistical procedures for
inferring the structure of nonlinear coupled dynamical systems (structure
learning), which has proved useful in neuroscience research. A key focus here
is the comparison of competing models of (i.e., hypotheses about) network
architectures and implicit coupling functions in terms of their Bayesian model
evidence. These methods are collectively referred to as dynamic causal
modelling (DCM). We focus on a relatively new approach that is proving
remarkably useful; namely, Bayesian model reduction (BMR), which enables rapid
evaluation and comparison of models that differ in their network architecture.
We illustrate the usefulness of these techniques through modelling
neurovascular coupling (cellular pathways linking neuronal and vascular
systems), whose function is an active focus of research in neurobiology and the
imaging of coupled neuronal systems.
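DCM scores hypotheses by their Bayesian model evidence. As a greatly simplified stand-in (not DCM or BMR themselves), the BIC approximation to log evidence can compare two coupling-structure hypotheses for a linear dynamical system; the system matrix, noise level, and sample size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth: two coupled variables, x_{t+1} = A_true @ x_t + noise.
A_true = np.array([[0.7, 0.3],
                   [0.2, 0.6]])
T = 500
X = np.zeros((T, 2))
X[0] = [1.0, -1.0]
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.05 * rng.standard_normal(2)

def bic_linear(X, mask):
    """Fit x_{t+1} = A x_t by least squares with A restricted to the
    boolean coupling-structure hypothesis `mask`; return the BIC score
    (lower is better: fit quality penalized by parameter count)."""
    Y, Z = X[1:], X[:-1]
    n = Y.size
    k = int(mask.sum())
    rss = 0.0
    for i in range(2):                       # fit each row of A separately
        cols = np.where(mask[i])[0]
        coef, *_ = np.linalg.lstsq(Z[:, cols], Y[:, i], rcond=None)
        resid = Y[:, i] - Z[:, cols] @ coef
        rss += float((resid ** 2).sum())
    return n * np.log(rss / n) + k * np.log(n)

bic_full = bic_linear(X, np.ones((2, 2), dtype=bool))  # coupled hypothesis
bic_diag = bic_linear(X, np.eye(2, dtype=bool))        # uncoupled hypothesis
```

Because the data were generated with cross-coupling, the coupled hypothesis attains the lower (better) BIC score; BMR performs an analogous comparison analytically across many reduced models at once.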
An isostable-coordinate-based amelioration strategy to mitigate the effects of jet lag
Commercial air travel has become extremely commonplace in the last 20 to 30 years, especially as the world has moved towards new heights of globalization. Though air travel has greatly reduced transit times, allowing people to cover thousands of miles within hours, it comes with its fair share of issues, and jet lag sits near the top of that list. Jet lag typically results from rapid travel through multiple time zones, which causes a significant misalignment between a person's internal circadian clock and the external time. A person's circadian clock is governed by a population of coupled neurons entrained to a 24-hour light-dark cycle, so after rapid air travel the neuron population needs time to become accustomed to the new time zone. This misalignment can result in a variety of health problems including, but not limited to, lethargy, insomnia, and disruption of the sleep cycle.
Various techniques have been proposed and are currently in use for jet-lag treatment, such as melatonin ingestion or making drastic changes to one's routine prior to air travel. However, these strategies typically involve long re-entrainment times or require following a strict schedule to help correct the sleep cycle. The presented work explores an alternate strategy for jet-lag treatment using the notion of operational phase and isostable coordinates for model reduction, and then applies optimal control to derive inputs that can be applied directly to the model. To show the framework's efficacy, results are presented by applying the strategy to a 2-d model; preliminary results show that the proposed approach greatly reduces the re-entrainment time required to acclimatize to the new time zone.
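To fix intuition for re-entrainment (this is a generic one-phase toy model, not the isostable-based framework of the abstract): a single circadian phase entrained by a 24-hour light cycle recovers from an 8-hour shift at a rate set by the entrainment strength K, which is an assumed value here.

```python
import math

# Toy circadian phase model:
#   dtheta/dt = omega + K * sin(Theta_light(t) - theta)
# where Theta_light advances at 2*pi/24 rad per hour.  After an abrupt
# 8-hour shift of the light cycle, the phase error decays roughly like e^{-K t}.
OMEGA = 2 * math.pi / 24.2     # intrinsic period slightly longer than 24 h
W_LIGHT = 2 * math.pi / 24.0   # external light-dark cycle
K = 0.1                        # entrainment strength per hour (assumed)

def phase_error(shift_rad, hours, dt=0.01):
    """Integrate the phase model for `hours` after the light cycle is
    shifted ahead by `shift_rad`; return the wrapped phase error."""
    theta, phi = 0.0, shift_rad
    for _ in range(int(hours / dt)):
        theta += dt * (OMEGA + K * math.sin(phi - theta))
        phi += dt * W_LIGHT
    return abs(math.atan2(math.sin(phi - theta), math.cos(phi - theta)))

err0 = phase_error(shift_rad=2*math.pi*8/24, hours=0)      # initial 8 h misalignment
err5 = phase_error(shift_rad=2*math.pi*8/24, hours=24*5)   # after five days
```

With this uncontrolled dynamics re-entrainment takes days; the abstract's point is that optimally designed light inputs can shorten that window substantially.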
Synchronization and Redundancy: Implications for Robustness of Neural Learning and Decision Making
Learning and decision making in the brain are key processes critical to
survival, yet they are implemented by non-ideal biological building blocks
that can impose significant error. We explore quantitatively how the
brain might cope with this inherent source of error by taking advantage of two
ubiquitous mechanisms, redundancy and synchronization. In particular we
consider a neural process whose goal is to learn a decision function by
implementing a nonlinear gradient dynamics. The dynamics, however, are assumed
to be corrupted by perturbations modeling the error which might be incurred due
to limitations of the biology, intrinsic neuronal noise, and imperfect
measurements. We show that error, and the associated uncertainty surrounding a
learned solution, can be controlled in large part by trading off
synchronization strength among multiple redundant neural systems against the
noise amplitude. The impact of the coupling between such redundant systems is
quantified by the spectrum of the network Laplacian, and we discuss the role of
network topology in synchronization and in reducing the effect of noise. A
range of situations in which the mechanisms we model arise in brain science are
discussed, and we draw attention to experimental evidence suggesting that
cortical circuits capable of implementing the computations of interest here can
be found on several scales. Finally, simulations comparing theoretical bounds
to the relevant empirical quantities show that the theoretical estimates we
derive can be tight.
Comment: Preprint, accepted for publication in Neural Computation
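The role of the network Laplacian spectrum mentioned above can be made concrete: the second-smallest eigenvalue (algebraic connectivity) sets the slowest rate at which diffusively coupled systems synchronize, and it differs sharply between sparse and dense topologies. The graphs below are illustrative examples, not the paper's networks.

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A of an undirected adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def algebraic_connectivity(adj):
    """Second-smallest Laplacian eigenvalue, which governs the slowest
    synchronization mode of diffusively coupled identical systems."""
    return np.sort(np.linalg.eigvalsh(laplacian(adj)))[1]

n = 6
path = np.zeros((n, n))
for i in range(n - 1):
    path[i, i + 1] = path[i + 1, i] = 1.0    # chain topology (sparse)
complete = np.ones((n, n)) - np.eye(n)       # all-to-all topology (dense)

lam2_path = algebraic_connectivity(path)         # 2*(1 - cos(pi/6)) ~ 0.268
lam2_complete = algebraic_connectivity(complete)  # equals n = 6 for K_n
```

A larger algebraic connectivity means redundant noisy units are pulled together faster, so (per the abstract's trade-off) the same noise amplitude produces a tighter spread around the learned solution.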