Variations of Boundary Surface in Chua’s Circuit
The paper compares boundary surfaces, with the help of cross-sections in three projection planes, for four variations of Chua's circuit parameters. It is known that, as the parameters change, Chua's circuit can exhibit, in addition to a stable limit cycle, a double-scroll chaotic attractor, two single-scroll chaotic attractors, or two further stable limit cycles; Chua's circuit can even start working as a binary memory. It is not yet known how changes in the parameters, and consequently in the attractors of the circuit, affect the morphology of the boundary surface. The boundary surface separates the double-scroll chaotic attractor from the stable limit cycle; in one of the parameter variations presented in this paper, the boundary surface also separates the two single-scroll chaotic attractors from each other. The division of the state space into regions of attraction for the different attractors, however, remains fundamentally the same.
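The attractor regimes described above can be reproduced numerically. The following is a minimal sketch of the dimensionless Chua equations using Matsumoto's classic double-scroll parameter set (alpha = 9, beta = 100/7, m0 = -8/7, m1 = -5/7); the specific parameter variations studied in the paper are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless Chua's circuit: the piecewise-linear function f(x) models
# Chua's diode; (alpha, beta, m0, m1) are the classic double-scroll parameters.
alpha, beta = 9.0, 100.0 / 7.0
m0, m1 = -8.0 / 7.0, -5.0 / 7.0

def f(x):
    # Three-segment piecewise-linear characteristic of Chua's diode.
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))

def chua(t, s):
    x, y, z = s
    return [alpha * (y - x - f(x)), x - y + z, -beta * y]

sol = solve_ivp(chua, (0, 100), [0.7, 0.0, 0.0], max_step=0.01)
x = sol.y[0]
# With these parameters the trajectory wanders between both scroll lobes,
# i.e. it visits both x > 0 and x < 0.
print(x.min() < 0 < x.max())
```

Changing the parameters toward the other regimes listed above moves the trajectory onto a single scroll or a limit cycle instead.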
Dynamical principles in neuroscience
Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.
Dynamics of neural systems with time delays
Complex networks are ubiquitous in nature. Numerous neurological diseases, such as Alzheimer's, Parkinson's, and epilepsy, are caused by abnormal collective behaviour of neurons in the brain. In particular, there is strong evidence that Parkinson's disease is caused by the synchronisation of neurons, and understanding how and why such synchronisation occurs will bring scientists closer to the design and implementation of appropriate control to support the desynchronisation required for normal functioning of the brain. In order to study the emergence of (de)synchronisation, it is necessary first to understand how the dynamical behaviour of the system under consideration depends on changes in system parameters. This can be done using a powerful mathematical method called bifurcation analysis, which allows one to identify and classify different dynamical regimes (for example, stable and unstable steady states, and Hopf and fold bifurcations) and to find periodic solutions by varying the parameters of the nonlinear system.
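A delay-induced Hopf bifurcation of the kind studied by bifurcation analysis can be seen in perhaps the simplest delay differential equation, x'(t) = -x(t - tau), whose zero equilibrium loses stability at the critical delay tau = pi/2. This toy equation is chosen for illustration and is not one of the thesis models.

```python
import numpy as np

def simulate(tau, t_end=60.0, dt=0.01):
    """Forward-Euler integration of x'(t) = -x(t - tau) with constant history 0.1."""
    n_delay = int(round(tau / dt))
    n = int(round(t_end / dt))
    x = np.empty(n + n_delay + 1)
    x[: n_delay + 1] = 0.1          # history on [-tau, 0]
    for i in range(n_delay, n_delay + n):
        x[i + 1] = x[i] + dt * (-x[i - n_delay])
    return x

# The equilibrium x = 0 undergoes a Hopf bifurcation at tau = pi/2.
decayed = simulate(tau=1.0)   # tau < pi/2: oscillatory decay to zero
grown = simulate(tau=2.0)     # tau > pi/2: slowly growing oscillation
print(abs(decayed[-1]), np.max(np.abs(grown)))
```

Tracking how the rightmost characteristic root crosses the imaginary axis as tau varies is exactly the kind of computation that bifurcation analysis formalises for the larger models in the thesis.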
In real-world systems, interactions between elements do not happen instantaneously
due to a finite time of signal propagation, reaction times of individual elements, etc.
Moreover, time delays are normally non-constant and may vary with time. This means
that it is vital to introduce time delays in any realistic model of neural networks. In
this thesis, I consider four different models. First, in order to analyse the fundamental
properties of neural networks with time-delayed connections, I consider a system of four
coupled nonlinear delay differential equations. This model represents a neural network,
where one subsystem receives a delayed input from another subsystem. The exciting
feature of this model is the combination of discrete and distributed time delays: the distributed delays represent the neural feedback between the two subsystems, while the discrete delays describe neural interactions within each subsystem. Stability properties are investigated for different commonly used distribution kernels, and the results are compared to the corresponding stability results for networks without distributed delays. It is shown how approximations to the boundary of the stability region of an equilibrium point can be obtained analytically for delta, uniform, and gamma delay distributions.
Numerical techniques are used to investigate stability properties of the fully nonlinear
system and confirm our analytical findings.
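The qualitative difference between discrete and distributed delays can be illustrated on a scalar toy equation (not the four-equation thesis model): a discrete delay of mean 2 destabilises x'(t) = -x(t - tau), whereas a weak gamma (exponential) kernel with the same mean delay does not. The distributed case is handled with the standard linear chain trick, which converts it to an ODE system.

```python
import numpy as np

# (a) discrete delay:  x'(t) = -x(t - 2)  -> unstable, since 2 > pi/2.
# (b) weak gamma (exponential) kernel of the same mean delay, via the linear
#     chain trick: y(t) = integral of a*exp(-a*u) * x(t - u) du with a = 1/2
#     (mean delay 1/a = 2), giving the ODE system  x' = -y,  y' = a*(x - y),
#     whose eigenvalues have negative real part for all a > 0.
dt, t_end = 0.01, 60.0
n = int(t_end / dt)

# (a) discrete delay, forward Euler with a history buffer
nd = int(2.0 / dt)
xd = np.empty(n + nd + 1)
xd[: nd + 1] = 0.1
for i in range(nd, nd + n):
    xd[i + 1] = xd[i] + dt * (-xd[i - nd])

# (b) linear chain trick, explicit Euler on the equivalent ODE system
a = 0.5
x, y = 0.1, 0.1
for _ in range(n):
    x, y = x + dt * (-y), y + dt * a * (x - y)

print(np.max(np.abs(xd)), abs(x))
```

This mirrors the comparison in the thesis between stability results with and without distributed delays: spreading the delay over a kernel can stabilise an equilibrium that the equivalent discrete delay destabilises.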
In the second part of this thesis, I consider a globally coupled network composed of active (oscillatory) and inactive (non-oscillatory) oscillators with distributed-delay coupling. Analytical conditions for amplitude death, in which the oscillations are quenched, are obtained in terms of the coupling strength, the ratio of inactive oscillators, the width of the uniformly distributed delay, and the mean time delay of the gamma distribution. The results show that, for the uniform distribution, increasing either the width of the delay distribution or the ratio of inactive oscillators enlarges the amplitude death region in the parameter space of mean time delay and coupling strength. For the gamma distribution kernel, the amplitude death region is found in the space of the ratio of inactive oscillators, the mean time delay, and the coupling strength, for both weak and strong gamma kernels.
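The amplitude death phenomenon itself can be demonstrated in a minimal setting. The sketch below uses the classical non-delayed variant with frequency mismatch (two diffusively coupled Stuart-Landau oscillators, following Aronson et al.) rather than the distributed-delay coupling analysed in the thesis; all parameter values are illustrative.

```python
import numpy as np

def run(K, w1=10.0, w2=4.0, t_end=40.0, dt=0.001):
    """Euler integration of two diffusively coupled Stuart-Landau oscillators:
    z_j' = (1 + i*w_j - |z_j|^2) * z_j + K * (z_k - z_j)."""
    z1, z2 = 1.0 + 0.0j, 0.5 + 0.5j
    for _ in range(int(t_end / dt)):
        dz1 = (1 + 1j * w1 - abs(z1) ** 2) * z1 + K * (z2 - z1)
        dz2 = (1 + 1j * w2 - abs(z2) ** 2) * z2 + K * (z1 - z2)
        z1, z2 = z1 + dt * dz1, z2 + dt * dz2
    return abs(z1), abs(z2)

# Linearising at the origin predicts death when K > 1 and the frequency
# mismatch is large enough; here |w1 - w2| = 6 >= 2K, so Re(lambda) = 1 - K.
print(run(K=2.0))   # amplitudes quenched toward zero (amplitude death)
print(run(K=0.5))   # no death: oscillations persist
```

In the thesis, the same quenching is instead induced by the distributed time delay in the coupling, and the death region is mapped analytically in the delay and coupling parameters.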
Furthermore, I analyse a model of the subthalamic nucleus (STN)-globus pallidus (GP) network with three different transmission delays. A time-shift transformation reduces the model to a system with two time delays, for which the existence of a unique steady state is established. Conditions for stability of the steady state are derived in terms of the system parameters and the time delays. Numerical stability analysis is performed using traceDDE and DDE-BIFTOOL in MATLAB to investigate different dynamical regimes in the STN-GP model, and to obtain the critical stability boundaries separating stable (healthy) and oscillatory (Parkinsonian-like) neural firing. Direct numerical simulations of the fully nonlinear system are performed to confirm the analytical findings and to illustrate different dynamical behaviours of the system.
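A direct simulation of a delayed excitatory-inhibitory rate model gives a feel for such networks. The sketch below is a generic two-population caricature in the spirit of delayed STN-GP models, not the thesis equations; the weights, delays, and gain function are illustrative assumptions.

```python
import numpy as np

def sigmoid(u):
    # Illustrative saturating firing-rate function with outputs in (0, 1).
    return 1.0 / (1.0 + np.exp(-u))

# Hypothetical excitatory (STN-like) and inhibitory (GP-like) rate units,
# coupled with transmission delays; all parameter values are placeholders.
dt, t_end = 0.001, 20.0
tau_delay = 0.01                  # transmission delay between populations (s)
w_es, w_se, w_ii = 8.0, 8.0, 2.0  # illustrative coupling weights
nd = int(tau_delay / dt)
n = int(t_end / dt)

E = np.full(n + nd + 1, 0.1)      # excitatory rate, constant history
I = np.full(n + nd + 1, 0.1)      # inhibitory rate, constant history
for k in range(nd, nd + n):
    E[k + 1] = E[k] + dt * 10 * (-E[k] + sigmoid(2.0 - w_es * I[k - nd]))
    I[k + 1] = I[k] + dt * 10 * (-I[k] + sigmoid(w_se * E[k - nd] - w_ii * I[k]))

print(E[-1], I[-1])
```

Depending on the weights and delays, such a loop either settles to a steady state or destabilises into sustained oscillations; locating that boundary analytically is what the stability analysis in the thesis does for the actual STN-GP model.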
Finally, I consider a ring of n neurons coupled through discrete and distributed time delays. I show that amplitude death occurs in a symmetric (asymmetric) region of parameter space when the number of neurons in the ring is even (odd). Analytical conditions for linear stability of the trivial steady state are presented in the parameter space of the synaptic weight of the self-feedback and the coupling strength between connected neurons, as well as in the space of the delayed self-feedback and the coupling strength between the neurons. It is shown that both Hopf and steady-state bifurcations may occur when the steady state loses its stability. Stability properties are also investigated for different commonly used distribution kernels, such as the delta function and the weak gamma distribution. The obtained analytical results are confirmed by numerical simulations of the fully nonlinear system.
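The role of ring parity can be glimpsed already in the delay-free connectivity. For a bidirectional ring with self-feedback w and nearest-neighbour coupling c (a simplified stand-in for the thesis model, with illustrative values), the connectivity matrix is circulant with eigenvalues w + 2c*cos(2*pi*k/n); for even n the extreme value w - 2c is attained (k = n/2), while for odd n it is not, which is one source of even/odd asymmetry in stability regions.

```python
import numpy as np

def ring_eigs(n, w=-1.0, c=0.5):
    """Real, sorted eigenvalues of a bidirectional ring coupling matrix:
    self-feedback w on the diagonal, coupling c to both nearest neighbours."""
    A = np.zeros((n, n))
    np.fill_diagonal(A, w)
    for j in range(n):
        A[j, (j + 1) % n] = c
        A[j, (j - 1) % n] = c
    return np.sort(np.linalg.eigvals(A).real)

ev6, ev7 = ring_eigs(6), ring_eigs(7)
expected6 = np.sort(-1.0 + 1.0 * np.cos(2 * np.pi * np.arange(6) / 6))
print(np.allclose(ev6, expected6))  # circulant closed form matches
print(ev6[0], ev7[0])               # extreme eigenvalue differs with parity
```

With delays added, each of these eigenvalues feeds into a transcendental characteristic equation, which is where the Hopf and steady-state bifurcations analysed in the thesis arise.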
Brain Computations and Connectivity [2nd edition]
This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations.
Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems; and how the computations are performed.
The aim of this book is to elucidate what is computed in different brain systems; and to describe current biologically plausible computational approaches and models of how each of these brain systems computes.
Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease; and to artificial intelligence which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions.
This book is pioneering in taking this approach to brain function: considering what is computed by many of our brain systems, and how it is computed. It updates, with much new evidence including the connectivity of the human brain, the earlier book Rolls (2021) Brain Computations: What and How, Oxford University Press.
Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience, or from medical sciences including neurology and psychiatry, or from the area of computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.
A complex systems approach to education in Switzerland
The insights gained from the study of complex systems in biological, social, and engineered settings enable us not only to observe and understand, but also to actively design systems capable of successfully coping with complex and dynamically changing situations. The methods and mindset required for this approach have been applied to educational systems with their diverse levels of scale and complexity. Based on the general case made by Yaneer Bar-Yam, this paper applies the complex systems approach to the educational system in Switzerland. It confirms that the complex systems approach is valid: indeed, many recommendations made for the general case have already been implemented in the Swiss education system. To address existing problems and difficulties, further steps are recommended. This paper contributes to the further establishment of the complex systems approach by shedding light on an area which concerns us all, which is a frequent topic of discussion and dispute among politicians and the public, where billions of dollars have been spent without achieving the desired results, and where it is difficult to directly derive consequences from actions taken. The analysis of the education system's different levels, their complexity, and their scale will clarify how such a dynamic system should be approached and how it can be guided towards the desired performance.