9 research outputs found

    Dynamical principles in neuroscience

    Full text link
    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades, and dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters; these make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, the first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail, and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of two stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.
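
    As a concrete illustration of the kind of dynamical model of neural rhythmic behavior the review refers to, the sketch below (not taken from the paper, and with parameter values assumed here) integrates the Hindmarsh-Rose neuron, a standard three-variable model that produces rhythmic and, for these parameters, chaotic bursting.

```python
# Minimal illustrative sketch: Hindmarsh-Rose bursting neuron (assumed parameters).
import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, state, I=3.25, r=0.006, s=4.0, x_rest=-1.6):
    """Single-neuron bursting model: membrane potential x, fast recovery y, slow adaptation z."""
    x, y, z = state
    dx = y + 3.0 * x**2 - x**3 - z + I    # fast voltage dynamics
    dy = 1.0 - 5.0 * x**2 - y             # fast recovery variable
    dz = r * (s * (x - x_rest) - z)       # slow adaptation current
    return [dx, dy, dz]

sol = solve_ivp(hindmarsh_rose, (0.0, 2000.0), [-1.5, 0.0, 2.0],
                t_eval=np.linspace(0.0, 2000.0, 20000), rtol=1e-8, atol=1e-8)
x_trace = sol.y[0]
print("membrane-potential range:", x_trace.min(), "to", x_trace.max())  # spread of the bursting variable
```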

    Stochastic and complex dynamics in mesoscopic brain networks

    Get PDF
    The aim of this thesis is to deepen our understanding of the mechanisms responsible for the generation of complex and stochastic dynamics, as well as emergent phenomena, in the human brain. We study the typical features of the mesoscopic scale, i.e., the scale at which the dynamics is given by the activity of thousands or even millions of neurons. At this scale the synchronous activity of large neuronal populations gives rise to collective oscillations of the mean potential. These oscillations can easily be recorded with electroencephalography (EEG) or by measuring Local Field Potentials (LFPs). In Chapter 5 we show how the communication between two cortical columns (mesoscopic structures) can be mediated efficiently by a microscopic neural network. We use the synchronization of both cortical columns as a probe to ensure that effective communication is established between the three neural structures. Our results indicate that certain dynamical regimes of the microscopic neural network favor correct communication between the cortical columns: if the LFP frequency of the neural network is around 40 Hz, the synchronization between the cortical columns is more robust than when the network oscillates at a lower frequency (10 Hz). The topological characteristics of the microscopic network also influence communication, with a small-world structure being the one that best promotes synchronization of the cortical columns. Finally, this chapter shows that the mediation exerted by the neural network cannot be replaced by the average of its activity; that is, the dynamical properties of the microscopic neural network are essential for the proper transmission of information between all neural structures. Oscillatory brain electrical activity depends largely on the interplay between excitation and inhibition. In Chapter 6 we study how groups of cortical columns display complex patterns of excitation and inhibition depending on their topology and coupling strengths. The cortical columns segregate into those dominated by excitation and those dominated by inhibition, which affects the synchronization properties of networks of cortical columns. In Chapter 7 we study a dynamical regime in which complex patterns of synchronization between chaotic oscillators appear spontaneously in a network. We show which conditions a set of coupled dynamical systems must fulfill in order to display heterogeneity in synchronization. These results relate to the complex phenomenon of synchronization in the brain, which is an active area of study. Finally, in Chapter 8 we study the ability of the brain to compute and process information. The novelty here is our use of complex synchronization in the brain to implement basic elements of Boolean computation. We show that partial synchronization of brain oscillations establishes a code in terms of synchronization/non-synchronization (1/0, respectively), with which all simple Boolean functions (AND, OR, XOR, etc.) can be implemented. We also show that complex Boolean functions, such as a flip-flop memory, can be constructed in terms of states of dynamic synchronization of brain oscillations.
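
    The synchronization/non-synchronization code of Chapter 8 can be pictured with a short sketch. The one below is an illustrative assumption, not the thesis implementation: it thresholds the phase-locking value between two oscillatory signals to obtain a Boolean value (signals, noise level, and threshold are all assumed).

```python
# Minimal sketch: map synchronization (1) vs. non-synchronization (0) of two
# LFP-like signals to a bit via the phase-locking value. All values are assumed.
import numpy as np
from scipy.signal import hilbert

def sync_bit(x, y, threshold=0.8):
    """Return 1 if the phase-locking value between x and y exceeds the threshold."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    plv = np.abs(np.mean(np.exp(1j * dphi)))   # phase-locking value in [0, 1]
    return int(plv > threshold)

t = np.arange(0.0, 2.0, 1e-3)                  # 2 s sampled at 1 kHz
rng = np.random.default_rng(0)
noise = lambda: 0.2 * rng.standard_normal(t.size)
lfp_a = np.sin(2 * np.pi * 40 * t) + noise()         # 40 Hz oscillation
lfp_b = np.sin(2 * np.pi * 40 * t + 0.3) + noise()   # phase-locked to lfp_a
lfp_c = np.sin(2 * np.pi * 43 * t) + noise()         # drifts relative to lfp_a

print(sync_bit(lfp_a, lfp_b))   # synchronized pair   -> expected bit 1
print(sync_bit(lfp_a, lfp_c))   # unsynchronized pair -> expected bit 0
```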

    The dynamics of neural codes in biological and artificial neural networks

    Full text link
    Advancing our knowledge of how the brain processes information remains a key challenge in neuroscience. This thesis combines three different approaches to the study of the dynamics of neural networks and their encoding representations: a computational approach, which builds upon basic biological features of neurons and their networks to construct effective models that can simulate their structure and dynamics; a machine-learning approach, which draws a parallel with the functional capabilities of brain networks, allowing us to infer the dynamical and encoding properties required to solve certain input-processing tasks; and a final, theoretical treatment, which takes us into the fascinating hypothesis of the "critical" brain as the mathematical foundation that can explain the emergent collective properties arising from the interactions of millions of neurons. Hand in hand with physics, we venture into the realm of neuroscience to explain the existence of quasi-universal scaling properties across brain regions, setting out to quantify the distance of their dynamics from a critical point. Next, we move into the grounds of artificial intelligence, where the very same theory of critical phenomena proves useful for explaining the effects of biologically inspired plasticity rules on the forecasting ability of Reservoir Computers. Halfway into our journey, we explore the concept of neural representations of external stimuli, unveiling a surprising link between the dynamical regime of neural networks and the optimal topological properties of such representation manifolds. The thesis ends with the singular problem of representational drift in the process of odor encoding carried out by the olfactory cortex, uncovering the potential synaptic plasticity mechanisms that could explain this recently observed phenomenon. Comment: A dissertation submitted to the University of Granada in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
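
    Since the thesis discusses how plasticity rules shape the forecasting ability of Reservoir Computers, a minimal echo state network may help fix ideas. The sketch below is a generic illustration, not the thesis setup: reservoir size, leak rate, spectral radius, and the target signal are assumptions.

```python
# Minimal echo state network sketch: random recurrent reservoir + ridge readout
# trained to forecast the next value of a driving signal. All values are assumed.
import numpy as np

rng = np.random.default_rng(1)
n_res, leak, rho = 200, 0.3, 0.9

W_in = rng.uniform(-0.5, 0.5, size=n_res)             # input weights
W = rng.normal(size=(n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))       # rescale to spectral radius rho

u = np.sin(0.2 * np.arange(3000))                     # signal to forecast one step ahead
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t in range(len(u) - 1):
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u[t])
    states[t + 1] = x                                 # state that has seen u[0..t]

washout, lam = 200, 1e-6
X, y = states[washout:-1], u[washout:-1]              # predict u[t] from a state driven only by u[..t-1]
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)   # ridge-regression readout
print("forecast:", states[-1] @ W_out, " true:", u[-1])
```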

    Control of chaos in nonlinear circuits and systems

    Get PDF
    Nonlinear circuits and systems, such as electronic circuits (Chapter 5), power converters (Chapter 6), human brains (Chapter 7), phase-locked loops (Chapter 8), and sigma-delta modulators (Chapter 9), are found almost everywhere. Understanding the nonlinear behaviours of these circuits and systems, as well as controlling them, is important for practical engineering applications. Control theories for linear circuits and systems are well developed and almost complete. Different nonlinear circuits and systems, however, can exhibit very different behaviours, so it is difficult to unify a general control theory for them, and control theories for nonlinear circuits and systems are still very limited. The objective of this book is to review state-of-the-art chaos control methods for some common nonlinear circuits and systems, such as those listed above, and to stimulate further research and development in chaos control for nonlinear circuits and systems. This book consists of three parts.

The first part consists of reviews of general chaos control methods. In particular, a time-delayed approach written by H. Huang and G. Feng is reviewed in Chapter 1. A master-slave synchronization problem for chaotic Lur’e systems is considered. Delay-independent and delay-dependent synchronization criteria are derived based on the H∞ performance. The design of the time-delayed feedback controller can be accomplished by checking the feasibility of linear matrix inequalities. In Chapter 2, a fuzzy-model-based approach written by H.K. Lam and F.H.F. Leung is reviewed. The synchronization of chaotic systems subject to parameter uncertainties is considered. A chaotic system is first represented by a fuzzy model, and a switching controller is then employed to synchronize the systems. Stability conditions in terms of linear matrix inequalities are derived based on Lyapunov stability theory. The tracking performance and parameter design of the controller are formulated as a generalized eigenvalue minimization problem, which is solved numerically via convex programming techniques. In Chapter 3, a sliding mode control approach written by Y. Feng and X. Yu is reviewed. Three kinds of sliding mode control methods, traditional sliding mode control, terminal sliding mode control and non-singular terminal sliding mode control, are employed to control a chaotic system towards two different objectives: forcing the system states to converge to zero, or tracking desired trajectories. Observer-based chaos synchronization for chaotic systems with a single nonlinearity and with multiple nonlinearities is also presented. In Chapter 4, an optimal control approach written by C.Z. Wu, C.M. Liu, K.L. Teo and Q.X. Shao is reviewed. Systems with nonparametric regression with jump points are considered. The rough locations of all possible jump points are identified using existing kernel methods. A smooth spline function is used to approximate each segment of the regression function. A time-scaling transformation is derived so as to map the undecided jump points to fixed points. The approximation problem is formulated as an optimization problem and solved via existing optimization tools.

The second part consists of reviews of chaos control for continuous-time systems. In particular, chaos control for Chua’s circuits written by L.A.B. Tôrres, L.A. Aguirre, R.M. Palhares and E.M.A.M. Mendes is discussed in Chapter 5. An inductorless Chua’s circuit realization is presented, and practical issues such as data analysis, mathematical modelling and dynamical characterization are discussed. The tradeoff among the control objective, the control energy and the model complexity is derived. In Chapter 6, chaos control for pulse-width-modulation current-mode single-phase H-bridge inverters written by B. Robert, M. Feki and H.H.C. Iu is discussed. A time-delayed feedback controller is used in conjunction with the proportional controller, in its simple as well as its extended form, to stabilize the desired periodic orbit for larger values of the proportional controller gain. This method is very robust and easy to implement. In Chapter 7, chaos control of epileptiform bursting in the brain written by M.W. Slutzky, P. Cvitanovic and D.J. Mogul is discussed. Chaos analysis and chaos control algorithms for manipulating seizure-like behaviour in a brain slice model are presented. The techniques provide a nonlinear control pathway for terminating or potentially preventing epileptic seizures in the whole brain.

The third part consists of reviews of chaos control for discrete-time systems. In particular, chaos control for phase-locked loops written by A.M. Harb and B.A. Harb is discussed in Chapter 8. A nonlinear controller based on the theory of backstepping is designed so that the phase-locked loops will not lose lock and will not exhibit Hopf bifurcation or chaotic behaviours. In Chapter 9, chaos control for sigma-delta modulators written by B.W.K. Ling, C.Y.F. Ho and J.D. Reiss is discussed. A fuzzy impulsive control approach is employed for the control of sigma-delta modulators. The local stability criterion and the condition for the occurrence of limit cycle behaviours are derived. Based on the derived conditions, a fuzzy impulsive control law is formulated so that the occurrence of limit cycle behaviours, the effect of audio clicks, and the distance between the state vectors and an invariant set are minimized, assuming the invariant set is nonempty. The state vectors can be bounded within any arbitrary nonempty region no matter what the input step size, the initial condition and the filter parameters are. The editors are much indebted to the editor of the World Scientific Series on Nonlinear Science, Prof. Leon Chua, and to Senior Editor Miss Lakshmi Narayan for their help and congenial processing of the edition.
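
    Several chapters rely on time-delayed feedback control. A minimal sketch of the generic Pyragas scheme applied to the Rössler system is given below; it is an illustration under assumed gain, delay, and integration scheme, not code from the book, and whether the orbit is actually stabilized depends on those choices.

```python
# Minimal sketch of Pyragas time-delayed feedback control on the Rossler system:
# u(t) = K * (y(t - tau) - y(t)) vanishes if the trajectory settles onto a
# periodic orbit of period tau. Gain, delay, and step size are assumptions.
import numpy as np

a, b, c = 0.2, 0.2, 5.7               # standard chaotic Rossler parameters
K, tau, dt = 0.2, 5.9, 0.001          # feedback gain and delay (assumed values)
steps = 150_000
delay_steps = int(round(tau / dt))

x, y, z = 1.0, 1.0, 0.0
y_hist = np.zeros(delay_steps)        # circular buffer storing y over one delay interval
u_log = np.zeros(steps)

for n in range(steps):
    y_delayed = y_hist[n % delay_steps]
    u = K * (y_delayed - y) if n >= delay_steps else 0.0   # control off until history exists
    y_hist[n % delay_steps] = y
    dx = -y - z
    dy = x + a * y + u                 # feedback injected into the y-equation
    dz = b + z * (x - c)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz         # explicit Euler step
    u_log[n] = abs(u)

print("mean |u| just after switch-on:", u_log[delay_steps:2 * delay_steps].mean())
print("mean |u| at the end          :", u_log[-delay_steps:].mean())   # small if an orbit of period ~tau is stabilized
```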

    Complex Concentrated Alloys (CCAs)

    Get PDF
    This book is a collection of several unique articles on the current state of research on complex concentrated alloys, as well as their compelling future opportunities in wide-ranging applications. Complex concentrated alloys consist of multiple principal elements and represent a new paradigm in structural alloy design. They show a range of exceptional properties that are unachievable in conventional alloys, including a high strength–ductility combination, oxidation resistance, corrosion/wear resistance, and excellent high-temperature properties. The research articles, reviews, and perspectives are intended to provide a holistic view of this multidisciplinary subject of interest to scientists and engineers.

    Book of abstracts

    Get PDF

    Generalized averaged Gaussian quadrature and applications

    Get PDF
    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss–Kronrod formulas do not exist, and can be used as an adequate alternative in order to estimate the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
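
    For orientation, the sketch below implements the simpler averaged rule of Laurie for the Legendre weight: the n-point Gauss rule is averaged with the (n+1)-point anti-Gaussian rule, and their difference yields an error estimate. The optimal generalized averaged formulas of the abstract refine this construction; the parameters and test integrand here are illustrative assumptions.

```python
# Minimal sketch: Laurie's anti-Gaussian and averaged Gaussian rules for the
# Legendre weight on [-1, 1], built via the Golub-Welsch eigenvalue method.
import numpy as np

def legendre_beta(k):
    """Three-term recurrence coefficient beta_k for Legendre polynomials."""
    return k * k / (4.0 * k * k - 1.0)

def rule_from_jacobi(alpha, beta_offdiag, mu0=2.0):
    """Golub-Welsch: nodes and weights from a symmetric tridiagonal Jacobi matrix."""
    J = (np.diag(alpha)
         + np.diag(np.sqrt(beta_offdiag), 1)
         + np.diag(np.sqrt(beta_offdiag), -1))
    nodes, vecs = np.linalg.eigh(J)
    weights = mu0 * vecs[0, :] ** 2        # mu0 = integral of the weight function
    return nodes, weights

def gauss_and_anti(n):
    beta = np.array([legendre_beta(k) for k in range(1, n + 1)])  # beta_1 .. beta_n
    xg, wg = rule_from_jacobi(np.zeros(n), beta[:-1])             # n-point Gauss rule
    beta_anti = beta.copy()
    beta_anti[-1] *= 2.0                                          # anti-Gaussian modification
    xa, wa = rule_from_jacobi(np.zeros(n + 1), beta_anti)         # (n+1)-point anti-Gaussian rule
    return (xg, wg), (xa, wa)

f = lambda x: np.exp(x) * np.cos(2 * x)    # assumed test integrand on [-1, 1]
(xg, wg), (xa, wa) = gauss_and_anti(5)
gauss, anti = wg @ f(xg), wa @ f(xa)
averaged = 0.5 * (gauss + anti)            # averaged Gaussian rule
print("averaged rule:", averaged, " Gauss-error estimate:", abs(anti - gauss) / 2)
```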

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Get PDF
    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. It is the aim of the seminar to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.