
    Sparse Gamma Rhythms Arising through Clustering in Adapting Neuronal Networks

    Gamma rhythms (30–100 Hz) are an extensively studied synchronous brain state responsible for a number of sensory, memory, and motor processes. Experimental evidence suggests that fast-spiking interneurons carry the high-frequency components of the rhythm, while regular-spiking pyramidal neurons fire sparsely. We propose that a combination of spike frequency adaptation and global inhibition may be responsible for this behavior. Excitatory neurons form several clusters that fire every few cycles of the fast oscillation. This is first shown in a detailed biophysical network model and then analyzed thoroughly in an idealized model. We exploit the fact that the timescale of adaptation is much slower than that of the other variables. Singular perturbation theory is used to derive an approximate periodic solution for a single spiking unit. This is then used to predict how the number of clusters arising spontaneously in the network scales with the adaptation time constant. We compare this to a complementary analysis that employs a weak coupling assumption to predict the first Fourier mode to destabilize from the incoherent state of an associated phase model as the external noise is reduced. Both approaches predict the same scaling of cluster number with respect to the adaptation time constant, which is corroborated in numerical simulations of the full system. Thus, we develop several testable predictions regarding the formation and characteristics of gamma rhythms with sparsely firing excitatory neurons.
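
    The proposed clustering mechanism can be illustrated with a toy simulation. The following is a minimal Python sketch, not the paper's biophysical model: N adapting leaky integrate-and-fire neurons coupled through fast global inhibition, with all parameter values chosen for illustration only. Because adaptation is much slower than the membrane and synaptic timescales, the population splits into clusters that take turns firing on successive cycles of the fast rhythm.

    import numpy as np

    rng = np.random.default_rng(0)
    N, T, dt = 200, 2.0, 1e-4                  # neurons, duration (s), step (s)
    tau_m, tau_a, tau_s = 0.01, 0.1, 0.005     # membrane, adaptation, synapse (s)
    I_ext, b, g_inh = 1.5, 1.0, 2.0            # drive, adaptation jump, inhibition

    v = rng.uniform(0.0, 1.0, N)               # membrane potentials (threshold 1)
    a = np.zeros(N)                            # spike-frequency adaptation variables
    s = 0.0                                    # global inhibitory synaptic variable
    spikes = []

    for step in range(int(T / dt)):
        v += dt * (-v + I_ext - a - g_inh * s) / tau_m
        a -= dt * a / tau_a
        s -= dt * s / tau_s
        fired = v >= 1.0
        if fired.any():
            v[fired] = 0.0                     # reset after a spike
            a[fired] += b                      # spike-triggered adaptation increment
            s += fired.sum() / N               # every spike recruits global inhibition
            spikes.extend((step * dt, i) for i in np.flatnonzero(fired))

    # A raster of `spikes` shows neurons segregating into clusters that fire
    # every few cycles of the fast network oscillation, as described above.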

    The Dynamic Brain in Action: Cortical Oscillations and Coordination Dynamics

    Cortical oscillations are electrical activities of a rhythmic and/or repetitive nature, generated spontaneously and in response to stimuli. The study of cortical oscillations has become an area of converging interests over the last two decades and has deepened our understanding of their physiological basis across different behavioral states. Experimental and modeling work has taught us that there is a wide diversity of cellular and circuit mechanisms underlying the generation of cortical rhythms. A wildly diverse set of functions has been attributed to synchronous oscillations, but their significance for cognition is better appraised in the more general framework of correlations between the spike times of neurons. Oscillations are a core mechanism for adjusting neuronal interactions and shaping the temporal coordination of neural activity. In the first part of this thesis, we review essential features of cortical oscillations in membrane potentials and local field potentials recorded from an ex vivo turtle preparation. We then develop a simple computational model that reproduces the observed features. This modeling investigation suggests a plausible mechanism for rhythmogenesis grounded in cellular and circuit properties. The second part of the thesis concerns temporal coordination dynamics quantified by signal and noise correlations. Here, again, we present a computational model to show how temporal coordination and synchronous oscillations can be sewn together and, more importantly, which biophysical ingredients are necessary for a network to reproduce the observed coordination dynamics.
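
    The abstract does not spell out the model equations, so as a generic, hedged illustration of rhythmogenesis through coupled excitation and inhibition, the sketch below integrates a Wilson-Cowan-style rate model; all parameter values are illustrative assumptions, not values from the thesis.

    import numpy as np

    def f(x):                                     # sigmoidal population activation
        return 1.0 / (1.0 + np.exp(-x))

    dt, T = 1e-4, 1.0                             # step and duration (s)
    tau_e, tau_i = 0.005, 0.010                   # population time constants (s)
    w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0
    theta_e, theta_i, I_e = 4.0, 3.7, 1.5         # thresholds and external drive

    E, I, trace = 0.1, 0.1, []
    for step in range(int(T / dt)):
        dE = (-E + f(w_ee * E - w_ei * I + I_e - theta_e)) / tau_e
        dI = (-I + f(w_ie * E - w_ii * I - theta_i)) / tau_i
        E, I = E + dt * dE, I + dt * dI
        trace.append(E)

    # For suitable couplings, the slower inhibition periodically shuts down the
    # excitatory population, so `trace` oscillates at a network-level rhythm.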

    On the Self-Organization of a Hierarchical Memory Structure for Compositional Object Representation in the Visual Cortex

    At present, there is a huge gap between artificial and biological information processing systems in terms of their capability to learn. This gap could certainly be reduced by gaining more insight into higher brain functions such as learning and memory. For instance, the primate visual cortex is thought to provide the long-term memory for visual objects acquired through experience. The visual cortex effortlessly handles arbitrarily complex objects by rapidly decomposing them into constituent components of much lower complexity along hierarchically organized visual pathways. How this processing architecture self-organizes into a memory domain employing such compositional object representation by learning from experience remains largely a riddle. The study presented here approaches this question by proposing a functional model of a self-organizing hierarchical memory network. The model is based on hypothetical neuronal mechanisms involved in cortical processing and adaptation. The network architecture comprises two consecutive layers of distributed, recurrently interconnected modules. Each module is identified with a localized cortical cluster of fine-scale excitatory subnetworks. A single module performs competitive unsupervised learning on the incoming afferent signals to form a suitable representation of the locally accessible input space. The network employs an operating scheme in which ongoing processing consists of discrete successive fragments termed decision cycles, presumably identifiable with the fast gamma rhythms observed in the cortex. The cycles are synchronized across the distributed modules, which produce highly sparse activity within each cycle by instantiating a local winner-take-all-like operation. Equipped with adaptive mechanisms of bidirectional synaptic plasticity and homeostatic activity regulation, the network is exposed to natural face images of different persons. The images are presented incrementally, one per cycle, to the lower network layer as a set of Gabor filter responses extracted from local facial landmarks, without any person identity labels. In the course of unsupervised learning, the network simultaneously creates vocabularies of reusable local face appearance elements, captures relations between the elements by associatively linking those parts that encode the same face identity, develops higher-order identity symbols for the memorized compositions, and projects this information back onto the vocabularies in a generative manner. This learning corresponds to the simultaneous formation of bottom-up, lateral, and top-down synaptic connectivity within and between the network layers. In the mature connectivity state, the network thus holds a full compositional description of the experienced faces in the form of sparse memory traces residing in the feed-forward and recurrent connectivity. Due to the generative nature of the established representation, the network is able to recreate the full compositional description of a memorized face in terms of all its constituent parts given only its higher-order identity symbol or a subset of its parts. In the test phase, the network successfully proves its ability to recognize the identity and gender of persons from alternative face views not shown before. An intriguing feature of the emerging memory network is its ability to self-generate activity spontaneously in the absence of external stimuli.
In this sleep-like off-line mode, the network shows a self-sustaining replay of the memory content formed during previous learning. Remarkably, recognition performance is tremendously boosted after this off-line memory reprocessing. The boost is more pronounced for face views that deviate more strongly from the original view shown during learning, indicating that off-line memory reprocessing during the sleep-like state specifically improves the generalization capability of the memory network. The positive effect turns out to be surprisingly independent of synapse-specific plasticity, relying completely on the synapse-unspecific, homeostatic regulation of activity across the memory network. The developed network thus demonstrates functionality not shown by any previous neuronal modeling approach. It forms and maintains a memory domain for compositional, generative object representation in an unsupervised manner through experience with natural visual images, using both on-line ("wake") and off-line ("sleep") learning regimes. This functionality offers a promising departure point for further studies aiming at deeper insight into the learning mechanisms employed by the brain and at their subsequent implementation in artificial adaptive systems for solving complex tasks not tractable so far.
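
    One building block described above, a module applying a local winner-take-all-like operation once per decision cycle, can be sketched compactly. The Python code below is generic competitive Hebbian learning, a hedged simplification rather than the thesis's two-layer architecture with lateral and top-down connectivity; the module size, learning rate, and the decision_cycle helper are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n_units, n_inputs, lr = 8, 40, 0.05
    W = rng.normal(0.0, 0.1, (n_units, n_inputs))
    W /= np.linalg.norm(W, axis=1, keepdims=True)      # keep weight rows normalized

    def decision_cycle(x):
        """One discrete cycle: compete, pick a sparse winner, adapt its weights."""
        x = x / (np.linalg.norm(x) + 1e-12)
        winner = int(np.argmax(W @ x))                 # winner-take-all-like step
        W[winner] += lr * (x - W[winner])              # Hebbian move toward the input
        W[winner] /= np.linalg.norm(W[winner])         # homeostatic-style rescaling
        return winner

    # One input pattern per cycle, mimicking "one image per decision cycle".
    for _ in range(1000):
        decision_cycle(rng.normal(size=n_inputs))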

    Neuronal oscillations: from single-unit activity to emergent dynamics and back

    The main objective of this thesis is to better understand information processing in neuronal networks in the presence of subthreshold oscillations. Most neurons propagate their electrical activity via chemical synapses, which are activated only when the electric current passing through them surpasses a certain threshold. Therefore, the fast and intense discharges produced at the neuronal soma (the action potentials, or spikes) are considered the basic unit of neuronal information. The neuronal code is understood, then, as a binary language that expresses any message (sensory stimuli, memories, etc.) in a train of action potentials. However, no cognitive function resides in the dynamics of a single neuron. Circuits of thousands of interconnected neurons give rise to certain rhythms, revealed in measures of collective activity such as electroencephalograms (EEG) and local field potentials (LFP). Synchronization of the action potentials of individual cells, triggered by stochastic fluctuations of the synaptic currents, causes this periodicity at the network level. To understand whether these rhythms are involved in the neuronal code, we studied three situations. First, in Chapter 2, we showed how an open chain of neurons with an intrinsically oscillatory membrane potential filters a periodic signal arriving at one of its ends. The response of each neuron (to spike or not) depends on its phase, so that each cell receives a message filtered by the preceding one. Each presynaptic action potential causes a phase change in the postsynaptic neuron that depends on its position in phase space. The input periods that are able to synchronize the subthreshold oscillations are those that keep the arrival phase of the action potentials fixed along the chain. The original message reaches the last neuron intact provided that this phase allows the discharge of the transmembrane voltage. In the second case, we studied a neuronal network with both short-range and long-range connections, in which the subthreshold oscillations emerge from the collective activity apparent in the synaptic currents (or, equivalently, in the LFP). The inhibitory neurons provide a rhythm to the excitability of the network: when inhibition is low, the likelihood of a global discharge of the neuronal population is high.
In Chapter 3 we show how this rhythm opens a gap in the discharge frequency of the neurons: they either fire single, well-spaced spikes or fire high-intensity bursts. The LFP phase determines the state of the neuronal network, encoding the activity of the population: its minima indicate the simultaneous discharge of many neurons that have occasionally crossed the excitability threshold due to a global decrease in inhibition, while its maxima indicate the coexistence of bursts at different points of the network due to local decreases of inhibition during global states of excitation. This dynamics is made possible by the dominance of inhibition over excitation. In Chapter 4 we consider coupling between two neuronal networks in order to study the interaction between different rhythms. The oscillations indicate recurrence in the synchronization of collective activity, so that during these time windows a population optimizes its impact on a target network. When the rhythms of the emitting and receiving populations differ significantly, communication efficiency decreases, since the phases of maximum response of the two LFP signals do not maintain a constant difference. Finally, in Chapter 5 we studied how the collective oscillations typical of the sleep state give rise to the phenomenon of stochastic coherence: for an optimal noise intensity, modulated by the excitability of the network, the LFP reaches maximal regularity, giving rise to a refractory period of the neuronal population. In summary, this thesis presents scenarios of interaction between action potentials, characteristic of the dynamics of individual neurons, and subthreshold oscillations, an outcome of the coupling between cells and ubiquitous in the dynamics of neuronal populations. The results ascribe functionality to these emergent rhythms, which synchronize and modulate neuronal discharges and regulate communication between neuronal networks.
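
    The stochastic coherence result of Chapter 5 has a classic single-unit analogue, coherence resonance, that is easy to demonstrate numerically. The sketch below is a hedged toy, not the thesis's network model: it measures the regularity of noise-induced spiking in a single excitable FitzHugh-Nagumo unit, where the coefficient of variation (CV) of interspike intervals is typically smallest at an intermediate noise intensity. All parameters are illustrative assumptions.

    import numpy as np

    def isi_cv(D, T=5000.0, dt=0.05, seed=0):
        """CV of interspike intervals of an excitable FHN unit at noise level D."""
        rng = np.random.default_rng(seed)
        v, w = -1.2, -0.6                      # rest near the stable fixed point
        spikes, armed = [], True
        for step in range(int(T / dt)):
            v += dt * (v - v**3 / 3.0 - w) + np.sqrt(2.0 * D * dt) * rng.normal()
            w += dt * 0.08 * (v + 0.7 - 0.8 * w)
            if armed and v > 1.0:              # upward threshold crossing = spike
                spikes.append(step * dt)
                armed = False
            elif v < 0.0:                      # re-arm after the excursion
                armed = True
        isi = np.diff(spikes)
        return isi.std() / isi.mean() if isi.size > 2 else float("nan")

    for D in (0.001, 0.01, 0.1):
        print(D, round(isi_cv(D), 3))          # CV is expected to dip at mid noise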

    Interacting Mechanisms Driving Synchrony in Neural Networks with Inhibitory Interneurons

    Computational neuroscience contributes to our understanding of the brain by applying techniques from fields including mathematics, physics, and computer science to neuroscientific problems that are not amenable to purely biological study. One area in which this interdisciplinary research is particularly valuable is the proposal and analysis of mechanisms underlying neural network behaviors. Neural synchrony, especially when driven by inhibitory interneurons, is a behavior of particular importance, given that it plays a role in the neural oscillations underlying important brain functions such as memory formation and attention. Typically, these oscillations arise from synchronous firing of a neural population, and thus the study of neural oscillations and of neural synchrony are deeply intertwined. Such network behaviors are particularly amenable to computational analysis given the variety of mathematical techniques of use in this field. Inhibitory interneurons are thought to drive synchrony in ways described by two computational mechanisms: Interneuron Network Gamma (ING), which describes how an inhibitory network synchronizes itself, and Pyramidal Interneuron Network Gamma (PING), which describes how a population of interneurons interconnected with a population of excitatory pyramidal cells (an E-I network) synchronizes both populations. As first articulated using simplified interneuron models, these mechanisms identify network properties as the primary impetus for synchrony. However, as neurobiologists uncover interneurons exhibiting a vast array of cellular and intra-connectivity properties, our understanding of how interneurons drive oscillations must account for this diversity. This necessitates an investigation of how changing interneuron properties might disrupt the predictions of ING and PING, and whether other mechanisms might interact with or disrupt these network-driven mechanisms. In my dissertation, I approach this topic using the Type I and Type II neuron classifications, which refer to properties derived from the mathematics of coupled oscillators. The classic ING and PING literature typically utilizes Type I neurons, which always respond to an excitatory perturbation with an advance of the subsequent action potential. However, many interneurons exhibit Type II properties, responding to some excitatory perturbations with a delay of the subsequent action potential. Interneuronal diversity is also reflected in the strength and density of the synaptic connections between these neurons, which are likewise explored in this work. My research reveals a variety of ways in which interneuronal diversity alters synchronous oscillations in networks containing inhibitory interneurons, along with the mechanisms likely driving these dynamics. For example, oscillations in networks of Type II interneurons violate ING predictions and can be explained mechanistically primarily in terms of cellular properties. Additionally, varying the type of both excitatory and inhibitory cells in E-I networks reveals that synchronous excitatory activity arises with different network connectivities for different neuron types, sometimes driven by cellular properties rather than PING. Furthermore, E-I networks respond differently to varied strengths of inhibitory intra-connectivity depending upon interneuron type, sometimes in ways not fully accounted for by PING theory.
Taken together, this research reveals that network-driven and cellularly driven mechanisms promoting oscillatory activity in networks containing inhibitory interneurons interact, and often compete, to dictate the overall network dynamics. These dynamics are more complex than those predicted by the classic ING and PING mechanisms alone. The diverse dynamical properties imparted to oscillating neural networks by changing inhibitory interneuron properties provide some insight into the biological need for such variability.
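
    The Type I / Type II distinction invoked above can be made concrete with canonical phase-response curves (PRCs) from coupled-oscillator theory. The sketch below uses standard textbook PRC shapes, an illustrative choice rather than the dissertation's model neurons: a non-negative Type I PRC, under which an excitatory input always advances the next spike, and a sign-changing Type II PRC, under which the same input can also delay it.

    import numpy as np

    eps = 0.05                                   # size of an excitatory perturbation

    def prc_type1(theta):                        # canonical Type I: nonnegative
        return (1.0 - np.cos(2.0 * np.pi * theta)) / (2.0 * np.pi)

    def prc_type2(theta):                        # canonical Type II: changes sign
        return -np.sin(2.0 * np.pi * theta) / (2.0 * np.pi)

    phases = np.array([0.1, 0.25, 0.4, 0.6, 0.75, 0.9])   # input arrival phases
    for name, prc in (("Type I ", prc_type1), ("Type II", prc_type2)):
        shift = eps * prc(phases)                # positive shift = next spike advanced
        print(name, ["advance" if s > 0 else "delay" for s in shift])
    # Type I advances at every phase; Type II delays early in the cycle.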

    Analytical insights on theta-gamma coupled neural oscillators

    In this paper we study the dynamics of a quadratic integrate-and-fire neuron, spiking in the gamma (30-100 Hz) range, coupled to a delta/theta-frequency (1-8 Hz) neural oscillator. Using analytical and semi-analytical methods, we derive characteristic spiking times for the system in two distinct regimes (depending on parameter values): one in which the gamma neuron oscillates intrinsically in the absence of theta input, and a second in which gamma spiking is directly gated by the theta input, i.e., windows of gamma activity alternate with silent periods depending on the underlying theta phase. In the former case we transform the equations so that the system becomes analogous to the Mathieu differential equation. By solving this equation we can compute numerically the time to the first gamma spike and then use singular perturbation theory to find successive spike times. In the excitable condition, on the other hand, we make direct use of singular perturbation theory to obtain an approximation of the time to the first gamma spike, and then extend the result to calculate ensuing gamma spikes recursively. We thereby give explicit formulas for the onset and offset of the gamma spike burst during a theta cycle, and provide an estimate of the total number of spikes per theta cycle in both the excitable and the oscillatory regimes.
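
    A minimal version of the gated regime analyzed above is straightforward to simulate. The sketch below integrates a quadratic integrate-and-fire neuron with a theta-modulated drive; all parameter values are illustrative assumptions, not those of the paper. Spikes occur only in theta windows where the total drive is positive, producing alternating gamma bursts and silence.

    import numpy as np

    dt, T = 1e-5, 1.0                    # step and duration (s); voltage runs in ms
    f_theta = 4.0                        # theta drive frequency (Hz)
    I0, I1 = -0.02, 0.05                 # mean drive < 0: excitable ("gated") regime
    v_th, v_reset = 50.0, -50.0          # spike cutoff and reset (stand-ins for +/- infinity)

    v, spikes = v_reset, []
    for step in range(int(T / dt)):
        t = step * dt
        I = I0 + I1 * np.sin(2.0 * np.pi * f_theta * t)
        v += (dt * 1000.0) * (v * v + I)         # QIF dynamics dv/dtau = v^2 + I, tau in ms
        if v >= v_th:
            spikes.append(t)
            v = v_reset

    # Spike times cluster around theta peaks (where I(t) > 0) and vanish in the
    # troughs: windows of gamma activity alternating with silent periods.
    print(len(spikes), [round(s, 3) for s in spikes[:8]])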

    Stochastic neural network dynamics: synchronisation and control

    Biological brains exhibit many interesting and complex behaviours. Understanding the mechanisms behind these behaviours is critical for continuing advancement in fields of research such as artificial intelligence and medicine. In particular, synchronisation of neuronal firing is associated with both improvements to and degeneration of the brain’s performance; increased synchronisation can lead to enhanced information processing or to neurological disorders such as epilepsy and Parkinson’s disease. It is therefore desirable to investigate the conditions under which synchronisation arises in neural networks and the possibility of controlling its prevalence. Stochastic ensembles of FitzHugh-Nagumo elements are used to model neural networks for numerical simulations and bifurcation analysis. The FitzHugh-Nagumo model is employed because of its realistic representation of the flow of sodium and potassium ions, in addition to the advantage of allowing the phase-plane dynamics to be observed. Network characteristics such as connectivity, configuration, and size are explored to determine their influence on the generation of global synchronisation in the respective systems. Oscillations in the mean field are used to detect the presence of synchronisation over a range of coupling strengths. For simulation efficiency, coupling strengths that are identical across neurons and fixed in time are investigated first; such networks, where the interaction strengths are fixed, are referred to as homogeneously coupled. The capacity to control and alter the behaviours produced by homogeneously coupled networks is assessed by independently applying weak and strong delayed feedback with various time delays. To imitate learning, the coupling strengths are later allowed to deviate from one another and evolve with time, in networks referred to as heterogeneously coupled. The intensity of coupling-strength fluctuations and the rate at which coupling strengths converge to a desired mean value are studied to determine their impact on synchronisation. The stochastic delay differential equations governing the numerically simulated networks are then converted into a finite set of deterministic cumulant equations via the Gaussian approximation method. Cumulant equations for maximal and sub-maximal connectivity are used to generate two-parameter bifurcation diagrams in the noise-intensity and coupling-strength plane, which show qualitative agreement with the numerical simulations. The analysis of artificial brain networks, in relation to biological brain networks, is discussed in light of recent research in sleep theory.
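
    The basic numerical object of this thesis, a stochastic ensemble of FitzHugh-Nagumo elements whose mean field reveals synchronisation, can be sketched briefly. The code below is a hedged simplification with illustrative parameters (instantaneous global coupling, no delayed feedback, no heterogeneity): the variance of the mean field over time serves as a crude synchrony indicator that should grow with coupling strength.

    import numpy as np

    def mean_field_var(g, N=200, D=0.05, T=500.0, dt=0.05, seed=0):
        """Variance of the ensemble mean field for global coupling strength g."""
        rng = np.random.default_rng(seed)
        v = rng.normal(-1.0, 0.1, N)               # fast (voltage-like) variables
        w = rng.normal(-0.6, 0.1, N)               # slow recovery variables
        X = []
        for step in range(int(T / dt)):
            m = v.mean()
            v += dt * (v - v**3 / 3.0 - w + g * (m - v)) \
                 + np.sqrt(2.0 * D * dt) * rng.normal(size=N)
            w += dt * 0.08 * (v + 0.7 - 0.8 * w)
            X.append(m)
        return float(np.var(X[len(X) // 2:]))      # discard the transient half

    for g in (0.0, 0.1, 0.5):
        print(g, round(mean_field_var(g), 4))      # variance rises as units lock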

    Neurobehavioral Strategies of Skill Acquisition in Left and Right Hand Dominant Individuals

    The brain consists of vast networks of connected pathways communicating through synchronized electrochemical activity propagated along fiber tracts. The current understanding is that the brain has a modular organization in which regions of specialized processing are dynamically coupled through long-range projections of dense axonal networks; these connections between spatially distinct regions enable the signal transfer necessary for all complex thought and behavior, including the regulation of movement. The central objective of the dissertation was to understand how sensorimotor information is integrated, allowing for adaptable motor behavior and skill acquisition, in left- and right-hand dominant populations. To this end, participants of both left- and right-hand dominance repeatedly completed a visually guided force-matching task while neurobiological and neurobehavioral outcome measures were continuously recorded via EEG and EMG. Functional connectivity and graph-theoretical measures were derived from the EEG. Cortico-cortical coherence patterns were used to infer differences in the neural strategies each population employed to execute the motor task. EEG activity was also correlated with neuromuscular activity from the EMG to calculate cortico-muscular connectivity. Neurological patterns and corresponding behavioral changes were used to characterize how hand dominance influences the developing motor plan, thereby deepening understanding of the sensorimotor integration process. The cumulative findings indicated fundamental differences in how left- and right-hand dominant populations interact with the world. The right-hand dominant group was found to rely on visual information to inform motor behavior, whereas the left-hand dominant group used visual information to update motor behavior. The left-hand group was found to have a more versatile motor plan, adaptable to dominant, nondominant, and bimanual tasks. Compared to the right-hand group, the left-hand group was arguably more successful in encoding the task, although the two groups performed the same behaviorally. The findings are relevant to both clinical and performance applications, providing insight into potential alternative methods of information integration. Including the left-hand dominant population in the growing conceptualization of the brain will generate a more complete, stable, and accurate understanding of our complex biology.
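
    The cortico-muscular connectivity measure described above is commonly computed as the magnitude-squared coherence between EEG and EMG signals. The sketch below applies scipy.signal.coherence to synthetic signals sharing a common 20 Hz (beta-band) drive; the signals, sampling rate, and drive frequency are assumptions for illustration, not the dissertation's data.

    import numpy as np
    from scipy.signal import coherence

    fs, n = 1000, 20_000                          # 1 kHz sampling, 20 s of data
    rng = np.random.default_rng(0)
    t = np.arange(n) / fs
    drive = np.sin(2.0 * np.pi * 20.0 * t)        # shared beta-band component
    eeg = drive + rng.normal(0.0, 1.0, n)         # synthetic cortical channel
    emg = 0.5 * drive + rng.normal(0.0, 1.0, n)   # synthetic muscle channel

    f, Cxy = coherence(eeg, emg, fs=fs, nperseg=1024)
    print(f"peak coherence {Cxy.max():.2f} near {f[np.argmax(Cxy)]:.1f} Hz")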