Brain Rhythms Reveal a Hierarchical Network Organization
Recordings of ongoing neural activity with EEG and MEG exhibit oscillations of specific frequencies over a non-oscillatory background. The oscillations appear in the power spectrum as a collection of frequency bands that are evenly spaced on a logarithmic scale, thereby preventing mutual entrainment and cross-talk. Over the last few years, experimental, computational and theoretical studies have made substantial progress in our understanding of the biophysical mechanisms underlying the generation of network oscillations and their interactions, with emphasis on the role of neuronal synchronization. In this paper we ask a very different question. Rather than investigating how brain rhythms emerge, or whether they are necessary for neural function, we focus on what they tell us about functional brain connectivity. We hypothesized that if we were able to construct abstract networks, or “virtual brains”, whose dynamics were similar to EEG/MEG recordings, those networks would share structural features among themselves, and also with real brains. Applying mathematical techniques for inverse problems, we have reverse-engineered network architectures that generate characteristic dynamics of actual brains, including spindles and sharp waves, which appear in the power spectrum as frequency bands superimposed on a non-oscillatory background dominated by low frequencies. We show that all reconstructed networks display similar topological features (e.g. structural motifs) and dynamics. We have also reverse-engineered putative diseased brains (epileptic and schizophrenic), in which the oscillatory activity is altered in different ways, as reported in clinical studies. These reconstructed networks show consistent alterations of functional connectivity and dynamics.
In particular, we show that the complexity of the network, quantified as proposed by Tononi, Sporns and Edelman, is a good indicator of brain fitness, since virtual brains modeling diseased states display lower complexity than virtual brains modeling normal neural function. Finally, we discuss the implications of our results for the neurobiology of health and disease.
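The complexity measure invoked here can be made concrete. The sketch below computes the Tononi-Sporns-Edelman neural complexity of a Gaussian system directly from its covariance matrix, summing over subset sizes the average subset entropy minus its proportional share of the total entropy. The two example covariance matrices are illustrative stand-ins, not reconstructions of the paper's virtual brains.

```python
import numpy as np
from itertools import combinations

def gaussian_entropy(cov):
    """Differential entropy of a zero-mean Gaussian with covariance `cov`."""
    k = cov.shape[0]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def tse_complexity(cov):
    """Tononi-Sporns-Edelman neural complexity for a Gaussian system:
    C_N = sum_k [ <H(subsets of size k)> - (k/n) * H(whole system) ]."""
    n = cov.shape[0]
    h_total = gaussian_entropy(cov)
    c = 0.0
    for k in range(1, n):  # the k = n term is identically zero
        subset_h = [gaussian_entropy(cov[np.ix_(s, s)])
                    for s in combinations(range(n), k)]
        c += np.mean(subset_h) - (k / n) * h_total
    return c

n = 4
independent = np.eye(n)                             # no interactions at all
correlated = 0.5 * np.eye(n) + 0.5 * np.ones((n, n))  # uniform coupling
c_ind = tse_complexity(independent)
c_cor = tse_complexity(correlated)
```

For fully independent units every term vanishes and the complexity is zero, while correlated units yield strictly positive complexity; this is the sense in which a less integrated network would score lower.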
Steady-State Visual Evoked Potentials Can Be Explained by Temporal Superposition of Transient Event-Related Responses
<p><b>Background:</b> One common criterion for classifying electrophysiological brain responses is based on the distinction between transient (i.e. event-related potentials, ERPs) and steady-state responses (SSRs). The generation of SSRs is usually attributed to the entrainment of a neural rhythm driven by the stimulus train. However, a more parsimonious account suggests that SSRs might result from the linear addition of the transient responses elicited by each stimulus. This study aimed to investigate this possibility.</p>
<p><b>Methodology/Principal Findings:</b> We recorded brain potentials elicited by a checkerboard stimulus reversing at different rates. We modeled SSRs by sequentially shifting and linearly adding rate-specific ERPs. Our results show a strong resemblance between recorded and synthetic SSRs, supporting the superposition hypothesis. Furthermore, we did not find evidence of entrainment of a neural oscillation at the stimulation frequency.</p>
<p><b>Conclusions/Significance:</b> This study provides evidence that visual SSRs can be explained as a superposition of transient ERPs. These findings have critical implications for our current understanding of brain oscillations. Contrary to the idea that neural networks can be tuned to a wide range of frequencies, our findings suggest instead that the oscillatory response of a given neural network is constrained within its natural frequency range.</p>
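The superposition account can be sketched numerically. Below, a toy damped oscillation stands in for the rate-specific ERP template (the real template would come from recorded data), and copies of it are shifted to each stimulus onset and added linearly; all parameter values are assumptions for illustration.

```python
import numpy as np

fs = 1000                       # sampling rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)   # 500 ms template window

# Toy transient ERP: a damped 10 Hz oscillation standing in for a recorded,
# rate-specific ERP template.
erp = np.exp(-t / 0.1) * np.sin(2 * np.pi * 10 * t)

def synthesize_ssr(erp, rate_hz, duration_s, fs):
    """Shift one copy of the ERP to each stimulus onset and add linearly."""
    n = int(duration_s * fs)
    ssr = np.zeros(n + len(erp))
    for onset in np.arange(0, duration_s, 1 / rate_hz):
        i = int(onset * fs)
        ssr[i:i + len(erp)] += erp
    return ssr[:n]

ssr = synthesize_ssr(erp, rate_hz=7.5, duration_s=4.0, fs=fs)

# The synthetic SSR spectrum peaks at the stimulation rate and its harmonics,
# even though no oscillator at 7.5 Hz was ever simulated.
spectrum = np.abs(np.fft.rfft(ssr))
freqs = np.fft.rfftfreq(len(ssr), 1 / fs)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
```

The spectral peak lands at the stimulation frequency purely through superposition, which is the effect the study's comparison between recorded and synthetic SSRs exploits.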
Metabifurcation analysis of a mean field model of the cortex
Mean field models (MFMs) of cortical tissue incorporate salient features of
neural masses to model activity at the population level. One of the common
aspects of MFM descriptions is the presence of a high dimensional parameter
space capturing neurobiological attributes relevant to brain dynamics. We study
the physiological parameter space of a MFM of electrocortical activity and
discover robust correlations between physiological attributes of the model
cortex and its dynamical features. These correlations are revealed by the study
of bifurcation plots, which show that the model responses to changes in
inhibition belong to two families. After investigating and characterizing
these, we discuss their essential differences in terms of four important
aspects: power responses with respect to the modeled action of anesthetics,
reaction to exogenous stimuli, distribution of model parameters and oscillatory
repertoires when inhibition is enhanced. Furthermore, while the complexity of
sustained periodic orbits differs significantly between families, we are able
to show how metamorphoses between the families can be brought about by
exogenous stimuli. We unveil links between measurable physiological attributes
of the brain and dynamical patterns that are not accessible by linear methods.
They emerge when the parameter space is partitioned according to bifurcation
responses. This partitioning cannot be achieved by the investigation of only a
small number of parameter sets, but is the result of an automated bifurcation
analysis of a representative sample of 73,454 physiologically admissible sets.
Our approach generalizes straightforwardly and is well suited to probing the
dynamics of other models with large and complex parameter spaces.
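A toy version of such an automated sweep can be written down directly. The sketch below uses the FitzHugh-Nagumo model as a stand-in for the far richer mean field cortex, sweeping its stimulus parameter rather than inhibition, and classifies each point of a one-dimensional slice of parameter space as a fixed point or a sustained oscillation; the parameter values and the oscillation threshold are illustrative assumptions.

```python
import numpy as np

def asymptotic_amplitude(I, a=0.7, b=0.8, eps=0.08, dt=0.01, n=60000):
    """Integrate FitzHugh-Nagumo with forward Euler and return the
    peak-to-peak amplitude of v after transients, as a crude
    limit-cycle detector for one parameter set."""
    v, w = 0.0, 0.0
    trace = np.empty(n)
    for k in range(n):
        dv = v - v**3 / 3 - w + I
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        trace[k] = v
    tail = trace[n // 2:]          # discard the transient half
    return tail.max() - tail.min()

# Sweep the stimulus parameter and classify each point of the slice:
# near-zero amplitude means a stable fixed point, large amplitude a
# sustained periodic orbit.
currents = [0.0, 0.8, 2.0]
oscillates = [asymptotic_amplitude(I) > 0.5 for I in currents]
```

Scaling this brute-force classification from a three-point slice to tens of thousands of physiologically admissible parameter sets is the spirit, if not the machinery, of the automated bifurcation analysis described above.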
The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields
The cortex is a complex system, characterized by its dynamics and architecture,
which underlie many functions such as action, perception, learning, language,
and cognition. Its structural architecture has been studied for more than a
hundred years; however, its dynamics have been addressed much less thoroughly.
In this paper, we review and integrate, in a unifying framework, a variety of
computational approaches that have been used to characterize the dynamics of the
cortex, as evidenced at different levels of measurement. Computational models at
different space–time scales help us understand the fundamental
mechanisms that underpin neural processes and relate these processes to
neuroscience data. Modeling at the single neuron level is necessary because this
is the level at which information is exchanged between the computing elements of
the brain: the neurons. Mesoscopic models tell us how neural elements interact
to yield emergent behavior at the level of microcolumns and cortical columns.
Macroscopic models can inform us about whole brain dynamics and interactions
between large-scale neural systems such as cortical regions, the thalamus, and
brain stem. Each level of description relates uniquely to neuroscience data,
from single-unit recordings, through local field potentials to functional
magnetic resonance imaging (fMRI), electroencephalogram (EEG), and
magnetoencephalogram (MEG). Models of the cortex can establish which types of
large-scale neuronal networks can perform computations and characterize their
emergent properties. Mean-field and related formulations of dynamics also play
an essential and complementary role as forward models that can be inverted given
empirical data. This makes dynamic models critical in integrating theory and
experiments. We argue that elaborating principled and informed models is a
prerequisite for grounding empirical neuroscience in a cogent theoretical
framework, commensurate with the achievements in the physical sciences.
Work Toward a Theory of Brain Function
This dissertation reports research from 1971 to the present, performed in three parts.
The first part arose from unilateral electrical stimulation of motivational/reward pathways in the lateral hypothalamus and brain stem of “split-brain” cats, in which the great cerebral commissures were surgically divided. This showed that motivation systems in split-brain animals exert joint influence upon learning in both of the divided cerebral hemispheres, in contrast to the separation of cognitive functions produced by commissurotomy. However, attempts to identify separate signatures of electrocortical activity associated with the diffuse motivational/alerting effects and those of the cortically lateralised processes failed to achieve this goal, and showed that an adequate model of cerebral information processing was lacking.
The second part describes how this recognition of inadequacy led into computer simulations of large populations of cortical neurons – work which slowly led my colleagues and me to successful explanations of mechanisms for cortical synchrony and oscillation, and of evoked potentials and the global EEG. These results complemented the work of overseas groups led by Nunez, by Freeman, by Lopes da Silva and others, but also differed from the directions taken by these workers in certain important respects. It became possible to conceive of information transfer in the active cortex as a series of punctuated synchronous equilibria of signal exchange among cortical neurons – equilibria reached repeatedly, with sequential perturbations of the neural activity away from equilibrium caused by exogenous inputs and endogenous pulse-bursting, thus forming a basis for cognitive sequences.
The third part reports how the explanation of synchrony gave rise to a new theory of the regulation of embryonic cortical growth and the emergence of mature functional connections. This work was based upon assumptions very different from those of pioneers of the field such as Hubel and Wiesel, whose ideas have dominated cortical physiology for more than fifty years, and it reaches very different conclusions.
In conclusion, findings from all the stages of this research are linked together to show that they provide a sketch of the working brain, fitting within and helping to unify wider contemporary concepts of brain function.
Anesthetic action on the transmission delay between cortex and thalamus explains the beta-buzz observed under propofol anesthesia
In recent years, more and more surgeries under general anesthesia have been performed with the assistance of electroencephalogram (EEG) monitors. An increase in anesthetic concentration leads to characteristic changes in the power spectra of the EEG. Although tracking the anesthetic-induced changes in EEG rhythms can be employed to estimate the depth of anesthesia, their precise underlying mechanisms are still unknown. A prominent feature in the EEG of some patients is the emergence of a strong power peak in the β–frequency band, which moves to the α–frequency band as the anesthetic concentration increases. This feature is called the beta-buzz. In the present study, we use a thalamo-cortical neural population feedback model to reproduce characteristic features in frontal EEG power obtained experimentally during propofol general anesthesia, such as this beta-buzz. First, we find that the spectral power peaks in the α– and δ–frequency ranges depend on the decay rate constants of excitatory and inhibitory synapses, but the anesthetic action on synapses does not explain the beta-buzz. Moreover, considering the action of propofol on the transmission delay between cortex and thalamus, the model reveals that the beta-buzz may result from a prolongation of the transmission delay with increasing propofol concentration. A corresponding relationship between transmission delay and anesthetic blood concentration is derived. Finally, an analytical stability study demonstrates that increasing propofol concentration moves the system's resting state towards its stability threshold.
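The proposed mechanism, a spectral peak that slides down in frequency as the loop delay grows, can be caricatured with a linear delayed-feedback loop. The gain, synaptic rate constant and delay values below are illustrative assumptions, not the paper's thalamo-cortical model: the loop is a cascade of four first-order synaptic filters with a round-trip delay, and the closed-loop power spectrum is read off from the loop's denominator.

```python
import numpy as np

gamma = 400.0   # synaptic decay rate constant in 1/s (assumed)
g = -0.8        # net loop gain; negative, i.e. an inhibitory loop (assumed)

def peak_frequency(tau):
    """Frequency (Hz) at which the closed-loop power spectrum peaks,
    for a round-trip transmission delay tau (s)."""
    freqs = np.arange(1.0, 40.0, 0.1)
    w = 2 * np.pi * freqs
    S = (gamma / (gamma + 1j * w)) ** 4        # four synaptic filter stages
    D = 1.0 - g * S * np.exp(-1j * w * tau)    # closed-loop denominator
    power = 1.0 / np.abs(D) ** 2
    return freqs[np.argmax(power)]

f_short = peak_frequency(tau=0.015)   # short round-trip delay: beta-range peak
f_long  = peak_frequency(tau=0.035)   # prolonged delay: peak slides to alpha
```

Lengthening the delay lowers the frequency at which the loop's phase lag completes a full cycle, so the resonance drifts from the β toward the α range, qualitatively mirroring the beta-buzz trajectory described above.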