Oscillations, metastability and phase transitions in brain and models of cognition
Neuroscience is practiced in many different forms and at many different organizational levels of the nervous system. Which of these levels, and which associated conceptual frameworks, is most informative for relating neural processes to processes of cognition is an empirical question, subject to pragmatic validation. In this essay, I adopt the framework of dynamical systems theory. In recent years, several investigators have applied the tools and concepts of this theory to the interpretation of observational data and to the design of neuronal models of cognitive functions. I will first trace the essentials of these conceptual developments and hypotheses, identifying observational tests and criteria for the functional realism and conceptual plausibility of the alternatives they offer. I will then show that the statistical mechanics of phase transitions in brain activity, and some of its models, provides a new and possibly revealing perspective on brain events in cognition.
Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks
Recurrent neural networks (RNNs) are widely used in computational
neuroscience and machine learning applications. In an RNN, each neuron computes
its output as a nonlinear function of its integrated input. While the
importance of RNNs, especially as models of brain processing, is undisputed, it
is also widely acknowledged that the computations in standard RNN models may be
an over-simplification of what real neuronal networks compute. Here, we suggest
that the RNN approach may be made both neurobiologically more plausible and
computationally more powerful by its fusion with Bayesian inference techniques
for nonlinear dynamical systems. In this scheme, we use an RNN as a generative
model of dynamic input caused by the environment, e.g. of speech or kinematics.
Given this generative RNN model, we derive Bayesian update equations that can
decode its output. Critically, these updates define a 'recognizing RNN' (rRNN),
in which neurons compute and exchange prediction and prediction error messages.
The rRNN has several desirable features that a conventional RNN does not have,
for example, fast decoding of dynamic stimuli and robustness to initial
conditions and noise. Furthermore, it implements a predictive coding scheme for
dynamic inputs. We suggest that the Bayesian inversion of recurrent neural
networks may be useful both as a model of brain function and as a machine
learning tool. We illustrate the use of the rRNN by an application to the
online decoding (i.e. recognition) of human kinematics.
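The prediction/prediction-error scheme described in this abstract can be sketched in a few lines. Everything below — the tanh generative dynamics, the linear observation map `C`, and the fixed-gain error correction — is a hypothetical illustration, not the paper's actual Bayesian update equations:

```python
import numpy as np

# Hypothetical sketch of an RNN as a generative model of dynamic input,
# inverted by a simple prediction / prediction-error loop. The fixed-gain
# correction below is a crude stand-in for the derived Bayesian updates.

rng = np.random.default_rng(0)
n_hidden, n_obs = 8, 2
W = rng.normal(scale=0.5 / np.sqrt(n_hidden), size=(n_hidden, n_hidden))  # recurrent weights
C = rng.normal(size=(n_obs, n_hidden))                                    # observation map

def generate(x0, T, noise=0.05):
    """Run the generative RNN forward, emitting noisy observations."""
    x, ys = x0, []
    for _ in range(T):
        x = np.tanh(W @ x)
        ys.append(C @ x + noise * rng.normal(size=n_obs))
    return np.array(ys)

def recognize(ys, gain=0.3):
    """'Recognizing RNN': predict, compare with the observation, and
    correct the hidden state using the prediction error."""
    x = np.zeros(n_hidden)
    estimates = []
    for y in ys:
        x_pred = np.tanh(W @ x)          # prediction message
        err = y - C @ x_pred             # prediction-error message
        x = x_pred + gain * (C.T @ err)  # error-driven correction
        estimates.append(x)
    return np.array(estimates)

ys = generate(rng.normal(size=n_hidden), T=200)
x_hat = recognize(ys)
```

The recognizing loop starts from an arbitrary hidden state; the prediction-error term continually pulls the estimate toward states consistent with the observations, which illustrates the robustness to initial conditions mentioned in the abstract.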
Metastability, Criticality and Phase Transitions in brain and its Models
This essay extends the previously deposited paper "Oscillations, Metastability and Phase Transitions" to incorporate the theory of self-organized criticality. The twin concepts of scaling and universality from the theory of nonequilibrium phase transitions are applied to the role of reentrant activity in neural circuits of the cerebral cortex and subcortical neural structures.
Statistical Mechanics of Recurrent Neural Networks I. Statics
A lecture notes style review of the equilibrium statistical mechanics of
recurrent neural networks with discrete and continuous neurons (e.g. Ising,
coupled-oscillators). To be published in the Handbook of Biological Physics
(North-Holland). Accompanied by a similar review (part II) dealing with the
dynamics. Comment: 49 pages, LaTeX.
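As a concrete toy instance of the Ising case treated in such reviews, consider a recurrent network of ±1 neurons with symmetric Hebbian couplings. The network size, pattern count, and update schedule below are illustrative choices, not taken from the review; the point is that with symmetric couplings the energy function is non-increasing under asynchronous updates, which is what makes an equilibrium statistical-mechanics treatment possible:

```python
import numpy as np

# Hypothetical sketch: Ising (+/-1) neurons with symmetric Hebbian
# couplings. Under asynchronous zero-temperature dynamics the energy
# E = -1/2 s.J.s never increases, so the network relaxes to a fixed point.

rng = np.random.default_rng(2)
N = 50
patterns = rng.choice([-1, 1], size=(3, N))   # stored patterns
J = (patterns.T @ patterns) / N               # Hebbian couplings (symmetric)
np.fill_diagonal(J, 0)                        # no self-coupling

def energy(s):
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], size=N)               # random initial state
energies = [energy(s)]
for _ in range(5 * N):                        # asynchronous single-neuron updates
    i = rng.integers(N)
    s[i] = 1 if J[i] @ s >= 0 else -1         # align with the local field
    energies.append(energy(s))
# energies is monotonically non-increasing toward a local minimum.
```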
Relative entropy minimizing noisy non-linear neural network to approximate stochastic processes
A method is provided for designing and training noise-driven recurrent neural
networks as models of stochastic processes. The method unifies and generalizes
two known separate modeling approaches, Echo State Networks (ESN) and Linear
Inverse Modeling (LIM), under the common principle of relative entropy
minimization. The power of the new method is demonstrated on a stochastic
approximation of the El Niño phenomenon studied in climate research.
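One of the two approaches the abstract unifies, the Echo State Network, can be sketched as follows. Note the hedges: the paper's training principle is relative-entropy minimization, whereas this sketch uses a standard ridge-regression readout as a familiar stand-in, and a noisy AR(1) process as a toy target in place of actual climate data:

```python
import numpy as np

# Hypothetical ESN sketch. A fixed random reservoir is driven by the
# signal; only a linear readout is trained (here by ridge regression,
# a stand-in for the paper's relative-entropy principle).

rng = np.random.default_rng(1)
N = 100                                            # reservoir size
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius < 1 (echo state property)
w_in = rng.normal(size=N)

# Toy target: a noisy AR(1) process (illustrative stand-in for e.g. an El Niño index)
T = 2000
s = np.zeros(T)
for t in range(1, T):
    s[t] = 0.95 * s[t - 1] + 0.1 * rng.normal()

# Drive the reservoir with the signal and record its states
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * s[t])
    states[t] = x

# Ridge-regression readout: predict s[t+1] from the state at time t
X, y = states[200:-1], s[201:]                     # discard the initial transient
w_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N), X.T @ y)
pred = X @ w_out
mse = np.mean((pred - y) ** 2)                     # one-step prediction error
```

Because only the readout is trained, the fit reduces to a single linear solve; the recurrent weights stay fixed, which is what distinguishes the ESN approach from fully trained RNNs.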
Modes and models in disorders of consciousness science
The clinical assessment of non-communicative brain-damaged patients is extremely difficult, and there is a need for paraclinical diagnostic markers of the level of consciousness. In the last few years, progress within neuroimaging has led to a growing body of studies investigating vegetative state and minimally conscious state patients, which can be classified into two main approaches. Active neuroimaging paradigms search for a response to command without requiring a motor response. Passive neuroimaging paradigms investigate spontaneous brain activity and brain responses to external stimuli, and aim at identifying neural correlates of consciousness. Other passive paradigms eschew neuroimaging in favour of behavioural markers which reliably distinguish conscious and unconscious conditions in healthy controls. In order to furnish accurate diagnostic criteria, a mechanistic explanation of how the brain gives rise to consciousness seems desirable. Mechanistic and theoretical approaches could also ultimately lead to a unification of passive and active paradigms in a coherent diagnostic approach. In this paper, we survey current passive and active paradigms available for diagnosis of residual consciousness in vegetative state and minimally conscious patients. We then review the current main theories of consciousness and see how they can apply in this context. Finally, we discuss some avenues for future research in this domain.