
    Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

    Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
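As a loose illustration of the prediction/prediction-error message passing described in this abstract, the sketch below runs a small generative RNN forward and then decodes its output with an error-driven correction step. The network sizes, random weights, and the gradient-style learning rate are hypothetical simplifications, not the paper's actual Bayesian update equations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3                                  # hidden neurons, observed channels
W = rng.normal(0, 1 / np.sqrt(n), (n, n))    # recurrent weights of the generative model
C = rng.normal(0, 1 / np.sqrt(n), (m, n))    # hidden-to-observation mapping

def f(x):
    """Neuron nonlinearity: output as a nonlinear function of integrated input."""
    return np.tanh(x)

def generate(T, x0):
    """Run the generative RNN forward to produce a dynamic stimulus."""
    x, ys = x0, []
    for _ in range(T):
        x = f(W @ x) + 0.01 * rng.normal(size=n)  # noisy hidden dynamics
        ys.append(C @ x)                          # observed output
    return ys

def recognize(ys, lr=0.2):
    """'Recognizing' pass: propagate a prediction, compute the prediction
    error, and correct the hidden-state estimate (a gradient-style stand-in
    for the full Bayesian update)."""
    x_hat, errs = np.zeros(n), []
    for y in ys:
        x_pred = f(W @ x_hat)             # prediction message
        e = y - C @ x_pred                # prediction error message
        x_hat = x_pred + lr * (C.T @ e)   # error-driven correction
        errs.append(float(e @ e))
    return x_hat, errs

ys = generate(50, rng.normal(size=n))
x_hat, errs = recognize(ys)
```

Because the estimate is corrected by the prediction error at every step, decoding here does not require matching the generator's initial condition, which is the robustness property the abstract highlights.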

    Molecular dynamics simulation of the fragile glass former ortho-terphenyl: a flexible molecule model

    We present a realistic model of the fragile glass former ortho-terphenyl and the results of extensive molecular dynamics simulations in which we investigated its basic static and dynamic properties. In this model the internal molecular interactions between the three rigid phenyl rings are described by a set of force constants, including harmonic and anharmonic terms; the interactions among different molecules are described by Lennard-Jones site-site potentials. Self-diffusion properties are discussed in detail, together with the temperature and momentum dependence of the self-intermediate scattering function. The simulation data are compared with existing experimental results and with the main predictions of the Mode Coupling Theory.
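The intermolecular part of such a model can be written down directly. A minimal sketch of a Lennard-Jones site-site potential follows, with placeholder values of ε and σ rather than the force-field parameters fitted for ortho-terphenyl:

```python
def lj(r, epsilon=0.6, sigma=3.5):
    """Lennard-Jones site-site potential
        U(r) = 4 * epsilon * ((sigma/r)**12 - (sigma/r)**6).
    epsilon (kJ/mol) and sigma (Angstrom) are placeholder values,
    not the parameters fitted for ortho-terphenyl."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)
```

The potential crosses zero at r = σ and reaches its minimum, of depth ε, at r = 2^(1/6) σ; in a simulation this pair term is summed over all sites of all molecule pairs.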

    Propylene Carbonate Reexamined: Mode-Coupling β Scaling without Factorisation?

    The dynamic susceptibility of propylene carbonate in the moderately viscous regime above T_c is reinvestigated by incoherent neutron and depolarised light scattering, and compared to dielectric loss and solvation response. Depending on the strength of the α relaxation, a more or less extended β scaling regime is found. Mode-coupling fits consistently yield λ = 0.72 and T_c = 182 K, although different positions of the susceptibility minimum indicate that not all observables have reached the universal asymptotics.
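The quoted exponent parameter λ fixes the asymptotic power laws of the β-scaling regime through the standard mode-coupling relations λ = Γ(1−a)²/Γ(1−2a) = Γ(1+b)²/Γ(1+2b). A short sketch solving these transcendental equations for λ = 0.72 by bisection (both relations decrease monotonically in their argument):

```python
from math import gamma

def lam_from_a(a):
    """Exponent relation lambda = Gamma(1-a)^2 / Gamma(1-2a), 0 < a < 1/2."""
    return gamma(1 - a) ** 2 / gamma(1 - 2 * a)

def lam_from_b(b):
    """Exponent relation lambda = Gamma(1+b)^2 / Gamma(1+2b), b > 0."""
    return gamma(1 + b) ** 2 / gamma(1 + 2 * b)

def bisect(f, target, lo, hi, iters=200):
    """Bisection for a monotonically decreasing f with f(lo) > target > f(hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) >= target:
            lo = mid     # f still above target: root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = 0.72                                       # value reported in the abstract
a = bisect(lam_from_a, lam, 1e-9, 0.5 - 1e-9)    # critical exponent a
b = bisect(lam_from_b, lam, 1e-9, 2.0)           # von Schweidler exponent b
```

The resulting a and b govern, respectively, the critical decay toward and the von Schweidler decay away from the plateau of the β correlator.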

    Static and Dynamic Properties of a Viscous Silica Melt: Molecular Dynamics Computer Simulations

    We present the results of a large-scale molecular dynamics computer simulation in which we investigated the static and dynamic properties of a silica melt in the temperature range in which the viscosity of the system changes from O(10^-2) Poise to O(10^2) Poise. We show that even at temperatures as high as 4000 K the structure of this system is very similar to the random tetrahedral network found in silica at lower temperatures. The temperature dependence of the concentration of defects in this network follows an Arrhenius law. From the partial structure factors we calculate the neutron scattering function and find that it agrees very well with experimental neutron scattering data. At low temperatures the temperature dependence of the diffusion constants D shows an Arrhenius law with activation energies in very good agreement with the experimental values. With increasing temperature we find a cross-over to a dependence that is well described by a power law, D ∝ (T − T_c)^γ, with critical temperature T_c = 3330 K and exponent γ close to 2.1. Since we find a similar cross-over in the viscosity, we have evidence that the relaxation dynamics of the system changes from a flow-like motion of the particles, as described by the ideal version of mode-coupling theory, to a hopping-like motion. We show that such a change of the transport mechanism is also observed in the product of the diffusion constant and the lifetime of a Si-O bond, and in the space and time dependence of the van Hove correlation functions.
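The two transport regimes reported above can be written as simple model functions. In the sketch below only T_c and γ come from the abstract; the prefactors and the activation energy are placeholders:

```python
import numpy as np

T_C, GAMMA = 3330.0, 2.1          # critical temperature (K) and exponent from the abstract

def d_power_law(T, d0=1.0):
    """Mode-coupling regime: D ~ d0 * (T - T_c)^gamma, valid only for T > T_c.
    d0 is an arbitrary prefactor."""
    return d0 * (T - T_C) ** GAMMA

def d_arrhenius(T, d0=1.0, e_a=4.5):
    """Low-temperature hopping regime: D ~ d0 * exp(-E_a / (k_B T)).
    e_a (eV) is a placeholder, not the paper's fitted activation energy."""
    k_b = 8.617333e-5             # Boltzmann constant in eV/K
    return d0 * np.exp(-e_a / (k_b * T))
```

In practice the cross-over is diagnosed by fitting both forms to D(T): the power law vanishes as T approaches T_c from above, while the Arrhenius branch stays finite, so the measured diffusion constants leave the power-law fit and follow the Arrhenius fit at low temperature.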

    Time Scale Hierarchies in the Functional Organization of Complex Behaviors

    Traditional approaches to cognitive modelling generally portray cognitive events in terms of 'discrete' states (point attractor dynamics) rather than in terms of processes, thereby neglecting the time structure of cognition. In contrast, more recent approaches explicitly address this temporal dimension, but typically provide no entry points into cognitive categorization of events and experiences. With the aim of incorporating both these aspects, we propose a framework for functional architectures. Our approach is grounded in the notion that arbitrarily complex (human) behaviour is decomposable into functional modes (elementary units), which we conceptualize as low-dimensional dynamical objects (structured flows on manifolds). The ensemble of modes at an agent's disposal constitutes his/her functional repertoire. The modes may be subjected to additional dynamics (termed operational signals), in particular instantaneous inputs, and a mechanism that sequentially selects a mode so that it temporarily dominates the functional dynamics. The inputs and the selection mechanism act on faster and slower time scales, respectively, than that inherent to the modes. The dynamics across the three time scales are coupled via feedback, rendering the entire architecture autonomous. We illustrate the functional architecture in the context of serial behaviour, namely cursive handwriting. Subsequently, we investigate the possibility of recovering the contributions of functional modes and operational signals from the output, which appears to be possible only when examining the output phase flow (i.e., not from trajectories in phase space or time).
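The three-time-scale picture (fast inputs, intrinsic mode dynamics, slow mode selection) can be caricatured in a few lines: two fast "functional modes" and one slow operational signal that gradually hands control from one mode to the other. All equations and time constants here are illustrative inventions, not the paper's model:

```python
import numpy as np

def simulate(T=2000, dt=0.01, tau_fast=1.0, tau_slow=5.0):
    """Minimal caricature of a time-scale hierarchy: two fast 'functional
    modes' (a contraction to rest and a rotation, i.e. an oscillatory mode)
    and a slow operational signal s in [0, 1] that selects between them."""
    x = np.array([1.0, 0.0])          # fast state
    s = 0.0                           # slow mode-selection signal
    traj = []
    for _ in range(T):
        f_rest = -x                               # mode A: relax to rest
        f_osc = np.array([-x[1], x[0]])           # mode B: rotate (oscillate)
        x = x + dt * ((1 - s) * f_rest + s * f_osc) / tau_fast
        s = s + dt * (1.0 - s) / tau_slow         # slowly hand control to mode B
        traj.append(x.copy())
    return np.array(traj), s

traj, s = simulate()
```

In the full framework the selection signal would itself receive feedback from the fast dynamics, closing the loop and making the architecture autonomous; here s is driven open-loop purely for illustration.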

    Relationship between Activity in Human Primary Motor Cortex during Action Observation and the Mirror Neuron System

    The attenuation of the beta cortical oscillations during action observation has been interpreted as evidence of a mirror neuron system (MNS) in humans. Here we investigated the modulation of beta cortical oscillations with the viewpoint of an observed action. We asked subjects to observe videos of an actor making a variety of arm movements. We show that when subjects were observing arm movements there was a significant modulation of beta oscillations overlying left and right sensorimotor cortices. This pattern of attenuation was driven by the side of the screen on which the observed movement occurred and not by the hand that was observed moving. These results are discussed in terms of the firing patterns of mirror neurons in F5 which have been reported to have similar properties.

    A nice surprise? Predictive processing and the active pursuit of novelty

    Recent work in cognitive and computational neuroscience depicts human brains as devices that minimize prediction error signals: signals that encode the difference between actual and expected sensory stimulations. This raises a series of puzzles whose common theme concerns a potential misfit between this bedrock information-theoretic vision and familiar facts about the attractions of the unexpected. We humans often seem to actively seek out surprising events, deliberately harvesting novel and exciting streams of sensory stimulation. Conversely, we often experience some well-expected sensations as unpleasant and to-be-avoided. In this paper, I explore several core and variant forms of this puzzle, using them to display multiple interacting elements that together deliver a satisfying solution. That solution requires us to go beyond the discussion of simple information-theoretic imperatives (such as 'minimize long-term prediction error') and to recognize the essential role of species-specific prestructuring, epistemic foraging, and cultural practices in shaping the restless, curious, novelty-seeking human mind.

    A Neurodynamic Account of Spontaneous Behaviour

    The current article suggests that deterministic chaos self-organized in cortical dynamics could be responsible for the generation of spontaneous action sequences. Recently, various psychological observations have suggested that humans and primates can learn to extract statistical structures hidden in perceptual sequences experienced during active environmental interactions. Although it has been suggested that such statistical structures involve chunking or compositional primitives, their neuronal implementations in brains have not yet been clarified. Therefore, to reconstruct the phenomena, synthetic neuro-robotics experiments were conducted by using a neural network model, which is characterized as a generative model with intentional states and multiple-timescale dynamics. The experimental results showed that the robot successfully learned to imitate tutored behavioral sequence patterns by extracting the underlying transition probability among primitive actions. An analysis revealed that a set of primitive action patterns was embedded in the fast dynamics part, and the chaotic dynamics that spontaneously sequences these primitive action patterns was structured in the slow dynamics part, provided that the timescale was adequately set for each part. It was also shown that self-organization of this type of functional hierarchy ensured robust action generation by the robot in its interactions with a noisy environment. This article discusses the correspondence of the synthetic experiments with the known hierarchy of the prefrontal cortex, the supplementary motor area, and the primary motor cortex for action generation. We speculate that deterministic dynamical structures organized in the prefrontal cortex could be essential because they can account for the generation of both intentional behaviors of fixed action sequences and spontaneous behaviors of pseudo-stochastic action sequences by the same mechanism.
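The fast/slow division of labor described above is in the spirit of a multiple-timescale RNN, in which neuron groups differ only in their leaky-integrator time constants. The following generic sketch (not the paper's trained network; weights and time constants are illustrative) shows the update rule; with the recurrent weights set to zero the fast group relaxes much faster than the slow group:

```python
import numpy as np

def mtrnn_step(u_fast, u_slow, W, tau_fast=2.0, tau_slow=50.0):
    """One leaky-integrator update of an RNN with two neuron groups whose
    time constants differ: fast units (action primitives) and slow units
    (sequencing).  A generic sketch of the multiple-timescale idea."""
    y = np.tanh(np.concatenate([u_fast, u_slow]))  # firing rates
    inp = W @ y                                    # recurrent input
    nf = len(u_fast)
    u_fast = (1 - 1 / tau_fast) * u_fast + (1 / tau_fast) * inp[:nf]
    u_slow = (1 - 1 / tau_slow) * u_slow + (1 / tau_slow) * inp[nf:]
    return u_fast, u_slow
```

The slow units therefore integrate over many fast-unit cycles, which is what lets them carry sequencing context while the fast units express individual primitives.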

    An Efficient Coding Hypothesis Links Sparsity and Selectivity of Neural Responses

    To what extent are sensory responses in the brain compatible with first-order principles? The efficient coding hypothesis posits that neurons use as few spikes as possible to faithfully represent natural stimuli. However, many sparsely firing neurons in higher brain areas seem to violate this hypothesis in that they respond more to familiar stimuli than to nonfamiliar stimuli. We reconcile this discrepancy by showing that efficient sensory responses give rise to stimulus selectivity that depends on the stimulus-independent firing threshold and the balance between excitatory and inhibitory inputs. We construct a cost function that enforces minimal firing rates in model neurons by linearly punishing suprathreshold synaptic currents. By contrast, subthreshold currents are punished quadratically, which allows us to optimally reconstruct sensory inputs from elicited responses. We train synaptic currents on many renditions of a particular bird's own song (BOS) and few renditions of conspecific birds' songs (CONs). During training, model neurons develop a response selectivity with complex dependence on the firing threshold. At low thresholds, they fire densely and prefer CON and the reverse BOS (REV) over BOS. However, at high thresholds or when hyperpolarized, they fire sparsely and prefer BOS over REV and over CON. Based on this selectivity reversal, our model suggests that preference for a highly familiar stimulus corresponds to a high-threshold or strong-inhibition regime of an efficient coding strategy. Our findings apply to songbird mirror neurons, and in general, they suggest that the brain may be endowed with simple mechanisms to rapidly change selectivity of neural responses to focus sensory processing on either familiar or nonfamiliar stimuli. In summary, we find support for the efficient coding hypothesis and provide new insights into the interplay between the sparsity and selectivity of neural responses.
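The asymmetric penalty at the heart of the model, linear above the firing threshold and quadratic below it, can be sketched directly; the threshold θ and the linear weight α below are illustrative choices, not the paper's fitted values:

```python
import numpy as np

def coding_cost(currents, theta=1.0, alpha=0.1):
    """Cost in the spirit of the abstract: the part of each synaptic
    current above the firing threshold theta is punished linearly
    (weight alpha), enforcing sparse, minimal firing; the part at or
    below threshold is punished quadratically, favoring faithful
    reconstruction of the input from the responses."""
    c = np.asarray(currents, dtype=float)
    supra = np.clip(c - theta, 0.0, None)   # suprathreshold part
    sub = np.minimum(c, theta)              # subthreshold part
    return alpha * supra.sum() + 0.5 * (sub ** 2).sum()
```

Raising θ makes more of each current fall under the quadratic term, which is one way to see why the threshold can act as the knob that flips the selectivity regime described in the abstract.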

    Tracing the Flow of Perceptual Features in an Algorithmic Brain Network

    The model of the brain as an information processing machine is a profound hypothesis in which neuroscience, psychology and theory of computation are now deeply rooted. Modern neuroscience aims to model the brain as a network of densely interconnected functional nodes. However, to model the dynamic information processing mechanisms of perception and cognition, it is imperative to understand brain networks at an algorithmic level, i.e., as the information flow that network nodes code and communicate. Here, using innovative methods (Directed Feature Information), we reconstructed examples of possible algorithmic brain networks that code and communicate the specific features underlying two distinct perceptions of the same ambiguous picture. In each observer, we identified a network architecture comprising one occipito-temporal hub where the features underlying both perceptual decisions dynamically converge. Our focus on detailed information flow represents an important step towards a new brain algorithmics to model the mechanisms of perception and cognition.
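Directed Feature Information is specific to this paper, but its general flavor, a directed, history-conditioned information measure between network nodes, can be illustrated with a plain transfer-entropy estimate for binary time series (history length 1, plug-in probabilities). This is a generic stand-in, not the authors' feature-conditioned estimator:

```python
import numpy as np
from collections import Counter
from math import log2

def transfer_entropy(src, dst):
    """Transfer entropy TE(src -> dst) in bits for discrete sequences,
    history length 1, by plug-in estimation:
        sum over p(x_{t+1}, x_t, y_t) of
        log2[ p(x_{t+1} | x_t, y_t) / p(x_{t+1} | x_t) ],
    where x is dst and y is src."""
    triples = list(zip(dst[1:], dst[:-1], src[:-1]))
    n = len(triples)
    p_xyz = Counter(triples)
    p_yz = Counter((y, z) for _, y, z in triples)
    p_xy = Counter((x, y) for x, y, _ in triples)
    p_y = Counter(y for _, y, _ in triples)
    te = 0.0
    for (x, y, z), c in p_xyz.items():
        te += (c / n) * log2((c / p_yz[(y, z)]) / (p_xy[(x, y)] / p_y[y]))
    return te
```

A node that simply relays another node's past carries roughly one bit per sample of directed information, while statistically independent nodes carry close to zero; measures of this family underlie attempts to read out what a network node communicates, not just whether two nodes are correlated.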