
    Online Discrimination of Nonlinear Dynamics with Switching Differential Equations

    How can we recognise whether an observed person is walking or running? We consider a dynamic environment where observations (e.g. the posture of a person) are caused by different dynamic processes (walking or running) which are active one at a time and which may transition from one to another at any time. For this setup, switching dynamic models have been suggested previously, mostly for linear and nonlinear dynamics in discrete time. Motivated by basic principles of computation in the brain (dynamic, internal models), we suggest a model for switching nonlinear differential equations. The switching process in the model is implemented by a Hopfield network, and we use parametric dynamic movement primitives to represent arbitrary rhythmic motions. The model generates observed dynamics by linearly interpolating the primitives weighted by the switching variables, and it is constructed such that standard filtering algorithms can be applied. In two experiments with synthetic planar motion and a human motion capture data set, we show that inference with the unscented Kalman filter can successfully discriminate several dynamic processes online.
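
    The filtering setup lends itself to a compact illustration. The following is a minimal sketch, not the authors' implementation: the two rhythmic primitives are reduced to assumed frequencies w_walk and w_run, a renormalisation of the switching variables crudely stands in for the Hopfield dynamics, and the filterpy library supplies the unscented Kalman filter.

```python
# Minimal sketch of online discrimination of two rhythmic dynamics with a UKF.
# w_walk/w_run are hypothetical stand-ins for movement primitives.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

rng = np.random.default_rng(0)
dt, w_walk, w_run = 0.05, 2 * np.pi * 1.0, 2 * np.pi * 2.5  # assumed gait frequencies

def fx(x, dt):
    """State transition: the phase advances at the interpolated frequency;
    the switching variables s1, s2 drift slowly (driven by process noise Q)."""
    phase, s1, s2 = x
    w = s1 * w_walk + s2 * w_run            # linear interpolation of primitives
    return np.array([phase + w * dt, s1, s2])

def hx(x):
    """Observation: a 1-D rhythmic signal generated by the mixed dynamics."""
    return np.array([np.sin(x[0])])

points = MerweScaledSigmaPoints(n=3, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.0, 0.5, 0.5])           # initially uncertain which process is active
ukf.P = np.diag([0.1, 0.5, 0.5])
ukf.Q = np.diag([1e-4, 1e-3, 1e-3])         # slow drift lets s track transitions
ukf.R = np.array([[0.05]])

# Synthetic data: the person "runs" for 10 s, then "walks".
t = np.arange(400) * dt
true_w = np.where(t < 10.0, w_run, w_walk)
z = np.sin(np.cumsum(true_w) * dt) + 0.1 * rng.standard_normal(len(t))

for zi in z:
    ukf.predict()
    ukf.update(np.array([zi]))
    s = np.clip(ukf.x[1:], 0.0, None)
    ukf.x[1:] = s / (s.sum() + 1e-9)        # crude stand-in for Hopfield competition
print("final switching estimate (walk, run):", ukf.x[1:])
```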

    Free Energy and Dendritic Self-Organization

    In this paper, we pursue recent observations that, through selective dendritic filtering, single neurons respond to specific sequences of presynaptic inputs. We try to provide a principled and mechanistic account of this selectivity by applying a recent free-energy principle to a dendrite that is immersed in its neuropil or environment. We assume that neurons self-organize to minimize a variational free-energy bound on the self-information or surprise of the presynaptic inputs that are sampled. We model this as a selective pruning of dendritic spines that are expressed on a dendritic branch. This pruning occurs when postsynaptic gain falls below a threshold. Crucially, postsynaptic gain is itself optimized with respect to free energy. Pruning suppresses free energy as the dendrite selects presynaptic signals that conform to its expectations, specified by a generative model implicit in its intracellular kinetics. Not only does this provide a principled account of how neurons organize and selectively sample the myriad of potential presynaptic inputs they are exposed to, but it also connects the optimization of elemental neuronal (dendritic) processing to generic (surprise- or evidence-based) schemes in statistics and machine learning, such as Bayesian model selection and automatic relevance determination.
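
    A toy version of the pruning loop conveys the gist. The sketch below is my own illustration rather than the paper's variational scheme: each spine's gain is optimized by gradient descent on a squared prediction error plus a sparsity (complexity) cost, and spines whose gain falls below a threshold are permanently pruned, in the spirit of automatic relevance determination.

```python
# Illustrative free-energy-flavoured spine pruning (not the paper's scheme).
import numpy as np

rng = np.random.default_rng(0)
n_spines, n_obs = 20, 500
inputs = rng.standard_normal((n_obs, n_spines))       # presynaptic activity
# The dendrite's environment: only the first 3 inputs carry the signal.
signal = inputs[:, :3] @ np.array([1.0, 0.5, 0.8]) + 0.05 * rng.standard_normal(n_obs)

gain = np.ones(n_spines)                   # postsynaptic gain, one per spine
alive = np.ones(n_spines, dtype=bool)      # pruning mask
lr, complexity, threshold = 0.01, 0.05, 0.1

for epoch in range(500):
    g = gain * alive
    err = signal - inputs @ g              # prediction error (surprise proxy)
    # Gradient of 0.5*mean(err^2) + complexity*sum(|g|) w.r.t. the gains:
    grad = -inputs.T @ err / n_obs + complexity * np.sign(g)
    gain -= lr * grad * alive
    alive &= np.abs(gain) > threshold      # prune low-gain spines for good

print("surviving spines:", np.flatnonzero(alive))     # expect [0 1 2]
```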

    Bayesian sparsification for deep neural networks with Bayesian model reduction

    Deep learning's immense capabilities are often constrained by the complexity of its models, leading to an increasing demand for effective sparsification techniques. Bayesian sparsification for deep learning emerges as a crucial approach, facilitating the design of models that are both computationally efficient and competitive in terms of performance across various deep learning applications. The state of the art in Bayesian sparsification of deep neural networks combines structural shrinkage priors on model weights with an approximate inference scheme based on stochastic variational inference. However, model inversion of the full generative model is exceptionally computationally demanding, especially when compared to standard deep learning of point estimates. In this context, we advocate for the use of Bayesian model reduction (BMR) as a more efficient alternative for pruning model weights. As a generalization of the Savage-Dickey ratio, BMR allows post-hoc elimination of redundant model weights based on the posterior estimates under a straightforward (non-hierarchical) generative model. Our comparative study highlights the advantages of the BMR method relative to established approaches based on hierarchical horseshoe priors over model weights. We illustrate the potential of BMR across various deep learning architectures, from classical networks such as LeNet to modern frameworks such as Vision Transformers and MLP-Mixers.
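
    In the scalar Gaussian case the BMR score has a closed form. Given a posterior N(mu, v) obtained once under a prior N(0, v0), re-scoring the evidence under a reduced prior N(0, v0_red) changes the free energy by dF = 0.5*[ln(v0/v0_red) - ln(1 + v*a) - mu^2*a/(1 + v*a)], with a = 1/v0_red - 1/v0; as v0_red -> 0 this recovers the Savage-Dickey ratio. A minimal sketch in my notation (not the paper's code):

```python
# Bayesian model reduction for a single Gaussian weight: no refitting needed.
import numpy as np

def delta_F(mu, v, v0, v0_red):
    """Change in log model evidence when the prior variance of one weight
    is reduced from v0 to v0_red (generalized Savage-Dickey ratio)."""
    a = 1.0 / v0_red - 1.0 / v0                  # change in prior precision
    return 0.5 * (np.log(v0 / v0_red)
                  - np.log(1.0 + v * a)
                  - mu**2 * a / (1.0 + v * a))

# Example: two weights with posterior variance 0.01 under a N(0, 1) prior.
# A weight hovering near zero gains evidence when shrunk; a large one loses.
for mu in (0.02, 1.5):
    dF = delta_F(mu=mu, v=0.01, v0=1.0, v0_red=1e-6)
    print(f"mu={mu:+.2f}: dF={dF:+.2f} ->", "prune" if dF > 0 else "keep")
```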

    Perception and Hierarchical Dynamics

    In this paper, we suggest that perception could be modeled by assuming that sensory input is generated by a hierarchy of attractors in a dynamic system. We describe a mathematical model which exploits the temporal structure of rapid sensory dynamics to track the slower trajectories of their underlying causes. This model establishes a proof of concept that slowly changing neuronal states can encode the trajectories of faster sensory signals. We link this hierarchical account to recent developments in the perception of human action, in particular artificial speech recognition. We argue that these hierarchical models of dynamical systems are a plausible starting point for developing robust recognition schemes, because they capture critical temporal dependencies induced by deep hierarchical structure. We conclude by suggesting that a fruitful computational neuroscience approach may emerge from modeling perception as non-autonomous recognition dynamics enslaved by autonomous hierarchical dynamics in the sensorium.
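
    The generative side of such a hierarchy is easy to simulate. Below is a minimal sketch with parameter choices of my own (two Lorenz systems, the slow one running 16 times slower and setting the Rayleigh parameter of the fast one), meant only to show fast sensory dynamics unfolding under slowly changing hidden causes:

```python
# Two-level hierarchy of attractors: slow hidden causes enslave fast dynamics.
import numpy as np

def lorenz(state, rho, sigma=10.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, steps = 0.01, 20000
slow = np.array([1.0, 1.0, 25.0])    # hidden cause: evolves ~16x slower
fast = np.array([1.0, 1.0, 25.0])    # sensory level: generates observations
trace = np.empty(steps)

for t in range(steps):
    slow = slow + dt / 16.0 * lorenz(slow, rho=28.0)
    rho_fast = 24.0 + 0.2 * slow[2]           # slow state sets fast dynamics
    fast = fast + dt * lorenz(fast, rho=rho_fast)
    trace[t] = fast[0]                        # the fast "sensory" signal

# A recognition scheme would invert this hierarchy, using the temporal
# structure of `trace` to infer the slow trajectory behind rho_fast.
```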

    A Hierarchy of Time-Scales and the Brain

    In this paper, we suggest that cortical anatomy recapitulates the temporal hierarchy that is inherent in the dynamics of environmental states. Many aspects of brain function can be understood in terms of a hierarchy of temporal scales at which representations of the environment evolve. The lowest level of this hierarchy corresponds to fast fluctuations associated with sensory processing, whereas the highest levels encode slow contextual changes in the environment, under which faster representations unfold. First, we describe a mathematical model that exploits the temporal structure of fast sensory input to track the slower trajectories of their underlying causes. This model of sensory encoding or perceptual inference establishes a proof of concept that slowly changing neuronal states can encode the paths or trajectories of faster sensory states. We then review empirical evidence suggesting that a temporal hierarchy is recapitulated in the macroscopic organization of the cortex. This anatomic-temporal hierarchy provides a comprehensive framework for understanding cortical function: the specific time-scale that engages a cortical area can be inferred from its location along a rostro-caudal gradient, which reflects the anatomical distance from primary sensory areas. This is most evident in the prefrontal cortex, where complex functions can be explained as operations on representations of the environment that change slowly. The framework provides predictions about, and principled constraints on, cortical structure–function relationships, which can be tested by manipulating the time-scales of sensory input.

    Revealing human sensitivity to a latent temporal structure of changes

    Precisely timed behavior and accurate time perception play a critical role in our everyday lives, as our wellbeing and even survival can depend on well-timed decisions. Although the temporal structure of the world around us is essential for human decision making, we know surprisingly little about how the representation of the temporal structure of our everyday environment impacts decision making. How does the representation of temporal structure affect our ability to generate well-timed decisions? Here we address this question using a well-established dynamic probabilistic learning task. Using computational modeling, we found that human subjects' beliefs about temporal structure are reflected in their choices to either exploit their current knowledge or to explore novel options. The model-based analysis reveals large within-group and within-subject heterogeneity. To explain these results, we propose a normative model of how temporal structure is used in decision making, based on the semi-Markov formalism in the active inference framework. We discuss potential key applications of the presented approach to the fields of cognitive phenotyping and computational psychiatry.
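
    One ingredient of the semi-Markov formalism can be sketched in a few lines. In the toy example below (my construction, not the paper's model, with observation updates omitted for brevity), reversals are assumed to follow a Gamma dwell-time distribution, so the hazard of a switch grows with the time since the last inferred change and gradually inflates the value of exploring:

```python
# Hazard-driven exploration under an assumed Gamma dwell-time distribution.
import numpy as np
from scipy import stats

dwell = np.arange(1, 200)
p_dwell = stats.gamma.pdf(dwell, a=8.0, scale=3.0)    # assumed ~24-trial blocks
p_dwell /= p_dwell.sum()
survival = 1.0 - np.cumsum(p_dwell)                   # P(dwell > d)
hazard = p_dwell / np.maximum(survival + p_dwell, 1e-12)  # P(switch now | none yet)

q_good = 0.9               # belief that the currently favoured option pays off
for t in range(1, 40):
    h = hazard[t - 1]
    q_good = q_good * (1 - h) + (1 - q_good) * h      # semi-Markov transition
    p_explore = 1.0 - q_good                          # rising pressure to explore
    if t % 10 == 0:
        print(f"trial {t:2d}: hazard={h:.3f}  P(explore)={p_explore:.3f}")
```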

    Fast frequency modulation is encoded according to the listeners' expectations in the human subcortical auditory pathway

    Expectations aid and bias our perception. For instance, expected words are easier to recognise than unexpected words, particularly in noisy environments, and incorrect expectations can make us misunderstand our conversational partner. Expectations are combined with the output from the sensory pathways to form representations of auditory objects in the cerebral cortex. Previous literature has shown that expectations propagate further down to subcortical stations during the encoding of static pure tones. However, it is unclear whether expectations also drive the subcortical encoding of subtle dynamic elements of the acoustic signal that are not represented in the tonotopic axis. Here, we tested the hypothesis that subjective expectations drive the encoding of fast frequency modulation (FM) in the human subcortical auditory pathway. We used fMRI to measure neural responses in the human auditory midbrain (inferior colliculus) and thalamus (medial geniculate body). Participants listened to sequences of FM-sweeps for which they held different expectations based on the task instructions. We found robust evidence that the responses in auditory midbrain and thalamus encode the difference between the acoustic input and the subjective expectations of the listener. The results indicate that FM-sweeps are already encoded at the level of the human auditory midbrain and that the encoding is mainly driven by subjective expectations. We conclude that the subcortical auditory pathway is integrated into the cortical network of predictive processing and that expectations are used to optimise the encoding of fast dynamic elements of the acoustic signal. A.T. and K.v.K. are funded by the European Research Council (grant SENSOCOM 647051).
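
    The central claim, that subcortical responses scale with the mismatch between input and expectation, can be caricatured in a few lines. The sketch below (illustrative only, not the study's analysis) models the trial-wise response as Shannon surprise about the FM sweep direction under a task-induced expectation:

```python
# Toy expectation-mismatch response: surprise about FM sweep direction.
import numpy as np

rng = np.random.default_rng(1)
p_up = 0.8                                 # task-induced expectation: "mostly up-sweeps"
sweeps = rng.random(20) < 0.5              # actual stimulus sequence (True = up-sweep)
expectation = np.where(sweeps, p_up, 1 - p_up)
response = -np.log(expectation)            # expected sweeps -> small responses
print("mean response, expected vs. unexpected sweeps:",
      response[sweeps].mean(), response[~sweeps].mean())
```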