
    Larval food quantity affects the capacity of adult mosquitoes to transmit human malaria

    Adult traits of holometabolous insects are shaped by conditions experienced during larval development, which may in turn shape interactions between adult insect hosts and parasites. However, the larval ecology of insects that vector disease remains poorly understood. Here, we used Anopheles stephensi mosquitoes and the human malaria parasite Plasmodium falciparum to investigate whether larval conditions affect the capacity of adult mosquitoes to transmit malaria. We reared larvae in two groups: one group received a standard laboratory rearing diet, whereas the other received a reduced diet. Emerging adult females were then provided an infectious blood meal. We assessed mosquito longevity, parasite development rate and the prevalence of infectious mosquitoes over time. Reduced larval food increased adult mortality, delayed parasite development and slowed the rate at which parasites invaded the mosquito salivary glands, extending the time it took for mosquitoes to become infectious. Together, these effects increased the transmission potential of mosquitoes in the high-food regime by 260-330%. Such effects have not, to our knowledge, been shown previously for human malaria and highlight the importance of improving knowledge of larval ecology to better understand vector-borne disease transmission dynamics.
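    The interplay of survival and incubation delay described above can be made concrete with a standard vectorial-capacity-style term: with daily survival probability p and an extrinsic incubation period of n days, expected infectious mosquito-days scale as p**n / (1 - p). The survival probabilities and incubation periods below are illustrative assumptions, not the study's estimates.

```python
# Expected infectious mosquito-days under daily survival p and an
# n-day extrinsic incubation period (standard vectorial-capacity term).
# All numbers are illustrative, not the study's fitted values.
def infectious_days(p, n):
    return p**n / (1.0 - p)

high_food = infectious_days(p=0.90, n=12)  # better survival, faster parasite growth
low_food = infectious_days(p=0.87, n=14)   # worse survival, delayed salivary-gland invasion
print(round(high_food / low_food, 2))      # multiplicative advantage of the high-food regime
```

    Because survival enters as an exponent of the incubation period, even modest shifts in both quantities compound into a severalfold difference in transmission potential.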

    Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

    Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
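    A minimal sketch of the prediction/prediction-error scheme: a toy generative RNN produces a latent trajectory, and recognition corrects the one-step prediction with a weighted prediction error. The fixed scalar gain stands in for the paper's derived Bayesian update; weights, gain and noise level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
W = rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))  # toy generative weights (contracting)

def f(x):
    """One step of the generative RNN."""
    return np.tanh(W @ x)

# simulate the generative model and noisy observations of its state
T = 200
x = rng.normal(size=n)
xs, ys = [], []
for _ in range(T):
    x = f(x)
    xs.append(x)
    ys.append(x + 0.1 * rng.normal(size=n))

# recognition: prediction plus gain-weighted prediction error
k = 0.5                      # fixed error gain (stand-in for the Bayesian gain)
xh = np.zeros(n)
err_rrnn = err_naive = 0.0
for xt, yt in zip(xs, ys):
    pred = f(xh)
    xh = pred + k * (yt - pred)            # exchange of prediction and error messages
    err_rrnn += float(np.sum((xh - xt) ** 2))
    err_naive += float(np.sum((yt - xt) ** 2))

print(err_rrnn < err_naive)  # filtering through the generative model beats raw observations
```

    Even this crude gain already illustrates the benefit the abstract attributes to the rRNN: the state estimate inherits the generative dynamics and is therefore more robust to observation noise than the observations themselves.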

    Decision, Sensation, and Habituation: A Multi-Layer Dynamic Field Model for Inhibition of Return

    Inhibition of Return (IOR) is one of the most consistent and widely studied effects in experimental psychology. The effect refers to a delayed response to visual stimuli in a cued location after initial priming at that location. This article presents a dynamic field model for IOR. The model describes the evolution of three coupled activation fields. The decision field, inspired by the intermediate layer of the superior colliculus, receives endogenous input and input from a sensory field. The sensory field, inspired by earlier sensory processing, receives exogenous input. Habituation of the sensory field is implemented by a reciprocal coupling with a third field, the habituation field. The model generates IOR because, due to the habituation of the sensory field, the decision field receives a reduced target-induced input in cue-target-compatible situations. The model is consistent with single-unit recordings of neurons of monkeys that perform IOR tasks. Such recordings have revealed that IOR phenomena parallel the activity of neurons in the intermediate layer of the superior colliculus and that neurons in this layer receive reduced input in cue-target-compatible situations. The model is also consistent with behavioral data concerning temporal expectancy effects. In the discussion, the multi-layer dynamic field account of IOR is used to illustrate the broader view that behavior consists of a tuning of the organism to the environment that continuously and concurrently takes place at different spatiotemporal scales.
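    The three-field structure can be caricatured at a single location with three coupled scalar equations. All time constants, gains, stimulus timings and the decision threshold below are illustrative assumptions, not the model's parameters.

```python
def trial(cued, dt=0.005, t_end=30.0, thresh=0.55):
    """Zero-dimensional caricature of the three coupled fields at one location.

    Cue on during [0, 5), target on from t = 8; returns response latency
    (time for the decision variable to reach threshold after target onset).
    """
    u_dec = u_sen = h = 0.0
    tau, tau_h, g = 1.0, 8.0, 0.6  # field / habituation time constants and gain (assumed)
    t = 0.0
    while t < t_end:
        exo = (1.0 if t >= 8.0 else 0.0) + (1.0 if cued and t < 5.0 else 0.0)
        u_sen += dt * (-u_sen + exo - h) / tau   # sensory field, suppressed by habituation
        h += dt * (g * u_sen - h) / tau_h        # habituation slowly tracks sensory activity
        u_dec += dt * (-u_dec + u_sen) / tau     # decision field driven by the sensory field
        if t >= 8.0 and u_dec >= thresh:
            return t - 8.0
        t += dt
    return None

t_uncued, t_cued = trial(False), trial(True)
print(t_cued > t_uncued)  # cued location responds later: inhibition of return
```

    The cued trial responds later because habituation built up during the cue subtracts from the sensory drive to the decision variable, which is exactly the mechanism the abstract describes.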

    A parsimonious oscillatory model of handwriting

    We propose an oscillatory model that is theoretically parsimonious, empirically efficient and biologically plausible. Building on Hollerbach’s (Biol Cybern 39:139–156, 1981) model, our Parsimonious Oscillatory Model of Handwriting (POMH) overcomes the latter’s main shortcomings by making it possible to extract its parameters from the trace itself and by reinstating symmetry between the x and y coordinates. The benefit is a capacity to autonomously generate a smooth continuous trace that reproduces the dynamics of the handwriting movements through an extremely sparse model, whose efficiency matches that of other, more computationally expensive optimizing methods. Moreover, the model applies to 2D trajectories, irrespective of their shape, size, orientation and length. It is also independent of the end-effectors mobilized and of the writing direction.
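    The oscillatory idea behind Hollerbach-style models can be illustrated in a few lines: each coordinate is an oscillation, with a phase lag between x and y and a drift along the writing direction. Amplitudes, frequency, phase lag and drift speed below are arbitrary choices, and this is not the POMH parameter-extraction procedure.

```python
import numpy as np

# Hollerbach-style trace: two phase-lagged oscillations plus a horizontal drift.
t = np.arange(0.0, 2.0, 0.001)           # 2 s of writing sampled at 1 kHz
omega = 2 * np.pi * 5.0                  # 5 Hz stroke oscillation (assumed)
x = 0.4 * np.sin(omega * t) + 2.0 * t    # oscillation plus rightward drift
y = 0.3 * np.sin(omega * t + np.pi / 2)  # 90 degree phase lag between x and y

# pen velocity, used to check that the trace is smooth and continuous
vx, vy = np.gradient(x, t), np.gradient(y, t)
speed = np.hypot(vx, vy)
print(speed.min() > 0)  # the pen never stops: a continuous looping trace
```

    With these settings the trace is a drifting ellipse, i.e. cursive-like loops; changing amplitudes, phase lag and drift changes the letterform without changing the model's structure.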

    Time Scale Hierarchies in the Functional Organization of Complex Behaviors

    Traditional approaches to cognitive modelling generally portray cognitive events in terms of ‘discrete’ states (point attractor dynamics) rather than in terms of processes, thereby neglecting the time structure of cognition. In contrast, more recent approaches explicitly address this temporal dimension, but typically provide no entry points into cognitive categorization of events and experiences. To incorporate both aspects, we propose a framework for functional architectures. Our approach is grounded in the notion that arbitrarily complex (human) behaviour is decomposable into functional modes (elementary units), which we conceptualize as low-dimensional dynamical objects (structured flows on manifolds). The ensemble of modes at an agent’s disposal constitutes his/her functional repertoire. The modes may be subjected to additional dynamics (termed operational signals), in particular, instantaneous inputs, and a mechanism that sequentially selects a mode so that it temporarily dominates the functional dynamics. The inputs and selection mechanism act on faster and slower time scales, respectively, than that inherent to the modes. The dynamics across the three time scales are coupled via feedback, rendering the entire architecture autonomous. We illustrate the functional architecture in the context of serial behaviour, namely cursive handwriting. Subsequently, we investigate the possibility of recovering the contributions of functional modes and operational signals from the output, which appears to be possible only when examining the output phase flow (i.e., not from trajectories in phase space or time).
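    The three nested time scales can be sketched in a toy system: fast input pulses perturb the state, each functional mode has intermediate-time-scale dynamics (here, relaxation to a mode-specific goal), and a slow signal sequentially selects which mode dominates. All values are illustrative.

```python
import numpy as np

dt, T = 0.001, 12.0
ts = np.arange(0.0, T, dt)
goals = [-1.0, 1.0]          # each "mode" pulls the state toward its own attractor
x = 0.0
trace = []
for i, t in enumerate(ts):
    mode = int(t // 6.0) % 2                 # slow: sequential mode selection (6 s epochs)
    kick = 0.5 if i % 1000 == 0 else 0.0     # fast: brief input pulse once per second
    x += dt * (goals[mode] - x) / 0.1 + kick  # intermediate: mode dynamics, tau = 0.1 s
    trace.append(x)
trace = np.array(trace)
# late in each epoch the state sits near the selected mode's goal despite the pulses
print(round(float(trace[int(5.5 / dt)]), 1), round(float(trace[int(11.5 / dt)]), 1))
```

    Because the pulses decay on the mode's time scale and the selection changes far more slowly, each time scale leaves a distinct signature in the output, the separation the framework exploits.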

    Complex Processes from Dynamical Architectures with Time-Scale Hierarchy

    The idea that complex motor, perceptual, and cognitive behaviors are composed of smaller units, which are somehow brought into a meaningful relation, permeates the biological and life sciences. However, no principled framework defining the constituent elementary processes has been developed to date. Consequently, functional configurations (or architectures) relating elementary processes and external influences are mostly piecemeal formulations suitable to particular instances only. Here, we develop a general dynamical framework for distinct functional architectures characterized by the time-scale separation of their constituents and evaluate their efficiency. To that end, we build on the (phase) flow of a system, which prescribes the temporal evolution of its state variables. The phase flow topology allows for the unambiguous classification of qualitatively distinct processes, which we consider to represent the functional units or modes within the dynamical architecture. Using the example of a composite movement we illustrate how different architectures can be characterized by their degree of time-scale separation between the internal elements of the architecture (i.e. the functional modes) and external interventions. We reveal a tradeoff of the interactions between internal and external influences, which offers a theoretical justification for the efficient composition of complex processes out of non-trivial elementary processes or functional modes.
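    The role of phase-flow topology can be made concrete in one dimension: fixed points and their stability are read off directly from the flow f(x) = dx/dt, so flows with different fixed-point structure represent qualitatively distinct processes. This is a deliberately simple stand-in for the paper's framework.

```python
import numpy as np

def fixed_points(f, xs):
    """Locate roots of f by sign change; a root is stable where f decreases through zero."""
    fx = f(xs)
    out = []
    for i in range(len(xs) - 1):
        if fx[i] == 0 or fx[i] * fx[i + 1] < 0:
            x0 = 0.5 * (xs[i] + xs[i + 1])
            out.append((round(float(x0), 1), "stable" if fx[i] > fx[i + 1] else "unstable"))
    return out

xs = np.linspace(-2, 2, 4001)
mono = fixed_points(lambda x: -x, xs)        # monostable flow: one attractor at 0
bi = fixed_points(lambda x: x - x**3, xs)    # bistable flow: attractors at -1 and +1
print(mono, bi)
```

    The two flows can produce locally similar trajectories, yet their topologies (one attractor versus two attractors separated by a repeller) classify them as different functional modes.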

    Brain simulation as a cloud service: The Virtual Brain on EBRAINS

    The Virtual Brain (TVB) is now available as open-source services on the cloud research platform EBRAINS (ebrains.eu). It offers software for constructing, simulating and analysing brain network models including the TVB simulator; magnetic resonance imaging (MRI) processing pipelines to extract structural and functional brain networks; combined simulation of large-scale brain networks with small-scale spiking networks; automatic conversion of user-specified model equations into fast simulation code; simulation-ready brain models of patients and healthy volunteers; Bayesian parameter optimization in epilepsy patient models; data and software for mouse brain simulation; and extensive educational material. TVB cloud services facilitate reproducible online collaboration and discovery of data assets, models, and software embedded in scalable and secure workflows, a precondition for research on large cohort data sets, better generalizability, and clinical translation.
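    TVB exposes its own Python API; the sketch below is not that API but a minimal illustration of what a brain network model simulator computes: node dynamics coupled through a weighted connectome. Here the nodes are Kuramoto phase oscillators on a random connectivity matrix, and all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
W = rng.random((n, n)) * (rng.random((n, n)) < 0.3)  # sparse random "connectome" (assumed)
np.fill_diagonal(W, 0.0)
omega = rng.normal(2 * np.pi, 0.5, size=n)           # intrinsic node frequencies, about 1 Hz
theta = rng.uniform(0, 2 * np.pi, size=n)
dt, k = 0.001, 2.0                                   # time step and global coupling strength

for _ in range(5000):                                # 5 s of simulated network activity
    # entry [i, j] is sin(theta_j - theta_i), weighted by the connection W[i, j]
    coupling = (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + k * coupling)

# order parameter: network synchrony (0 = incoherent, 1 = fully phase-locked)
R = abs(np.exp(1j * theta).mean())
print(0.0 <= R <= 1.0)
```

    Real TVB workflows replace the random matrix with MRI-derived structural connectivity and the phase oscillators with neural mass models, but the structure of the computation is the same.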