3 research outputs found

    Population Dynamics and Long-Term Trajectory of Dendritic Spines

    Structural plasticity, characterized by the formation and elimination of synapses, plays a major role in learning and long-term memory formation in the brain. The majority of synapses in the neocortex occur between axonal boutons and dendritic spines; therefore, understanding the dynamics of dendritic spine growth and elimination can provide key insights into the mechanisms of structural plasticity. Beyond learning and memory formation, the connectivity of neural networks affects cognition, perception, and behavior. Unsurprisingly, psychiatric and neurological disorders such as schizophrenia and autism are accompanied by pathological alterations in spine morphology and synapse numbers. Hence, it is vital to develop a model to understand the mechanisms governing dendritic spine dynamics throughout the lifetime. Here, we applied the density-dependent Ricker population model to investigate the feasibility of ecological population concepts and their mathematical foundations in spine dynamics. The model includes “immigration,” based on filopodia-type transient spines, and we show theoretically how this effect can stabilize the spine population. For the long-term dynamics we employed a time-dependent carrying capacity based on the brain's metabolic energy allocation. The results show that the mathematical model can explain short-term fluctuations in spine density and also account for the long-term trends in the developing brain during synaptogenesis and pruning.
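    To make the modelling idea concrete, here is a minimal numerical sketch of a density-dependent Ricker update with an immigration term and an optional time-dependent carrying capacity. The function names, parameter values, and the shape of K(t) are illustrative assumptions, not quantities reported in the paper.

```python
import numpy as np

def ricker_with_immigration(n0, r, K, I, steps):
    """Simulate a density-dependent Ricker model with an immigration term.

    All parameter names and values are illustrative assumptions.
    n0    -- initial spine density
    r     -- intrinsic growth rate
    K     -- carrying capacity; a constant, or a callable K(t) for the
             time-dependent (metabolic) variant described in the abstract
    I     -- constant immigration term standing in for filopodia-type
             transient spines
    steps -- number of discrete time steps
    """
    n = np.empty(steps + 1)
    n[0] = n0
    for t in range(steps):
        k_t = K(t) if callable(K) else K
        # Ricker update plus immigration:
        # N_{t+1} = N_t * exp(r * (1 - N_t / K_t)) + I
        n[t + 1] = n[t] * np.exp(r * (1.0 - n[t] / k_t)) + I
    return n

# Hypothetical carrying capacity that rises during synaptogenesis and
# falls back toward a lower adult level during pruning.
def K_t(t):
    return 4.0 + 8.0 * np.exp(-((t - 40) / 25.0) ** 2)

trajectory = ricker_with_immigration(n0=1.0, r=0.5, K=K_t, I=0.1, steps=150)
```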

    Embedding Recurrent Neural Networks into Predator-Prey Models

    We study changes of coordinates that allow the embedding of the ordinary differential equations describing continuous-time recurrent neural networks into the differential equations describing predator-prey models, also called Lotka-Volterra systems. We do this by first transforming the neural network equations into quasi-monomial form (Brenig, 1988), in which the vector field of the dynamical system is expressed as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of…
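    As a rough guide to the pipeline described above, the schematic below lists the three canonical forms involved: a continuous-time recurrent neural network, the quasi-monomial form of Brenig (1988), and a Lotka-Volterra system. The notation (w, θ, A, B, λ, M) is generic and not necessarily the paper's own.

```latex
% Generic forms of the three systems involved in the embedding
% (notation is illustrative, not taken from the paper).
\begin{align}
  % Continuous-time recurrent neural network, with \sigma the hyperbolic
  % tangent or the logistic sigmoid
  \dot{x}_i &= -x_i + \sum_{j=1}^{n} w_{ij}\, \sigma(x_j) + \theta_i, \\
  % Quasi-monomial form (Brenig, 1988): the vector field is a linear
  % combination of products of powers of the variables
  \dot{u}_i &= u_i \sum_{j=1}^{m} A_{ij} \prod_{k=1}^{n} u_k^{B_{jk}}, \\
  % Lotka-Volterra (predator-prey) system of higher dimension N, whose
  % first variables reproduce the behavior of the original network
  \dot{y}_i &= y_i \Bigl( \lambda_i + \sum_{j=1}^{N} M_{ij}\, y_j \Bigr).
\end{align}
```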