Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience
This essay is presented with two principal objectives in mind: first, to
document the prevalence of fractals at all levels of the nervous system, giving
credence to the notion of their functional relevance; and second, to draw
attention to the as yet still unresolved issues of the detailed relationships
among power law scaling, self-similarity, and self-organized criticality. As
regards criticality, I will document that it has become a pivotal reference
point in neurodynamics. Furthermore, I will emphasize the not yet fully
appreciated significance of allometric control processes. For dynamic fractals,
I will assemble reasons for attributing to them the capacity to adapt task
execution to contextual changes across a range of scales. The final section
consists of general reflections on the implications of the reviewed data, and
identifies what appear to be issues of fundamental importance for future
research in the rapidly evolving topic of this review.
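As a purely illustrative aside (not drawn from the essay itself), the power-law scaling mentioned above is usually diagnosed by checking for a straight line on log-log axes, since y = c * x^(-alpha) implies log y = log c - alpha * log x. A minimal numpy sketch, with made-up data:

```python
import numpy as np

# Hypothetical example: recover the exponent of a noise-free power law
# y = c * x**(-alpha) via a least-squares fit in log-log coordinates.
x = np.logspace(0, 3, 200)          # scales spanning three decades
alpha_true = 1.5
y = 2.0 * x ** (-alpha_true)        # synthetic power-law data

# log y = log c - alpha * log x, so the slope of the fit is -alpha
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
alpha_est = -slope
```

Real neural data are noisy and span limited scale ranges, so in practice more careful estimators (e.g. detrended fluctuation analysis) are preferred over a raw log-log fit.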
Model reduction for the material point method via an implicit neural representation of the deformation map
This work proposes a model-reduction approach for the material point method
on nonlinear manifolds. Our technique approximates the kinematics by
approximating the deformation map using an implicit neural representation that
restricts deformation trajectories to reside on a low-dimensional manifold. By
explicitly approximating the deformation map, its spatiotemporal gradients --
in particular the deformation gradient and the velocity -- can be computed via
analytical differentiation. In contrast to typical model-reduction techniques
that construct a linear or nonlinear manifold to approximate the (finite number
of) degrees of freedom characterizing a given spatial discretization, the use
of an implicit neural representation enables the proposed method to approximate
the deformation map. This allows the kinematic
approximation to remain agnostic to the discretization. Consequently, the
technique supports dynamic discretizations -- including resolution changes --
during the course of the online reduced-order-model simulation.
To generate dynamics for the generalized coordinates, we propose a
family of projection techniques. At each time step, these techniques: (1)
Calculate full-space kinematics at quadrature points, (2) Calculate the
full-space dynamics for a subset of `sample' material points, and (3) Calculate
the reduced-space dynamics by projecting the updated full-space position and
velocity onto the low-dimensional manifold and tangent space, respectively. We
achieve significant computational speedup via hyper-reduction that ensures all
three steps execute on only a small subset of the problem's spatial domain.
Large-scale numerical examples with millions of material points illustrate the
method's ability to achieve an order-of-magnitude computational-cost saving
with negligible errors.
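The three-step projection described in this abstract can be caricatured in a toy numpy sketch (this is not the paper's implementation; the decoder g, its Jacobian, and the dynamics update are all invented stand-ins):

```python
import numpy as np

def g(q):
    # hypothetical nonlinear decoder: full-space positions from latent q
    return np.concatenate([q, q ** 2])

def jacobian_g(q):
    # analytical Jacobian of g, giving the tangent space of the manifold at q
    n = q.size
    return np.vstack([np.eye(n), 2.0 * np.diag(q)])

q = np.array([0.5, -0.2])           # generalized (reduced) coordinates

# (1) full-space kinematics: decode current material-point positions
x = g(q)

# (2) full-space dynamics at "sample" points: a toy explicit update
dt, v = 0.1, np.ones_like(x)
x_new = x + dt * v

# (3) reduced-space dynamics: project the full-space update back onto the
# manifold via least squares against the tangent space at q
J = jacobian_g(q)
dq, *_ = np.linalg.lstsq(J, x_new - x, rcond=None)
q_new = q + dq
```

Hyper-reduction, in this picture, would evaluate steps (1)-(3) only at a small subset of quadrature and material points rather than over the whole domain.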
Learning low-dimensional feature dynamics using convolutional recurrent autoencoders
Model reduction of high-dimensional dynamical systems alleviates computational burdens faced in various tasks from design optimization to model predictive control. One popular model reduction approach is based on projecting the governing equations onto a subspace spanned by basis functions obtained from the compression of a dataset of solution snapshots. However, this method is intrusive, since the projection requires access to the system operators. Further, some systems may require special treatment of nonlinearities to ensure computational efficiency or additional modeling to preserve stability. In this work we propose a deep learning-based strategy for nonlinear model reduction that is inspired by projection-based model reduction, where the idea is to identify some optimal low-dimensional representation and evolve it in time. Our approach constructs a modular model consisting of a deep convolutional autoencoder and a modified LSTM network. The deep convolutional autoencoder returns a low-dimensional representation in terms of coordinates on some expressive nonlinear data-supporting manifold. The dynamics on this manifold are then modeled by the modified LSTM network in a computationally efficient manner. An offline training strategy that exploits the model modularity is also developed. We demonstrate our model on three illustrative examples, each highlighting the model's performance in prediction tasks for systems with large parameter variations and its stability in long-term prediction.
Regulation of spike timing in visual cortical circuits
A train of action potentials (a spike train) can carry information in both the average firing rate and the pattern of spikes in the train. But can such a spike-pattern code be supported by cortical circuits? Neurons in vitro produce a spike pattern in response to the injection of a fluctuating current. However, cortical neurons in vivo are modulated by local oscillatory neuronal activity and by top-down inputs. In a cortical circuit, precise spike patterns thus reflect the interaction between internally generated activity and sensory information encoded by input spike trains. We review the evidence for precise and reliable spike timing in the cortex and discuss its computational role.
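The rate-versus-pattern distinction this review opens with can be made concrete with a toy example (the spike times and statistic below are invented for illustration): two trains with identical mean rate are indistinguishable to a rate code, but a timing-sensitive statistic separates them.

```python
import numpy as np

# Two spike trains over a 1 s window with the same spike count (same mean
# rate) but very different temporal patterns: regular firing vs. two bursts.
regular = np.arange(0.05, 1.0, 0.1)                  # 10 evenly spaced spikes
burst = np.concatenate([np.linspace(0.10, 0.19, 5),
                        np.linspace(0.80, 0.89, 5)]) # 10 spikes in two bursts

rate_regular = len(regular) / 1.0                    # spikes per second
rate_burst = len(burst) / 1.0                        # identical to the above

def isi_cv(spike_times):
    # coefficient of variation of inter-spike intervals:
    # near 0 for regular firing, large for bursty firing
    isi = np.diff(spike_times)
    return isi.std() / isi.mean()
```

A rate code reports 10 Hz for both trains; the ISI coefficient of variation, one simple timing-sensitive measure, is near zero for the regular train and well above one for the bursty train.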