
    Graph Dynamical Networks for Unsupervised Learning of Atomic Scale Dynamics in Materials

    Understanding the dynamical processes that govern the performance of functional materials is essential for designing next-generation materials to tackle global energy and environmental challenges. Many of these processes involve the dynamics of individual atoms or small molecules in condensed phases, e.g. lithium ions in electrolytes, water molecules in membranes, molten atoms at interfaces, etc., which are difficult to understand due to the complexity of local environments. In this work, we develop graph dynamical networks, an unsupervised learning approach for understanding atomic scale dynamics in arbitrary phases and environments from molecular dynamics simulations. We show that important dynamical information, which is difficult to obtain otherwise, can be learned for various multi-component amorphous material systems. With the large amounts of molecular dynamics data generated every day in nearly every aspect of materials design, this approach provides a broadly useful, automated tool for understanding atomic scale dynamics in material systems.
    Comment: 25 + 7 pages, 5 + 3 figures
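    The graph dynamical networks described above learn states end-to-end from raw simulation data. As a much simpler illustration of the same underlying goal, extracting dynamical information from a molecular dynamics trajectory, the sketch below estimates a Markov transition matrix from an already-discretized state sequence. All names and the toy trajectory are hypothetical, not taken from the paper.

```python
import numpy as np

def estimate_transition_matrix(labels, n_states, lag=1):
    """Count lagged transitions between discrete states and row-normalize.

    This is the classical Markov-state-model estimate, a hand-rolled
    stand-in for the learned state decomposition in the paper.
    """
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-lag], labels[lag:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # leave unvisited states as zero rows
    return counts / row_sums

# toy trajectory hopping between two states (purely illustrative)
traj = [0, 0, 1, 1, 0, 1, 0, 0, 1]
T = estimate_transition_matrix(traj, n_states=2)  # rows sum to 1
```

    Relaxation timescales of the dynamics can then be read off the eigenvalues of `T`, which is the kind of dynamical information the paper's approach extracts automatically.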

    Kernel methods for detecting coherent structures in dynamical data

    We illustrate relationships between classical kernel-based dimensionality reduction techniques and eigendecompositions of empirical estimates of reproducing kernel Hilbert space (RKHS) operators associated with dynamical systems. In particular, we show that kernel canonical correlation analysis (CCA) can be interpreted in terms of kernel transfer operators and that it can be obtained by optimizing the variational approach for Markov processes (VAMP) score. As a result, we show that coherent sets of particle trajectories can be computed by kernel CCA. We demonstrate the efficiency of this approach with several examples, namely the well-known Bickley jet, ocean drifter data, and a molecular dynamics problem with a time-dependent potential. Finally, we propose a straightforward generalization of dynamic mode decomposition (DMD) called coherent mode decomposition (CMD). Our results provide a generic machine learning approach to the computation of coherent sets with an objective score that can be used for cross-validation and the comparison of different methods.
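    CMD itself is introduced in the paper; its starting point, standard exact DMD, fits in a few lines of NumPy. The sketch below finds the eigenpairs of the best-fit linear map between snapshot matrices; the toy linear system is an illustrative assumption, not the paper's data.

```python
import numpy as np

def dmd(X, Y):
    """Exact DMD: eigenpairs of the least-squares linear map A with Y ≈ A X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s  # A projected onto POD modes
    eigvals, W = np.linalg.eig(A_tilde)
    modes = U @ W  # projected DMD modes
    return eigvals, modes

# toy snapshots generated by a known linear map (hypothetical test data)
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 30))
A = np.array([[0.9, 0.1], [0.0, 0.8]])
eigvals, modes = dmd(X, A @ X)  # recovers A's spectrum {0.9, 0.8}
```

    CMD generalizes this setup to pairs of observables from two different function spaces, with the VAMP score serving as the objective for model comparison.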

    Generative learning for nonlinear dynamics

    Modern generative machine learning models demonstrate a surprising ability to create realistic outputs far beyond their training data, such as photorealistic artwork, accurate protein structures, or conversational text. These successes suggest that generative models learn to effectively parametrize and sample arbitrarily complex distributions. Beginning half a century ago, foundational works in nonlinear dynamics used tools from information theory to infer properties of chaotic attractors from time series, motivating the development of algorithms for parametrizing chaos in real datasets. In this perspective, we aim to connect these classical works to emerging themes in large-scale generative statistical learning. We first consider classical attractor reconstruction, which mirrors constraints on latent representations learned by state space models of time series. We next revisit early efforts to use symbolic approximations to compare minimal discrete generators underlying complex processes, a problem relevant to modern efforts to distill and interpret black-box statistical models. Emerging interdisciplinary works bridge nonlinear dynamics and learning theory, such as operator-theoretic methods for complex fluid flows, or detection of broken detailed balance in biological datasets. We anticipate that future machine learning techniques may revisit other classical concepts from nonlinear dynamics, such as transinformation decay and complexity-entropy tradeoffs.
    Comment: 23 pages, 4 figures
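    The classical attractor reconstruction mentioned in the abstract is typically done via Takens-style delay embedding: a scalar time series is lifted to a higher-dimensional space by stacking time-shifted copies of itself. A minimal sketch, with a hypothetical function name and a toy sine series standing in for real data:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens-style delay embedding of a scalar series.

    Returns an array of shape (len(x) - (dim - 1) * tau, dim), where row k
    is (x[k], x[k + tau], ..., x[k + (dim - 1) * tau]).
    """
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 8 * np.pi, 400)
emb = delay_embed(np.sin(t), dim=2, tau=25)  # traces out a closed loop
```

    For a chaotic series, choosing the embedding dimension and delay (e.g. via false nearest neighbors and the first minimum of the mutual information) recovers an attractor diffeomorphic to the original one, which is the constraint the abstract compares to latent representations in state space models.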

    Observability and Synchronization of Neuron Models

    Observability is the property that enables one to distinguish two different locations in n-dimensional state space from a reduced number of measured variables, usually just one. In high-dimensional systems it is therefore important to make sure that the variable recorded for the analysis conveys good observability of the system dynamics. In the case of networks composed of neuron models, the observability of the network depends nontrivially on the observability of the node dynamics and on the topology of the network. The aim of this paper is twofold. First, a study of observability is conducted on four well-known neuron models by computing three different observability coefficients. This not only clarifies the observability properties of the models but also shows the limits of applicability of each type of coefficient in the context of such models. Second, a multivariate singular spectrum analysis (M-SSA) is performed to detect phase synchronization in networks composed of neuron models. This tool, to the best of the authors' knowledge, has not been used in the context of networks of neuron models. It is shown that it is possible to detect phase synchronization (i) without having to measure all the state variables, but only one from each node, and (ii) without having to estimate the phase.
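    The paper computes observability coefficients for nonlinear neuron models; the underlying idea is easiest to see in the linear Kalman rank test, sketched below for a hypothetical two-state system (not one of the paper's neuron models). Which variable is measured determines whether the full state can be reconstructed.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, CA^2, ..., CA^(n-1): the Kalman observability matrix.

    The pair (A, C) is observable iff this matrix has full column rank n.
    """
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# toy 2-state system, observed through one variable at a time
A = np.array([[1.0, 1.0], [0.0, 1.0]])
C1 = np.array([[1.0, 0.0]])  # measure x1
C2 = np.array([[0.0, 1.0]])  # measure x2
rank_full = np.linalg.matrix_rank(observability_matrix(A, C1))  # 2: observable
rank_def = np.linalg.matrix_rank(observability_matrix(A, C2))   # 1: not observable
```

    Measuring x1 works because x2 feeds into x1's dynamics, whereas x2 evolves independently of x1, so measuring x2 alone leaves x1 invisible. The paper's coefficients extend this binary rank test to graded, symbolically or numerically computed measures suited to nonlinear neuron models.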