Graph Dynamical Networks for Unsupervised Learning of Atomic Scale Dynamics in Materials
Understanding the dynamical processes that govern the performance of
functional materials is essential for the design of next generation materials
to tackle global energy and environmental challenges. Many of these processes
involve the dynamics of individual atoms or small molecules in condensed
phases, e.g. lithium ions in electrolytes, water molecules in membranes, molten
atoms at interfaces, etc., which are difficult to understand due to the
complexity of local environments. In this work, we develop graph dynamical
networks, an unsupervised learning approach for understanding atomic scale
dynamics in arbitrary phases and environments from molecular dynamics
simulations. We show that important dynamical information, which is difficult
to obtain otherwise, can be learned for various multi-component amorphous
material systems. With the large amounts of molecular dynamics data generated
every day in nearly every aspect of materials design, this approach provides a
broadly useful, automated tool to understand atomic scale dynamics in material
systems.
Comment: 25 + 7 pages, 5 + 3 figures
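The paper's graph dynamical networks are not reproduced here, but the general idea of extracting slow dynamics from simulation trajectories can be illustrated with a much simpler classical baseline: estimating a Markov transition matrix from a discretized trajectory. The two-state toy trajectory and all names below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def transition_matrix(labels, n_states, lag=1):
    """Row-stochastic transition matrix estimated from a discrete trajectory."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(labels[:-lag], labels[lag:]):
        counts[i, j] += 1
    # Normalize rows; guard against unvisited states
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0
    return counts / rows

# Toy trajectory hopping between two metastable states
rng = np.random.default_rng(0)
traj, state = [], 0
for _ in range(10_000):
    traj.append(state)
    if rng.random() < 0.05:   # rare hops mimic slow dynamics
        state = 1 - state
T = transition_matrix(np.array(traj), n_states=2)
print(T)  # diagonal entries near 0.95
```

The diagonal dominance of the estimated matrix reflects the separation of fast and slow time scales that methods like the one above, and far more expressive ones, aim to capture.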
Statistical Machine Learning Methods for High-dimensional Neural Population Data Analysis
Advances in recording techniques have been producing increasingly complex neural recordings, posing significant challenges for data analysis. This thesis discusses novel statistical methods for analyzing high-dimensional neural data. Part one discusses two extensions of state space models tailored to neural data analysis. First, we propose using a flexible count data distribution family in the observation model to faithfully capture over-dispersion and under-dispersion of the neural observations. Second, we incorporate nonlinear observation models into state space models to improve the flexibility of the model and obtain a more concise representation of the data. For both extensions, novel variational inference techniques are developed for model fitting, and simulated and real experiments show the advantages of our extensions. Part two discusses a fast region of interest (ROI) detection method for large-scale calcium imaging data based on structured matrix factorization. Part three discusses a method for sampling from a maximum entropy distribution with complicated constraints, which is useful for hypothesis testing in neural data analysis and many other applications related to the maximum entropy formulation. We conclude the thesis with a discussion and directions for future work.
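The over-dispersion targeted in part one can be seen in a minimal sketch: a linear-Gaussian latent state driving Poisson observations through a log link yields marginal counts whose variance exceeds their mean. This doubly stochastic toy model is only an illustration, not the thesis's flexible count distribution family.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
# Latent AR(1) state (a minimal linear state space model)
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.95 * x[t - 1] + 0.2 * rng.standard_normal()
# Poisson observation model with a log link
rate = np.exp(0.5 + x)
y = rng.poisson(rate)
# Fano factor (variance / mean); marginal counts from a doubly
# stochastic Poisson process are over-dispersed, i.e. greater than 1
fano = y.var() / y.mean()
print(round(fano, 2))
```

A plain Poisson model forces the Fano factor to 1, which is why richer count families or latent-state structure are needed to match real neural spike counts.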
Kernel methods for detecting coherent structures in dynamical data
We illustrate relationships between classical kernel-based dimensionality
reduction techniques and eigendecompositions of empirical estimates of
reproducing kernel Hilbert space (RKHS) operators associated with dynamical
systems. In particular, we show that kernel canonical correlation analysis
(CCA) can be interpreted in terms of kernel transfer operators and that it can
be obtained by optimizing the variational approach for Markov processes (VAMP)
score. As a result, we show that coherent sets of particle trajectories can be
computed by kernel CCA. We demonstrate the efficiency of this approach with
several examples, namely the well-known Bickley jet, ocean drifter data, and a
molecular dynamics problem with a time-dependent potential. Finally, we propose
a straightforward generalization of dynamic mode decomposition (DMD) called
coherent mode decomposition (CMD). Our results provide a generic machine
learning approach to the computation of coherent sets with an objective score
that can be used for cross-validation and the comparison of different methods.
Generative learning for nonlinear dynamics
Modern generative machine learning models demonstrate surprising ability to
create realistic outputs far beyond their training data, such as photorealistic
artwork, accurate protein structures, or conversational text. These successes
suggest that generative models learn to effectively parametrize and sample
arbitrarily complex distributions. Beginning half a century ago, foundational
works in nonlinear dynamics used tools from information theory to infer
properties of chaotic attractors from time series, motivating the development
of algorithms for parametrizing chaos in real datasets. In this perspective, we
aim to connect these classical works to emerging themes in large-scale
generative statistical learning. We first consider classical attractor
reconstruction, which mirrors constraints on latent representations learned by
state space models of time series. We next revisit early efforts to use
symbolic approximations to compare minimal discrete generators underlying
complex processes, a problem relevant to modern efforts to distill and
interpret black-box statistical models. Emerging interdisciplinary works bridge
nonlinear dynamics and learning theory, such as operator-theoretic methods for
complex fluid flows, or detection of broken detailed balance in biological
datasets. We anticipate that future machine learning techniques may revisit
other classical concepts from nonlinear dynamics, such as transinformation
decay and complexity-entropy tradeoffs.
Comment: 23 pages, 4 figures
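The classical attractor reconstruction discussed above is delay-coordinate (Takens) embedding: a scalar observable is lifted into a vector of time-lagged copies of itself. A minimal sketch with an assumed toy signal:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay-coordinate embedding of a scalar time series:
    each row is (x[t], x[t + tau], ..., x[t + (dim - 1) * tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Scalar observable from an oscillation; the 2-D embedding recovers the loop
t = np.linspace(0, 20 * np.pi, 4000)
x = np.sin(t)
emb = delay_embed(x, dim=2, tau=50)
print(emb.shape)  # (3950, 2)
```

The same constraint, that the latent coordinates must be a function of a window of past observations, reappears in the state space models of time series mentioned in the abstract.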
Observability and Synchronization of Neuron Models
Observability is the property that makes it possible to distinguish two
different locations in state space from a reduced number of measured
variables, usually just one. In high-dimensional systems it is therefore
important to make sure that the variable recorded to perform the analysis
conveys good observability of the system dynamics. In the case of networks
composed of neuron models, the observability of the network depends
nontrivially on the observability of the node dynamics and on the topology of
the network. The aim of this paper is twofold. First, a study of observability
is conducted using four well-known neuron models by computing three different
observability coefficients. This not only clarifies observability properties of
the models but also shows the limitations of applicability of each type of
coefficient in the context of such models. Second, a multivariate singular
spectrum analysis (M-SSA) is performed to detect phase synchronization in
networks composed of neuron models. This tool, to the best of the authors'
knowledge, has not previously been used in the context of networks of neuron
models. It is shown that it is possible to detect phase synchronization (i)
without having to measure all the state variables, but only one from each
node, and (ii) without having to estimate the phase.
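The core of M-SSA can be sketched compactly: delay-embed each measured channel, stack the blocks into one trajectory matrix, and inspect its spectrum; a near-degenerate pair of leading eigenvalues signals a shared oscillatory mode across channels. The noisy toy oscillators below are an illustration, not the paper's neuron networks.

```python
import numpy as np

def mssa_spectrum(X, window):
    """Eigenvalue spectrum of the M-SSA trajectory matrix.

    X has shape (T, channels). Each channel is delay-embedded with the
    given window and the blocks are stacked column-wise; near-equal
    leading eigenvalue pairs indicate a shared oscillation."""
    T, c = X.shape
    n = T - window + 1
    blocks = [np.column_stack([X[i:i + n, ch] for i in range(window)])
              for ch in range(c)]
    traj = np.hstack(blocks)               # shape (n, c * window)
    s = np.linalg.svd(traj, compute_uv=False)
    return s ** 2 / n                      # lag-covariance eigenvalues

# Two noisy oscillators sharing a common rhythm, one phase-shifted
t = np.linspace(0, 40 * np.pi, 2000)      # period of about 100 samples
rng = np.random.default_rng(2)
X = np.c_[np.sin(t), np.sin(t + 0.3)] + 0.05 * rng.standard_normal((2000, 2))
ev = mssa_spectrum(X, window=100)
# The top two eigenvalues form a near-degenerate pair (one shared mode)
print(round(ev[0] / ev[1], 3))
```

Because only one scalar per node enters the trajectory matrix, this style of analysis fits the paper's point that phase synchronization can be detected without measuring all state variables or explicitly estimating a phase.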