An Emergent Space for Distributed Data with Hidden Internal Order through Manifold Learning
Manifold-learning techniques are routinely used in mining complex
spatiotemporal data to extract useful, parsimonious data
representations/parametrizations; these are, in turn, useful in nonlinear model
identification tasks. We focus here on the case of time series data that can
ultimately be modelled as a spatially distributed system (e.g. a partial
differential equation, PDE), but where we do not know the space in which this
PDE should be formulated. Hence, even the spatial coordinates of the
distributed system must themselves be identified (must emerge) from the data
mining process. We will first validate this emergent space reconstruction for
time series sampled without space labels in known PDEs; this brings up the
issue of observability of physical space from temporal observation data, and
the transition from spatially resolved to lumped (order-parameter-based)
representations by tuning the scale of the data mining kernels. We will then
present actual emergent space discovery illustrations. Our illustrative
examples include chimera states (states of coexisting coherent and incoherent
dynamics), and chaotic as well as quasiperiodic spatiotemporal dynamics,
arising in partial differential equations and/or in heterogeneous networks. We
also discuss how data-driven spatial coordinates can be extracted in ways
invariant to the nature of the measuring instrument. Such gauge-invariant data
mining can go beyond the fusion of heterogeneous observations of the same
system, to the possible matching of apparently different systems.
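As a concrete, hypothetical illustration of this kind of pipeline, the sketch below applies a plain diffusion-maps construction to time series whose spatial labels have been shuffled away; the kernel-scale heuristic and the toy traveling-front data are my assumptions, not the authors' setup.

```python
# Minimal sketch (not the authors' code): an "emergent" spatial coordinate for
# unlabeled time series via diffusion maps. One row of X per measurement
# channel (space label unknown), one column per time point.
import numpy as np

def emergent_coordinate(X, eps=None, n_coords=1):
    # Pairwise Euclidean distances between whole time series.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    if eps is None:
        eps = np.median(D) ** 2                       # heuristic kernel scale
    K = np.exp(-D ** 2 / eps)                         # Gaussian kernel
    q = K.sum(axis=1)
    K_tilde = K / np.outer(q, q)                      # density normalization
    P = K_tilde / K_tilde.sum(axis=1, keepdims=True)  # row-stochastic matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    # Leading nontrivial eigenvectors serve as emergent spatial coordinates.
    return evecs.real[:, order[1:1 + n_coords]]

# Toy example: channels of a traveling front, shuffled to hide their order in x.
x = np.linspace(0.0, 10.0, 64)
t = np.linspace(0.0, 12.0, 200)
U = np.tanh((x[:, None] - t[None, :]) / 0.5)          # u(x, t), one row per channel
perm = np.random.permutation(64)
coord = emergent_coordinate(U[perm])                  # approximately monotone in the hidden x
```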
Some of the variables, some of the parameters, some of the times, with some physics known: Identification with partial information
Experimental data often comprises variables measured independently, at
different sampling rates (non-uniform Δt between successive
measurements); at a given time point, only a subset of all variables may
be sampled. Approaches to identifying dynamical systems from such data
typically use interpolation, imputation or subsampling to reorganize or modify
the training data prior to learning. Partial physical knowledge may
also be available (accurately or approximately), and
data-driven techniques can complement this knowledge. Here we exploit neural
network architectures based on numerical integration methods and physical knowledge to identify the right-hand side of the underlying
governing differential equations. Iterates of such neural-network models allow
for learning from data sampled at arbitrary time points, without data
modification. Importantly, we integrate the network with available partial
physical knowledge in "physics informed gray-boxes"; this enables learning
unknown kinetic rates or microbial growth functions while simultaneously
estimating experimental parameters.
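A minimal sketch of such an integrator-templated, physics-informed ("gray-box") model, assuming a forward-Euler template and a hypothetical known_physics term; PyTorch, the layer sizes, and the toy linear-decay physics are my assumptions, not the paper's architecture.

```python
# Minimal sketch: a forward-Euler template whose right-hand side combines a
# known physics term with a neural-network correction ("gray box"). Training
# pairs (x_n, x_{n+1}, dt_n) may have non-uniform dt_n.
import torch
import torch.nn as nn

def known_physics(x):
    # Assumed partially known kinetics, e.g. linear decay of the first state.
    out = torch.zeros_like(x)
    out[:, 0] = -0.5 * x[:, 0]
    return out

class GrayBoxRHS(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))
    def forward(self, x):
        return known_physics(x) + self.net(x)    # known part + learned residual

def euler_step(rhs, x, dt):
    return x + dt * rhs(x)                       # forward-Euler template

rhs = GrayBoxRHS(dim=2)
opt = torch.optim.Adam(rhs.parameters(), lr=1e-3)

def train_step(x_now, x_next, dt):
    # x_now, x_next: (batch, dim); dt: (batch,) with per-sample time steps.
    pred = euler_step(rhs, x_now, dt[:, None])
    loss = ((pred - x_next) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```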
Learning effective stochastic differential equations from microscopic simulations: combining stochastic numerics and deep learning
We identify effective stochastic differential equations (SDEs) for coarse
observables of fine-grained particle- or agent-based simulations; these SDEs
then provide coarse surrogate models of the fine-scale dynamics. We approximate
the drift and diffusivity functions in these effective SDEs through neural
networks, which can be thought of as effective stochastic ResNets. The loss
function is inspired by, and embodies, the structure of established stochastic
numerical integrators (here, Euler-Maruyama and Milstein); our approximations
can thus benefit from error analysis of these underlying numerical schemes.
They also lend themselves naturally to "physics-informed" gray-box
identification when approximate coarse models, such as mean field equations,
are available. Our approach does not require long trajectories, works on
scattered snapshot data, and is designed to naturally handle different time
steps per snapshot. We consider both the case where the coarse collective
observables are known in advance, as well as the case where they must be found
in a data-driven manner.
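To make the Euler-Maruyama-inspired loss concrete, here is a minimal one-dimensional sketch under my own assumptions (PyTorch, network sizes, a softplus positivity constraint on the diffusivity); it illustrates the general idea rather than the authors' implementation.

```python
# Neural-network drift and diffusivity trained with an Euler-Maruyama
# likelihood on snapshot pairs (x_n, x_{n+1}) separated by possibly
# different time steps h_n.
import torch
import torch.nn as nn

class NeuralSDE1D(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))
        self.diff_raw = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                      nn.Linear(hidden, 1))

    def diffusivity(self, x):
        return nn.functional.softplus(self.diff_raw(x)) + 1e-6   # sigma(x) > 0

    def em_nll(self, x0, x1, h):
        # Under Euler-Maruyama, x1 | x0 ~ N(x0 + f(x0) h, sigma(x0)^2 h).
        mean = x0 + self.drift(x0) * h
        var = self.diffusivity(x0) ** 2 * h
        return 0.5 * (torch.log(2 * torch.pi * var)
                      + (x1 - mean) ** 2 / var).mean()

model = NeuralSDE1D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x0, x1, h):            # tensors of shape (batch, 1)
    loss = model.em_nll(x0, x1, h)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```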
LOCA: LOcal Conformal Autoencoder for standardized data coordinates
We propose a deep-learning-based method for obtaining standardized data
coordinates from scientific measurements. Data observations are modeled as
samples from an unknown, non-linear deformation of an underlying Riemannian
manifold, which is parametrized by a few normalized latent variables. By
leveraging a repeated measurement sampling strategy, we present a method for
learning an embedding, into a Euclidean space, that is isometric to the latent
variables of the manifold. These data coordinates, being invariant under smooth
changes of variables, enable matching between different instrumental
observations of the same phenomenon. Our embedding is obtained using a LOcal
Conformal Autoencoder (LOCA), an algorithm that constructs an embedding to
rectify deformations by using a local z-scoring procedure while preserving
relevant geometric information. We demonstrate the isometric embedding
properties of LOCA on various model settings and observe that it exhibits
promising interpolation and extrapolation capabilities. Finally, we apply LOCA
to single-site Wi-Fi localization data, and to 3-dimensional curved surface
estimation based on a 2-dimensional projection.
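A minimal sketch of the local-whitening idea behind LOCA, under assumptions of my own (PyTorch, layer sizes, a mean-squared penalty pushing each burst's embedded covariance toward sigma^2 * I alongside a reconstruction term); see the paper for the actual formulation.

```python
# Autoencoder trained so that each "burst" of repeated measurements is
# locally whitened by the encoder (local z-scoring) while the decoder
# preserves the ability to reconstruct the observations.
import torch
import torch.nn as nn

class LOCA(nn.Module):
    def __init__(self, in_dim, latent_dim, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def loca_loss(model, bursts, sigma2=1.0, recon_weight=1.0):
    # bursts: (n_bursts, burst_size, in_dim); each burst is a small cloud of
    # repeated measurements around one latent location.
    n, m, d_in = bursts.shape
    z, x_hat = model(bursts.reshape(n * m, d_in))
    recon = ((x_hat - bursts.reshape(n * m, d_in)) ** 2).mean()
    z = z.reshape(n, m, -1)
    zc = z - z.mean(dim=1, keepdim=True)
    cov = zc.transpose(1, 2) @ zc / (m - 1)          # per-burst covariance
    eye = torch.eye(z.shape[-1]).expand_as(cov)
    white = ((cov - sigma2 * eye) ** 2).mean()       # local z-scoring penalty
    return white + recon_weight * recon
```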
Inversion‐recovery MR elastography of the human brain for improved stiffness quantification near fluid–solid boundaries
Purpose: In vivo MR elastography (MRE) holds promise as a neuroimaging marker. In cerebral MRE, shear waves are introduced into the brain, which also stimulate vibrations in the adjacent cerebrospinal fluid (CSF), resulting in blurring and biased stiffness values near brain surfaces. Here we propose inversion-recovery MRE (IR-MRE) to suppress the CSF signal and improve stiffness quantification in brain surface areas.
Methods: Inversion-recovery MRE was demonstrated in agar-based phantoms with solid-fluid interfaces and 11 healthy volunteers using 31.25-Hz harmonic vibrations. It was performed by standard single-shot, spin-echo EPI MRE following 2800-ms IR preparation. Wave fields were acquired in 10 axial slices and analyzed for shear wave speed (SWS) as a surrogate marker of tissue stiffness by wavenumber-based multicomponent inversion.
Results: Phantom SWS values near fluid interfaces were 7.5 ± 3.0% higher in IR-MRE than MRE (P = .01). In the brain, IR-MRE SNR was 17% lower than in MRE, without influencing parenchymal SWS (MRE: 1.38 ± 0.02 m/s; IR-MRE: 1.39 ± 0.03 m/s; P = .18). The IR-MRE tissue-CSF interfaces appeared sharper, showing 10% higher SWS near brain surfaces (MRE: 1.01 ± 0.03 m/s; IR-MRE: 1.11 ± 0.01 m/s; P < .001) and 39% smaller ventricle sizes than MRE (P < .001).
Conclusions: Our results show that brain MRE is affected by fluid oscillations that can be suppressed by IR-MRE, which improves the depiction of anatomy in stiffness maps and the quantification of stiffness values in brain surface areas. Moreover, we measured similar stiffness values in brain parenchyma with and without fluid suppression, which indicates that shear wavelengths in solid and fluid compartments are identical, consistent with the theory of biphasic poroelastic media.
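As a rough, back-of-the-envelope reading of the 2800-ms IR preparation (assuming approximately full longitudinal recovery between inversions and a CSF T1 of about 4 s, a value that depends on field strength and is my assumption), the CSF-nulling inversion time comes out close to the value used:

```latex
% Inversion-recovery nulling condition (full recovery between inversions
% assumed; CSF T1 ~ 4000 ms is an assumed, field-dependent value):
\[
  M_z(\mathrm{TI}) = M_0\bigl(1 - 2\,e^{-\mathrm{TI}/T_1}\bigr) = 0
  \quad\Longrightarrow\quad
  \mathrm{TI}_{\mathrm{null}} = T_1 \ln 2
  \approx 0.693 \times 4000\ \mathrm{ms}
  \approx 2770\ \mathrm{ms},
\]
% which is close to the 2800-ms IR preparation reported above.
```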