2,248 research outputs found
SimTune: bridging the simulator reality gap for resource management in edge-cloud computing
Industries and services worldwide are undergoing an Internet of Things centric transformation, giving rise to an explosion of multi-modal data generated every second. Combined with the requirement of low-latency result delivery, this has led to the ubiquitous adoption of edge and cloud computing paradigms. Edge computing follows the data-gravity principle, wherein computational devices move closer to end-users to minimize data transfer and communication times. However, large-scale computation has exacerbated the problem of efficient resource management in hybrid edge-cloud platforms. In this regard, data-driven models such as deep neural networks (DNNs) have gained popularity, giving rise to the notion of edge intelligence. However, DNNs face significant problems of data saturation when fed volatile data: providing more data no longer translates into performance improvements. To address this issue, prior work has leveraged coupled simulators that, akin to digital twins, generate out-of-distribution training data, alleviating the data-saturation problem. However, simulators face the reality-gap problem: inaccuracy in emulating real computational infrastructure caused by the abstractions such simulators make. We develop a framework, SimTune, that tackles this challenge by leveraging a low-fidelity surrogate model of the high-fidelity simulator to update the parameters of the latter, so as to increase simulation accuracy. This further helps co-simulated methods generalize to edge-cloud configurations for which human-encoded parameters are not known a priori. Experiments comparing SimTune against state-of-the-art data-driven resource management solutions on a real edge-cloud platform demonstrate that simulator tuning can improve quality-of-service metrics such as energy consumption and response time by up to 14.7% and 7.6%, respectively.
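The surrogate-tuning idea in this abstract can be sketched as follows. This is an illustrative toy, not the SimTune implementation: a cheap linear surrogate of the reality-gap error is fit as a function of one simulator parameter, and the parameter is then set where the surrogate predicts zero gap. All functions and numbers below are hypothetical stand-ins.

```python
# Illustrative sketch (not the SimTune implementation): a low-fidelity
# surrogate of a high-fidelity simulator is used to tune one simulator
# parameter so simulated response times match real measurements.

def real_system(load):
    """Stand-in for measurements from a real edge-cloud platform."""
    return 2.0 * load + 5.0          # response time (ms), hypothetical

def hi_fi_simulator(load, latency_param):
    """High-fidelity simulator with a mis-set latency parameter."""
    return 2.0 * load + latency_param

def fit_surrogate(param_samples, errors):
    """Low-fidelity surrogate: linear least-squares fit of the
    reality-gap error as a function of the simulator parameter."""
    n = len(param_samples)
    mx = sum(param_samples) / n
    my = sum(errors) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(param_samples, errors))
             / sum((x - mx) ** 2 for x in param_samples))
    intercept = my - slope * mx
    return slope, intercept

loads = [1.0, 2.0, 3.0, 4.0]
params = [0.0, 2.0, 8.0, 10.0]       # candidate latency parameters
gaps = []
for p in params:
    # signed average gap, so the surrogate can locate the zero crossing
    signed = sum(hi_fi_simulator(ld, p) - real_system(ld) for ld in loads) / len(loads)
    gaps.append(signed)

slope, intercept = fit_surrogate(params, gaps)
tuned = -intercept / slope           # parameter where the surrogate predicts zero gap
print(round(tuned, 3))               # ≈ 5.0, closing the reality gap
```

In practice the surrogate would be queried in place of many expensive simulator runs; the toy keeps both models analytic so the tuning loop is visible.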
Exploration of Reaction Pathways and Chemical Transformation Networks
For the investigation of chemical reaction networks, the identification of
all relevant intermediates and elementary reactions is mandatory. Many
algorithmic approaches exist that perform explorations efficiently and
automatedly. These approaches differ in their application range, the level of
completeness of the exploration, as well as the amount of heuristics and human
intervention required. Here, we describe and compare the different approaches
based on these criteria. Future directions leveraging the strengths of chemical
heuristics, human interaction, and physical rigor are discussed.
Comment: 48 pages, 4 figures
GANs and Closures: Micro-Macro Consistency in Multiscale Modeling
Sampling the phase space of molecular systems -- and, more generally, of
complex systems effectively modeled by stochastic differential equations -- is
a crucial modeling step in many fields, from protein folding to materials
discovery. These problems are often multiscale in nature: they can be described
in terms of low-dimensional effective free energy surfaces parametrized by a
small number of "slow" reaction coordinates; the remaining "fast" degrees of
freedom populate an equilibrium measure on the reaction coordinate values.
Sampling procedures for such problems are used to estimate effective free
energy differences as well as ensemble averages with respect to the conditional
equilibrium distributions; these latter averages lead to closures for effective
reduced dynamic models. Over the years, enhanced sampling techniques coupled
with molecular simulation have been developed. An intriguing analogy arises
with the field of Machine Learning (ML), where Generative Adversarial Networks
can produce high dimensional samples from low dimensional probability
distributions. This sample generation returns plausible high dimensional space
realizations of a model state, from information about its low-dimensional
representation. In this work, we present an approach that couples physics-based
simulations and biasing methods for sampling conditional distributions with
ML-based conditional generative adversarial networks for the same task. The
"coarse descriptors" on which we condition the fine scale realizations can
either be known a priori, or learned through nonlinear dimensionality
reduction. We suggest that this may bring out the best features of both
approaches: we demonstrate that a framework that couples cGANs with
physics-based enhanced sampling techniques can improve multiscale SDE dynamical
systems sampling, and even shows promise for systems of increasing complexity.
Comment: 21 pages, 10 figures, 3 tables
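The closure idea described above can be made concrete with a toy example. This is not the paper's cGAN: here an analytic Gaussian conditional distribution stands in for the trained generator, and the conditional ensemble average supplies the closure term for the reduced slow dynamics. The specific dynamics and coupling are assumptions for illustration only.

```python
# Toy illustration of the closure idea (not the paper's cGAN): for a
# 2-D system with slow coordinate s and fast coordinate f, sample f
# from its conditional equilibrium distribution given s, and use the
# conditional average of the coupling term as a closure for the reduced
# s-dynamics. A trained conditional GAN would replace the analytic
# sampler below.
import random

random.seed(0)

def sample_fast_given_slow(s, n=20000):
    """Conditional equilibrium measure of the fast variable: here a
    Gaussian whose mean tracks the slow coordinate (an assumption)."""
    return [random.gauss(0.5 * s, 1.0) for _ in range(n)]

def closure_term(s):
    """Closure: conditional ensemble average E[f | s], entering an
    effective reduced model such as ds/dt = -s + E[f | s]."""
    samples = sample_fast_given_slow(s)
    return sum(samples) / len(samples)

s = 2.0
print(round(closure_term(s), 2))   # close to 0.5 * s = 1.0
```

The point of the cGAN in the paper is precisely to learn the conditional sampler when, unlike here, it is not available in closed form.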
A Survey on Physics Informed Reinforcement Learning: Review and Open Problems
The inclusion of physical information in machine learning frameworks has
revolutionized many application areas. This involves enhancing the learning
process by incorporating physical constraints and adhering to physical laws. In
this work we explore their utility for reinforcement learning applications. We
present a thorough review of the literature on incorporating physics
information, also known as physics priors, into reinforcement learning approaches,
commonly referred to as physics-informed reinforcement learning (PIRL). We
introduce a novel taxonomy with the reinforcement learning pipeline as the
backbone to classify existing works, compare and contrast them, and derive
crucial insights. Existing works are analyzed with regard to the
representation/form of the governing physics modeled for integration, their
specific contribution to the typical reinforcement learning architecture, and
their connection to the underlying reinforcement learning pipeline stages. We
also identify core learning architectures and physics incorporation biases
(i.e., observational, inductive and learning) of existing PIRL approaches and
use them to further categorize the works for better understanding and
adaptation. By providing a comprehensive perspective on the implementation of
the physics-informed capability, the taxonomy presents a cohesive approach to
PIRL. It identifies the areas where this approach has been applied, as well as
the gaps and opportunities that exist. Additionally, the taxonomy sheds light
on unresolved issues and challenges, which can guide future research. This
nascent field holds great potential for enhancing reinforcement learning
algorithms by increasing their physical plausibility, precision, data
efficiency, and applicability in real-world scenarios.
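One of the physics-incorporation biases surveyed, the learning bias, can be sketched in a few lines: the task reward is augmented with a penalty for transitions that violate a known physical law. The dynamics, function names, and weights below are hypothetical and chosen only to illustrate the pattern, not taken from any specific PIRL paper.

```python
# Minimal sketch of one PIRL pattern, a "learning bias": the reward is
# shaped with a penalty for violating a known physical constraint.
# Here the constraint is free-fall kinematics (dv ≈ -g * dt).

def physics_penalty(state, next_state, dt=0.1, g=9.81):
    """Penalize transitions inconsistent with free-fall kinematics:
    the velocity should change by approximately -g * dt."""
    (_, v), (_, v_next) = state, next_state
    expected_dv = -g * dt
    return (v_next - v - expected_dv) ** 2

def shaped_reward(env_reward, state, next_state, weight=1.0):
    """Physics-informed reward: task reward minus constraint violation."""
    return env_reward - weight * physics_penalty(state, next_state)

# A physically consistent transition incurs (almost) no penalty...
consistent = shaped_reward(1.0, (0.0, 0.0), (0.0, -0.981))
# ...while a physically impossible one is penalized.
inconsistent = shaped_reward(1.0, (0.0, 0.0), (0.0, +0.981))
print(consistent, inconsistent)
```

Observational and inductive biases, by contrast, would inject the physics through the training data or the network architecture rather than through the loss/reward, as the taxonomy in this survey distinguishes.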
iMapD: intrinsic Map Dynamics exploration for uncharted effective free energy landscapes
We describe and implement iMapD, a computer-assisted approach for
accelerating the exploration of uncharted effective Free Energy Surfaces (FES),
and more generally for the extraction of coarse-grained, macroscopic
information from atomistic or stochastic (here Molecular Dynamics, MD)
simulations. The approach functionally links the MD simulator with nonlinear
manifold learning techniques. The added value comes from biasing the simulator
towards new, unexplored phase space regions by exploiting the smoothness of the
(gradually, as the exploration progresses) revealed intrinsic low-dimensional
geometry of the FES.
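The exploration loop described in this abstract can be caricatured in a few steps: collect explored configurations, learn a low-dimensional coordinate, step just past the boundary of the explored region, and lift back to full configurations to seed new simulation bursts. Everything below is a hypothetical stand-in (a linear projection replaces nonlinear manifold learning, and the "lift" is trivial).

```python
# Conceptual sketch of an iMapD-style exploration loop (all details
# hypothetical): 1) collect explored configurations, 2) learn a
# low-dimensional coordinate (a simple projection stands in for
# nonlinear manifold learning), 3) extrapolate slightly past the
# boundary of the explored region, 4) lift back to seed new MD bursts.

def learn_coordinate(points):
    """Stand-in for manifold learning: project 2-D points onto the
    direction of largest spread (the x-axis of this toy data set)."""
    return [p[0] for p in points]

def extrapolate_boundary(coords, step=0.5):
    """Step outward past both edges of the explored 1-D region."""
    return [min(coords) - step, max(coords) + step]

def lift(c):
    """Map an intrinsic coordinate back to a full configuration to
    initialize a new simulation burst (trivial in this toy)."""
    return (c, 0.0)

explored = [(0.0, 0.1), (1.0, -0.1), (2.0, 0.05)]
coords = learn_coordinate(explored)
new_seeds = [lift(c) for c in extrapolate_boundary(coords)]
print(new_seeds)    # seeds just outside the explored region
```

In the actual method the projection is a nonlinear manifold-learning step and the lift is nontrivial; the sketch only shows how the revealed low-dimensional geometry biases the simulator toward unexplored regions.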
Simulation Intelligence: Towards a New Generation of Scientific Methods
The original "Seven Motifs" set forth a roadmap of essential methods for the
field of scientific computing, where a motif is an algorithmic method that
captures a pattern of computation and data movement. We present the "Nine
Motifs of Simulation Intelligence", a roadmap for the development and
integration of the essential algorithms necessary for a merger of scientific
computing, scientific simulation, and artificial intelligence. We call this
merger simulation intelligence (SI), for short. We argue the motifs of
simulation intelligence are interconnected and interdependent, much like the
components within the layers of an operating system. Using this metaphor, we
explore the nature of each layer of the simulation intelligence operating
system stack (SI-stack) and the motifs therein: (1) Multi-physics and
multi-scale modeling; (2) Surrogate modeling and emulation; (3)
Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based
modeling; (6) Probabilistic programming; (7) Differentiable programming; (8)
Open-ended optimization; (9) Machine programming. We believe coordinated
efforts between motifs offer immense opportunity to accelerate scientific
discovery, from solving inverse problems in synthetic biology and climate
science, to directing nuclear energy experiments and predicting emergent
behavior in socioeconomic settings. We elaborate on each layer of the SI-stack,
detailing the state-of-the-art methods, presenting examples to highlight challenges
and opportunities, and advocating for specific ways to advance the motifs and
the synergies from their combinations. Advancing and integrating these
technologies can enable a robust and efficient hypothesis-simulation-analysis
type of scientific method, which we introduce with several use-cases for
human-machine teaming and automated science.