Extending JumpProcess.jl for fast point process simulation with time-varying intensities
Co-lead author with G. Zagatti. Grant: DMS-1902854, National Science Foundation. https://arxiv.org/abs/2306.06992. First author draft.
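For context, a minimal sketch of simulating a point process with a time-varying intensity through the JumpProcesses.jl VariableRateJump interface; the rate function, parameters, and counting state below are illustrative assumptions, not the paper's benchmark code:

```julia
using JumpProcesses, OrdinaryDiffEq

# Hypothetical time-varying (inhomogeneous) intensity, e.g. a seasonal rate.
rate(u, p, t) = p.λ0 * (1 + 0.8 * sin(2π * t))
affect!(integrator) = (integrator.u[1] += 1)       # count each event

jump = VariableRateJump(rate, affect!)

u0   = [0.0]                                        # event counter
prob = ODEProblem((du, u, p, t) -> (du .= 0), u0, (0.0, 10.0), (λ0 = 2.0,))
jprob = JumpProblem(prob, Direct(), jump)
sol  = solve(jprob, Tsit5())
@show sol.u[end][1]                                 # total number of events by t = 10
```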
ACED: Accelerated Computational Electrochemical systems Discovery
Large-scale electrification is vital to addressing the climate crisis, but
many engineering challenges remain to fully electrifying both the chemical
industry and transportation. In both of these areas, new electrochemical
materials and systems will be critical, but developing these systems currently
relies heavily on computationally expensive first-principles simulations as
well as human-time-intensive experimental trial and error. We propose to
develop an automated workflow that accelerates these computational steps by
introducing both automated error handling in generating the first-principles
training data as well as physics-informed machine learning surrogates to
further reduce computational cost. It will also have the capacity to include
automated experiments "in the loop" in order to dramatically accelerate the
overall materials discovery pipeline.
Comment: 4 pages, 1 figure, accepted to NeurIPS Climate Change and AI Workshop 2020, updating acknowledgements and citation.
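A hedged sketch of the "surrogate plus automated evaluation in the loop" idea described above; the nearest-neighbour surrogate, the stand-in for an expensive first-principles run, and all names below are hypothetical, not the proposed ACED workflow:

```julia
# Stand-in for a DFT calculation or automated experiment.
expensive_evaluation(x) = sum(abs2, x) + 0.1 * randn()

# Toy surrogate: nearest-neighbour prediction with a distance-based uncertainty proxy.
function predict(X, y, x)
    d = [sum(abs2, x .- xi) for xi in X]
    i = argmin(d)
    return y[i], sqrt(d[i])                          # (prediction, "uncertainty")
end

X = [randn(3) for _ in 1:5]                          # initial candidate descriptors
y = [expensive_evaluation(x) for x in X]             # initial training data

for iteration in 1:10
    candidates = [randn(3) for _ in 1:50]            # proposed materials/conditions
    uncertainties = [predict(X, y, c)[2] for c in candidates]
    xq = candidates[argmax(uncertainties)]           # query where the surrogate is least sure
    push!(X, xq); push!(y, expensive_evaluation(xq)) # run only the selected evaluation
end
```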
A differentiable, physics-informed ecosystem modeling and learning framework for large-scale inverse problems: demonstration with photosynthesis simulations
Photosynthesis plays an important role in carbon,
nitrogen, and water cycles. Ecosystem models for photosynthesis are
characterized by many parameters that are obtained from limited in situ
measurements and applied to the same plant types. Previous site-by-site
calibration approaches could not leverage big data and faced issues like
overfitting or parameter non-uniqueness. Here we developed an end-to-end
programmatically differentiable (meaning gradients of outputs with respect to variables
used in the model can be obtained efficiently and accurately) version of the
photosynthesis process representation within the Functionally Assembled
Terrestrial Ecosystem Simulator (FATES) model. As a genre of
physics-informed machine learning (ML), differentiable models couple
physics-based formulations to neural networks (NNs) that learn parameterizations
(and potentially processes) from observations, here photosynthesis rates. We
first demonstrated that the framework was able to correctly recover multiple assumed
parameter values concurrently using synthetic training data. Then, using a
real-world dataset consisting of many different plant functional types (PFTs), we
learned parameters that performed substantially better and greatly reduced
biases compared to literature values. Further, the framework allowed us to
gain insights at a large scale. Our results showed that the carboxylation
rate at 25 °C (Vc,max25) was more impactful than a factor
representing water limitation, although tuning both was helpful in
addressing biases with the default values. This framework could potentially
enable substantial improvement in our capability to learn parameters and
reduce biases for ecosystem modeling at large scales.
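A minimal sketch of the general pattern described here: a neural network produces a physical parameter (a stand-in for Vc,max25) that feeds a simplified, differentiable rate equation, and gradients flow end to end through both. The rate formula, features, and data below are placeholders, not the FATES implementation:

```julia
using Flux

# Hypothetical simplified carboxylation-limited rate; NOT the FATES formulation.
rubisco_rate(vcmax, ci; km = 40f0, gamma = 4f0) = vcmax * (ci - gamma) / (ci + km)

param_net = Chain(Dense(4 => 16, tanh), Dense(16 => 1, softplus))  # features -> "Vc,max25"

function loss(model, feats, ci, a_obs)
    vcmax  = vec(model(feats))                 # one parameter value per sample
    a_pred = rubisco_rate.(vcmax, ci)          # physics-based forward model
    return sum(abs2, a_pred .- a_obs) / length(a_obs)
end

# Toy synthetic data: 32 samples, 4 features each (all values are placeholders).
feats = randn(Float32, 4, 32)
ci    = 200f0 .+ 50f0 .* rand(Float32, 32)
a_obs = rand(Float32, 32)

opt_state = Flux.setup(Adam(1f-3), param_net)
for epoch in 1:200
    g = Flux.gradient(m -> loss(m, feats, ci, a_obs), param_net)[1]
    Flux.update!(opt_state, param_net, g)      # gradients flow through physics and NN
end
```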
A novel physics informed deep learning method for simulation-based modelling
In this paper, we present a brief review of the state-of-the-art physics-informed deep learning methodology and examine its applicability, limits, advantages, and disadvantages via several applications. The main advantage of this method is that it can predict the solution of partial differential equations using only boundary and initial conditions, without the need for any training data or pre-processing phase. Using physics-informed neural network algorithms, it is possible to solve the partial differential equations encountered in many engineering problems at lower cost and in less time than traditional numerical methodologies. A direct comparison between the initial results of the current model, analytical solutions, and computational fluid dynamics methods shows very good agreement. The proposed methodology provides a crucial basis for the solution of more advanced partial differential equation systems and offers a new analysis and mathematical modelling tool for aerospace applications.
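A minimal sketch of the physics-informed loss idea on a toy problem (u' + u = 0, u(0) = 1), assuming Flux.jl; the residual derivative uses a finite difference to keep the sketch short where a full PINN would differentiate the network with automatic differentiation, and nothing here is taken from the paper:

```julia
using Flux

# Network approximating the solution u(t) of u'(t) + u(t) = 0 with u(0) = 1.
net = Chain(Dense(1 => 16, tanh), Dense(16 => 16, tanh), Dense(16 => 1))

ts = Float32.(reshape(range(0, 2; length = 32), 1, :))   # collocation points on [0, 2]
h  = 1f-3                                                # step for the residual derivative

function loss(model)
    # Residual of the governing equation at the collocation points.
    du  = (model(ts .+ h) .- model(ts .- h)) ./ (2h)
    res = sum(abs2, du .+ model(ts)) / length(ts)
    bc  = abs2(first(model(reshape([0f0], 1, 1))) - 1f0)  # initial condition u(0) = 1
    return res + bc                                       # no labelled training data needed
end

opt_state = Flux.setup(Adam(1f-2), net)
for epoch in 1:2000
    g = Flux.gradient(loss, net)[1]
    Flux.update!(opt_state, net, g)
end
```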
Generalized physics-informed learning through language-wide differentiable programming
Scientific computing is increasingly incorporating the advancements in machine learning to allow for data-driven physics-informed modeling approaches. However, re-targeting existing scientific computing workloads to machine learning frameworks is both costly and limiting, as scientific simulations tend to use the full feature set of a general-purpose programming language. In this manuscript we develop an infrastructure for incorporating deep learning into existing scientific computing code through Differentiable Programming (∂P). We describe a ∂P system that is able to take gradients of full Julia programs, making Automatic Differentiation a first-class language feature and compatibility with deep learning pervasive. Our system utilizes the one-language nature of Julia package development to augment the existing package ecosystem with deep learning, supporting almost all language constructs (control flow, recursion, mutation, etc.) while generating high-performance code without requiring any user intervention or refactoring to stage computations. We showcase several examples of physics-informed learning that directly utilize this extension to existing simulation code: neural surrogate models, machine learning on simulated quantum hardware, and data-driven stochastic dynamical model discovery with neural stochastic differential equations.
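A small illustration (not from the manuscript) of the capability described: differentiating an ordinary Julia function that uses control flow and recursion with Zygote.jl, without any framework-specific graph construction:

```julia
using Zygote

# A plain Julia function with a runtime branch and a recursive call.
function iterated_map(x, n)
    n == 0 && return x
    y = x > 0 ? x^2 + 0.5 * sin(x) : -x      # branch on runtime data
    return iterated_map(y, n - 1)             # recursion
end

g = Zygote.gradient(x -> iterated_map(x, 4), 0.3)[1]
println("d/dx iterated_map(x, 4) at x = 0.3 ≈ ", g)
```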
SciML/DifferentialEquations.jl: v6.15.0
Neural ordinary differential equations for ecological and evolutionary time‐series analysis
Inferring the functional shape of ecological and evolutionary processes from time-series data can be challenging because processes are often not describable with simple equations. The dynamical coupling between variables in time series further complicates the identification of equations through model selection as the inference of a given process is contingent on the accurate depiction of all other processes.
We present a novel method, neural ordinary differential equations (NODEs), for learning ecological and evolutionary processes from time-series data by modelling dynamical systems as ordinary differential equations and dynamical functions with artificial neural networks (ANNs). Upon successful training, the ANNs converge to functional shapes that best describe the biological processes underlying the dynamics observed, in a way that is robust to mathematical misspecifications of the dynamical model.
We demonstrate NODEs in a population dynamic context and show how they can be used to infer ecological interactions, dynamical causation and equilibrium points. We tested NODEs by analysing well-understood hare and lynx time-series data, which revealed that prey–predator oscillations were mainly driven by the interspecific interaction, as well as intraspecific density dependence, and characterised by a single equilibrium point at the centre of the oscillation.
Our approach is applicable to any system that can be modelled with differential equations, and particularly suitable for linking ecological, evolutionary and environmental dynamics where parametric approaches are too challenging to implement, opening new avenues for theoretical and empirical investigations.
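A minimal sketch of a neural ODE in this spirit, assuming the Julia SciML/Flux stack; the two-variable (prey, predator) system, network size, and initial densities are illustrative, and training is only indicated in a closing comment:

```julia
using OrdinaryDiffEq, Flux

# The unknown per-capita growth functions of a two-variable system are replaced
# by a small neural network; the ODE solver supplies the dynamics.
nn = Chain(Dense(2 => 16, tanh), Dense(16 => 2))
p, re = Flux.destructure(nn)                  # flat parameter vector + reconstructor

function node_rhs!(du, u, p, t)
    du .= u .* re(p)(u)                        # du/dt = u .* NN(u)
end

u0   = Float32[1.0, 0.5]                       # hypothetical initial densities
prob = ODEProblem(node_rhs!, u0, (0f0, 20f0), p)
sol  = solve(prob, Tsit5(); saveat = 0.5f0)

# Fitting p to observed time series (e.g. hare-lynx counts) would minimise the
# mismatch between sol and the observations, with gradients through the solver
# provided by SciMLSensitivity.jl or a package such as DiffEqFlux.jl.
```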
Beyond Deterministic Models in Drug Discovery and Development
The model-informed drug discovery and development paradigm is now well established within the pharmaceutical industry and regulatory agencies. This success has been mainly due to the ability of pharmacometrics to bring together different modeling strategies, such as population pharmacokinetics/pharmacodynamics (PK/PD) and systems biology/pharmacology. However, there are promising quantitative approaches that are still seldom used by pharmacometricians and that deserve consideration. One such case is the stochastic modeling approach, which can be important when modeling small populations because random events can have a huge impact on these systems. In this review, we aim to raise awareness of stochastic models and how to combine them with existing modeling techniques, with the ultimate goal of making future drug–disease models more versatile and realistic.
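One hedged illustration of a stochastic counterpart to a deterministic PK model: a hypothetical one-compartment elimination model with multiplicative noise on the concentration, written with StochasticDiffEq.jl (all parameter values are placeholders):

```julia
using StochasticDiffEq

# Hypothetical SDE:  dC = -k * C dt + σ * C dW   (multiplicative noise on C).
k, σ = 0.1, 0.05                                  # elimination rate (1/h), noise intensity
drift(C, p, t)     = -k * C
diffusion(C, p, t) = σ * C

prob = SDEProblem(drift, diffusion, 10.0, (0.0, 48.0))   # C(0) = 10, simulate 48 h
ens  = EnsembleProblem(prob)
sols = solve(ens, SOSRI(); trajectories = 100)           # 100 stochastic "subjects"
```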