5 research outputs found

    Uncertainty quantification for noisy inputs-outputs in physics-informed neural networks and neural operators

    Uncertainty quantification (UQ) in scientific machine learning (SciML) is becoming increasingly critical as neural networks (NNs) are widely adopted to address complex problems across scientific disciplines. Representative SciML models include physics-informed neural networks (PINNs) and neural operators (NOs). While UQ in SciML has been increasingly investigated in recent years, very few works have addressed the uncertainty caused by noisy inputs, such as spatial-temporal coordinates in PINNs and input functions in NOs. Noise in the inputs of a model poses significantly greater challenges than noise in its outputs, primarily due to the inherent nonlinearity of most SciML algorithms. As a result, UQ for noisy inputs is a crucial factor for the reliable and trustworthy deployment of these models in applications involving physical knowledge. To this end, we introduce a Bayesian approach to quantify the uncertainty arising from noisy inputs and outputs in PINNs and NOs. We show that this approach can be seamlessly integrated into PINNs and NOs when they are employed to encode physical information. PINNs incorporate physics by including physics-informed terms via automatic differentiation, either in the loss function or in the likelihood, and often take the spatial-temporal coordinate as input. The present method therefore equips PINNs with the capability to address problems in which the observed coordinates are subject to noise. Pretrained NOs, on the other hand, are commonly employed as equation-free surrogates for solving differential equations and Bayesian inverse problems, taking functions as inputs. The proposed approach enables them to handle noisy measurements of both input and output functions with UQ.
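The abstract's key observation is that input noise, unlike output noise, interacts with the model's nonlinearity and cannot simply be averaged away. A minimal numpy sketch (not the paper's Bayesian method; the model, noise level, and sample count are illustrative assumptions) of why this happens:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.sin                      # stand-in for a nonlinear SciML model
x_true, sigma = 1.2, 0.3        # hypothetical coordinate and noise level

# Output noise: y = f(x) + eps stays unbiased around f(x_true).
y_out_noise = f(x_true) + sigma * rng.normal(size=100_000)

# Input noise: y = f(x + eps) is biased, because the noise passes
# through the nonlinearity before it can be averaged out.
y_in_noise = f(x_true + sigma * rng.normal(size=100_000))

bias_out = abs(y_out_noise.mean() - f(x_true))
bias_in = abs(y_in_noise.mean() - f(x_true))
print(bias_out, bias_in)        # input-noise bias is much larger
```

For Gaussian input noise the sample mean concentrates around sin(x)·exp(-sigma²/2) rather than sin(x), which is exactly the kind of systematic distortion a Bayesian treatment of the noisy inputs has to model explicitly.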

    Leveraging Hamilton-Jacobi PDEs with time-dependent Hamiltonians for continual scientific machine learning

    We address two major challenges in scientific machine learning (SciML): interpretability and computational efficiency. We increase the interpretability of certain learning processes by establishing a new theoretical connection between optimization problems arising in SciML and a generalized Hopf formula, which represents the viscosity solution to a Hamilton-Jacobi partial differential equation (HJ PDE) with a time-dependent Hamiltonian. Namely, we show that when we solve certain regularized learning problems with integral-type losses, we actually solve an optimal control problem and its associated HJ PDE with a time-dependent Hamiltonian. This connection allows us to reinterpret incremental updates to learned models as the evolution in time of an associated HJ PDE and optimal control problem, in which all of the previous information is intrinsically encoded in the solution to the HJ PDE. As a result, existing HJ PDE solvers and optimal control algorithms can be reused to design new, efficient training approaches for SciML that naturally coincide with the continual learning framework while avoiding catastrophic forgetting. As a first exploration of this connection, we consider the special case of linear regression and leverage it to develop a new Riccati-based methodology for solving these learning problems that is amenable to continual learning applications. We also provide numerical examples that demonstrate the potential computational and memory advantages our Riccati-based approach can provide.
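To give a concrete feel for a Riccati-style continual update in the linear-regression special case, here is a sketch using classical recursive least squares, which shares the same structure (a Riccati-like matrix carries all past information, so each new sample is folded in without revisiting old data). This is a standard textbook algorithm, not the authors' exact methodology; the weights, noise level, and sample count are illustrative assumptions.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least-squares step: fold the new pair (x, y) into
    the estimate; P plays the role of the Riccati matrix, encoding all
    previously seen data so none of it needs to be stored."""
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # correct by the prediction error
    P = (P - np.outer(k, Px)) / lam      # Riccati-type covariance update
    return theta, P

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])           # hypothetical ground truth
theta = np.zeros(2)
P = 1e6 * np.eye(2)                      # diffuse prior

for _ in range(200):                     # data arrive one sample at a time
    x = rng.normal(size=2)
    y = x @ w_true + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, x, y)

print(theta)                             # converges toward w_true
```

The memory footprint is fixed at one vector and one small matrix regardless of how many samples have streamed past, which is the continual-learning advantage the abstract highlights.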

    Cell transcriptomic atlas of the non-human primate Macaca fascicularis.

    Studying tissue composition and function in non-human primates (NHPs) is crucial to understand the nature of our own species. Here we present a large-scale cell transcriptomic atlas that encompasses over 1 million cells from 45 tissues of the adult NHP Macaca fascicularis. This dataset provides a vast annotated resource to study a species phylogenetically close to humans. To demonstrate the utility of the atlas, we have reconstructed the cell-cell interaction networks that drive Wnt signalling across the body, mapped the distribution of receptors and co-receptors for viruses causing human infectious diseases, and intersected our data with human genetic disease orthologues to establish potential clinical associations. Our M. fascicularis cell atlas constitutes an essential reference for future studies in humans and NHPs.

    A Generative Modeling Framework for Inferring Families of Biomechanical Constitutive Laws in Data-Sparse Regimes

    Quantifying the biomechanical properties of the human vasculature could deepen our understanding of cardiovascular diseases. Standard nonlinear regression in constitutive modeling requires considerable high-quality data and an explicit form of the constitutive model as prior knowledge. By contrast, we propose a novel approach that combines generative deep learning with Bayesian inference to efficiently infer families of constitutive relationships in data-sparse regimes. Inspired by the concept of functional priors, we develop a generative adversarial network (GAN) that incorporates a neural operator as the generator and a fully-connected neural network as the discriminator. The generator takes a vector of noise conditioned on measurement data as input and yields the predicted constitutive relationship, which is then scrutinized by the discriminator. We demonstrate that this framework can accurately estimate the means and standard deviations of the constitutive relationships of the murine aorta using either model-generated synthetic data or data collected from ex vivo experiments on mice with genetic deficiencies. In addition, the framework learns priors of constitutive models without explicitly knowing their functional form, providing a new model-agnostic approach to learning hidden constitutive behaviors from data.
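The generator/discriminator data flow described above can be sketched structurally: the generator maps a noise vector conditioned on sparse measurements to a constitutive curve on a strain grid, and the discriminator scores candidate curves. The sketch below uses tiny untrained random-weight MLPs purely to show shapes and flow; the grid size, layer widths, and measurement count are hypothetical, and the real framework uses a trained neural operator as the generator.

```python
import numpy as np

rng = np.random.default_rng(0)
strain = np.linspace(0.0, 0.5, 64)           # evaluation grid (hypothetical)

def mlp(x, sizes):
    """Tiny random-weight MLP forward pass; a stand-in for trained networks."""
    for m, n in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(scale=1.0 / np.sqrt(m), size=(m, n))
        x = np.tanh(x @ W)
    return x

def generator(z, data):
    """Noise z conditioned on measurements -> predicted constitutive curve."""
    return mlp(np.concatenate([z, data]), [z.size + data.size, 32, strain.size])

def discriminator(curve):
    """Scores a candidate curve (real vs. generated) in [-1, 1]."""
    return float(mlp(curve, [curve.size, 32, 1])[0])

data = rng.normal(size=8)                    # sparse measurements (synthetic)
samples = np.stack([generator(rng.normal(size=4), data) for _ in range(100)])
mean_curve, std_curve = samples.mean(0), samples.std(0)
print(mean_curve.shape, std_curve.shape)     # per-strain mean and spread
```

Sampling many noise vectors for fixed measurement data is what yields a family of constitutive relationships, from which the per-strain means and standard deviations reported in the abstract can be read off.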

    Bayesian Physics-Informed Neural Networks for real-world nonlinear dynamical systems

    Understanding real-world dynamical phenomena remains a challenging task. Across various scientific disciplines, machine learning has advanced as the go-to technology for analyzing nonlinear dynamical systems, identifying patterns in big data, and making decisions based on them. Neural networks are now consistently used as universal function approximators for data with underlying mechanisms that are incompletely understood or exceedingly complex. However, neural networks alone ignore the fundamental laws of physics and often fail to make plausible predictions. Here we integrate data, physics, and uncertainties by combining neural networks, physics-informed modeling, and Bayesian inference to improve the predictive potential of traditional neural network models. We embed the physical model of a damped harmonic oscillator into a fully-connected feed-forward neural network to explore a simple and illustrative model system, the outbreak dynamics of COVID-19. Our Physics-Informed Neural Networks seamlessly integrate data and physics, robustly solve forward and inverse problems, and perform well for both interpolation and extrapolation, even with small amounts of noisy and incomplete data. At only minor additional cost, they can self-adaptively learn the weighting between data and physics. Combined with Bayesian Neural Networks, they can serve as priors in Bayesian inference and provide credible intervals for uncertainty quantification. Our study reveals the inherent advantages and disadvantages of Neural Networks, Bayesian Inference, and a combination of both, and provides valuable guidelines for model selection. While we have demonstrated these approaches only for the simple model problem of a seasonal endemic infectious disease, we anticipate that the underlying concepts and trends generalize to more complex disease conditions and, more broadly, to a wide variety of nonlinear dynamical systems.
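The physics a PINN embeds for the damped harmonic oscillator is the ODE residual r = u'' + 2*zeta*omega*u' + omega^2*u, which the training loss drives toward zero. A minimal sketch of that residual, verified on the analytic underdamped solution (parameter values are illustrative assumptions; a PINN would differentiate the network via autodiff, whereas here finite differences are applied to the known solution):

```python
import numpy as np

omega, zeta = 2.0, 0.1                       # natural frequency, damping ratio
omega_d = omega * np.sqrt(1 - zeta**2)       # damped frequency

t = np.linspace(0, 10, 2001)
u = np.exp(-zeta * omega * t) * np.cos(omega_d * t)   # analytic solution

# Physics residual a PINN would penalize; finite differences stand in
# for automatic differentiation since we differentiate data here.
du = np.gradient(u, t)
d2u = np.gradient(du, t)
residual = d2u + 2 * zeta * omega * du + omega**2 * u

print(np.abs(residual[10:-10]).max())        # near zero, up to FD error
```

Adding the mean squared residual to the data-misfit loss is what lets the network extrapolate plausibly where data are sparse, since the physics term constrains the solution between and beyond the observations.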