8 research outputs found

    Variational Integrator Graph Networks for Learning Energy Conserving Dynamical Systems

    Full text link
    Recent advances show that neural networks embedded with physics-informed priors significantly outperform vanilla neural networks in learning and predicting the long-term dynamics of complex physical systems from noisy data. Despite this success, there has been only limited study of how to optimally combine physics priors to improve predictive performance. To tackle this problem we unpack and generalize recent innovations into individual inductive bias segments. As such, we are able to systematically investigate all possible combinations of inductive biases, of which existing methods are a natural subset. Using this framework we introduce Variational Integrator Graph Networks - a novel method that unifies the strengths of existing approaches by combining an energy constraint, high-order symplectic variational integrators, and graph neural networks. We demonstrate, across an extensive ablation, that the proposed unifying framework outperforms existing methods in data-efficient learning and predictive accuracy, across both single- and many-body problems studied in recent literature. We empirically show that the improvements arise because high-order variational integrators combined with a potential energy constraint induce coupled learning of generalized position and momentum updates, which can be formalized via the Partitioned Runge-Kutta method. Comment: updated version that includes an extensive ablation across graph and non-graph methods, as well as different integrators [under review].
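A minimal sketch of the kind of integrator the abstract refers to: the Störmer-Verlet (leapfrog) scheme, a classic symplectic partitioned Runge-Kutta method in which the generalized position and momentum updates are coupled. The harmonic-oscillator test system below is a hypothetical illustration, not from the paper.

```python
def leapfrog_step(q, p, grad_potential, dt, mass=1.0):
    """One Stoermer-Verlet (leapfrog) step: a symplectic partitioned
    Runge-Kutta scheme whose q and p updates are interleaved."""
    p_half = p - 0.5 * dt * grad_potential(q)           # half momentum kick
    q_new = q + dt * p_half / mass                      # full position drift
    p_new = p_half - 0.5 * dt * grad_potential(q_new)   # half momentum kick
    return q_new, p_new

# Hypothetical test system: unit-mass harmonic oscillator, V(q) = q^2 / 2
grad_V = lambda q: q
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = leapfrog_step(q, p, grad_V, dt=0.01)
energy = 0.5 * p ** 2 + 0.5 * q ** 2  # stays near the initial value 0.5
```

The near-conservation of `energy` over long integrations is the property that makes symplectic integrators attractive as an inductive bias for energy-conserving systems.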

    Tuning Mixed Input Hyperparameters on the Fly for Efficient Population Based AutoRL

    Full text link
    Despite a series of recent successes in reinforcement learning (RL), many RL algorithms remain sensitive to hyperparameters. As such, there has recently been interest in the field of AutoRL, which seeks to automate design decisions to create more general algorithms. Recent work suggests that population based approaches may be effective AutoRL algorithms, by learning hyperparameter schedules on the fly. In particular, the PB2 algorithm is able to achieve strong performance in RL tasks by formulating online hyperparameter optimization as a time-varying GP-bandit problem, while also providing theoretical guarantees. However, PB2 is only designed to work for continuous hyperparameters, which severely limits its utility in practice. In this paper we introduce a new (provably) efficient hierarchical approach for optimizing both continuous and categorical variables, using a new time-varying bandit algorithm specifically designed for the population based training regime. We evaluate our approach on the challenging Procgen benchmark, where we show that explicitly modelling dependence between data augmentation and other hyperparameters improves generalization.
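The population-based training loop these methods build on can be sketched as follows. This is a generic exploit/explore round with a hypothetical learning-rate hyperparameter, not the PB2 algorithm itself (PB2 replaces the random perturbation below with a time-varying GP-bandit suggestion).

```python
import random

def pbt_step(population, perturb=1.2):
    """One exploit/explore round of population based training:
    members below the median copy a top performer's state and
    hyperparameters, then perturb a continuous hyperparameter.
    (Generic PBT sketch with a random perturbation rule.)"""
    ranked = sorted(population, key=lambda m: m["score"], reverse=True)
    cutoff = len(ranked) // 2
    for member in ranked[cutoff:]:
        donor = random.choice(ranked[:cutoff])
        member["lr"] = donor["lr"] * random.choice([1 / perturb, perturb])
        member["score"] = donor["score"]  # stands in for copying weights
    return ranked

random.seed(0)
pop = [{"lr": 10 ** random.uniform(-4, -2), "score": random.random()}
       for _ in range(8)]
ranked = pbt_step(pop)
```

The key limitation the abstract targets is visible here: a multiplicative perturbation only makes sense for continuous hyperparameters, so categorical choices such as data augmentation need a separate mechanism.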

    Physics-informed neural networks for data-efficient learning

    No full text
    The physical world around us is profoundly complex and for centuries we have sought to develop a deeper understanding of how it functions. Building models capable of forecasting the long term dynamics of multi-physics systems such as complex blood flow, chaotic oscillators and quantum mechanical systems thus continues to be a critical challenge within the sciences. While traditional and computational tools have dramatically improved to address parts of this open problem, they face numerous challenges, remain computationally resource intensive, and are susceptible to severe error accumulation. Now, modern machine learning techniques, augmented by a plethora of sensor data, are driving significant progress in this direction, helping us uncover sophisticated relationships from underlying physical processes. An emergent area within this domain is hybrid physics-informed machine learning, where partial prior knowledge of the physical system is integrated into the machine learning pipeline to improve predictive performance and data-efficiency. In this thesis, we investigate how existing knowledge about the physical world can be used to improve and augment the predictive performance of neural networks. First, we show that learning biases designed to preserve structure, connectivity and energy, such as graphs, integrators and Hamiltonians, can be effectively combined to learn the dynamics of complex many-body energy-conserving systems from sparse, noisy data. Secondly, by embedding a generalized formalism of port-Hamiltonians into neural networks, we accurately recover the dynamics of irreversible physical systems from data. Furthermore, we highlight how our models, by design, can discover the underlying force and damping terms from sparse data as well as reconstruct the Poincaré section of chaotic systems. Lastly, we show that physics-informed neural networks can be effectively exploited for efficient and accurate transfer learning - achieving orders of magnitude speed-up while maintaining high fidelity on numerous well studied differential equations. Collectively, these innovations showcase a new direction for scientific machine learning - one where existing knowledge is combined with machine learning methods. Many benefits naturally arise as a consequence of this, including (1) accurate learning and long-term predictions, (2) data-efficiency, (3) reliability, and (4) scalability. Such hybrid models are paramount to developing robust machine learning methods capable of modeling and forecasting complex multi-fidelity, multi-scale physical processes.

    Computational Models in Mega Constellation Satellite Communications

    No full text
    Mega-Constellations are interconnected webs of thousands of satellites that deliver high-speed wireless communication to ground station clients. Major corporations, such as SpaceX and Amazon, utilize mega-constellations stationed at various altitudes in Low Earth Orbit (LEO). In recent years, the number of LEO satellites in orbit has greatly increased, resulting in conflicting bandwidth usage. Any such overlaps between various mega-constellations, known as interferences, are handled using the primitive “1/n” rule, where each interfering satellite receives an equal “1/nth” section of the shared bandwidth. However, the simplicity of this rule allows parties to intentionally create more interferences while accepting the same setbacks as independent systems operating in their own space. Through the creation of a Python Monte Carlo simulation, tradeoffs between interferences, successful transmissions, and signal coverage are investigated to generate new interference regulations and optimize satellite communications. The 2D simulation evaluates fixed satellites in time, analyzing the complex relationships and tradeoffs generated by different numbers of satellites and clients, transmission angles, satellite heights, and the number of companies. Both computational and parametric optimizations were implemented in the simulation, and 1D mathematical models based on ideal circumstances were investigated.
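The "1/n" rule the abstract critiques is simple to state: when n satellites interfere on a shared band of width B, each receives B/n. A toy Monte Carlo sketch of that rule is shown below; the uniform-overlap geometry and all parameter values are hypothetical stand-ins, not the paper's 2D simulation.

```python
import random

def shared_throughput(total_bandwidth, n_interfering):
    """Primitive "1/n" rule: each of the n interfering satellites
    receives an equal 1/n share of the contested bandwidth."""
    return total_bandwidth / n_interfering

def simulate(n_satellites=100, n_trials=10_000, overlap_p=0.3, bandwidth=1.0):
    """Toy Monte Carlo: each trial, every satellite independently lands
    in a single contested band with probability overlap_p; contested
    satellites split that band by the 1/n rule, the rest keep full
    bandwidth. Returns mean total delivered bandwidth per trial."""
    random.seed(42)
    totals = []
    for _ in range(n_trials):
        contested = sum(random.random() < overlap_p for _ in range(n_satellites))
        clear = n_satellites - contested
        per_contested = (shared_throughput(bandwidth, contested)
                         if contested else 0.0)
        totals.append(clear * bandwidth + contested * per_contested)
    return sum(totals) / n_trials

avg = simulate()
```

Even this toy version shows the incentive problem: the contested satellites' combined throughput is capped at one band regardless of how many of them pile in, so an operator loses little by adding interfering satellites.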

    Port-Hamiltonian neural networks for learning explicit time-dependent dynamical systems

    No full text
    Accurately learning the temporal behavior of dynamical systems requires models with well-chosen learning biases. Recent innovations embed the Hamiltonian and Lagrangian formalisms into neural networks and demonstrate a significant improvement over other approaches in predicting trajectories of physical systems. These methods generally tackle autonomous systems that depend implicitly on time or systems for which a control signal is known a priori. Despite this success, many real world dynamical systems are nonautonomous: they are driven by time-dependent forces and experience energy dissipation. In this study, we address the challenge of learning from such nonautonomous systems by embedding the port-Hamiltonian formalism into neural networks, a versatile framework that can capture energy dissipation and time-dependent control forces. We show that the proposed port-Hamiltonian neural network can efficiently learn the dynamics of nonlinear physical systems of practical interest and accurately recover the underlying stationary Hamiltonian, time-dependent force, and dissipative coefficient. A promising outcome of our network is its ability to learn and predict chaotic systems such as the Duffing equation, for which the trajectories are typically hard to learn.
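The port-Hamiltonian formalism referenced above is commonly written in the following standard form (a textbook presentation; the paper's exact parameterization may differ):

```latex
\dot{x} = \bigl( J(x) - R(x) \bigr)\, \nabla H(x) + G(x)\, u(t)
```

Here $H(x)$ is the Hamiltonian (stored energy), $J(x) = -J(x)^{\top}$ is the skew-symmetric interconnection matrix governing energy-conserving dynamics, $R(x) \succeq 0$ is the symmetric dissipation matrix, and $G(x)\,u(t)$ is the input port carrying the time-dependent control force. Setting $R = 0$ and $u = 0$ recovers ordinary Hamiltonian dynamics, which is why this formalism generalizes the autonomous, energy-conserving case.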

    Efficacy of Xanthine Oxidase Inhibitors in Lowering Serum Uric Acid in Chronic Kidney Disease: A Systematic Review and Meta-Analysis

    No full text
    Objective: Current guidelines for gout recommend a treat-to-target approach with serum uric acid (SUA). However, there is little evidence for the dose-dependent effects of urate-lowering therapy (ULT). Herein, we analyzed the reported SUA-lowering effect and SUA target achievement differences for various doses of xanthine oxidase inhibitors. Methods: Approved ULT drugs were selected from the FDA Drug Database. We included prospective randomized controlled trials of ULT drugs from ClinicalTrials.gov, articles published in the journal “Drugs”, and Embase, a literature database. A meta-analysis was performed to determine the ability of different ULT drugs and doses to lower and maintain a target SUA < 6 mg/dL. Results: We identified 35 trials including 8172 patients with a baseline SUA of 8.92 mg/dL. Allopurinol, febuxostat, and topiroxostat showed dose-proportional SUA-lowering responses. Compared with allopurinol 300 mg daily, febuxostat 80 mg daily and 120 mg daily more effectively maintained SUA < 6 mg/dL. Conclusion: Allopurinol, febuxostat, and topiroxostat showed dose-proportional ability to lower and achieve a target SUA < 6 mg/dL. Significance and Innovations. We showed dose-dependent SUA-lowering effects of allopurinol, febuxostat, and topiroxostat. Febuxostat is effective for ULT compared with allopurinol and could potentially be offered as an alternative agent when patients (1) have CKD, (2) have the human leukocyte antigen HLA-B*5801 allele, and (3) become refractory to allopurinol. Gradual allopurinol dose increase with a lower starting dose is needed in CKD.