
    Bayesian Recurrent Neural Network Models for Forecasting and Quantifying Uncertainty in Spatial-Temporal Data

    Recurrent neural networks (RNNs) are nonlinear dynamical models commonly used in the machine learning and dynamical systems literature to represent complex dynamical or sequential relationships between variables. More recently, as deep learning models have become more common, RNNs have been used to forecast increasingly complicated systems. Dynamical spatio-temporal processes represent a class of complex systems that can potentially benefit from these types of models. Although the RNN literature is expansive and highly developed, uncertainty quantification is often ignored. Even when considered, the uncertainty is generally quantified without the use of a rigorous framework, such as a fully Bayesian setting. Here we attempt to quantify uncertainty in a more formal framework while maintaining the forecast accuracy that makes these models appealing, by presenting a Bayesian RNN model for nonlinear spatio-temporal forecasting. Additionally, we make simple modifications to the basic RNN to help accommodate the unique nature of nonlinear spatio-temporal data. The proposed model is applied to a Lorenz simulation and two real-world nonlinear spatio-temporal forecasting applications.
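
    The core idea of the abstract above, producing a forecast together with a predictive uncertainty by treating the RNN weights as random, can be sketched with a Monte Carlo stand-in. This is not the paper's model: the "posterior" here is just fixed mean weights plus hypothetical Gaussian perturbations, and the network is a minimal tanh RNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_forecast(x_seq, Wh, Wx, Wo):
    """Roll a simple tanh RNN over a 1-D input sequence and
    return the scalar forecast emitted after the last step."""
    h = np.zeros(Wh.shape[0])
    for x in x_seq:
        h = np.tanh(Wh @ h + Wx * x)
    return float(Wo @ h)

# Hypothetical weight "posterior": a point estimate (mean weights)
# plus small Gaussian perturbations drawn per Monte Carlo sample.
H = 8
Wh_mean = rng.normal(scale=0.3, size=(H, H))
Wx_mean = rng.normal(scale=0.5, size=H)
Wo_mean = rng.normal(scale=0.5, size=H)

x_seq = np.sin(np.linspace(0.0, 2.0 * np.pi, 20))  # toy input sequence

samples = []
for _ in range(200):  # Monte Carlo over weight samples
    Wh = Wh_mean + rng.normal(scale=0.05, size=Wh_mean.shape)
    Wx = Wx_mean + rng.normal(scale=0.05, size=Wx_mean.shape)
    Wo = Wo_mean + rng.normal(scale=0.05, size=Wo_mean.shape)
    samples.append(rnn_forecast(x_seq, Wh, Wx, Wo))

mean_forecast = float(np.mean(samples))  # point forecast
pred_std = float(np.std(samples))        # uncertainty estimate
```

    In a fully Bayesian treatment the weight samples would come from a posterior fitted to data (e.g. via MCMC or variational inference) rather than from fixed perturbations, but the forecast-then-summarize pattern is the same.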

    Deep Learning for Stable Monotone Dynamical Systems

    Monotone systems, originating from real-world (e.g., biological or chemical) applications, are a class of dynamical systems that preserve a partial order of system states over time. In this work, we introduce a method based on feedforward neural networks (FNNs) to learn the dynamics of unknown stable nonlinear monotone systems. We propose the use of nonnegative neural networks and batch normalization, which in general enables the FNNs to capture the monotonicity conditions without reducing expressiveness. To concurrently ensure stability during training, we adopt an alternating learning method that simultaneously learns the system dynamics and a corresponding Lyapunov function, while exploiting the monotonicity of the system. The combination of the monotonicity and stability constraints ensures that the learned dynamics preserve both properties, while significantly reducing learning errors. Finally, our techniques are evaluated on two complex biological and chemical systems.
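
    The nonnegative-weight construction mentioned above can be illustrated in a few lines: if every weight is nonnegative and every activation is increasing, the composed network is nondecreasing in each input. The parameterization below (squaring raw parameters) is one hypothetical way to enforce nonnegativity, not the paper's exact scheme, and batch normalization is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-layer FNN. Squaring the raw parameters keeps every
# weight nonnegative; combined with the increasing tanh activation,
# the map x -> f(x) is guaranteed nondecreasing in its input.
W1_raw = rng.normal(size=(16, 1))
b1 = rng.normal(size=16)
W2_raw = rng.normal(size=(1, 16))
b2 = rng.normal(size=1)

def monotone_fnn(x):
    W1 = W1_raw ** 2              # nonnegative first-layer weights
    W2 = W2_raw ** 2              # nonnegative output weights
    h = np.tanh(W1 @ np.atleast_1d(x) + b1)  # tanh is increasing
    return (W2 @ h + b2).item()

# Empirical check: outputs are nondecreasing along an increasing grid.
xs = np.linspace(-3.0, 3.0, 50)
ys = [monotone_fnn(x) for x in xs]
```

    Because the guarantee is structural, it holds for any values of the raw parameters, so gradient-based training can proceed unconstrained while monotonicity is preserved at every step.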

    Recurrent Equilibrium Networks: Flexible Dynamic Models with Guaranteed Stability and Robustness

    This paper introduces recurrent equilibrium networks (RENs), a new class of nonlinear dynamical models for applications in machine learning, system identification, and control. The new model class has built-in guarantees of stability and robustness: all models in the class are contracting - a strong form of nonlinear stability - and models can satisfy prescribed incremental integral quadratic constraints (IQCs), including Lipschitz bounds and incremental passivity. RENs are otherwise very flexible: they can represent all stable linear systems, all previously known sets of contracting recurrent neural networks and echo state networks, all deep feedforward neural networks, and all stable Wiener/Hammerstein models. RENs are parameterized directly by a vector in R^N, i.e., stability and robustness are ensured without parameter constraints, which simplifies learning since generic methods for unconstrained optimization can be used. The performance and robustness of the new model set are evaluated on benchmark nonlinear system identification problems, and the paper also presents applications in data-driven nonlinear observer design and control with stability guarantees.
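
    The contraction property described above can be demonstrated with a much simpler stand-in than the REN parameterization: rescale the recurrent weight matrix so its spectral norm is below 1, so that with a 1-Lipschitz activation the state update is a contraction and trajectories from any two initial states converge under the same input. This is an illustrative construction, not the paper's direct R^N parameterization.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 6
A_raw = rng.normal(size=(n, n))
# Rescale so the spectral norm is 0.9 < 1. Since tanh is 1-Lipschitz,
# ||step(h1, u) - step(h2, u)|| <= 0.9 * ||h1 - h2||: a contraction.
A = 0.9 * A_raw / np.linalg.norm(A_raw, 2)
B = rng.normal(size=n)

def step(h, u):
    return np.tanh(A @ h + B * u)

# Two different initial states driven by the same input sequence.
h1 = rng.normal(size=n)
h2 = rng.normal(size=n)
d0 = np.linalg.norm(h1 - h2)
for t in range(50):
    u = np.sin(0.1 * t)
    h1, h2 = step(h1, u), step(h2, u)
dT = np.linalg.norm(h1 - h2)  # shrinks by at least 0.9 per step
```

    The appeal of the REN construction is that this kind of guarantee holds for every point of an unconstrained parameter space, rather than requiring a projection or rescaling step like the one used here.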

    Philosophy and theory of artificial intelligence 2017

    This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation, and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, and AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, and dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics, and computing.