Modeling and simulating of reservoir operation using the artificial neural network, support vector regression, deep learning algorithm
Reservoirs and dams are vital human-built infrastructures that play essential roles in flood control, hydroelectric power generation, water supply, navigation, and other functions. Realizing those functions requires efficient reservoir operation and effective control of the outflow from a reservoir or dam. Over the last decade, artificial intelligence (AI) techniques have become increasingly popular in streamflow forecasting and in reservoir operation planning and scheduling. In this study, three AI models, namely the backpropagation (BP) neural network, the support vector regression (SVR) technique, and the long short-term memory (LSTM) model, are employed to simulate reservoir operation at monthly, daily, and hourly time scales, using approximately 30 years of historical reservoir operation records. This study aims to summarize the influence of parameter settings on model performance and to explore the applicability of the LSTM model to reservoir operation simulation. The results show the following: (1) for the BP neural network and the LSTM model, the effect of the maximum number of iterations on model performance should be examined first; for the SVR model, simulation performance depends directly on the choice of kernel function, and the sigmoid and RBF kernels should be prioritized; (2) the BP neural network and SVR models are well suited to learning the operation rules of a reservoir from a small amount of data; and (3) the LSTM model effectively reduces the time and memory required by the other AI models and demonstrates good capability in simulating low-flow conditions and the outflow curve during peak operation periods.
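The BP-network approach described above can be illustrated with a minimal numpy sketch: a one-hidden-layer network trained by full-batch gradient descent to map reservoir inputs to an outflow. The data and the "operation rule" below are synthetic stand-ins, not the paper's 30-year record, and the iteration budget is the hyperparameter the abstract says to tune first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly records (hypothetical; the paper uses ~30 years of real data):
# inputs are normalized inflow and storage, target outflow follows a made-up rule.
n = 360
inflow = rng.uniform(0.0, 1.0, n)
storage = rng.uniform(0.0, 1.0, n)
X = np.column_stack([inflow, storage])
y = np.tanh(1.5 * inflow + 0.8 * storage - 0.5)  # stand-in "operation rule"

# One-hidden-layer BP network, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, max_iter = 0.1, 3000  # max_iter is the setting the paper says to prioritize

for _ in range(max_iter):
    h = np.tanh(X @ W1 + b1)            # forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    # backpropagate the mean-squared-error gradient
    g2 = h.T @ err[:, None] / n
    gh = (err[:, None] @ W2.T) * (1 - h**2)
    g1 = X.T @ gh / n
    W2 -= lr * g2; b2 -= lr * err.mean()
    W1 -= lr * g1; b1 -= lr * gh.mean(axis=0)

# Final forward pass with the trained weights.
h = np.tanh(X @ W1 + b1)
pred = (h @ W2 + b2).ravel()
mse = np.mean((pred - y) ** 2)
print(f"training MSE: {mse:.4f}")
```

The same data could be fed to an SVR with an RBF or sigmoid kernel, or to an LSTM over the time dimension; this sketch only shows the shape of the simulation task.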
Recurrent kernel machines: computing with infinite echo state networks
Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that can subsequently be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.
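The finite-ESN starting point of this letter can be sketched in a few lines of numpy: a fixed random reservoir drives a nonlinear state, and only a linear readout is fitted by ridge regression. The sine-prediction task and all sizes below are illustrative choices, not the letter's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Task: one-step-ahead prediction of a scalar sine wave (illustrative only).
T = 500
u = np.sin(0.2 * np.arange(T + 1))
inputs, targets = u[:-1], u[1:]

# Random reservoir; the recurrent weights stay untrained.
N = 100
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * inputs[t])  # reservoir state update
    states[t] = x

# Train only the linear readout with ridge regression, after a washout period.
washout, ridge = 100, 1e-6
S, y = states[washout:], targets[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ y)

pred = S @ W_out
mse = np.mean((pred - y) ** 2)
print(f"readout MSE: {mse:.2e}")
```

The letter's contribution is to let N grow to infinity, at which point the inner products of such reservoir states become a recursive kernel that can be evaluated without ever instantiating the states explicitly.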
Machine learning in dam water research: an overview of applications and approaches
Dams play a crucial role in water security. A sustainable dam aims to balance the range of resources involved in its operation. Among the factors needed to maintain sustainability is the management of the water assets in dams. Water asset management in dams includes processes to ensure that planned maintenance can be conducted and that assets such as pipes, pumps, and motors can be repaired, replaced, or upgraded when needed within the allocated budget. Nowadays, most water asset management systems collect and process data for analysis and decision-making. Machine learning (ML) is an emerging concept applied to fulfill such requirements in engineering applications, including dam water research. ML can analyze vast volumes of data and, through a model built from algorithms, can learn, recognize patterns, and produce accurate results and analysis. The results bring meaningful insights for water asset management, specifically for strategizing the optimal solution based on forecasts or predictions, for example, preventive maintenance that replaces water assets according to the predictions of an ML model. In this paper, we discuss machine learning approaches in recent dam water research and review the emerging issues in managing water assets in dams.
Spatio-temporal learning with the online finite and infinite echo-state Gaussian processes
Successful biological systems adapt to change. In this paper, we are principally concerned with adaptive systems that operate in environments where data arrives sequentially and is multivariate in nature, for example, sensory streams in robotic systems. We contribute two reservoir-inspired methods: 1) the online echo-state Gaussian process (OESGP) and 2) its infinite variant, the online infinite echo-state Gaussian process (OIESGP). Both algorithms are iterative fixed-budget methods that learn from noisy time series. In particular, the OESGP combines the echo-state network with Bayesian online learning for Gaussian processes. Extending this to infinite reservoirs yields the OIESGP, which uses a novel recursive kernel with automatic relevance determination that enables spatial and temporal feature weighting. When fused with stochastic natural gradient descent, the kernel hyperparameters are iteratively adapted to better model the target system. Furthermore, insights into the underlying system can be gleaned from inspection of the resulting hyperparameters. Experiments on noisy benchmark problems (one-step prediction and system identification) demonstrate that our methods yield high accuracies relative to state-of-the-art methods and standard kernels with sliding windows, particularly on problems with irrelevant dimensions. In addition, we describe two case studies in robotic learning-by-demonstration involving the Nao humanoid robot and the Assistive Robot Transport for Youngsters (ARTY) smart wheelchair.
Online learning in financial time series
We wish to understand whether additional forms of learning can be combined with sequential optimisation to provide benefits over batch learning in various tasks on financial time series.
In chapter 4, Online learning with radial basis function networks, we provide multi-horizon forecasts on the returns of financial time series. Our sequentially optimised radial basis function network (RBFNet) outperforms a random-walk baseline and several powerful supervised learners. Our RBFNets naturally measure the similarity between test samples and prototypes that capture the characteristics of the feature space.
In chapter 5, Reinforcement learning for systematic FX trading, we perform feature representation transfer from an RBFNet to a direct, recurrent reinforcement learning (DRL) agent. Earlier academic work saw mixed results. We use better features, second-order optimisation methods and adapt our model parameters sequentially. As a result, our DRL agents cope better with statistical changes to the data distribution, achieving higher risk-adjusted returns than a funding and a momentum baseline.
In chapter 6, The recurrent reinforcement learning crypto agent, we construct a digital assets trading agent that performs feature space representation transfer from an echo state network to a DRL agent. The agent learns to trade the XBTUSD perpetual swap contract on BitMEX. Our meta-model can process data as a stream and learn sequentially; this helps it cope with the nonstationary environment.
In chapter 7, Sequential asset ranking in nonstationary time series, we create an online learning long/short portfolio selection algorithm that can detect the best and worst performing portfolio constituents that change over time; in particular, we successfully handle the higher transaction costs associated with using daily-sampled data, and achieve higher total and risk-adjusted returns than the long-only holding of the S&P 500 index with hindsight
Two methods to approximate the Koopman operator with a reservoir computer
The Koopman operator provides a powerful framework for data-driven analysis of dynamical systems. In the last few years, a wealth of numerical methods providing finite-dimensional approximations of the operator have been proposed (e.g. extended dynamic mode decomposition (EDMD) and its variants). While convergence results for EDMD require an infinite number of dictionary elements, recent studies have shown that only a few dictionary elements can yield an efficient approximation of the Koopman operator, provided that they are well chosen through a proper training process. However, this training process typically relies on nonlinear optimization techniques. In this paper, we propose two novel methods based on a reservoir computer to train the dictionary. These methods rely solely on linear convex optimization. We illustrate the efficiency of the method with several numerical examples in the context of data reconstruction, prediction, and computation of the Koopman operator spectrum. These results pave the way to the use of the reservoir computer in the Koopman operator framework.
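The basic EDMD step with a reservoir-style dictionary can be sketched as follows: lift each state with a fixed, untrained nonlinear feature map, then fit the finite-dimensional Koopman approximation by ordinary least squares, a linear convex problem. The logistic map, the random tanh dictionary, and all sizes here are illustrative choices, not the paper's two methods.

```python
import numpy as np

rng = np.random.default_rng(3)

# Data: a trajectory of the logistic map x' = 3.6 x (1 - x).
T = 2000
x = np.empty(T + 1)
x[0] = 0.3
for t in range(T):
    x[t + 1] = 3.6 * x[t] * (1.0 - x[t])

# Dictionary: fixed random tanh features (a reservoir-like, untrained lift),
# plus a constant and the state itself so the state can be read off linearly.
D = 30
W = rng.normal(0, 2.0, D)
b = rng.uniform(-1, 1, D)

def lift(s):
    return np.concatenate([[1.0, s], np.tanh(W * s + b)])

Psi = np.array([lift(s) for s in x[:-1]])       # psi(x_t)
Psi_next = np.array([lift(s) for s in x[1:]])   # psi(x_{t+1})

# EDMD: least-squares (linear, convex) fit of K such that Psi @ K ≈ Psi_next.
K = np.linalg.lstsq(Psi, Psi_next, rcond=None)[0]

# One-step prediction of the observable x: advance the lifted state with K,
# then read off the state coordinate (index 1 of the dictionary).
pred = (Psi @ K)[:, 1]
mse = np.mean((pred - x[1:]) ** 2)
print(f"one-step EDMD MSE: {mse:.2e}")
```

The paper's point is that such a dictionary need not be hand-designed or trained by nonlinear optimization: a reservoir computer supplies the nonlinear lift, and only linear problems remain.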