Deep Learning Models for Irregularly Sampled and Incomplete Time Series
Irregularly sampled time series data arise naturally in many application domains including biology, ecology, climate science, astronomy, geology, finance, and health. Such data present fundamental challenges to many classical models from machine learning and statistics. The first challenge in modeling such data is the presence of variable time gaps between observation time points. The second challenge is that the dimensionality of the inputs can differ across data cases; this occurs naturally because different data cases are likely to include different numbers of observations. The third challenge is that different irregularly sampled instances have observations recorded at different times, resulting in a lack of temporal alignment across data cases. There may also be a lack of alignment of observation time points across different dimensions of the same multivariate time series. These features of irregularly sampled time series data invalidate the assumption of a coherent, fully observed, fixed-dimensional feature space that underlies many basic supervised and unsupervised learning models.
In this thesis, we focus on the development of deep learning models for supervised and unsupervised learning from irregularly sampled time series data. We begin by introducing a computationally efficient architecture for whole time series classification and regression problems based on a novel deterministic interpolation-based layer that acts as a bridge between multivariate irregularly sampled time series data instances and standard neural network layers that assume regularly spaced or fixed-dimensional inputs. The architecture consists of a radial basis function (RBF) kernel interpolation network followed by a prediction network. Next, we show how the use of fixed RBF kernel functions can be relaxed through a novel attention-based continuous-time interpolation framework. We show that using attention to learn temporal similarity yields improvements over fixed RBF kernels and other recent approaches on both supervised and unsupervised tasks. Next, we present a novel deep learning framework for probabilistic interpolation that significantly improves uncertainty quantification in the output interpolations. Furthermore, we show that this framework also improves classification performance. As our final contribution, we study fusion architectures for learning from text data combined with irregularly sampled time series data.
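The interpolation layer described above can be sketched as a kernel-weighted average: each point on a fixed reference grid is a normalized RBF-weighted combination of the irregular observations. A minimal illustration follows; the fixed bandwidth `gamma` and the toy values are assumptions for illustration (the actual model learns kernel parameters and feeds the interpolated grid to a prediction network):

```python
import numpy as np

def rbf_interpolate(t_obs, x_obs, t_ref, gamma=1.0):
    """Interpolate irregularly sampled observations onto a regular reference
    grid using a normalized Gaussian RBF kernel.
    t_obs: (n,) observation times; x_obs: (n,) observed values;
    t_ref: (m,) reference grid; gamma: kernel bandwidth (fixed here)."""
    # Kernel weight between every reference point and every observation
    w = np.exp(-gamma * (t_ref[:, None] - t_obs[None, :]) ** 2)  # (m, n)
    w = w / w.sum(axis=1, keepdims=True)  # normalize rows to sum to 1
    return w @ x_obs  # convex combination of observed values

t_obs = np.array([0.1, 0.7, 2.3, 3.9])   # irregular observation times
x_obs = np.array([1.0, 2.0, 0.5, 1.5])   # observed values
t_ref = np.linspace(0.0, 4.0, 9)         # regular grid for the prediction net
x_ref = rbf_interpolate(t_obs, x_obs, t_ref, gamma=2.0)
```

Because each output is a convex combination of the observations, the interpolated values stay within the observed range; a fixed-dimensional `x_ref` can then be consumed by any standard network layer.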
Precipitation prediction using recurrent neural networks and long short-term memory
Prediction of meteorological variables such as precipitation, temperature, wind speed, and solar radiation is beneficial to human life. Observations of these variables have been recorded at scattered stations for more than thirty years, creating an opportunity to map historical patterns into predictions. However, weather variables are highly complex, influenced in part by decadal phenomena such as the El Nino-Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD). Weather predictions can be characterized by prediction duration, predicted variables, and observation stations. This research proposes precipitation prediction using recurrent neural networks and long short-term memory. Experiments were carried out varying the prediction duration, the time lag used as a feature, the amount of data used, and the optimization model. The results show that a shorter time lag used as a feature gives good accuracy. The weekly prediction duration is also more accurate than the monthly one, at 85.71% compared to 83.33% on the validation data.
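The windowing scheme the experiments vary, i.e. the time lag used as a feature and the prediction horizon, can be sketched as a simple series-to-supervised transformation; the function name and toy series below are illustrative, not from the paper:

```python
import numpy as np

def make_windows(series, lag, horizon):
    """Frame a univariate series into (features, target) pairs: each row
    uses the previous `lag` observations to predict the value `horizon`
    steps ahead. The resulting X, y can feed an RNN/LSTM."""
    X, y = [], []
    for i in range(len(series) - lag - horizon + 1):
        X.append(series[i:i + lag])          # lag values as input features
        y.append(series[i + lag + horizon - 1])  # target, horizon steps ahead
    return np.array(X), np.array(y)

rain = np.arange(10.0)  # toy weekly precipitation series
X, y = make_windows(rain, lag=3, horizon=1)
```

Varying `lag` and `horizon` reproduces the paper's experimental axes (shorter time lag as a feature, weekly vs. monthly prediction duration).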
Explainable Physics-informed Deep Learning for Rainfall-runoff Modeling and Uncertainty Assessment across the Continental United States
Hydrologic models provide a comprehensive tool for calibrating streamflow response to environmental variables. Various hydrologic modeling approaches, ranging from physically based to conceptual to entirely data-driven models, have been widely used for hydrologic simulation. In recent years, however, Deep Learning (DL), a new generation of Machine Learning (ML), has taken hydrologic simulation research in a new direction. DL methods have recently been proposed for rainfall-runoff modeling that complement both distributed and conceptual hydrologic models, particularly in catchments where the data needed to support a process-based model are scarce and limited.
This dissertation investigated the applicability of two advanced probabilistic physics-informed DL algorithms, i.e., deep autoregressive network (DeepAR) and temporal fusion transformer (TFT), for daily rainfall-runoff modeling across the continental United States (CONUS).
We benchmarked our proposed models against several physics-based hydrologic approaches, including the Sacramento Soil Moisture Accounting model (SAC-SMA), Variable Infiltration Capacity (VIC), Framework for Understanding Structural Errors (FUSE), Hydrologiska Byråns Vattenbalansavdelning (HBV), and the mesoscale hydrologic model (mHM). These benchmark models fall into two groups. The first group comprises models calibrated for each basin individually (e.g., SAC-SMA, VIC, FUSE2, mHM, and HBV), while the second group, including our physics-informed approaches, is made up of regionally calibrated models that share one parameter set for all basins in the dataset. All approaches were implemented and tested using the Catchment Attributes and Meteorology for Large-sample Studies (CAMELS) Maurer datasets.
We developed the TFT and DeepAR with two different configurations, i.e., with (physics-informed model) and without (the original model) static attributes. Various static and dynamic catchment physical attributes with varying spatiotemporal variability were incorporated into the pipeline to simulate how a drainage system responds to rainfall-runoff processes. To demonstrate how the models learned to differentiate between rainfall-runoff behaviors across catchments and to identify the dominant processes, sensitivity and explainability analyses of the modeling outcomes were also performed. Despite recent advancements, deep networks are perceived as difficult to parameterize; their simulations may therefore propagate error and uncertainty in modeling. To address uncertainty, a quantile likelihood function was incorporated as the TFT loss function. The results suggest that the physics-informed TFT model was superior in predicting high and low flow fluctuations compared to the original TFT and DeepAR models (without static attributes) and even the physics-informed DeepAR. The physics-informed TFT model correctly identified which static attributes contribute most to streamflow generation in each catchment, given its climate, topography, land cover, soil, and geological conditions. The interpretability of the physics-informed TFT model and its ability to assimilate multiple sources of information and parameters make it a strong candidate for regional as well as continental-scale hydrologic simulations. Both physics-informed TFT and DeepAR were more successful in learning the intermediate and high flow regimes than the low flow regime. The advantage in high flow regimes can be attributed to learning a more generalizable mapping between static and dynamic attributes and runoff parameters.
It appears that both TFT and DeepAR may have learned some true processes that are missing from both conceptual and physics-based models, possibly related to deep soil water storage (the layer where soil water is not sensitive to daily evapotranspiration), saturated hydraulic conductivity, and vegetation dynamics.
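The quantile likelihood used as the TFT loss above is commonly implemented as the pinball loss, which penalizes under- and over-prediction asymmetrically according to the target quantile. A minimal sketch (the exact formulation in the dissertation may differ):

```python
import numpy as np

def quantile_loss(y_true, y_pred, q):
    """Pinball loss for quantile q: under-prediction (y_true > y_pred) is
    weighted by q, over-prediction by (1 - q). At q = 0.5 this reduces to
    half the mean absolute error."""
    e = y_true - y_pred
    return np.mean(np.maximum(q * e, (q - 1.0) * e))

y_true = np.array([10.0, 12.0, 9.0])   # toy streamflow observations
l50 = quantile_loss(y_true, np.array([11.0, 11.0, 11.0]), 0.5)
```

Training one head per quantile (e.g., q = 0.1, 0.5, 0.9) yields a predictive interval around the streamflow forecast rather than a single point estimate, which is how the loss supports uncertainty assessment.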
Explainable Tensorized Neural Ordinary Differential Equations for Arbitrary-step Time Series Prediction
We propose a continuous neural network architecture, termed Explainable Tensorized Neural Ordinary Differential Equations (ETN-ODE), for multi-step time series prediction at arbitrary time points. Unlike existing approaches, which mainly handle univariate time series for multi-step prediction or multivariate time series for single-step prediction, ETN-ODE can model multivariate time series for arbitrary-step prediction. In addition, it incorporates a tandem attention mechanism, comprising temporal attention and variable attention, that provides explainable insights into the data. Specifically, ETN-ODE combines an explainable Tensorized Gated Recurrent Unit (Tensorized GRU or TGRU) with Ordinary Differential Equations (ODEs), where the derivative of the latent states is parameterized with a neural network. This continuous-time ODE network enables multi-step prediction at arbitrary time points. We quantitatively and qualitatively demonstrate the effectiveness and interpretability of ETN-ODE on five multi-step prediction tasks and one arbitrary-step prediction task. Extensive experiments show that ETN-ODE produces accurate predictions at arbitrary time points while outperforming baseline methods on standard multi-step time series prediction.
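The mechanism that makes arbitrary-step prediction possible can be sketched with a fixed-step Euler integrator: the latent state is advanced under a learned derivative function until each query time is reached, so predictions are available at any continuous time point, not just grid steps. The toy linear dynamics below stand in for the neural derivative network (an assumption for illustration; the real model uses a learned network and an adaptive solver):

```python
import numpy as np

def f(h, params):
    """Toy latent dynamics dh/dt = W h, standing in for the neural network
    that parameterizes the derivative of the latent state (hypothetical)."""
    return params @ h

def odeint_euler(deriv, h0, t_query, params, dt=0.01):
    """Integrate the latent state from t=0 to each sorted query time with
    fixed-step Euler; a prediction at an arbitrary time point falls out by
    simply stopping the integration at that time."""
    h, t, out = h0.copy(), 0.0, []
    for tq in t_query:
        while t < tq - 1e-12:
            step = min(dt, tq - t)   # do not overshoot the query time
            h = h + step * deriv(h, params)
            t += step
        out.append(h.copy())
    return np.stack(out)

W = np.array([[-0.5]])               # decaying 1-D latent dynamics
h0 = np.array([1.0])
traj = odeint_euler(f, h0, [0.5, 1.0, 2.0], W)  # states at arbitrary times
```

For dh/dt = -0.5 h the trajectory should track exp(-0.5 t), which is easy to verify against the integrator's output.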
Variational Bayesian dropout with a Gaussian prior for recurrent neural networks application in rainfall–runoff modeling
Recurrent neural networks (RNNs) are a class of artificial neural networks capable of learning complicated nonlinear relationships and functions from data. The catchment-scale daily rainfall-runoff relationship is a nonlinear and sequential process that can potentially benefit from these intelligent algorithms. However, RNNs are perceived as difficult to parameterize, which translates into significant epistemic (lack of knowledge about a physical system) and aleatory (inherent randomness in a physical system) uncertainties in modeling. The current study investigates variational Bayesian dropout (or Monte Carlo dropout, MC-dropout) as a diagnostic approach to RNN evaluation that can learn a mapping function while accounting for both data and model uncertainty. The MC-dropout technique is coupled with three different RNN networks, i.e., vanilla RNN, long short-term memory (LSTM), and gated recurrent unit (GRU), to approximate Bayesian inference in a deep Gaussian noise process and quantify both epistemic and aleatory uncertainties in daily rainfall-runoff simulation across a mixed urban and rural coastal catchment in North Carolina, USA. The variational Bayesian outcomes were then compared with observed data as well as with simulation results from the well-known Sacramento soil moisture accounting (SAC-SMA) model. The analysis suggested a considerable improvement in predictive log-likelihood using the MC-dropout technique, with an inherent Gaussian input noise term applied to the RNN layers to implicitly mitigate overfitting while simulating daily streamflow records. Our experiments on the three RNN models across a broad range of simulation strategies demonstrated the superiority of the LSTM and GRU approaches relative to the SAC-SMA conceptual hydrologic model.
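MC-dropout as used above amounts to keeping dropout active at prediction time and summarizing many stochastic forward passes: the sample mean approximates the prediction and the sample spread serves as a model-uncertainty estimate. A minimal numpy sketch with a single linear layer standing in for the trained RNN (the weights, dropout rate, and inputs are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W, b, p=0.2, n_samples=200):
    """Run n_samples stochastic forward passes with dropout left ON at
    prediction time. Returns the predictive mean and standard deviation
    (the latter is a proxy for epistemic uncertainty)."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(W.shape[1]) > p              # drop inputs w.p. p
        preds.append((x * mask) @ W.T / (1 - p) + b)   # inverted-dropout scaling
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

x = np.array([0.5, -1.0, 2.0])       # toy input features
W = np.array([[1.0, 0.5, -0.3]])     # stand-in "trained" weights
b = np.array([0.1])
mu, sigma = mc_dropout_predict(x, W, b)
```

The mean converges to the deterministic prediction (here x @ W.T + b = -0.5), while `sigma` stays strictly positive, which is the uncertainty signal the study exploits for streamflow simulation.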
Machine learning to model health with multimodal mobile sensor data
The widespread adoption of smartphones and wearables has led to the accumulation of rich datasets, which could aid the understanding of behavior and health in unprecedented detail. At the same time, machine learning and specifically deep learning have reached impressive performance in a variety of prediction tasks, but their use on time-series data appears challenging. Existing models struggle to learn from this unique type of data due to noise, sparsity, long-tailed distributions of behaviors, lack of labels, and multimodality.
This dissertation addresses these challenges by developing new models that leverage multi-task learning for accurate forecasting, multimodal fusion for improved population subtyping, and self-supervision for learning generalized representations. We apply our proposed methods to challenging real-world tasks of predicting mental health and cardio-respiratory fitness through sensor data.
First, we study the relationship of passive data as collected from smartphones (movement and background audio) to momentary mood levels. Our new training pipeline, which combines different sensor data into a low-dimensional embedding and clusters longitudinal user trajectories as outcomes, outperforms traditional approaches based solely on psychology questionnaires. Second, motivated by mood instability as a predictor of poor mental health, we propose encoder-decoder models for time-series forecasting which exploit the bi-modality of mood with multi-task learning.
Next, motivated by the success of general-purpose models in vision and language tasks, we propose a self-supervised neural network ready to use as a feature extractor for wearable data. To this end, we set heart rate responses as the supervisory signal for activity data, leveraging their underlying physiological relationship, and show that the resulting task-agnostic embeddings generalize to predicting structurally different downstream outcomes through transfer learning (e.g., BMI, age, energy expenditure), outperforming unsupervised autoencoders and biomarkers. Finally, acknowledging fitness as a strong predictor of overall health that can only be measured with expensive instruments (e.g., a VO2max test), we develop models that enable accurate prediction of fine-grained fitness levels from wearables in the present and, more importantly, of the direction and magnitude of fitness change almost a decade later.
All proposed methods are evaluated on large longitudinal datasets with tens of thousands of participants in the wild. The models developed and the insights drawn in this dissertation provide evidence for a better understanding of high-dimensional behavioral and physiological data, with implications for large-scale health and lifestyle monitoring.
Funding: the Department of Computer Science and Technology at the University of Cambridge through the EPSRC DTP Grant (EP/N509620/1), and the Embiricos Trust Scholarship of Jesus College, Cambridge.