Deep Learning Models for Irregularly Sampled and Incomplete Time Series
Irregularly sampled time series data arise naturally in many application domains, including biology, ecology, climate science, astronomy, geology, finance, and health. Such data present fundamental challenges to many classical models from machine learning and statistics. The first challenge in modeling such data is the presence of variable time gaps between observation time points. The second is that the dimensionality of the inputs can differ across data cases, since different data cases typically include different numbers of observations. The third is that different irregularly sampled instances have observations recorded at different times, resulting in a lack of temporal alignment across data cases; observation time points may also be misaligned across dimensions within the same multivariate time series. These features of irregularly sampled time series data invalidate the assumption of a coherent, fully observed, fixed-dimensional feature space that underlies many basic supervised and unsupervised learning models.
In this thesis, we focus on the development of deep learning models for supervised and unsupervised learning from irregularly sampled time series data. We begin by introducing a computationally efficient architecture for whole time series classification and regression based on a novel deterministic interpolation-based layer that acts as a bridge between multivariate irregularly sampled time series instances and standard neural network layers that assume regularly spaced or fixed-dimensional inputs. The architecture consists of a radial basis function (RBF) kernel interpolation network followed by a prediction network. Next, we show how the fixed RBF kernel functions can be relaxed through a novel attention-based continuous-time interpolation framework, and that using attention to learn temporal similarity yields improvements over fixed RBF kernels and other recent approaches on both supervised and unsupervised tasks. We then present a novel deep learning framework for probabilistic interpolation that significantly improves uncertainty quantification in the output interpolations and also improves classification performance. As our final contribution, we study fusion architectures for learning from text data combined with irregularly sampled time series data.
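The interpolation layer described above can be illustrated with a minimal sketch: irregular observations are mapped onto a fixed reference grid via normalized RBF kernel weights, producing the regularly spaced input a standard network expects. The function name `rbf_interpolate`, the bandwidth `gamma`, and the grid size are illustrative assumptions, not the thesis's exact parameterization.

```python
import numpy as np

def rbf_interpolate(t_obs, x_obs, t_ref, gamma=10.0):
    """Map irregularly sampled observations onto a fixed reference grid
    using normalized RBF kernel weights (a sketch of the idea, not the
    thesis's exact layer)."""
    # Pairwise squared time distances between reference and observed points.
    d2 = (t_ref[:, None] - t_obs[None, :]) ** 2
    w = np.exp(-gamma * d2)               # RBF kernel weights
    w = w / w.sum(axis=1, keepdims=True)  # normalize per reference point
    return w @ x_obs                      # kernel-weighted average

# Irregular observations of a sine wave.
t_obs = np.array([0.0, 0.13, 0.4, 0.55, 0.9, 1.0])
x_obs = np.sin(2 * np.pi * t_obs)

# Regular grid that the downstream prediction network expects.
t_ref = np.linspace(0.0, 1.0, 11)
x_ref = rbf_interpolate(t_obs, x_obs, t_ref)
print(x_ref.shape)  # (11,)
```

Because the weights are normalized, each grid value is a convex combination of the observed values, so the interpolated series stays within the observed range regardless of the time gaps.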
Novel Techniques Using Graph Neural Networks (GNNs) for Anomaly Detection
This paper explores two new mechanisms that leverage graphs for anomaly detection. The novelty of the first approach is to exploit the global attention capability of the transformer architecture using a Graph Attention Network (GAT) with a Chebyshev Laplacian representation: the GAT learns attention weights for graph features obtained through Chebyshev expansion of the Laplacian. This method captures higher-order graph features with reduced computational complexity and uses attention mechanisms to improve feature relevance when detecting anomalies.
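The Chebyshev expansion step above can be sketched as follows: the normalized graph Laplacian is rescaled, and the three-term Chebyshev recurrence produces a stack of higher-order graph features that an attention network could then weight. The function name `chebyshev_features` and the assumption that the largest eigenvalue is approximately 2 (exact only in a limit, but a common approximation for the normalized Laplacian) are mine, not the paper's.

```python
import numpy as np

def chebyshev_features(A, X, K=3):
    """Compute Chebyshev polynomial features T_k(L_hat) @ X of the scaled
    graph Laplacian for k = 0..K (a sketch of the representation step)."""
    n = A.shape[0]
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
    L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Rescale eigenvalues into [-1, 1], assuming lambda_max ~= 2.
    L_hat = L - np.eye(n)
    # Recurrence: T_0 = X, T_1 = L_hat X, T_k = 2 L_hat T_{k-1} - T_{k-2}.
    T = [X, L_hat @ X]
    for _ in range(2, K + 1):
        T.append(2 * L_hat @ T[-1] - T[-2])
    return np.stack(T)  # shape (K+1, n, feature_dim)

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.eye(4)  # one-hot node features for illustration
feats = chebyshev_features(A, X, K=3)
print(feats.shape)  # (4, 4, 4)
```

Each T_k(L_hat) aggregates information from k-hop neighborhoods, which is what makes the expansion a cheap source of higher-order graph features.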
The second approach uses Fisher information to identify anomalous graphs, with a ChebNet module for graph analysis. The ChebNet module enables deep learning on graphs, capturing complex patterns and relationships that help detect fraud more accurately. Fisher information improves model interpretability, while the ChebNet modules leverage the spectral properties of the graph.
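A ChebNet-style spectral convolution, as referenced above, combines the Chebyshev polynomial terms of the scaled Laplacian with learnable filter weights. This is a minimal sketch of such a layer under my own naming (`chebnet_layer`, weight tensor `W`), not the paper's exact module, and it omits the Fisher-information scoring.

```python
import numpy as np

rng = np.random.default_rng(0)

def chebnet_layer(A, X, W, K=2):
    """One ChebNet spectral convolution: sum_k T_k(L_hat) X W_k, followed
    by ReLU. W has shape (K+1, in_dim, out_dim). A sketch, assuming the
    usual lambda_max ~= 2 rescaling of the normalized Laplacian."""
    n = A.shape[0]
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_hat = (np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]) - np.eye(n)
    # Chebyshev recurrence, accumulating each term's filtered contribution.
    T_prev, T_curr = X, L_hat @ X
    out = T_prev @ W[0] + T_curr @ W[1]
    for k in range(2, K + 1):
        T_prev, T_curr = T_curr, 2 * L_hat @ T_curr - T_prev
        out = out + T_curr @ W[k]
    return np.maximum(out, 0.0)  # ReLU

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = rng.normal(size=(3, 5))        # node features
W = rng.normal(size=(3, 5, 8))     # K+1 = 3 learnable filter matrices
H = chebnet_layer(A, X, W, K=2)
print(H.shape)  # (3, 8)
```

In a full model these weights would be trained end to end; here they are random, since the point is only the shape of the computation.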