1,241 research outputs found
Multi-Scale Information, Network, Causality, and Dynamics: Mathematical Computation and Bayesian Inference to Cognitive Neuroscience and Aging
The human brain is estimated to contain some 100 billion neurons and roughly 10 thousand times as many connections. Neurons never function in isolation: each is connected to about 10,000 others, and they interact extensively every millisecond. Brain cells are organized, often dynamically, into neural circuits, processing specific types of information and providing th…
Learning Robust Sequence Features via Dynamic Temporal Pattern Discovery
As a major type of data, time series possess invaluable latent knowledge for describing the real world and human society. To improve the ability of intelligent systems to understand the world and people, it is critical to design sophisticated machine learning algorithms that extract robust time series features from such latent knowledge. Motivated by the successful applications of deep learning in computer vision, a growing number of machine learning researchers have turned their attention to applying deep learning techniques to time series data. However, directly employing current deep models in most time series domains can be problematic, largely because the types of temporal patterns these models target are limited and cannot cover the varied underlying patterns of data from different sources. In this study we address this problem by designing different network structures explicitly based on specific domain knowledge, so that features are extracted from the most salient temporal patterns. More specifically, we focus on two types of temporal patterns: order patterns and frequency patterns. For order patterns, which are usually related to brain and human activities, we design a hashing-based neural network layer to globally encode ordinal pattern information into the resulting features. It is further generalized into a specially designed Recurrent Neural Network (RNN) cell that can learn order patterns in an online fashion. We believe audio-related data such as music and speech benefit from modeling frequency patterns, which we address by developing two types of RNN cells. The first learns long-term dependencies directly in the frequency domain rather than the time domain. The second dynamically filters out noise frequencies based on temporal context.
By proposing various deep models grounded in different domain knowledge and evaluating them on a wide range of time series tasks, we hope this work provides inspiration and increases the community's interest in applying deep learning techniques to more time series tasks
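The order-pattern idea above can be illustrated with a minimal, non-learned sketch: each sliding window of a series is mapped to its rank permutation, and the normalized histogram over permutations serves as an order-pattern feature vector. This only illustrates the underlying pattern type; the paper's learnable hashing layer and RNN cell are not reproduced here, and the function name is hypothetical.

```python
import numpy as np
from itertools import permutations

def ordinal_pattern_features(x, m=3):
    """Histogram of order (permutation) patterns in a 1-D series.

    A fixed 'hash': each length-m window is mapped to the index of its
    rank permutation, yielding an m!-dimensional feature vector.
    Illustrative sketch only -- not the paper's learnable layer.
    """
    pattern_index = {p: i for i, p in enumerate(permutations(range(m)))}
    counts = np.zeros(len(pattern_index))
    for start in range(len(x) - m + 1):
        window = x[start:start + m]
        perm = tuple(np.argsort(window))   # rank permutation of the window
        counts[pattern_index[perm]] += 1
    return counts / counts.sum()           # normalized histogram

feats = ordinal_pattern_features(np.sin(np.linspace(0, 10, 200)), m=3)
```

Because the histogram depends only on relative order, the feature is invariant to monotone rescaling of the series, which is one reason ordinal patterns are robust for noisy physiological and activity data.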
A Systematic Review for Transformer-based Long-term Series Forecasting
The emergence of deep learning has yielded noteworthy advancements in time series forecasting (TSF). Transformer architectures, in particular, have seen broad adoption in TSF tasks, proving highly successful at extracting semantic correlations among the elements of a long sequence. Various variants have enabled the transformer architecture to effectively handle long-term time series forecasting (LTSF) tasks. In this article, we first present a comprehensive overview of transformer architectures and their subsequent enhancements developed to address various LTSF tasks. We then summarize the publicly available LTSF datasets and relevant evaluation metrics. Furthermore, we provide insights into best practices and techniques for effectively training transformers in the context of time series analysis. Lastly, we propose potential research directions in this rapidly evolving field
Brain connectivity analysis: a short survey
This short survey reviews the recent literature on brain connectivity studies. It encompasses all forms of static and dynamic connectivity, whether anatomical, functional, or effective. The last decade has seen an ever-increasing number of studies devoted to deducing functional or effective connectivity, mostly from functional neuroimaging experiments. Resting-state conditions have become a dominant experimental paradigm, and a number of resting-state networks, among them the prominent default mode network, have been identified. Graphical models provide a convenient vehicle for formalizing experimental findings and for closely and quantitatively characterizing the various networks identified. Underlying these abstract concepts are anatomical networks, the so-called connectome, which can be investigated by functional imaging techniques as well. Future studies will have to bridge the gap between anatomical neuronal connections and the related functional or effective connectivities
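The functional-connectivity analyses surveyed above typically start from pairwise statistics between regional time series: a correlation matrix over regions of interest (ROIs) is thresholded to obtain a graph. A minimal sketch with synthetic data and a hypothetical threshold (real studies use many more ROIs and principled thresholding):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: time series for 5 hypothetical ROIs, 300 time points.
ts = rng.standard_normal((300, 5))
ts[:, 1] += 0.8 * ts[:, 0]   # make regions 0 and 1 co-fluctuate

# Functional connectivity: pairwise Pearson correlations between ROIs.
corr = np.corrcoef(ts, rowvar=False)

# Graphical model sketch: edges where |correlation| exceeds a chosen
# threshold (0.3 here is illustrative), excluding self-loops.
adjacency = (np.abs(corr) > 0.3) & ~np.eye(5, dtype=bool)
```

Effective connectivity, by contrast, requires directed models (e.g. Granger-causal or dynamic causal modeling), which a symmetric correlation matrix cannot capture.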
Spatiotemporal Graph Neural Networks with Uncertainty Quantification for Traffic Incident Risk Prediction
Predicting traffic incident risks at granular spatiotemporal levels is challenging. The datasets predominantly feature zero values, indicating no incidents, with sporadic high-risk values for severe incidents. Notably, most current models, especially deep learning methods, focus solely on estimating risk values, overlooking the uncertainties arising from the inherently unpredictable nature of incidents. To tackle this challenge, we introduce the Spatiotemporal Zero-Inflated Tweedie Graph Neural Networks (STZITD-GNNs). Our model merges the reliability of traditional statistical models with the flexibility of graph neural networks, aiming to precisely quantify the uncertainties associated with road-level traffic incident risks. It strategically employs a compound model from the Tweedie family, with a Poisson distribution modeling risk frequency and a Gamma distribution accounting for incident severity. Furthermore, a zero-inflated component helps identify non-incident risk scenarios. As a result, the STZITD-GNNs effectively capture the dataset's skewed distribution, placing emphasis on infrequent but impactful severe incidents. Empirical tests using real-world traffic data from London, UK, demonstrate that our model outperforms current benchmarks. The strength of the STZITD-GNN lies not only in its accuracy but also in its ability to curtail uncertainty, delivering robust predictions over short (7-day) and extended (14-day) timeframes
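The zero-inflated compound Poisson-Gamma (Tweedie) construction described above can be sketched generatively: with some probability the risk is structurally zero; otherwise it is the sum of a Poisson-distributed number of Gamma-distributed severities. All parameter values below are illustrative, not those fitted by the STZITD-GNNs:

```python
import numpy as np

def sample_zi_tweedie(rng, n, pi0=0.6, lam=1.5, shape=2.0, scale=0.5):
    """Draw n values from a zero-inflated compound Poisson-Gamma model.

    With probability pi0 the risk is structurally zero (the zero-inflated
    component); otherwise the value is a sum of a Poisson(lam) number of
    Gamma(shape, scale) severities -- the Tweedie compound form.
    Parameters are illustrative only.
    """
    out = np.zeros(n)
    active = rng.random(n) >= pi0                 # not structurally zero
    counts = rng.poisson(lam, size=active.sum())  # incident frequency
    for i, k in zip(np.flatnonzero(active), counts):
        if k > 0:
            out[i] = rng.gamma(shape, scale, size=k).sum()  # total severity
    return out

samples = sample_zi_tweedie(np.random.default_rng(42), 10_000)
```

Note that zeros arise two ways, from the inflation component and from a Poisson count of zero, which is exactly the point mass at zero that plain Gaussian-output deep models fail to represent.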