    Quantifying the Effect of Registration Error on Spatio-Temporal Fusion

    It is challenging to acquire satellite sensor data with both fine spatial and fine temporal resolution, especially for monitoring at global scales. Among the widely used global monitoring satellite sensors, Landsat provides fine spatial but coarse temporal resolution, while the Moderate Resolution Imaging Spectroradiometer (MODIS) provides fine temporal but coarse spatial resolution. One solution to this problem is to blend the two types of data through spatio-temporal fusion, creating images with both fine temporal and fine spatial resolution. However, reliable geometric registration of images acquired by different sensors is a prerequisite for spatio-temporal fusion. Because of the potentially large differences between the spatial resolutions of the images to be fused, the registration process always carries some degree of uncertainty. This article quantitatively analyzes the influence of geometric registration error on spatio-temporal fusion. The relationship between registration error and fusion accuracy was investigated under different temporal distances between images and different spatial patterns within the images, using two typical spatio-temporal fusion methods: the spatial and temporal adaptive reflectance fusion model (STARFM) and Fit-FC. The results show that registration error has a significant impact on fusion accuracy: as the registration error increased, the accuracy decreased monotonically. The effect of registration error in a heterogeneous region was greater than in a homogeneous region. Moreover, fusion accuracy depended not on the temporal distance between the images to be fused, but on their statistical correlation. Finally, the Fit-FC method was more accurate than STARFM under all registration error scenarios.
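
    The registration-error analysis itself is not reproduced in the abstract, but the basic experimental setup is straightforward to sketch: perturb one image by a known shift and measure how the disagreement with the unshifted reference grows. The following Python snippet is a minimal, hypothetical illustration (not the authors' code); the smooth synthetic scene and the RMSE metric are assumptions made for the example.

        import numpy as np

        def rmse(a, b):
            """Root-mean-square error between two images of equal shape."""
            return np.sqrt(np.mean((a - b) ** 2))

        # A smooth synthetic scene stands in for a fine-resolution band;
        # real imagery is spatially autocorrelated, which is why small
        # shifts degrade accuracy gradually rather than all at once.
        y, x = np.mgrid[0:256, 0:256]
        reference = np.sin(x / 16.0) * np.cos(y / 16.0)

        # Simulate registration error as a whole-pixel shift; the error
        # grows monotonically with the offset, mirroring the reported trend.
        for offset in range(5):
            shifted = np.roll(reference, shift=(offset, offset), axis=(0, 1))
            print(f"offset={offset}px  RMSE={rmse(reference, shifted):.4f}")

    In a more heterogeneous scene (sharper spatial structure), the same offsets would produce larger errors, consistent with the article's finding.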

    Temporal data fusion in multisensor systems using dynamic time warping

    Data acquired from multiple sensors can be fused at a variety of levels: the raw data level, the feature level, or the decision level. An additional dimension to the fusion process is temporal fusion, the fusion of data or information acquired from multiple sensors of different types over a period of time. We propose a technique that can perform such temporal fusion. The core of the system is the fusion processor, which uses Dynamic Time Warping (DTW) to perform temporal fusion. We evaluate the performance of the fusion system on two real-world datasets: 1) accelerometer data acquired from performing two hand gestures and 2) NOKIA's benchmark dataset for context recognition. The results of the first experiment show that the system can perform temporal fusion on both raw data and features derived from the raw data. The system can also recognize the same class of multisensor temporal sequences even when they have different lengths, e.g., the same human gesture performed at different speeds. In addition, the fusion processor can infer decisions from the temporal sequences quickly and accurately. The results of the second experiment show that the system can perform fusion on temporal sequences that have large dimensions and are a mix of discrete and continuous variables. The proposed fusion system achieved good classification rates efficiently in both experiments.
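
    The abstract names Dynamic Time Warping as the core of the fusion processor. A minimal sketch of the classic DTW recurrence is shown below; the synthetic "gesture" signals are illustrative assumptions, not the paper's data.

        import numpy as np

        def dtw_distance(seq_a, seq_b):
            """Cumulative alignment cost between two 1-D sequences.

            The cost stays small when the sequences trace the same shape,
            even if one is a stretched (slower) version of the other.
            """
            n, m = len(seq_a), len(seq_b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(seq_a[i - 1] - seq_b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                         cost[i, j - 1],      # deletion
                                         cost[i - 1, j - 1])  # match
            return cost[n, m]

        # The same gesture performed at two speeds aligns closely, while a
        # different gesture of the same length does not.
        fast = np.sin(np.linspace(0, 2 * np.pi, 30))
        slow = np.sin(np.linspace(0, 2 * np.pi, 60))
        other = np.cos(np.linspace(0, 2 * np.pi, 60))
        print(dtw_distance(fast, slow))   # small despite different lengths
        print(dtw_distance(fast, other))  # noticeably larger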

    Spatial grouping determines temporal integration

    To make sense of a continuously changing visual world, people need to integrate features across space and time. Despite more than a century of research, the mechanisms of feature integration are still a matter of debate. To examine how temporal and spatial integration interact, the authors measured the amount of temporal fusion (a measure of temporal integration) for different spatial layouts. They found that spatial grouping by proximity and similarity can completely block temporal integration. Computer simulations with a simple neural network capture these findings very well, suggesting that the proposed spatial grouping operations may already occur at an early stage of visual information processing.

    Multi-Spatio-temporal Fusion Graph Recurrent Network for Traffic Forecasting

    Traffic forecasting is essential to the development of smart-city traffic infrastructure in the new era. However, the complex spatial and temporal dependencies in traffic data make traffic forecasting extremely challenging. Most existing traffic forecasting methods rely on a predefined adjacency matrix to model spatio-temporal dependencies. However, road traffic conditions change in real time, so the adjacency matrix should change dynamically with time. This article presents a new Multi-Spatio-temporal Fusion Graph Recurrent Network (MSTFGRN) to address these issues. The network proposes a data-driven weighted adjacency matrix generation method to compensate for real-time spatial dependencies not reflected by the predefined adjacency matrix. It also efficiently learns hidden spatio-temporal dependencies by performing a new two-way spatio-temporal fusion operation on parallel spatio-temporal relations at different moments. Finally, global spatio-temporal dependencies are captured by integrating a global attention mechanism into the spatio-temporal fusion module. Extensive trials on four large-scale, real-world traffic datasets demonstrate that our method achieves state-of-the-art performance compared to alternative baselines.
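
    The abstract does not detail the data-driven adjacency generation, but a common pattern in this literature derives edge weights from learned node embeddings (Graph WaveNet's adaptive adjacency is a well-known example). The PyTorch sketch below illustrates that general idea only; the class name, embedding size, and normalization are assumptions, not MSTFGRN's actual design.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class AdaptiveAdjacency(nn.Module):
            """Learn a weighted adjacency matrix from node embeddings.

            A data-driven alternative to a fixed, predefined adjacency
            matrix in spatio-temporal graph networks (illustrative only).
            """
            def __init__(self, num_nodes, embed_dim=10):
                super().__init__()
                self.source = nn.Parameter(torch.randn(num_nodes, embed_dim))
                self.target = nn.Parameter(torch.randn(num_nodes, embed_dim))

            def forward(self):
                # Pairwise node affinities, sparsified by ReLU and
                # row-normalized by softmax so each row sums to one.
                logits = self.source @ self.target.t()
                return F.softmax(F.relu(logits), dim=1)

        adj = AdaptiveAdjacency(num_nodes=207)()  # e.g. 207 road sensors
        print(adj.shape)                          # torch.Size([207, 207])

    Because the embeddings are trained jointly with the forecaster, the learned adjacency can adapt to dependencies that a fixed road-network matrix misses.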

    Spatio-Temporal Fusion Networks for Action Recognition

    Video-based CNN works have focused on effective ways to fuse appearance and motion networks, but they typically fail to exploit temporal information across video frames. In this work, we present a novel spatio-temporal fusion network (STFN) that integrates the temporal dynamics of appearance and motion information from entire videos. The captured temporal dynamics are then aggregated into a better video-level representation and learned via end-to-end training. The spatio-temporal fusion network consists of two sets of Residual Inception blocks that extract temporal dynamics, and a fusion connection for appearance and motion features. The benefits of STFN are: (a) it captures local and global temporal dynamics of complementary data to learn video-wide information; and (b) it is applicable to any network for video classification to boost performance. We explore a variety of design choices for STFN and verify how network performance varies through ablation studies. We perform experiments on two challenging human activity datasets, UCF101 and HMDB51, and achieve state-of-the-art results with the best network.
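
    The STFN architecture itself (Residual Inception blocks plus a fusion connection) is not reproduced here, but the core idea of fusing frame-level appearance and motion features into a single video-level representation can be sketched simply. The PyTorch example below is a deliberately simplified stand-in (concatenation, projection, and temporal averaging); all names and dimensions are assumptions.

        import torch
        import torch.nn as nn

        class TwoStreamFusion(nn.Module):
            """Fuse per-frame appearance and motion features into one
            video-level vector for classification (illustrative only)."""
            def __init__(self, feat_dim=1024, fused_dim=512, num_classes=101):
                super().__init__()
                self.project = nn.Linear(2 * feat_dim, fused_dim)
                self.classify = nn.Linear(fused_dim, num_classes)

            def forward(self, appearance, motion):
                # appearance, motion: (batch, time, feat_dim) frame features
                fused = torch.cat([appearance, motion], dim=-1)
                fused = torch.relu(self.project(fused))
                video = fused.mean(dim=1)    # aggregate over time
                return self.classify(video)  # (batch, num_classes)

        model = TwoStreamFusion()
        logits = model(torch.randn(2, 16, 1024), torch.randn(2, 16, 1024))
        print(logits.shape)  # torch.Size([2, 101]), e.g. UCF101's classes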

    Comparing deep learning models for volatility prediction using multivariate data

    This study compares several deep learning-based forecasters on the task of volatility prediction using multivariate data, proceeding from simpler or shallower to deeper and more complex models, and benchmarks them against naive prediction and variations of classical GARCH models. Specifically, the volatility of five assets (i.e., S&P 500, NASDAQ 100, gold, silver, and oil) was predicted with GARCH models, Multi-Layer Perceptrons, recurrent neural networks, Temporal Convolutional Networks, and the Temporal Fusion Transformer. In most cases the Temporal Fusion Transformer, followed by variants of the Temporal Convolutional Network, outperformed classical approaches and shallow networks. These experiments were repeated, and the difference between competing models was shown to be statistically significant, therefore encouraging their use in practice.
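
    For readers unfamiliar with the classical baseline, the GARCH(1,1) conditional-variance recursion can be written in a few lines. The parameters below are illustrative placeholders; in practice they are estimated by maximum likelihood (e.g., with the arch package).

        import numpy as np

        def garch11_variance(returns, omega=1e-6, alpha=0.1, beta=0.85):
            """GARCH(1,1): sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1].

            omega, alpha, and beta here are illustrative, not fitted values.
            """
            sigma2 = np.empty(len(returns), dtype=float)
            sigma2[0] = np.var(returns)  # initialize at the sample variance
            for t in range(1, len(returns)):
                sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
            return sigma2

        rng = np.random.default_rng(1)
        r = rng.normal(0.0, 0.01, size=500)  # synthetic daily returns
        vol = np.sqrt(garch11_variance(r))   # conditional volatility path
        print(vol[-5:])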

    A rigorous statistical framework for spatio-temporal pollution prediction and estimation of its long-term impact on health

    In the United Kingdom, air pollution is linked to around 40,000 premature deaths each year, but estimating its health effects is challenging in a spatio-temporal study. The challenges include spatial misalignment between the pollution and disease data; uncertainty in the estimated pollution surface; and complex residual spatio-temporal autocorrelation in the disease data. This article develops a two-stage model that addresses these issues. The first stage is a spatio-temporal fusion model linking modeled and measured pollution data, while the second stage links these predictions to the disease data. The methodology is motivated by a new five-year study investigating the effects of multiple pollutants on respiratory hospitalizations in England between 2007 and 2011, using pollution and disease data relating to local and unitary authorities on a monthly time scale.
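
    As a toy illustration of the first stage (linking modeled and measured pollution), the sketch below calibrates model output against monitor measurements with ordinary least squares and predicts on an unmonitored grid. The real framework is a full spatio-temporal fusion model that also propagates uncertainty into the second-stage disease model; every value and name here is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(2)

        # Model output at monitor sites, plus noisy ground-truth measurements.
        modeled = rng.uniform(5, 40, size=200)
        measured = 2.0 + 0.8 * modeled + rng.normal(0.0, 2.0, size=200)

        # Stage 1 (toy version): calibrate modeled against measured pollution.
        X = np.column_stack([np.ones_like(modeled), modeled])
        coef, *_ = np.linalg.lstsq(X, measured, rcond=None)  # intercept, slope

        # Predict pollution on a grid; stage 2 would link these predictions
        # (with their uncertainty) to the disease counts.
        grid_modeled = rng.uniform(5, 40, size=1000)
        grid_pred = coef[0] + coef[1] * grid_modeled
        print(coef, grid_pred[:3])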