Correlating sparse sensing for large-scale traffic speed estimation: A Laplacian-enhanced low-rank tensor kriging approach
Traffic speed is central to characterizing the fluidity of the road network.
Many transportation applications rely on it, such as real-time navigation,
dynamic route planning, and congestion management. Rapid advances in sensing
and communication techniques make traffic speed detection easier than ever.
However, due to the sparse deployment of static sensors or the low penetration
of mobile sensors, the detected speeds are incomplete and far from sufficient
for network-wide use. In addition, sensors are prone to errors or missing data
for a variety of reasons, so the speeds they report can be highly noisy. These drawbacks
call for effective techniques to recover credible estimates from the incomplete
data. In this work, we first identify the issue as a spatiotemporal kriging
problem and propose a Laplacian enhanced low-rank tensor completion (LETC)
framework featuring both low-rankness and multi-dimensional correlations for
large-scale traffic speed kriging under limited observations. To be specific,
three types of speed correlation including temporal continuity, temporal
periodicity, and spatial proximity are carefully chosen and simultaneously
modeled by three different forms of graph Laplacian, namely temporal graph
Fourier transform, generalized temporal consistency regularization, and
diffusion graph regularization. We then design an efficient solution algorithm
via several effective numeric techniques to scale up the proposed model to
network-wide kriging. Through experiments on two public million-level traffic
speed datasets, we find that the proposed LETC achieves state-of-the-art
kriging performance even at low observation rates, while cutting computing
time by more than half compared with baseline methods. We also provide
insights into spatiotemporal traffic data modeling and kriging at the network
level.
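The graph Laplacian regularizers the abstract lists all reward signals that vary smoothly over a graph. As a hedged illustration (not the authors' LETC code), the snippet below builds a combinatorial Laplacian for a toy three-sensor chain and shows that the smoothness penalty tr(X L Xᵀ) is smaller for speed profiles that agree across neighboring sensors:

```python
import numpy as np

def graph_laplacian(W):
    """Combinatorial Laplacian L = D - W of a symmetric adjacency W."""
    D = np.diag(W.sum(axis=1))
    return D - W

def laplacian_smoothness(X, L):
    """tr(X L X^T): small when the rows of X vary smoothly over the graph."""
    return np.trace(X @ L @ X.T)

# Toy road network: three sensors in a chain 0-1-2 (illustrative only)
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = graph_laplacian(W)

smooth = np.array([[1.0, 1.1, 1.2]])   # speeds similar at neighboring sensors
rough  = np.array([[1.0, 5.0, 1.0]])   # speeds jump between neighbors
assert laplacian_smoothness(smooth, L) < laplacian_smoothness(rough, L)
```

Minimizing such a penalty alongside a low-rank objective is what couples the completed tensor to the network topology.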
Matrix Completion With Variational Graph Autoencoders: Application in Hyperlocal Air Quality Inference
Inferring air quality from a limited number of observations is an essential
task for monitoring and controlling air pollution. Existing inference methods
typically use low spatial resolution data collected by fixed monitoring
stations and infer the concentration of air pollutants using additional types
of data, e.g., meteorological and traffic information. In this work, we focus
on street-level air quality inference by utilizing data collected by mobile
stations. We formulate air quality inference in this setting as a graph-based
matrix completion problem and propose a novel variational model based on graph
convolutional autoencoders. Our model effectively captures the spatio-temporal
correlation of the measurements and does not depend on the availability of
additional information apart from the street-network topology. Experiments on
a real air quality dataset collected with mobile stations show that the
proposed model outperforms state-of-the-art approaches.
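A variational model of this kind is typically trained with two loss terms: a reconstruction error restricted to the observed matrix entries, and a KL divergence regularizing the latent code. The sketch below shows both terms in isolation; the shapes, names, and omission of the encoder/decoder are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def masked_mse(X, X_hat, mask):
    """Squared reconstruction error restricted to observed entries (mask == 1)."""
    return np.sum(mask * (X - X_hat) ** 2) / mask.sum()

def kl_std_normal(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, 1) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

X = np.array([[1.0, 2.0], [3.0, 4.0]])
mask = np.array([[1.0, 0.0], [1.0, 1.0]])     # entry (0, 1) is unobserved
X_hat = np.array([[1.0, 9.0], [3.0, 4.0]])    # wrong only where unobserved
assert masked_mse(X, X_hat, mask) == 0.0      # unobserved errors don't count
assert kl_std_normal(np.zeros(3), np.zeros(3)) == 0.0
```

The masking is what turns a plain autoencoder objective into a matrix-completion one: gradients flow only through measured air quality readings.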
Graph signal reconstruction techniques for IoT air pollution monitoring platforms
Air pollution monitoring platforms play a very important role in preventing and mitigating the effects of pollution. Recent advances in the field of graph signal processing have made it possible to describe and analyze air pollution monitoring networks using graphs. One of the main applications is the reconstruction of the measured signal in a graph using a subset of sensors. Reconstructing the signal using information from neighboring sensors is a key technique for maintaining network data quality, with examples including filling in missing data with correlated neighboring nodes, creating virtual sensors, or correcting a drifting sensor with neighboring sensors that are more accurate. This paper proposes a signal reconstruction framework for air pollution monitoring data where a graph signal reconstruction model is superimposed on a graph learned from the data. Different graph signal reconstruction methods are compared on actual air pollution data sets measuring O3, NO2, and PM10. The ability of the methods to reconstruct the signal of a pollutant is shown, as well as the computational cost of this reconstruction. The results indicate the superiority of methods based on kernel-based graph signal reconstruction, as well as the difficulties of the methods to scale in an air pollution monitoring network with a large number of low-cost sensors. However, we show that the scalability of the framework can be improved with simple methods, such as partitioning the network using a clustering algorithm. This work is supported by the National Spanish funding PID2019-107910RB-I00, by regional project 2017SGR-990, and with the support of the Secretaria d'Universitats i Recerca de la Generalitat de Catalunya i del Fons Social Europeu.
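As a rough sketch of the kernel-based family the paper finds superior (the kernel choice, parameters, and toy graph here are assumptions, not the paper's setup), one can build a diffusion kernel from the Laplacian of the learned graph and reconstruct the full signal from a sampled subset with kernel ridge regression:

```python
import numpy as np

def graph_kernel(L, sigma=1.0):
    """Diffusion kernel K = expm(-sigma * L) via eigendecomposition of L."""
    vals, vecs = np.linalg.eigh(L)
    return vecs @ np.diag(np.exp(-sigma * vals)) @ vecs.T

def reconstruct(K, sampled_idx, y, reg=1e-6):
    """Kernel ridge regression on the graph:
    alpha = (K_SS + reg*I)^-1 y,  x_hat = K[:, S] @ alpha."""
    K_SS = K[np.ix_(sampled_idx, sampled_idx)]
    alpha = np.linalg.solve(K_SS + reg * np.eye(len(sampled_idx)), y)
    return K[:, sampled_idx] @ alpha

# Toy network: four sensors on a ring (illustrative only)
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W
K = graph_kernel(L, sigma=0.5)

x_true = np.array([1.0, 0.9, 0.8, 0.9])   # pollutant readings
S = [0, 1, 2]                             # sensor 3 is missing
x_hat = reconstruct(K, S, x_true[S])      # estimate at all 4 nodes
```

The cubic cost of the solve is also where the scaling difficulty the paper discusses comes from, and why partitioning the network into clusters helps.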
Load curve data cleansing and imputation via sparsity and low rank
The smart grid vision is to build an intelligent power network with an
unprecedented level of situational awareness and controllability over its
services and infrastructure. This paper advocates statistical inference methods
to robustify power monitoring tasks against the outlier effects owing to faulty
readings and malicious attacks, as well as against missing data due to privacy
concerns and communication errors. In this context, a novel load cleansing and
imputation scheme is developed leveraging the low intrinsic-dimensionality of
spatiotemporal load profiles and the sparse nature of "bad data." A robust
estimator based on principal components pursuit (PCP) is adopted, which effects
a twofold sparsity-promoting regularization through an ℓ1-norm of the
outliers, and the nuclear norm of the nominal load profiles. Upon recasting the
non-separable nuclear norm into a form amenable to decentralized optimization,
a distributed (D-) PCP algorithm is developed to carry out the imputation and
cleansing tasks using networked devices comprising the so-termed advanced
metering infrastructure. If D-PCP converges and a qualification inequality is
satisfied, the novel distributed estimator provably attains the performance of
its centralized PCP counterpart, which has access to all networkwide data.
Computer simulations and tests with real load curve data corroborate the
convergence and effectiveness of the novel D-PCP algorithm. (Comment: 8
figures; submitted to IEEE Transactions on Smart Grid, special issue on
"Optimization methods and algorithms applied to smart grid".)
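A hedged toy version of principal components pursuit (centralized, not the paper's distributed D-PCP, with illustrative step sizes) can be written as an ADMM loop that alternates singular-value thresholding for the nuclear norm with entrywise soft-thresholding for the ℓ1 term:

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft-thresholding: proximal operator of the l1-norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def pcp(Y, lam=None, mu=0.5, iters=300):
    """ADMM-style PCP: split Y into low-rank X plus sparse outliers O."""
    lam = lam or 1.0 / np.sqrt(max(Y.shape))
    X = np.zeros_like(Y); O = np.zeros_like(Y); Z = np.zeros_like(Y)
    for _ in range(iters):
        X = svt(Y - O + Z / mu, 1.0 / mu)     # nuclear-norm step
        O = soft(Y - X + Z / mu, lam / mu)    # l1-norm step
        Z = Z + mu * (Y - X - O)              # dual update
    return X, O

rng = np.random.default_rng(0)
nominal = np.outer(rng.standard_normal(20), rng.standard_normal(15))  # low-rank load profiles
bad = np.zeros((20, 15)); bad[3, 4] = 10.0                            # one corrupted reading
X, O = pcp(nominal + bad)
```

The sparse component O absorbs the corrupted reading while X retains the low-rank nominal profiles, which is exactly the cleansing-plus-imputation split the abstract describes.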
Towards better traffic volume estimation: Tackling both underdetermined and non-equilibrium problems via a correlation-adaptive graph convolution network
Traffic volume is an indispensable ingredient to provide fine-grained
information for traffic management and control. However, due to limited
deployment of traffic sensors, obtaining full-scale volume information is far
from easy. Existing works on this topic primarily focus on improving the
overall estimation accuracy of a particular method and ignore the underlying
challenges of volume estimation, thereby having inferior performances on some
critical tasks. This paper studies two key problems with regard to traffic
volume estimation: (1) underdetermined traffic flows caused by undetected
movements, and (2) non-equilibrium traffic flows arise from congestion
propagation. Here we demonstrate a graph-based deep learning method that can
offer a data-driven, model-free and correlation adaptive approach to tackle the
above issues and perform accurate network-wide traffic volume estimation.
Particularly, in order to quantify the dynamic and nonlinear relationships
between traffic speed and volume for the estimation of underdetermined flows, a
speed-pattern-adaptive adjacency matrix based on graph attention is developed and
integrated into the graph convolution process, to capture non-local
correlations between sensors. To measure the impacts of non-equilibrium flows,
a temporal masked and clipped attention combined with a gated temporal
convolution layer is customized to capture time-asynchronous correlations
between upstream and downstream sensors. We then evaluate our model on a
real-world highway traffic volume dataset and compare it with several benchmark
models. It is demonstrated that the proposed model achieves high estimation
accuracy even under 20% sensor coverage rate and outperforms other baselines
significantly, especially on underdetermined and non-equilibrium flow
locations. Furthermore, comprehensive quantitative model analyses are also
carried out to justify the model designs.
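To make the idea of a speed-pattern-adaptive adjacency concrete, here is a minimal hypothetical sketch (not the paper's model): attention scores computed from recent speed profiles yield a row-stochastic matrix that can stand in for a fixed adjacency inside a graph convolution:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_adjacency(speed, d=8, seed=0):
    """speed: (num_sensors, T) recent speed profiles.
    Returns an attention-based adjacency with rows summing to 1.
    Random projections stand in for learned query/key weights."""
    rng = np.random.default_rng(seed)
    T = speed.shape[1]
    Wq = rng.standard_normal((T, d)) / np.sqrt(T)
    Wk = rng.standard_normal((T, d)) / np.sqrt(T)
    Q, K = speed @ Wq, speed @ Wk
    return softmax(Q @ K.T / np.sqrt(d), axis=1)

speed = np.random.default_rng(1).standard_normal((5, 12))  # 5 sensors, 12 steps
A = adaptive_adjacency(speed)
# A can replace a fixed adjacency in a graph convolution: H_out = A @ H @ W
```

Because A is recomputed from current speed patterns, it can express the non-local, time-varying sensor correlations that a fixed road-network adjacency cannot.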
"Sticky Hands": learning and generalization for cooperative physical interactions with a humanoid robot
"Sticky Hands" is a physical game for two people involving gentle contact with the hands. The aim is to develop relaxed and elegant motion together, achieve physical sensitivity-improving reactions, and experience an interaction at an intimate yet comfortable level for spiritual development and physical relaxation. We developed a control system for a humanoid robot allowing it to play Sticky Hands with a human partner. We present a real implementation including a physical system, robot control, and a motion learning algorithm based on a generalizable intelligent system that can generalize observed trajectories' translation, orientation, scale, and velocity to new data, operate within scalable speed and storage-efficiency bounds, and cope with contact trajectories that evolve over time. Our robot control is capable of physical cooperation in a force domain using minimal sensor input. We analyze robot-human interaction and relate characteristics of our motion learning algorithm to recorded motion profiles. We discuss our results in the context of realistic motion generation and present a theoretical discussion of stylistic and affective motion generation based on, and motivating, cross-disciplinary research in computer graphics, human motion production, and motion perception.
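The trajectory generalization the abstract mentions can be hinted at with a toy normalization. This sketch handles only translation and scale (the paper's algorithm also covers orientation and velocity) and is purely illustrative:

```python
import numpy as np

def normalize_trajectory(traj):
    """Map a (T, 2) trajectory to a canonical form: center at its mean
    and scale to unit RMS radius. Returns (canonical, offset, scale)."""
    offset = traj.mean(axis=0)
    centered = traj - offset
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / scale, offset, scale

t = np.linspace(0, 2 * np.pi, 50)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)        # demonstrated motion
shifted = 3.0 * circle + np.array([5.0, -2.0])           # translated + scaled copy

a, _, _ = normalize_trajectory(circle)
b, _, _ = normalize_trajectory(shifted)
assert np.allclose(a, b)   # both map to the same canonical form
```

Storing motions in such a canonical form is one simple way a learner can reuse a demonstrated trajectory at a new position and scale.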