Route Planning in Transportation Networks
We survey recent advances in algorithms for route planning in transportation
networks. For road networks, we show that one can compute driving directions in
milliseconds or less even at continental scale. A variety of techniques provide
different trade-offs between preprocessing effort, space requirements, and
query time. Some algorithms can answer queries in a fraction of a microsecond,
while others can deal efficiently with real-time traffic. Journey planning on
public transportation systems, although conceptually similar, is a
significantly harder problem due to its inherent time-dependent and
multicriteria nature. Although exact algorithms are fast enough for interactive
queries on metropolitan transit systems, dealing with continent-sized instances
requires simplifications or heavy preprocessing. The multimodal route planning
problem, which seeks journeys combining schedule-based transportation (buses,
trains) with unrestricted modes (walking, driving), is even harder, relying on
approximate solutions even for metropolitan inputs.

Comment: This is an updated version of the technical report MSR-TR-2014-4, previously published by Microsoft Research. This work was mostly done while the authors Daniel Delling, Andrew Goldberg, and Renato F. Werneck were at Microsoft Research Silicon Valley.
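The common baseline against which all of the surveyed speedup techniques are measured is Dijkstra's algorithm run directly on the road graph. A minimal sketch follows; the graph, node names, and `dijkstra` function are illustrative, not taken from the survey:

```python
import heapq

def dijkstra(graph, source, target):
    """Plain Dijkstra's algorithm: the baseline that preprocessing-based
    speedup techniques (e.g. contraction hierarchies) are compared against.
    graph: dict mapping node -> list of (neighbor, arc_cost) pairs."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d                      # target settled: distance is final
        if d > dist.get(u, float("inf")):
            continue                      # stale priority-queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")                   # target unreachable

# Tiny road-network stand-in: four intersections, weighted one-way arcs.
road = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 7)], "C": [("D", 3)]}
print(dijkstra(road, "A", "D"))  # 2 + 1 + 3 = 6
```

The preprocessing techniques in the survey trade extra space and setup time to answer exactly such queries far faster than this linear-in-the-graph baseline.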
Predicting Urban Dispersal Events: A Two-Stage Framework through Deep Survival Analysis on Mobility Data
Urban dispersal events are processes where an unusually large number of
people leave the same area in a short period. Early prediction of dispersal
events is important in mitigating congestion and safety risks and making better
dispatching decisions for taxi and ride-sharing fleets. Existing work mostly
focuses on predicting taxi demand in the near future by learning patterns from
historical data. However, such methods fail under abnormal conditions, because dispersal
events with abnormally high demand are non-repetitive and violate common
assumptions such as smoothness of demand change over time. Instead, in this
paper we argue that dispersal events follow a complex pattern of trips and
other related features in the past, which can be used to predict such events.
Therefore, we formulate the dispersal event prediction problem as a survival
analysis problem. We propose a two-stage framework (DILSA), where a deep
learning model combined with survival analysis is developed to predict the
probability of a dispersal event and its demand volume. We conduct extensive
case studies and experiments on the NYC Yellow taxi dataset from 2014-2016.
Results show that DILSA can predict events in the next 5 hours with an F1-score of 0.7 and an average time error of 18 minutes, orders of magnitude better than state-of-the-art deep learning approaches for taxi demand prediction.

Comment: To appear in the AAAI-19 proceedings. The reason for the replacement was a misspelled author name in the metadata field; the author name was corrected from "Ynahua Li" to "Yanhua Li". The author list in the paper was correct and remained unchanged.
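The abstract does not give DILSA's internals, but the discrete-time survival-analysis formulation it builds on can be illustrated in a few lines: given a per-interval hazard (the probability that the event occurs in interval t, conditional on not having occurred before), the survival curve and the probability of an event within a horizon follow directly. The hazard values below are made up for illustration, not model output:

```python
def survival_curve(hazards):
    """Discrete-time survival analysis: hazards[t] is P(event in interval t
    | no event before t). Returns S(t) = P(no event through interval t)."""
    surv, s = [], 1.0
    for h in hazards:
        s *= (1.0 - h)       # survive interval t with probability 1 - h
        surv.append(s)
    return surv

def event_probability_by(hazards, t):
    """P(event occurs somewhere in the first t intervals)."""
    return 1.0 - survival_curve(hazards)[t - 1]

# Illustrative hourly hazards for the next 5 hours (not real model output).
hazards = [0.05, 0.10, 0.30, 0.20, 0.10]
print(event_probability_by(hazards, 5))
```

A deep model in this framework predicts the hazards from historical trip features; the event time can then be read off the survival curve rather than regressed directly.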
Dynamic Time-Dependent Route Planning in Road Networks with User Preferences
There has been tremendous progress in algorithmic methods for computing
driving directions on road networks. Most of that work focuses on
time-independent route planning, where it is assumed that the cost on each arc
is constant per query. In practice, the current traffic situation significantly
influences the travel time on large parts of the road network, and it changes
over the day. One can distinguish between traffic congestion that can be
predicted using historical traffic data, and congestion due to unpredictable
events, e.g., accidents. In this work, we study the \emph{dynamic and
time-dependent} route planning problem, which takes both prediction (based on
historical data) and live traffic into account. To this end, we propose a
practical algorithm that, while robust to user preferences, is able to
integrate global changes of the time-dependent metric~(e.g., due to traffic
updates or user restrictions) faster than previous approaches, while answering
subsequent queries fast enough for interactive applications.
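The time-dependent setting described above can be sketched by letting each arc's cost be a function of the departure time at its tail, as in a time-dependent variant of Dijkstra's algorithm. This is a toy illustration under a FIFO assumption, not the paper's algorithm; the network and travel-time functions are invented:

```python
import heapq

def td_dijkstra(graph, source, target, depart):
    """Time-dependent Dijkstra: arc costs are functions of departure time
    at the arc's tail (FIFO assumption), modeling predicted congestion.
    graph: node -> list of (neighbor, travel_time_fn) pairs."""
    best = {source: depart}
    pq = [(depart, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t - depart             # total travel time
        if t > best.get(u, float("inf")):
            continue                      # stale entry
        for v, tt in graph.get(u, []):
            arr = t + tt(t)               # arrival depends on departure time
            if arr < best.get(v, float("inf")):
                best[v] = arr
                heapq.heappush(pq, (arr, v))
    return float("inf")

# Illustrative network: the direct A->C arc is slow during a "rush hour"
# starting at t=100 (piecewise-constant travel-time functions).
net = {
    "A": [("B", lambda t: 4), ("C", lambda t: 2 if t < 100 else 10)],
    "B": [("C", lambda t: 1)],
    "C": [],
}
print(td_dijkstra(net, "A", "C", depart=0))    # off-peak: direct arc, 2
print(td_dijkstra(net, "A", "C", depart=100))  # rush hour: detour via B, 4 + 1 = 5
```

Live traffic then corresponds to replacing the travel-time functions between queries, which is exactly the kind of global metric update the paper aims to make cheap.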
Bayesian approach to Spatio-temporally Consistent Simulation of Daily Monsoon Rainfall over India
Simulation of rainfall over a region for long time-sequences can be very
useful for planning and policy-making, especially in India where the economy is
heavily reliant on monsoon rainfall. However, such simulations should be able
to preserve the known spatial and temporal characteristics of rainfall over
India. General Circulation Models (GCMs) are unable to do so, and various
rainfall generators designed by hydrologists using stochastic processes like
Gaussian Processes are also difficult to apply over the vast and highly diverse
landscape of India. In this paper, we explore a series of Bayesian models based
on conditional distributions of latent variables that describe weather
conditions at specific locations and over the whole country. During parameter
estimation from observed data, we apply spatio-temporal smoothing with a Markov
Random Field so that the learnt parameters are spatially and temporally
coherent. We also use nonparametric spatial clustering based on the Chinese
Restaurant Process to identify homogeneous regions, which are utilized by some
of the proposed models to improve spatial correlations of the simulated
rainfall. The models are able to simulate daily rainfall across India for
years, and can also utilize contextual information for conditional simulation.
We use two datasets of different spatial resolutions over India, and focus on
the period 2000-2015. We propose a large number of metrics to study the
spatio-temporal properties of the simulations by the models, and compare them
with the observed data to evaluate the strengths and weaknesses of the models.
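The Chinese Restaurant Process prior mentioned above can be sampled in a few lines. This sketch only draws a partition from the prior; the paper's clustering additionally conditions on rainfall observations, which is not shown here, and the function name and parameters are illustrative:

```python
import random

def crp_partition(n, alpha, seed=0):
    """Draw a random partition of n items from a Chinese Restaurant Process
    with concentration alpha: item i joins an existing cluster k with
    probability |k| / (i + alpha), or opens a new cluster with
    probability alpha / (i + alpha)."""
    rng = random.Random(seed)
    assignments, sizes = [], []       # sizes[k] = current size of cluster k
    for i in range(n):
        weights = sizes + [alpha]     # existing clusters, then a new one
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(sizes):
            sizes.append(0)           # open a new cluster
        sizes[k] += 1
        assignments.append(k)
    return assignments

labels = crp_partition(50, alpha=1.0, seed=42)
print(len(set(labels)))  # number of clusters; expected to grow roughly with alpha * log(n)
```

Because the number of clusters is not fixed in advance, such a prior lets the number of homogeneous rainfall regions be inferred from the data rather than chosen by hand.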
3D oceanographic data compression using 3D-ODETLAP
This paper describes a 3D environmental data compression technique for oceanographic datasets. With proper point selection, our method approximates uncompressed marine data using an over-determined system of linear equations based on, but essentially different from, the Laplacian partial differential equation. This approximation is then refined via an error metric, and the two steps alternate until a predefined, satisfactory approximation is found. Using several different datasets and metrics, we demonstrate that our method has an excellent compression ratio. To further evaluate our method, we compare it with 3D-SPIHT. 3D-ODETLAP averages 20% better compression than 3D-SPIHT on our eight test datasets, from World Ocean Atlas 2005. Our method provides up to approximately six times better compression on datasets with relatively small variance. Meanwhile, with the same approximate mean error, we demonstrate a significantly smaller maximum error compared to 3D-SPIHT and provide a feature to keep the maximum error under a user-defined limit.
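The over-determined system at the heart of this approach can be sketched in 1D (a toy analogue of the paper's 3D method; the function name, weights, and selected points are illustrative): each interior sample contributes a Laplacian smoothness equation 2x[i] - x[i-1] - x[i+1] = 0, each selected (kept) point contributes an exact-value equation, and least squares reconciles the two sets:

```python
def odetlap_1d(n, known, smooth_w=1.0, point_w=10.0):
    """Toy 1D analogue of ODETLAP: reconstruct n values from a few known
    points by solving an over-determined linear system in the
    least-squares sense.
    Rows: (a) smoothness  2x[i] - x[i-1] - x[i+1] = 0  for interior i,
          (b) x[j] = v for each known point (j, v), weighted by point_w."""
    rows, rhs = [], []
    for i in range(1, n - 1):                      # smoothness equations
        r = [0.0] * n
        r[i - 1], r[i], r[i + 1] = -smooth_w, 2 * smooth_w, -smooth_w
        rows.append(r); rhs.append(0.0)
    for j, v in known.items():                     # exact-value equations
        r = [0.0] * n
        r[j] = point_w
        rows.append(r); rhs.append(point_w * v)
    # Normal equations (A^T A) x = A^T b, solved by Gaussian elimination.
    M = [[sum(a[i] * a[j] for a in rows) for j in range(n)] for i in range(n)]
    y = [sum(a[i] * b for a, b in zip(rows, rhs)) for i in range(n)]
    for c in range(n):                             # forward elimination
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        y[c], y[p] = y[p], y[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
            y[r] -= f * y[c]
    x = [0.0] * n
    for c in range(n - 1, -1, -1):                 # back substitution
        x[c] = (y[c] - sum(M[c][k] * x[k] for k in range(c + 1, n))) / M[c][c]
    return x

# Keep only three samples of a linear ramp; the rest is reconstructed.
approx = odetlap_1d(9, {0: 0.0, 4: 4.0, 8: 8.0})
print([round(v, 2) for v in approx])  # recovers the ramp 0..8
```

Compression then amounts to storing only the selected points: the decoder re-solves the system, and adding more points wherever the error metric is too large drives the reconstruction toward the user-defined error limit.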