Incident detection using data from social media
This is an accepted manuscript of an article published by IEEE in 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC) on 15/03/2018, available online: https://ieeexplore.ieee.org/document/8317967/
The accepted version of the publication may differ from the final published version. © 2017 IEEE. Due to the rapid growth of population in the last 20 years, an increased number of instances of heavy recurrent traffic congestion has been observed in cities around the world. This rise in traffic has led to greater numbers of traffic incidents and subsequent growth of non-recurrent congestion. Existing incident detection techniques are limited to the use of sensors in the transportation network. In this paper, we analyze the potential of Twitter for supporting real-time incident detection in the United Kingdom (UK). We present a methodology for retrieving, processing, and classifying public tweets by combining Natural Language Processing (NLP) techniques with a Support Vector Machine (SVM) algorithm for text classification. Our approach can detect traffic-related tweets with an accuracy of 88.27%.
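The paper's combination of text preprocessing with an SVM classifier can be illustrated with a minimal scikit-learn sketch. This is a toy stand-in, not the authors' UK Twitter pipeline: the example tweets, labels, and pipeline choices (TF-IDF features, stop-word removal, a linear SVM) are assumptions for illustration only.

```python
# Minimal sketch: TF-IDF features + linear SVM for classifying tweets
# as traffic-related or not (toy data, not the authors' dataset).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = [
    "Major crash on the M25, all lanes blocked",
    "Accident near junction 4, expect long delays",
    "Roadworks causing slow traffic on the A1",
    "Lovely sunny day for a walk in the park",
    "Just watched a great film tonight",
    "Meeting friends for coffee this afternoon",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = traffic incident, 0 = unrelated

clf = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),
    LinearSVC(),
)
clf.fit(tweets, labels)

# An unseen tweet sharing incident vocabulary should land in class 1.
print(clf.predict(["Crash on the M25 causing long delays"])[0])
```

On real data the preprocessing would also need tweet-specific steps (hashtag and mention handling, geolocation filtering) that this sketch omits.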
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles. In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of the controller and the explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment, strong and weak alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This simplification makes such systems brittle and potentially unsafe in situations that do not match the training data.
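The causal filtering step described above can be caricatured as an occlusion test: perturb each candidate attention region and keep only those whose occlusion actually shifts the controller's output. The model, image, regions, and threshold below are toy stand-ins for illustration, not the thesis's trained CNN controller.

```python
# Sketch of causal filtering over attention regions via occlusion
# (toy stand-in for a trained end-to-end steering network).
import numpy as np

def steering_model(image):
    # Hypothetical controller: output depends only on the top half
    # of the image, so bottom-half regions are spurious by design.
    h = image.shape[0]
    return float(image[: h // 2].mean())

def causal_filter(image, attention_regions, threshold=0.01):
    """Keep only regions whose occlusion changes the model output."""
    base = steering_model(image)
    causal = []
    for (r0, r1, c0, c1) in attention_regions:
        perturbed = image.copy()
        perturbed[r0:r1, c0:c1] = image.mean()  # occlude the region
        if abs(steering_model(perturbed) - base) > threshold:
            causal.append((r0, r1, c0, c1))
    return causal

image = np.zeros((10, 10))
image[:5] = 1.0  # bright top half drives the toy controller
regions = [(0, 5, 0, 5), (5, 10, 0, 5)]  # candidate attention regions
print(causal_filter(image, regions))  # only the top-left region survives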
In Chapter 5, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and its control outputs (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
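A controller that "accepts advice" can be sketched, in a highly simplified form, as conditioning the control head on an advice embedding alongside visual features. All names, dimensions, and the single linear map below are hypothetical; the actual system described in the thesis uses learned attention and language encoders rather than this caricature.

```python
# Minimal sketch of advice-conditioned control: visual features and an
# advice embedding are concatenated before the control head.
import numpy as np

rng = np.random.default_rng(0)

VIS_DIM, ADV_DIM, CTRL_DIM = 64, 16, 2  # hypothetical sizes; controls = (steering, speed)
W = rng.standard_normal((CTRL_DIM, VIS_DIM + ADV_DIM)) * 0.01  # stand-in for learned weights

def advised_controller(visual_features, advice_embedding):
    """Map the joint (scene, advice) vector to (steering, speed)."""
    joint = np.concatenate([visual_features, advice_embedding])
    return W @ joint

controls = advised_controller(rng.standard_normal(VIS_DIM),
                              rng.standard_normal(ADV_DIM))
print(controls.shape)  # (2,)
```

Changing the advice embedding changes the joint input and therefore the predicted controls, which is the core idea: advice is an extra conditioning signal, not a post-hoc override.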
Traffic signal detection from in-vehicle GPS speed profiles using functional data analysis and machine learning
The increasing availability of large-scale Global Positioning System (GPS) data stemming from in-vehicle embedded terminal devices enables the design of methods deriving road network cartographic information from drivers' recorded traces. Some machine learning approaches have been proposed in the past to train automatic road network map inference, and recently this approach has been successfully extended to infer road attributes as well, such as speed limitation or number of lanes. In this paper, we address the problem of detecting traffic signals from a set of vehicle speed profiles, under a classification perspective. Each data instance is a speed versus distance plot depicting over a hundred profiles on a 100-meter-long road span. We propose three different ways of deriving features: the first relies on the raw speed measurements; the second uses image recognition techniques; and the third is based on functional data analysis. We input them into the most commonly used classification algorithms, and a comparative analysis demonstrates that a functional description of speed profiles with wavelet transforms seems to outperform the other approaches with most of the tested classifiers. It also highlights that Random Forests yield an accurate detection of traffic signals, regardless of the chosen feature extraction method, while keeping a remarkably low confusion rate with stop signs.
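The wavelet-features-plus-Random-Forest combination highlighted above can be sketched end to end on synthetic speed profiles. Everything below is an assumption for illustration: a hand-rolled Haar transform stands in for the paper's wavelet description, and the generated profiles are toy stand-ins for real GPS traces (class 1 dips to a near stop mid-span, class 0 is free flow).

```python
# Sketch: Haar wavelet features from speed profiles + Random Forest
# classification (synthetic data, not the paper's GPS dataset).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:
        s = s[:-1]
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_features(speed_profile, levels=3):
    """Summary statistics of detail/approximation coefficients."""
    feats, approx = [], np.asarray(speed_profile, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.extend([detail.mean(), detail.std()])
    feats.extend([approx.mean(), approx.std()])
    return feats

rng = np.random.default_rng(0)

def profile(signal_present):
    """Toy speed-vs-distance profile over a 100 m span (64 samples)."""
    x = np.linspace(0, 100, 64)
    v = 50 + rng.normal(0, 2, 64)
    if signal_present:
        v *= np.clip(np.abs(x - 50) / 30, 0, 1)  # dip to ~0 near 50 m
    return v

X = [wavelet_features(profile(c)) for c in [1] * 40 + [0] * 40]
y = [1] * 40 + [0] * 40
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```

On this deliberately separable toy data the forest fits the training set easily; the paper's contribution is showing that the functional (wavelet) description keeps that accuracy on real profiles while distinguishing traffic signals from stop signs.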