Evaluation of the Overheight Detection System Effectiveness at Eklutna Bridge
The Eklutna River/Glenn Highway bridge has sustained repeated impacts from overheight trucks. In 2006, ADOT&PF installed an overheight vehicle warning system. The system includes laser detectors, alarms, and message boards. Since installation, personnel have
seen no new damage, and no sign that the alarm system has been triggered. Although this is good news, the particulars are a mystery: Is the system working? Is the presence of the equipment enough to deter drivers from gambling with a vehicle that might be over the
height limit? Is it worth installing similar systems at other overpasses? This project is examining the bridge for any evidence of new damage and is fitting the system with a datalogger to record, and capture video of, any events that trigger the warning system. Finally, just to be sure, researchers will test the system with (officially) overheight vehicles. Project results will help ADOT&PF determine whether this system is functioning and whether a similar system installed at other bridges would be cost-effective.
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be generated by the neural controller itself (introspective explanations) or informed by the neural controller's output (rationalizations). Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles. In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of the controller and the explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment: strong and weak alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match training data. In Chapter 5, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and the control (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
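To make the attention-based steering controller described above concrete, here is a minimal sketch in PyTorch of an attention-weighted steering regressor. It is an illustrative assumption, not the implementation from this work: the layer sizes, names, and single-frame input are all invented for the example.

    # Minimal sketch of a visual-attention steering controller (illustrative only).
    import torch
    import torch.nn as nn

    class AttentionSteeringNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Convolutional encoder: image -> grid of spatial feature vectors.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
            )
            self.attn = nn.Conv2d(48, 1, 1)   # one attention score per location
            self.head = nn.Linear(48, 1)      # attended features -> steering angle

        def forward(self, img):
            feats = self.encoder(img)                      # (B, 48, H, W)
            b, c, h, w = feats.shape
            scores = self.attn(feats).view(b, h * w)       # (B, H*W)
            alpha = torch.softmax(scores, dim=1)           # attention map
            flat = feats.view(b, c, h * w)                 # (B, 48, H*W)
            context = (flat * alpha.unsqueeze(1)).sum(-1)  # attention-weighted sum
            steering = self.head(context)                  # (B, 1)
            return steering, alpha.view(b, h, w)

The returned attention map is only the raw candidate explanation; the causal filtering step described above would then mask highlighted regions and keep those whose removal actually changes the steering output.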
RDD2022: A multi-national image dataset for automatic Road Damage Detection
The data article describes the Road Damage Dataset, RDD2022, which comprises
47,420 road images from six countries: Japan, India, the Czech Republic,
Norway, the United States, and China. The images have been annotated with more
than 55,000 instances of road damage. Four types of road damage, namely
longitudinal cracks, transverse cracks, alligator cracks, and potholes, are
captured in the dataset. The annotated dataset is envisioned for developing
deep learning-based methods to detect and classify road damage automatically.
The dataset has been released as part of the Crowdsensing-based Road Damage
Detection Challenge (CRDDC2022). The challenge CRDDC2022 invites researchers
from across the globe to propose solutions for automatic road damage detection
in multiple countries. Municipalities and road agencies may utilize the
RDD2022 dataset, and the models trained using RDD2022, for low-cost automatic
monitoring of road conditions. Further, computer vision and machine learning
researchers may use the dataset to benchmark the performance of different
algorithms for other image-based applications of the same type (classification,
object detection, etc.).
Comment: 16 pages, 20 figures, IEEE BigData Cup - Crowdsensing-based Road Damage Detection Challenge (CRDDC'2022).
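As a hedged illustration of how the annotations might be consumed, the sketch below counts damage instances per label, assuming RDD2022 ships PASCAL VOC-style XML files as earlier RDD releases did; the directory path and label codes are assumptions to verify against the actual release.

    # Sketch: count damage instances per class in VOC-style XML annotations.
    # The annotation format, path, and label codes (e.g. D00/D10/D20/D40) are
    # assumptions, not guaranteed by the dataset description quoted above.
    import xml.etree.ElementTree as ET
    from collections import Counter
    from pathlib import Path

    def count_damage_labels(annotation_dir: str) -> Counter:
        counts = Counter()
        for xml_file in Path(annotation_dir).glob("*.xml"):
            root = ET.parse(xml_file).getroot()
            for obj in root.iter("object"):
                counts[obj.findtext("name")] += 1
        return counts

    if __name__ == "__main__":
        # Hypothetical path; adjust to wherever the annotations are unpacked.
        print(count_damage_labels("RDD2022/Japan/train/annotations/xmls"))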
SignalGuru: Leveraging mobile phones for collaborative traffic signal schedule advisory
While traffic signals are necessary to safely control competing flows of traffic, they inevitably enforce a stop-and-go movement pattern that increases fuel consumption, reduces traffic flow and causes traffic jams. These side effects can be alleviated by providing drivers and their onboard computational devices (e.g., vehicle computer, smartphone) with information about the schedule of the traffic signals ahead. Based on when the signal ahead will turn green, drivers can then adjust speed so as to avoid coming to a complete halt. Such information is called Green Light Optimal Speed Advisory (GLOSA). Alternatively, the onboard computational device may suggest an efficient detour that will save the driver from stops and long waits at red lights ahead.
This paper introduces and evaluates SignalGuru, a novel software service that relies solely on a collection of mobile phones to detect and predict traffic signal schedules, enabling GLOSA and other novel applications. SignalGuru leverages windshield-mounted phones to opportunistically detect current traffic signals with their cameras, collaboratively communicate and learn traffic signal schedule patterns, and predict their future schedule.
Results from two deployments of SignalGuru, using iPhones in cars in Cambridge (MA, USA) and Singapore, show that traffic signal schedules can be predicted accurately. On average, SignalGuru's predictions come within 0.66 s for pre-timed traffic signals and within 2.45 s for traffic-adaptive traffic signals. Feeding SignalGuru's predicted traffic schedule to our GLOSA application, our vehicle fuel consumption measurements show savings of 20.3% on average.
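The GLOSA idea above reduces to a small calculation: given the distance to the stop line and the predicted time until the next green, pick a speed that arrives at or after the green onset without exceeding the limit. The sketch below is an illustrative calculation only, not SignalGuru's algorithm; the speed limit and minimum comfortable speed are assumed values.

    # Minimal GLOSA sketch: choose an advisory speed so the vehicle reaches the
    # intersection when (or after) the light turns green. Illustrative only;
    # the limits and units are assumptions, not SignalGuru's actual algorithm.
    def advisory_speed_mps(distance_m: float,
                           seconds_until_green: float,
                           current_speed_mps: float,
                           speed_limit_mps: float = 13.9,   # ~50 km/h, assumed
                           min_speed_mps: float = 2.0) -> float:
        """Return a target speed in m/s given the predicted signal schedule."""
        if seconds_until_green <= 0:
            # Light is already (or about to be) green: just respect the limit.
            return min(current_speed_mps, speed_limit_mps)
        # Speed at which the vehicle arrives exactly at the green onset.
        arrival_speed = distance_m / seconds_until_green
        if arrival_speed > speed_limit_mps:
            # Cannot legally make this green; coast slowly instead of stopping hard.
            return min_speed_mps
        return max(min_speed_mps, arrival_speed)

    # Example: 200 m to the stop line, green in 18 s -> advise about 11.1 m/s
    # (roughly 40 km/h) instead of stopping at the red.
    print(advisory_speed_mps(200.0, 18.0, current_speed_mps=14.0))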
Visualizing Road Appearance Properties in Driving Video
With the increasing number of videos captured by driving recorders on thousands of cars, it is a challenging task to retrieve these videos and search them for important information. The goal of this work is to mine certain critical road properties from a large-scale driving video dataset for traffic accident analysis, sensing algorithm development, and benchmark testing. Our aim is to condense video data into compact road profiles, which contain visual features of the road environment. By visualizing road edges and lane marks in a reduced-dimension feature space, we will further explore road edge models as influenced by road and off-road materials, weather, lighting conditions, etc.
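As a rough sketch of the "road profile in a reduced feature space" idea, the code below projects per-frame road descriptors to two dimensions and plots them by condition; PCA is a stand-in assumption for whatever projection this work actually uses, and the feature vectors and labels are placeholders.

    # Sketch: project per-frame road-profile features to 2D for visualization.
    # PCA and the placeholder data are assumptions, not the method of this work.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.decomposition import PCA

    def plot_road_profile(features: np.ndarray, labels: np.ndarray) -> None:
        """features: (n_frames, n_dims) road descriptors; labels: per-frame condition."""
        coords = PCA(n_components=2).fit_transform(features)
        for lab in np.unique(labels):
            mask = labels == lab
            plt.scatter(coords[mask, 0], coords[mask, 1], s=4, label=str(lab))
        plt.legend()
        plt.xlabel("PC 1")
        plt.ylabel("PC 2")
        plt.show()

    # Hypothetical usage with random placeholder data:
    plot_road_profile(np.random.rand(500, 64), np.random.choice(["dry", "wet"], 500))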