A Learning-Based Framework for Two-Dimensional Vehicle Maneuver Prediction over V2V Networks
Situational awareness in vehicular networks could be substantially improved by reliable trajectory prediction methods. More precise situational awareness, in turn, yields notably better performance of critical safety applications, such as Forward Collision Warning (FCW), as well as comfort applications like Cooperative Adaptive Cruise Control (CACC). The vehicle trajectory prediction problem therefore needs to be investigated in depth in order to arrive at an end-to-end framework with the precision required by safety applications' controllers. This problem has been tackled in the literature using different methods. However, machine learning, a promising and emerging field with remarkable potential for time series prediction, has not been explored enough for this purpose. In this paper, a
two-layer neural network-based system is developed which predicts the future
values of vehicle parameters, such as velocity, acceleration, and yaw rate, in
the first layer and then predicts the two-dimensional, i.e. longitudinal and
lateral, trajectory points based on the first layer's outputs. The performance of the proposed framework has been evaluated on realistic cut-in scenarios from the Safety Pilot Model Deployment (SPMD) dataset, and the results show a noticeable improvement in prediction accuracy over the kinematics model, which is the model predominantly employed by the automotive industry. Both ideal and non-ideal communication conditions have been investigated in our evaluation. For the non-ideal case, an estimation step is included in the framework before the parameter-prediction block to compensate for packet drops or sensor failures and reconstruct the time series of vehicle parameters at the desired frequency.
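The kinematics baseline mentioned above is commonly realized as a constant turn rate and velocity (CTRV) model, which simply extrapolates the current motion state. A minimal sketch, assuming a 2D state of position, speed, heading, and yaw rate (the function name and state layout are illustrative, not taken from the paper):

```python
import math

def ctrv_predict(x, y, v, yaw, yaw_rate, dt):
    """Propagate a vehicle state one step with the constant turn
    rate and velocity (CTRV) kinematics model."""
    if abs(yaw_rate) < 1e-6:
        # Near-zero yaw rate: straight-line motion.
        x_next = x + v * math.cos(yaw) * dt
        y_next = y + v * math.sin(yaw) * dt
    else:
        # Circular-arc motion at constant speed and turn rate.
        x_next = x + (v / yaw_rate) * (math.sin(yaw + yaw_rate * dt) - math.sin(yaw))
        y_next = y + (v / yaw_rate) * (math.cos(yaw) - math.cos(yaw + yaw_rate * dt))
    return x_next, y_next, yaw + yaw_rate * dt

# A vehicle heading east at 10 m/s with zero yaw rate moves 10 m in 1 s.
print(ctrv_predict(0.0, 0.0, 10.0, 0.0, 0.0, 1.0))  # (10.0, 0.0, 0.0)
```

The learned two-layer framework is proposed precisely because such extrapolation degrades in maneuvers like cut-ins, where velocity and yaw rate change rapidly.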
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles. In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of the controller and explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment, strong and weak alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match training data.
In Chapter 5, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and the control (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
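One common way to realize the attention alignment described above is to penalize the divergence between the controller's and the explainer's normalized attention maps. The sketch below is illustrative only, assuming a KL-divergence formulation; the specific loss and names are not taken from the dissertation:

```python
import math

def kl_alignment_loss(controller_attn, explainer_attn, eps=1e-8):
    """KL divergence D(controller || explainer) between two attention
    maps, each normalized to sum to 1; zero when the maps coincide."""
    c_sum = sum(controller_attn)
    e_sum = sum(explainer_attn)
    loss = 0.0
    for c, e in zip(controller_attn, explainer_attn):
        p = c / c_sum  # controller's attention mass on this region
        q = e / e_sum  # explainer's attention mass on this region
        loss += p * math.log((p + eps) / (q + eps))
    return loss

# Identical maps give zero loss; mismatched maps give a positive loss.
print(kl_alignment_loss([0.5, 0.3, 0.2], [0.5, 0.3, 0.2]))  # 0.0
```

Adding such a term to the explanation model's training objective encourages the textual rationale to be grounded in the same scene regions the controller actually used.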
Vision-Based Lane-Changing Behavior Detection Using Deep Residual Neural Network
Accurate lane localization and lane change detection are crucial in advanced
driver assistance systems and autonomous driving systems for safer and more
efficient trajectory planning. Conventional localization devices such as Global
Positioning System only provide road-level resolution for car navigation, which is insufficient for lane-level decision making. The state-of-the-art technique for lane localization is to use Light Detection and Ranging (LiDAR) sensors to correct the global localization error and achieve centimeter-level accuracy, but real-time implementation and widespread adoption of LiDAR are still limited by its computational burden and current cost. As a cost-effective alternative, vision-based lane change detection has been highly regarded as a way for affordable autonomous vehicles to support lane-level localization. A deep learning-based computer vision system is developed to detect lane-changing behavior using images captured by a front-view camera mounted on the vehicle and data from the inertial measurement unit during highway driving. Testing on real-world driving data has shown that the proposed method is robust, operates in real time, and achieves around 87% lane change detection accuracy. Compared to the average human reaction to visual stimuli, the proposed computer vision system works nine times faster, which makes it capable of helping make life-saving decisions in time.
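The residual building block behind a deep residual network can be sketched as y = relu(x + F(x)), where the identity skip connection lets gradients bypass the residual branch F. A toy fully-connected variant in NumPy, purely illustrative of the skip connection (the actual system presumably uses convolutional residual blocks):

```python
import numpy as np

def residual_block(x, w1, b1, w2, b2):
    """y = relu(x + F(x)), where F is two affine layers with a ReLU
    in between; the identity skip carries x around the branch."""
    h = np.maximum(0.0, x @ w1 + b1)   # first layer + ReLU
    fx = h @ w2 + b2                   # second layer (residual branch)
    return np.maximum(0.0, x + fx)     # add skip connection, then ReLU

# With the residual branch zeroed out, the block reduces to relu(x),
# which is what makes very deep stacks of such blocks trainable.
x = np.array([[1.0, -2.0, 3.0, -4.0]])
zeros = (np.zeros((4, 4)), np.zeros(4), np.zeros((4, 4)), np.zeros(4))
print(np.allclose(residual_block(x, *zeros), np.maximum(0.0, x)))  # True
```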
The State-of-the-art of Coordinated Ramp Control with Mixed Traffic Conditions
Ramp metering, a traditional traffic control strategy for conventional
vehicles, has been widely deployed around the world since the 1960s. On the
other hand, the last decade has witnessed significant advances in connected and
automated vehicle (CAV) technology and its great potential for improving
safety, mobility and environmental sustainability. Therefore, a large amount of
research has been conducted on cooperative ramp merging for CAVs only. However,
it is expected that the phase of mixed traffic, namely the coexistence of both human-driven vehicles and CAVs, will last for a long time. Since there is little research on system-wide ramp control under mixed traffic conditions, this paper aims to close the gap by proposing an innovative system architecture
and reviewing the state-of-the-art studies on the key components of the
proposed system. These components include traffic state estimation, ramp
metering, driving behavior modeling, and coordination of CAVs. Together, the reviewed literature plots an extensive landscape for the proposed system-wide coordinated ramp control under mixed traffic conditions.
Comment: 8 pages, 1 figure, IEEE Intelligent Transportation Systems Conference - ITSC 201
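The traditional ramp metering surveyed here is typified by the ALINEA local feedback law, r(k) = r(k-1) + K_R (o_set - o_out(k)), which adjusts the metering rate to hold downstream occupancy at a setpoint. A minimal sketch; the gain and rate bounds below are illustrative values, not from the paper:

```python
def alinea_rate(prev_rate, occupancy, setpoint, gain=70.0,
                r_min=200.0, r_max=1800.0):
    """One step of the ALINEA feedback law: raise the metering rate
    (veh/h) when downstream occupancy (%) is below its setpoint,
    lower it when above, clamped to feasible bounds."""
    rate = prev_rate + gain * (setpoint - occupancy)
    return max(r_min, min(r_max, rate))

# Occupancy 5 points below the setpoint: rate rises by gain * 5.
print(alinea_rate(1000.0, 20.0, 25.0))  # 1350.0
```

In the mixed-traffic setting the paper targets, such a controller would act alongside CAV coordination rather than replace it, since only the human-driven share of the flow is regulated purely by the meter.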