Intentions of Vulnerable Road Users - Detection and Forecasting by Means of Machine Learning
Avoiding collisions with vulnerable road users (VRUs) using sensor-based
early recognition of critical situations is one of the manifold opportunities
provided by current developments in the field of intelligent vehicles. Since
pedestrians and cyclists in particular are highly agile and have a variety of
movement options, modeling their behavior in traffic scenes is a challenging
task. In this article we propose movement models based on machine learning
methods, in particular artificial neural networks, in order to classify the
current motion state and to predict the future trajectory of VRUs. Both model
types are also combined to enable the application of specifically trained
motion predictors based on a continuously updated pseudo probabilistic state
classification. Furthermore, the architecture is used to evaluate
motion-specific physical models for starting and stopping and video-based
pedestrian motion classification. A comprehensive dataset consisting of 1068
pedestrian and 494 cyclist scenes acquired at an urban intersection is used for
optimization, training, and evaluation of the different models. The results
show substantially higher classification rates and an ability to recognize
motion state changes earlier with the machine learning approaches than with
interacting multiple model (IMM) Kalman filtering. The trajectory prediction
quality is also improved for all kinds of test scenes, especially when starting
and stopping motions are included; here, 37% and 41% lower position errors
were achieved on average, respectively.
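The coupling of a pseudo-probabilistic state classifier with specifically trained motion predictors can be illustrated with a toy sketch (pure Python; the motion states, hand-set score functions, and constant-velocity/constant-acceleration predictors are illustrative assumptions, not the article's actual neural networks):

```python
import math

# Toy motion states and per-state trajectory predictors (illustrative only).
STATES = ["waiting", "starting", "moving", "stopping"]

def classify(speed, accel):
    """Return pseudo-probabilities over motion states from simple hand-set
    scores (a stand-in for the article's neural state classifier)."""
    scores = {
        "waiting":  -5.0 * speed,
        "starting":  3.0 * accel,
        "moving":    2.0 * speed,
        "stopping": -3.0 * accel,
    }
    z = max(scores.values())
    exp = {s: math.exp(v - z) for s, v in scores.items()}
    total = sum(exp.values())
    return {s: v / total for s, v in exp.items()}

def predict_position(state, pos, speed, accel, dt):
    """Per-state physical predictors: constant position, constant
    acceleration (starting/stopping), or constant velocity (moving)."""
    if state == "waiting":
        return pos
    if state in ("starting", "stopping"):
        return pos + speed * dt + 0.5 * accel * dt * dt
    return pos + speed * dt  # "moving": constant velocity

def combined_prediction(pos, speed, accel, dt):
    """Weight the specialist predictors by the classifier's probabilities,
    mirroring the combination of both model types described above."""
    probs = classify(speed, accel)
    return sum(probs[s] * predict_position(s, pos, speed, accel, dt)
               for s in STATES)
```

For a cyclist accelerating from standstill, most of the probability mass falls on the "starting" predictor, so the combined forecast leans on the constant-acceleration model.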
Deep Grid Net (DGN): A Deep Learning System for Real-Time Driving Context Understanding
Grid maps obtained from fused sensory information are nowadays among the most
popular approaches for motion planning for autonomous driving cars. In this
paper, we introduce Deep Grid Net (DGN), a deep learning (DL) system designed
for understanding the context in which an autonomous car is driving. DGN
incorporates a learned driving environment representation based on Occupancy
Grids (OG) obtained from raw Lidar data and constructed on top of the
Dempster-Shafer (DS) theory. The predicted driving context is further used for
switching between different driving strategies implemented within EB robinos,
Elektrobit's Autonomous Driving (AD) software platform. Based on genetic
algorithms (GAs), we also propose a neuroevolutionary approach for learning the
tuning hyperparameters of DGN. The performance of the proposed deep network has
been evaluated against similar competing driving context estimation
classifiers.
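The Dempster-Shafer layer underlying such occupancy grids can be sketched for a single cell over the frame of discernment {free, occupied}, with the whole frame standing for "unknown" (a minimal sketch; the mass values and the fusion of two Lidar scans are assumptions for illustration, not DGN's implementation):

```python
# Dempster's rule of combination for one grid cell over the frame
# of discernment {free, occupied}; "unk" is the mass on the whole frame.
def combine(m1, m2):
    """Fuse two basic belief assignments with keys 'free', 'occ', 'unk'."""
    # Conflict: one source says free while the other says occupied.
    k = m1["free"] * m2["occ"] + m1["occ"] * m2["free"]
    norm = 1.0 - k
    fused = {
        "free": (m1["free"] * m2["free"] + m1["free"] * m2["unk"]
                 + m1["unk"] * m2["free"]) / norm,
        "occ":  (m1["occ"] * m2["occ"] + m1["occ"] * m2["unk"]
                 + m1["unk"] * m2["occ"]) / norm,
    }
    fused["unk"] = 1.0 - fused["free"] - fused["occ"]
    return fused

# Two consecutive Lidar scans, each mildly confident the cell is occupied:
# agreeing evidence reinforces "occ" and shrinks the "unk" mass.
scan1 = {"free": 0.1, "occ": 0.6, "unk": 0.3}
scan2 = {"free": 0.1, "occ": 0.5, "unk": 0.4}
cell = combine(scan1, scan2)
```

The renormalization by 1 - k is what distinguishes this evidential update from a plain Bayesian one: conflicting mass is discarded rather than assigned to either hypothesis.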
Review on Computer Vision Techniques in Emergency Situation
In emergency situations, actions that save lives and limit the impact of
hazards are crucial. In order to act, situational awareness is needed to decide
what to do. Geolocalized photos and video of the situations as they evolve can
be crucial in better understanding them and making decisions faster. Cameras
are almost everywhere these days, either in terms of smartphones, installed
CCTV cameras, UAVs, and others. However, this poses challenges of big data and
information overload. Moreover, at any given location there is usually no
disaster under way, so humans tasked with spotting sudden events cannot stay
fully alert at all times. Consequently, computer vision tools can
be an excellent decision support. The range of emergencies in which computer
vision tools have been considered or used is very wide, and there is great
overlap across related emergency research. Researchers tend to focus on
state-of-the-art systems that cover the same emergency they are studying,
overlooking important research in other fields. In order to unveil this overlap,
the survey is divided along four main axes: the types of emergencies that have
been studied in computer vision, the objective that the algorithms can address,
the type of hardware needed and the algorithms used. Therefore, this review
provides a broad overview of the progress of computer vision covering all sorts
of emergencies.
Comment: 25 page
Dynamic Risk Assessment for Vehicles of Higher Automation Levels by Deep Learning
Vehicles of higher automation levels require the creation of situation
awareness. One important aspect of this situation awareness is an understanding
of the current risk of a driving situation. In this work, we present a novel
approach for the dynamic risk assessment of driving situations based on images
of a front stereo camera using deep learning. To this end, we trained a deep
neural network with recorded monocular images, disparity maps and a risk metric
for diverse traffic scenes. Our approach can be used to create the
aforementioned situation awareness of vehicles of higher automation levels and
can serve as a heterogeneous channel alongside the radar- or lidar-based
systems traditionally used for the calculation of risk metrics.
Drive Video Analysis for the Detection of Traffic Near-Miss Incidents
Because of their recent introduction, self-driving cars and advanced driver
assistance system (ADAS) equipped vehicles have had little opportunity to
learn the dangerous traffic scenarios (including near-miss incidents) that
give normal drivers strong motivation to drive safely. Accordingly, as
a means of providing learning depth, this paper presents a novel traffic
database that contains information on a large number of traffic near-miss
incidents that were obtained by mounting driving recorders in more than 100
taxis over the course of a decade. The study makes the following two main
contributions: (i) In order to assist automated systems in detecting near-miss
incidents based on database instances, we created a large-scale traffic
near-miss incident database (NIDB) that consists of video clips of dangerous
events captured by monocular driving recorders. (ii) To illustrate the
applicability of NIDB traffic near-miss incidents, we provide two primary
database-related improvements: parameter fine-tuning using various near-miss
scenes from NIDB, and foreground/background separation into motion
representation. Then, using our new database in conjunction with a monocular
driving recorder, we developed a near-miss recognition method that provides
automated systems with a performance level that is comparable to a human-level
understanding of near-miss incidents (64.5% vs. 68.4% at near-miss recognition,
61.3% vs. 78.7% at near-miss detection).
Comment: Accepted to ICRA 201
Machine Learning for Vehicular Networks
The emerging vehicular networks are expected to make everyday vehicular
operation safer, greener, and more efficient, and pave the way to autonomous
driving with the advent of the fifth generation (5G) cellular system. Machine
learning, as a major branch of artificial intelligence, has been recently
applied to wireless networks to provide a data-driven approach to solve
traditionally challenging problems. In this article, we review recent advances
in applying machine learning in vehicular networks and attempt to bring more
attention to this emerging area. After a brief overview of the major concept of
machine learning, we present some application examples of machine learning in
solving problems arising in vehicular networks. We finally discuss and
highlight several open issues that warrant further research.
Comment: Accepted by IEEE Vehicular Technology Magazin
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to
the problem of misuse detection and misuse localisation within telecommunications
environments. A broad survey of techniques is provided, covering inter alia
rule-based systems, model-based systems, case-based reasoning, pattern matching,
clustering and feature extraction, artificial neural networks, genetic algorithms,
artificial immune systems, agent-based systems, data mining, and a variety of
hybrid approaches. The report then considers the central issue of event
correlation, which lies at the heart of many misuse detection and localisation
systems. The notion of inferring misuse from the correlation of individual
temporally distributed events within a multiple-data-stream environment is
explored, and a range of techniques is examined, covering model-based
approaches, `programmed' AI, and machine learning paradigms. It is found that,
in general, correlation is best achieved via rule-based approaches, but that
these suffer from a number of drawbacks, such as the difficulty of developing
and maintaining an appropriate knowledge base, and an inability to generalise
from known misuses to new, unseen misuses. Two distinct approaches are evident.
One attempts to encode knowledge of known misuses, typically within
rules, and use this to screen events. This approach cannot generally detect misuses
for which it has not been programmed, i.e. it is prone to issuing false negatives.
The other attempts to `learn' the features of event patterns that constitute normal
behaviour, and, by observing patterns that do not match expected behaviour, detect
when a misuse has occurred. This approach is prone to issuing false positives,
i.e. inferring misuse from innocent patterns of behaviour that the system was not
trained to recognise. Contemporary approaches are seen to favour hybridisation,
often combining detection or localisation mechanisms for both abnormal and normal
behaviour, the former to capture known cases of misuse, the latter to capture
unknown cases. In some systems, these mechanisms even work together to update
each other to increase detection rates and lower false positive rates. It is concluded
that hybridisation offers the most promising future direction, but that a rule or state
based component is likely to remain, being the most natural approach to the correlation
of complex events. The challenge, then, is to mitigate the weaknesses of
canonical programmed systems such that learning, generalisation and adaptation
are more readily facilitated.
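The report's second approach, learning normal behaviour and flagging deviations, can be reduced to a deliberately simple statistical detector (a toy stand-in for the surveyed learning paradigms; the event-rate feature and the 3-sigma threshold are illustrative assumptions):

```python
import math

def train_profile(normal_rates):
    """Learn a profile of normal behaviour: mean and standard deviation
    of, say, call-setup events observed per minute."""
    n = len(normal_rates)
    mean = sum(normal_rates) / n
    var = sum((x - mean) ** 2 for x in normal_rates) / n
    return mean, math.sqrt(var)

def is_misuse(rate, profile, threshold=3.0):
    """Flag an observation deviating more than `threshold` standard
    deviations from the learned profile. Note the false-positive risk
    discussed above: unusual-but-innocent behaviour is flagged too."""
    mean, std = profile
    return abs(rate - mean) > threshold * std

# Train on a week of "normal" per-minute event rates, then test a burst.
profile = train_profile([50, 52, 48, 51, 49, 50, 53, 47])
flagged = is_misuse(200, profile)  # 200 events/min: far outside normal
```

A signature-based (rule-encoding) component would complement this by matching known misuse patterns directly, which is exactly the hybridisation the report recommends.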
nn-dependability-kit: Engineering Neural Networks for Safety-Critical Autonomous Driving Systems
Can engineering neural networks be approached in a disciplined way similar to
how engineers build software for civil aircraft? We present
nn-dependability-kit, an open-source toolbox to support safety engineering of
neural networks for autonomous driving systems. The rationale behind
nn-dependability-kit is to consider a structured approach (via Goal Structuring
Notation) to argue the quality of neural networks. In particular, the tool
realizes recent scientific results including (a) novel dependability metrics
for indicating sufficient elimination of uncertainties in the product life
cycle, (b) a formal reasoning engine for ensuring that generalization does
not lead to undesired behaviors, and (c) runtime monitoring for reasoning
whether a decision of a neural network in operation is supported by prior
similarities in the training data. A proprietary version of
nn-dependability-kit has been used to improve the quality of a level-3
autonomous driving component developed by Audi for highway maneuvers.
Comment: Tool available at https://github.com/dependable-ai/nn-dependability-ki
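The runtime-monitoring idea in (c), checking whether a decision is supported by activation patterns seen on the training data, can be sketched with a set of binarized neuron activations (a simplified sketch of abstraction-based monitoring; the layer values and on/off binarization are illustrative assumptions, not the toolbox's API):

```python
# Runtime monitor over binarized activations of one hypothetical layer:
# record which on/off patterns occurred on the training data, then flag
# runtime inputs whose pattern was never seen during training.
def binarize(activations):
    """Abstract a layer's activations to an on/off pattern."""
    return tuple(a > 0.0 for a in activations)

class ActivationMonitor:
    def __init__(self):
        self.seen = set()

    def record(self, activations):
        """Training-time pass: remember the observed pattern."""
        self.seen.add(binarize(activations))

    def is_supported(self, activations):
        """True iff this pattern occurred on the training data."""
        return binarize(activations) in self.seen

monitor = ActivationMonitor()
for acts in [(0.7, -0.2, 1.3), (0.1, -0.9, 2.0), (-0.5, 0.4, 0.0)]:
    monitor.record(acts)

supported = monitor.is_supported((2.5, -0.1, 0.8))    # (on, off, on): seen
novel = monitor.is_supported((-1.0, -1.0, -1.0))      # (off, off, off): not seen
```

An unsupported pattern does not prove the decision wrong; it only signals that the network is operating outside the region covered by its training data, which is the dependability argument the monitor contributes.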
VSSA-NET: Vertical Spatial Sequence Attention Network for Traffic Sign Detection
Although traffic sign detection has been studied for years and great progress
has been made with the rise of deep learning technique, there are still many
problems remaining to be addressed. For complicated real-world traffic scenes,
there are two main challenges. Firstly, traffic signs are usually small
objects, which makes them more difficult to detect than large ones. Secondly,
without context information it is hard to distinguish false targets that
resemble real traffic signs in complex street scenes. To handle these problems, we
propose a novel end-to-end deep learning method for traffic sign detection in
complex environments. Our contributions are as follows: 1) We propose a
multi-resolution feature fusion network architecture which exploits densely
connected deconvolution layers with skip connections, and can learn more
effective features for the small size object; 2) We frame the traffic sign
detection as a spatial sequence classification and regression task, and propose
a vertical spatial sequence attention (VSSA) module to gain more context
information for better detection performance. To comprehensively evaluate the
proposed method, we conduct experiments on several traffic sign datasets as
well as a general object detection dataset; the results show the
effectiveness of our proposed method.
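The core of such a spatial attention module, weighting the feature vectors of vertical positions within one image column to build a context vector, can be sketched in pure Python (a toy stand-in: the real VSSA module learns its attention scores, whereas here they are given by hand):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    z = max(xs)
    exp = [math.exp(x - z) for x in xs]
    s = sum(exp)
    return [e / s for e in exp]

def vertical_attention(column_features, scores):
    """Weight the feature vectors of vertical positions in one image
    column by attention scores, producing one context vector for that
    column (a hand-set stand-in for the learned VSSA weighting)."""
    weights = softmax(scores)
    dim = len(column_features[0])
    return [sum(w * feat[d] for w, feat in zip(weights, column_features))
            for d in range(dim)]

# Three vertical positions with 2-D features; the middle one (a strong
# sign-like response) gets the highest score and dominates the context.
features = [[0.0, 1.0], [5.0, 0.0], [0.1, 0.9]]
context = vertical_attention(features, [0.1, 4.0, 0.2])
```

The point of the vertical arrangement is that signs sit in predictable height bands along a street scene, so attending over a column supplies exactly the context that helps reject sign-like false targets.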
Robust Lane Detection from Continuous Driving Scenes Using Deep Neural Networks
Lane detection in driving scenes is an important module for autonomous
vehicles and advanced driver assistance systems. In recent years, many
sophisticated lane detection methods have been proposed. However, most methods
focus on detecting the lane from a single image, and often perform
unsatisfactorily in extremely bad situations such as
heavy shadow, severe mark degradation, and serious vehicle occlusion. In
fact, lanes are continuous line structures on the road. Consequently, a lane
that cannot be accurately detected in the current frame may potentially be
inferred by incorporating information from previous frames. To this end, we
investigate lane detection by using multiple frames of a continuous driving
scene, and propose a hybrid deep architecture by combining the convolutional
neural network (CNN) and the recurrent neural network (RNN). Specifically,
information of each frame is abstracted by a CNN block, and the CNN features of
multiple continuous frames, holding the property of time-series, are then fed
into the RNN block for feature learning and lane prediction. Extensive
experiments on two large-scale datasets demonstrate that the proposed method
outperforms competing methods in lane detection, especially in handling
difficult situations.
Comment: IEEE Transactions on Vehicular Technology, 69(1), 202
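The hybrid architecture, per-frame CNN features fed into an RNN over consecutive frames, can be reduced to a minimal recurrent skeleton (pure Python; the mean-intensity feature extractor, fixed weights, and single recurrent unit are illustrative stand-ins for the paper's CNN and RNN blocks):

```python
import math

def frame_features(frame):
    """Stand-in for the CNN block: abstract one frame to a feature
    scalar (here simply the mean intensity of a tiny grayscale frame)."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def rnn_over_frames(frames, w_in=1.0, w_rec=0.5):
    """Stand-in for the RNN block: fold the time series of per-frame
    features into one hidden state used for the final lane prediction."""
    h = 0.0
    for frame in frames:
        h = math.tanh(w_in * frame_features(frame) + w_rec * h)
    return h

# Three consecutive frames; the occluded middle frame alone is ambiguous,
# but the recurrent state carries context from the neighbouring frames.
frames = [[[0.9, 0.8], [0.9, 0.7]],   # clear lane markings
          [[0.1, 0.2], [0.1, 0.1]],   # heavy occlusion
          [[0.8, 0.9], [0.7, 0.9]]]
state = rnn_over_frames(frames)
```

The recurrence is what lets evidence from earlier frames keep the hidden state elevated through the occluded frame, which is the intuition behind inferring a lane the current frame cannot show.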