Ontology-based framework for risk assessment in road scenes using videos
Recent advances in autonomous vehicle technology pose the important problem of automatic risk assessment in road scenes. This article addresses the problem by proposing a novel ontology tool for the assessment of risk in an unpredictable road-traffic environment, as it does not assume that road users always obey the traffic rules. A framework for video-based assessment of the risk in a road scene encompassing the above ontology is also presented in the paper. The framework uses as input only the video from a monocular video camera, avoiding the need for additional, sometimes expensive, sensors. The key entities in the road scene (vehicles, pedestrians, environment objects, etc.) are organised into an ontology which encodes their hierarchy, relations and interactions. The ontology tool infers the degree of risk in a given scene using as knowledge video-based features related to the key entities. The evaluation of the proposed framework focuses on scenarios in which risk results from pedestrian behaviour. A dataset consisting of real-world videos illustrating pedestrian movement is built. Features related to the key entities in the road scene are extracted and fed to the ontology, which evaluates the degree of risk in the scene. The experimental results indicate that the proposed framework is capable of accurately assessing risk resulting from pedestrian behaviour in various road scenes.
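The rule-based inference this abstract describes can be sketched in miniature. Everything below is an illustrative assumption, not the paper's actual ontology: the entity classes, the two video-derived pedestrian features, and the hand-written thresholds all stand in for the richer hierarchy and relations the framework encodes.

```python
class Entity:
    """Base class for key road-scene entities in a toy ontology."""
    def __init__(self, name):
        self.name = name

class Pedestrian(Entity):
    """Pedestrian with two hypothetical video-derived features."""
    def __init__(self, name, distance_to_road_m, moving_toward_road):
        super().__init__(name)
        self.distance_to_road_m = distance_to_road_m
        self.moving_toward_road = moving_toward_road

def assess_risk(p):
    """Map pedestrian features to a coarse risk level.

    Hand-written rules standing in for the ontology's inference;
    the 2 m / 5 m thresholds are made up for illustration.
    """
    if p.moving_toward_road and p.distance_to_road_m < 2.0:
        return "high"
    if p.moving_toward_road or p.distance_to_road_m < 5.0:
        return "medium"
    return "low"

print(assess_risk(Pedestrian("p1", 1.5, True)))   # high
```

A real system would derive `distance_to_road_m` and `moving_toward_road` from detection and tracking over video frames rather than passing them in directly.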
Ontology based Scene Creation for the Development of Automated Vehicles
The introduction of automated vehicles without permanent human supervision
demands a functional system description, including functional system boundaries
and a comprehensive safety analysis. These inputs to the technical development
can be identified and analyzed by a scenario-based approach. Furthermore, to
establish an economical test and release process, a large number of scenarios
must be identified to obtain meaningful test results. Experts are good at
identifying scenarios that are difficult to handle or unlikely to happen.
However, they are unlikely to identify all possible scenarios based on the
knowledge they have at hand. Expert knowledge modeled for computer-aided
processing may help to provide a wide range of scenarios. This contribution
reviews ontologies as knowledge-based systems in the field of automated
vehicles, and proposes generating traffic scenes in natural language as a
basis for scenario creation.
Comment: Accepted at the 2018 IEEE Intelligent Vehicles Symposium, 8 pages, 10 figures.
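Generating traffic-scene descriptions in natural language from structured knowledge can be sketched with a simple template. The slot names (`ego`, `road`, `actor`, `action`) and the vocabulary are illustrative assumptions, not the paper's actual scene model:

```python
def describe_scene(scene):
    """Render a structured scene record as a natural-language sentence.

    A knowledge-based system would draw the slot values from an
    ontology; here they come from a plain dict for illustration.
    """
    return (f"A {scene['ego']} drives on a {scene['road']} "
            f"while a {scene['actor']} {scene['action']}.")

scene = {"ego": "passenger car", "road": "two-lane rural road",
         "actor": "cyclist", "action": "crosses from the right"}
print(describe_scene(scene))
# A passenger car drives on a two-lane rural road while a cyclist crosses from the right.
```

Enumerating combinations of slot values is one way such a generator could yield a wide range of scenes beyond those an expert would list by hand.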
Video-based situation assessment for road safety
In recent decades, situational awareness (SA) has been a major research
subject in connection with autonomous vehicles and intelligent transportation
systems. Situational awareness concerns the safety of road
users, including drivers, passengers, pedestrians and animals. Moreover,
it holds key information regarding the nature of upcoming situations.
In order to build robust automatic SA systems that sense
the environment, a variety of sensors, such as global positioning systems,
radars and cameras, have been used. However, due to the high
cost, complex installation procedures and high computational load of
automatic situational awareness systems, they are unlikely to become
standard for vehicles in the near future.
In this thesis, a novel video-based framework for the automatic assessment
of risk of collision in a road scene is proposed. The framework
uses as input the video from a monocular video camera only, avoiding
the need for additional, and frequently expensive, sensors. The framework
has two main parts: a novel ontology tool for the assessment of
risk of collision, and semantic feature extraction based on computer-vision
methods.
The ontology tool is designed to represent the various relations between
the most important risk factors, such as object-related risk and
road-environment risk. The semantic features related to these factors
are based on computer vision methods, such as pedestrian detection
and tracking, road-region detection and road-type classification. The
quality of these methods is important for achieving accurate results,
especially with respect to video segmentation. This thesis, therefore,
proposes a new criterion for high-quality video segmentation: the inclusion
of temporal-region consistency. On the basis of this new criterion, an
online method for the evaluation of video-segmentation quality is proposed.
This method is more consistent than the state-of-the-art method
in terms of perceptual-segmentation quality, for both synthetic and real
video datasets. Furthermore, new online methods for both road-type
classification and road-region detection are proposed, building on the
Gaussian mixture model, a successful video-segmentation method in this area.
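The per-pixel classification step underlying such road-region detection can be illustrated with a stand-in simpler than the thesis's mixture models: one Gaussian colour model per class, with made-up grayscale statistics rather than learned parameters.

```python
import math

def gaussian_pdf(x, mean, var):
    """Probability density of a 1-D Gaussian at x."""
    return math.exp(-((x - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def label_pixel(intensity, road=(90.0, 400.0), non_road=(160.0, 900.0)):
    """Assign the class whose Gaussian gives the pixel higher likelihood.

    The (mean, variance) pairs are illustrative assumptions; a GMM-based
    segmenter would fit several components per class from video data.
    """
    p_road = gaussian_pdf(intensity, *road)
    p_non = gaussian_pdf(intensity, *non_road)
    return "road" if p_road > p_non else "non-road"

print(label_pixel(95))   # road
print(label_pixel(170))  # non-road
```

A full mixture model replaces each single Gaussian with a weighted sum of components, but the decision rule (pick the class with higher likelihood) is the same.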
The proposed vision-based road-type classification method achieves
higher classification accuracy than the state-of-the-art method, for each
road type individually. Consequently, it achieves higher overall
classification accuracy. Likewise, the proposed vision-based road-region detection
method achieves high accuracy compared to the
state-of-the-art methods, according to two measures: pixel-wise percentage
accuracy and area under the receiver operating characteristic
(ROC) curve (AUC).
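The AUC measure mentioned above has a convenient rank-statistic form: it equals the probability that a randomly chosen positive (road) pixel is scored above a randomly chosen negative one, with ties counting half. A minimal sketch:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney U formulation.

    scores: detector confidence per pixel; labels: 1 = road, 0 = non-road.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0
```

The quadratic pairwise loop is fine for illustration; production code would sort once and use ranks.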
Finally, the performance of the automatic risk-assessment framework is
evaluated. At this stage, the framework includes only the
assessment of pedestrian risk in the road scene. Using the semantic
information obtained via computer-vision methods, the framework's
performance is assessed for two datasets: first, a new dataset proposed
in Chapter 7, which comprises six videos, and second, a dataset
comprising five examples selected from an established, publicly available
dataset. Both datasets consist of real-world videos illustrating pedestrian
movement. The experimental results show that the proposed
framework achieves high accuracy in the assessment of risk resulting
from pedestrian behaviour in road scenes.
Review of graph-based hazardous event detection methods for autonomous driving systems
Automated and autonomous vehicles are often required to operate in complex road environments with potential hazards that may lead to hazardous events causing injury or even death. Therefore, a reliable autonomous hazardous event detection system is a key enabler for highly autonomous vehicles (e.g., Level 4 and 5 autonomous vehicles) to operate without human supervision for significant periods of time. One promising solution to the problem is the use of graph-based methods, which are powerful tools for relational reasoning: graphs can organise heterogeneous knowledge about the operational environment, link scene entities (e.g., road users, static objects, traffic rules) and describe how they affect each other. Given the growing interest in and opportunity presented by graph-based methods for autonomous hazardous event detection, this paper provides a comprehensive review of state-of-the-art graph-based methods, which we categorise as rule-based, probabilistic, and machine-learning-driven. Additionally, we present an in-depth overview of the available datasets to facilitate hazardous event training, and of evaluation metrics to assess model performance. In doing so, we aim to provide a thorough overview of, and insight into, the key research opportunities and open challenges.
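The rule-based end of the survey's taxonomy can be sketched with a toy scene graph built from plain dicts. The entity names, the relation labels, and the single hazard rule are illustrative assumptions, not the survey's actual formalism:

```python
def build_scene_graph(entities, relations):
    """Adjacency-list graph: node -> list of (relation, neighbour)."""
    graph = {e: [] for e in entities}
    for src, rel, dst in relations:
        graph[src].append((rel, dst))
    return graph

def detect_hazard(graph):
    """Toy rule: a pedestrian on another entity's path is a hazard."""
    hazards = []
    for src, edges in graph.items():
        for rel, dst in edges:
            if rel == "on_path_of" and src.startswith("pedestrian"):
                hazards.append((src, dst))
    return hazards

g = build_scene_graph(
    ["ego", "pedestrian_1", "car_2"],
    [("pedestrian_1", "on_path_of", "ego"), ("car_2", "behind", "ego")],
)
print(detect_hazard(g))  # [('pedestrian_1', 'ego')]
```

Probabilistic and learning-driven variants surveyed in the paper would replace the hard rule with inference over edge weights or a graph neural network, while keeping this relational structure.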
One Ontology to Rule Them All: Corner Case Scenarios for Autonomous Driving
The core obstacle towards a large-scale deployment of autonomous vehicles
currently lies in the long tail of rare events. These are extremely challenging
since they do not occur often in the utilized training data for deep neural
networks. To tackle this problem, we propose the generation of additional
synthetic training data, covering a wide variety of corner case scenarios. As
ontologies can represent human expert knowledge while enabling computational
processing, we use them to describe scenarios. Our proposed master ontology is
capable of modeling scenarios from all common corner case categories found in the
literature. From this one master ontology, arbitrary scenario-describing
ontologies can be derived. In an automated fashion, these can be converted into
the OpenSCENARIO format and subsequently executed in simulation. In this way,
challenging test and evaluation scenarios can also be generated.
Comment: Daniel Bogdoll and Stefani Guneshka contributed equally. Accepted for publication at the ECCV 2022 SAIAD workshop.
High-level feature detection from video in TRECVid: a 5-year retrospective of achievements
Successful and effective content-based access to digital
video requires fast, accurate and scalable methods to determine the video content automatically. A variety of contemporary approaches to this rely on text taken from speech within the video, or on matching one video frame against others using low-level characteristics like
colour, texture, or shapes, or on determining and matching objects appearing within the video. Possibly the most important technique, however, is one which determines the presence or absence of a high-level or semantic feature, within a video clip or shot. By utilizing dozens, hundreds or even thousands of such semantic features we can support many kinds of content-based video navigation. Critically however, this depends on being able to determine whether each feature is or is not present in a video clip.
The last 5 years have seen much progress in the development of techniques to determine the presence of semantic features within video. This progress can be tracked in the annual TRECVid benchmarking activity where dozens of research groups measure the effectiveness of their techniques on common data and using an open, metrics-based approach. In this chapter we summarise the work
done on the TRECVid high-level feature task, showing the
progress made year-on-year. This provides a fairly comprehensive statement on where the state-of-the-art is regarding this important task, not just for one research group or for one approach, but across the spectrum. We then use this past and on-going work as a basis for highlighting the trends that are emerging in this area, and the questions which remain to be addressed before we can
achieve large-scale, fast and reliable high-level feature detection on video.
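Effectiveness on ranked shot lists in benchmarks like this is commonly summarised with precision-based measures; as one hedged illustration, non-interpolated average precision over a ranked list of shots can be computed as:

```python
def average_precision(ranked_relevance):
    """Non-interpolated average precision for one feature/query.

    ranked_relevance[i] is 1 if the shot at rank i+1 is relevant, else 0.
    Averages precision at each rank where a relevant shot appears.
    """
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0

print(average_precision([1, 1, 0, 0]))  # 1.0
print(average_precision([0, 1]))        # 0.5
```

Averaging this value across all features or queries gives mean average precision, a standard way to compare systems on a common ranked-retrieval task.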
Fireground location understanding by semantic linking of visual objects and building information models
This paper presents an outline for improved localization and situational awareness in fire emergency situations based on semantic technology and computer vision techniques. The novelty of our methodology lies in the semantic linking of video object recognition results from visual and thermal cameras with Building Information Models (BIM). The current limitations and possibilities of certain building information streams in the context of fire safety or fire incident management are addressed in this paper. Furthermore, our data management tools match higher-level semantic metadata descriptors of BIM with deep-learning-based visual object recognition and classification networks. Based on these matches, estimates of camera, object and event positions can be generated in the BIM model, transforming it from a static source of information into a rich, dynamic data provider. Previous work has already investigated the possibilities of linking BIM and low-cost point sensors for fireground understanding, but these approaches did not take into account the benefits of video analysis and recent developments in semantics and feature-learning research. Finally, the strengths of the proposed approach compared to the state of the art are its (semi-)automatic workflow, its generic and modular setup, and its multi-modal strategy, which allow situational awareness to be created automatically, localization to be improved, and overall fire understanding to be facilitated.
Identification of Challenging Highway-Scenarios for the Safety Validation of Automated Vehicles Based on Real Driving Data
For a successful market launch of automated vehicles (AVs), proof of their
safety is essential. Due to the open parameter space, an infinite number of
traffic situations can occur, which makes the proof of safety an unsolved
problem. With the so-called scenario-based approach, all relevant test
scenarios must be identified. This paper introduces an approach that finds
particularly challenging scenarios from real driving data and assesses
their difficulty using a novel metric. Starting from the highD data, scenarios
are extracted using a hierarchical clustering approach and then assigned to one
of nine pre-defined functional scenarios using rule-based classification. The
special feature of the subsequent evaluation of the concrete scenarios is that
it is independent of the performance of the test vehicle and therefore valid
for all AVs. Previous evaluation metrics are often based on the criticality of
the scenario, which is, however, dependent on the behavior of the test vehicle
and is therefore only conditionally suitable for finding "good" test cases in
advance. The results show that with this new approach a reduced number of
particularly challenging test scenarios can be derived.
Comment: Accepted at the 2020 Fifteenth International Conference on Ecological Vehicles and Renewable Energies (EVER).
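The rule-based assignment of extracted concrete scenarios to functional classes can be sketched as follows. The feature names, thresholds, and class labels are hypothetical stand-ins; the paper's actual nine functional scenarios are defined on the highD data:

```python
def classify_scenario(features):
    """Assign an extracted scenario to a functional class via fixed rules.

    features: dict of hypothetical quantities derived from a trajectory
    snippet, e.g. whether a lane change occurs, the gap to the vehicle
    ahead, and the strongest deceleration observed.
    """
    if features.get("lane_change") and features.get("gap_ahead_m", 1e9) < 20:
        return "cut-in"
    if features.get("lane_change"):
        return "lane change"
    if features.get("decel_mps2", 0.0) < -3.0:
        return "hard braking ahead"
    return "free driving"

print(classify_scenario({"lane_change": True, "gap_ahead_m": 12}))  # cut-in
```

In the paper's pipeline this step follows hierarchical clustering of the raw trajectories, so each cluster representative, rather than every raw snippet, is classified.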