7 research outputs found

    survAIval: Survival Analysis with the Eyes of AI

    In this study, we propose a novel approach to enrich the training data for automated driving by using a self-designed driving simulator and two human drivers to generate safety-critical corner cases in a short period of time, as already presented in [kowol22simulator]. Our results show that incorporating these corner cases during training improves the recognition of corner cases during testing, even though they were recorded due to visual impairment. Using the corner case triggering pipeline developed in the previous work, we investigate the effectiveness of using expert models to overcome the domain gap caused by different weather conditions and times of day, compared to a universal model from a development perspective. Our study reveals that expert models can provide significant benefits in terms of performance and efficiency and can reduce the time and effort required for model training. Our results contribute to the progress of automated driving, providing a pathway for safer and more reliable autonomous vehicles on the road in the future.
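The expert-model idea in this abstract can be illustrated with a small dispatch sketch. The condition keys and model names below are invented for illustration and are not taken from the study:

```python
def select_expert(weather, time_of_day, experts, fallback):
    """Return the expert model registered for the (weather, time-of-day)
    pair, falling back to the universal model when no expert exists."""
    return experts.get((weather, time_of_day), fallback)

# Hypothetical registry: one specialist model per operating condition.
experts = {
    ("rain", "night"): "model_rain_night",
    ("clear", "day"): "model_clear_day",
}
```

Conditions without a trained expert simply fall back to the universal model, which mirrors the development-perspective trade-off the abstract describes.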

    Perception Datasets for Anomaly Detection in Autonomous Driving: A Survey

    Deep neural networks (DNNs) employed in perception systems for autonomous driving require a huge amount of data to train on, as they must reliably achieve high performance in all kinds of situations. However, these DNNs are usually restricted to a closed set of semantic classes available in their training data and are therefore unreliable when confronted with previously unseen instances. Thus, multiple perception datasets have been created for the evaluation of anomaly detection methods, which can be categorized into three groups: real anomalies in real-world scenes, synthetic anomalies augmented into real-world scenes, and completely synthetic scenes. This survey provides a structured and, to the best of our knowledge, complete overview and comparison of perception datasets for anomaly detection in autonomous driving. Each chapter provides information about tasks and ground truth, context information, and licenses. Additionally, we discuss current weaknesses and gaps in existing datasets to underline the importance of developing further data.
    Comment: Accepted for publication at IV 202
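The survey's three dataset groups can be encoded as a small taxonomy; the helper and the placeholder dataset names below are illustrative assumptions, not the survey's actual catalog:

```python
from enum import Enum

class AnomalySource(Enum):
    """The three dataset groups distinguished by the survey."""
    REAL_IN_REAL = "real anomalies in real-world scenes"
    SYNTHETIC_IN_REAL = "synthetic anomalies augmented into real-world scenes"
    FULLY_SYNTHETIC = "completely synthetic scenes"

def group_by_source(catalog):
    """Bucket dataset names by their anomaly source category."""
    groups = {src: [] for src in AnomalySource}
    for name, src in catalog.items():
        groups[src].append(name)
    return groups

# Placeholder entries standing in for real catalog contents.
catalog = {
    "placeholder_real": AnomalySource.REAL_IN_REAL,
    "placeholder_synthetic": AnomalySource.FULLY_SYNTHETIC,
}
```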

    Space, Time, and Interaction: A Taxonomy of Corner Cases in Trajectory Datasets for Automated Driving

    Trajectory data analysis is an essential component for highly automated driving. Complex models developed with these data predict other road users' movement and behavior patterns. Based on these predictions - and additional contextual information such as the course of the road, (traffic) rules, and interaction with other road users - the highly automated vehicle (HAV) must be able to reliably and safely perform the task assigned to it, e.g., moving from point A to B. Ideally, the HAV moves safely through its environment, just as we would expect a human driver to do. However, if unusual trajectories occur - so-called trajectory corner cases - a human driver can usually cope well, but an HAV can quickly get into trouble. In the definition of trajectory corner cases, which we provide in this work, we consider the relevance of unusual trajectories with respect to the task at hand. Based on this, we also present a taxonomy of different trajectory corner cases. The categorization of corner cases into the taxonomy is done by cause and required data sources and is illustrated with examples. To illustrate the relationship between the machine learning (ML) model and the corner case cause, we present a general processing chain underlying the taxonomy.
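The two categorization axes named in the abstract (cause and required data sources) can be sketched as a record type; the category labels and the example below are assumptions for illustration, not the paper's actual taxonomy entries:

```python
from dataclasses import dataclass, field

@dataclass
class TrajectoryCornerCase:
    """A taxonomy entry, categorized by cause and the data
    sources required to detect it."""
    description: str
    cause: str                                        # e.g. "rare maneuver"
    required_data: list = field(default_factory=list) # detection inputs

# Hypothetical example entry.
wrong_way_driver = TrajectoryCornerCase(
    description="vehicle travelling against the direction of traffic",
    cause="rare maneuver",
    required_data=["trajectories", "course of the road"],
)
```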

    Linear Actuators in a Haptic Feedback Joystick System for Electric Vehicles

    Several strategies for navigation in unfamiliar environments have been explored, notably leveraging advanced sensors and control algorithms for obstacle recognition in autonomous vehicles. This study introduces a novel approach featuring a redesigned joystick equipped with stepper motors and linear drives, facilitating WiFi communication with a four-wheel omnidirectional electric vehicle. The system’s drive units integrated into the joystick and the encompassing control algorithms are thoroughly examined, including analysis of stick deflection measurement and inter-component communication within the joystick assembly. Unlike conventional setups in which the joystick is tilted by the operator, two independent linear drives are employed to generate ample tensile force, effectively “overpowering” the operator’s input. Running on a Raspberry Pi, the software utilizes Python programming to enable joystick tilt control and to transmit orientation and axis deflection data to an Arduino unit. A fundamental haptic effect is achieved by elevating the minimum pressure required to deflect the joystick rod. Test measurements encompass detection of obstacles along the primary directions perpendicular to the electric vehicle’s trajectory, determination of the maximum achievable speed, and evaluation of the joystick’s maximum operational range within an illuminated environment.
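The haptic effect the abstract describes (raising the minimum force needed to deflect the stick) could plausibly be driven by obstacle distance. The following is a minimal sketch under that assumption; the function, parameters, and units are invented for illustration and are not the paper's implementation:

```python
def counterforce(distance_m, max_force=10.0, brake_distance_m=2.0):
    """Resisting force (arbitrary units) to apply via the linear drives:
    zero while the obstacle is beyond the braking distance, rising
    linearly to max_force as the obstacle closes in."""
    if distance_m >= brake_distance_m:
        return 0.0
    return max_force * (1.0 - distance_m / brake_distance_m)
```

Such a mapping would make the stick progressively harder to push toward a detected obstacle, in the spirit of the elevated-minimum-pressure effect described above.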

    Two Video Data Sets for Tracking and Retrieval of Out of Distribution Objects

    Maag K, Chan RK-W, Uhlemeyer S, Kowol K, Gottschalk H. Two Video Data Sets for Tracking and Retrieval of Out of Distribution Objects. In: Wang L, Gall J, Chin T-J, Sato I, Chellappa R, eds. Computer Vision – ACCV 2022. 16th Asian Conference on Computer Vision, Macao, China, December 4–8, 2022, Proceedings, Part V. Lecture Notes in Computer Science. Vol 13845. Cham: Springer Nature Switzerland; 2023: 476-494.
    In this work we present two video test data sets for the novel computer vision (CV) task of out of distribution tracking (OOD tracking). Here, OOD objects are understood as objects with a semantic class outside the semantic space of an underlying image segmentation algorithm, or an instance within the semantic space which however looks decisively different from the instances contained in the training data. OOD objects occurring on video sequences should be detected on single frames as early as possible and tracked over their time of appearance as long as possible. During the time of appearance, they should be segmented as precisely as possible. We present the SOS data set containing 20 video sequences of street scenes and more than 1000 labeled frames with up to two OOD objects. We furthermore publish the synthetic CARLA-WildLife data set that consists of 26 video sequences containing up to four OOD objects on a single frame. We propose metrics to measure the success of OOD tracking and develop a baseline algorithm that efficiently tracks the OOD objects. As an application that benefits from OOD tracking, we retrieve OOD sequences from unlabeled videos of street scenes containing OOD objects.
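The baseline tracker is described only at a high level above. As a rough illustration of the kind of frame-to-frame association such a tracker needs, here is a greedy IoU matcher over bounding boxes; the paper's actual algorithm and metrics may differ substantially:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def associate(prev_tracks, detections, thresh=0.3):
    """Greedily extend each track {id: box} with its best-overlapping
    detection; unmatched detections start new tracks."""
    tracks = dict(prev_tracks)
    used = set()
    next_id = max(tracks, default=-1) + 1
    for tid, box in prev_tracks.items():
        best, best_iou = None, thresh
        for i, det in enumerate(detections):
            if i in used:
                continue
            v = iou(box, det)
            if v > best_iou:
                best, best_iou = i, v
        if best is not None:
            tracks[tid] = detections[best]
            used.add(best)
    for i, det in enumerate(detections):
        if i not in used:
            tracks[next_id] = det
            next_id += 1
    return tracks
```

Running this per frame keeps an OOD object's identity stable over its time of appearance, which is the property the proposed tracking metrics evaluate.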