
    Pedestrian and Passenger Interaction with Autonomous Vehicles: Field Study in a Crosswalk Scenario

    This study presents the outcomes of empirical investigations of human-vehicle interactions involving an autonomous vehicle equipped with both internal and external Human Machine Interfaces (HMIs) in a crosswalk scenario. The internal and external HMIs were integrated with implicit communication techniques, combining gentle and aggressive braking maneuvers within the crosswalk. Data were collected through questionnaires and quantifiable metrics, including the pedestrian's decision to cross in relation to the vehicle's distance and speed. The questionnaire responses reveal that pedestrians perceive greater safety when the external HMI and gentle braking maneuvers are used in tandem. In contrast, the measured variables show that the external HMI is effective when complemented by the gentle braking maneuver. Furthermore, the questionnaire results highlight that the internal HMI enhances passenger confidence only when paired with the aggressive braking maneuver.
    Comment: Submitted to the IEEE TIV; 13 pages, 13 figures, 7 tables. arXiv admin note: text overlap with arXiv:2307.1270

    Realistic pedestrian behaviour in the CARLA simulator using VR and mocap

    Simulations are gaining increasing significance in the field of autonomous driving due to the demand for rapid prototyping and extensive testing. Physics-based simulation brings several benefits at an affordable cost while mitigating potential risks to prototypes, drivers, and vulnerable road users. However, there exist two primary limitations. The first is the reality gap, the disparity between reality and simulation that prevents simulated autonomous driving systems from achieving the same performance in the real world. The second is the lack of empirical understanding of the behavior of real agents, such as backup drivers or passengers, as well as other road users such as vehicles, pedestrians, or cyclists. Agent simulation is commonly implemented through deterministic or randomized probabilistic pre-programmed models, or generated from real-world data, but it fails to accurately represent the behaviors adopted by real agents while interacting within a specific simulated scenario. This paper extends the description of our proposed framework to enable real-time interaction between real agents and simulated environments by means of immersive virtual reality and human motion capture systems within the CARLA simulator for autonomous driving. We have designed a set of usability examples that allow the analysis of the interactions between real pedestrians and simulated autonomous vehicles, and we provide a first measure of the user's sensation of presence in the virtual environment.
    Comment: This is a pre-print of the following work: Communications in Computer and Information Science (CCIS, volume 1882), 2023, Computer-Human Interaction Research and Applications, reproduced with permission of Springer Nature. The final authenticated version is available online at: https://link.springer.com/chapter/10.1007/978-3-031-41962-1_5. arXiv admin note: substantial text overlap with arXiv:2206.0033
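    As a concrete point of reference for the framework described above, the minimal sketch below spawns and steers a simulated pedestrian through CARLA's standard Python API. It illustrates only the simulator side: the VR headset and motion-capture integration are not reproduced, and the host, port, walking direction, and speed are placeholder assumptions.

```python
import random

import carla

# Connect to a running CARLA server (host and port are assumptions).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Pick a pedestrian blueprint and a walkable spawn location.
blueprint_library = world.get_blueprint_library()
walker_bp = random.choice(blueprint_library.filter("walker.pedestrian.*"))
spawn_location = world.get_random_location_from_navigation()
spawn_location.z += 1.0  # small offset to avoid spawning inside the ground
walker = world.spawn_actor(walker_bp, carla.Transform(spawn_location))

# Drive the walker with a direct control command. In the paper's framework
# this command would instead be derived from the tracked pose of a real
# person wearing a motion-capture suit and a VR headset.
control = carla.WalkerControl()
control.direction = carla.Vector3D(1.0, 0.0, 0.0)  # world-frame walking direction
control.speed = 1.4  # m/s, roughly a normal walking pace
walker.apply_control(control)
```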

    High-Level Interpretation of Urban Road Maps Fusing Deep Learning-Based Pixelwise Scene Segmentation and Digital Navigation Maps

    This paper addresses the problem of high-level road modeling for urban environments. Current approaches are based on geometric models that fit the road shape well for narrow roads. However, urban environments are more complex, and those models are not suitable for inner-city intersections or other urban situations. The approach presented in this paper generates a model based on the information provided by a digital navigation map and a vision-based sensing module. On the one hand, the digital map includes data about the road type (residential, highway, intersection, etc.), road shape, number of lanes, and other context information such as vegetation areas, parking slots, and railways. On the other hand, the sensing module provides a pixelwise segmentation of the road using a ResNet-101 CNN with random data augmentation, as well as other hand-crafted features such as curbs, road markings, and vegetation. The high-level interpretation module is designed to learn the best set of parameters of a function that maps all the available features to the actual parametric model of the urban road, using a weighted F-score as the cost function to be optimized. We show that the presented approach eases the maintenance of digital maps through crowd-sourcing, due to the small amount of data to be sent, and adds important context information to traditional road detection systems.
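    The cost function mentioned above is a weighted F-score over the fitted road model. As a rough illustration of that kind of objective, the snippet below computes a pixelwise F-beta score for a binary road mask; the beta value and the pixelwise binary framing are assumptions made for the example, not the authors' exact formulation.

```python
import numpy as np

def f_beta_score(pred: np.ndarray, target: np.ndarray, beta: float = 2.0) -> float:
    """Weighted F-score (F-beta) for binary masks; beta > 1 favours recall."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall + 1e-9)
```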

    Digital twin in virtual reality for human-vehicle interactions in the context of autonomous driving

    This paper presents the results of tests of interactions between real humans and simulated vehicles in a virtual scenario. Human activity is inserted into the virtual world via a virtual reality interface for pedestrians. The autonomous vehicle is equipped with a virtual Human-Machine Interface (HMI) and drives through the digital twin of a real crosswalk. The HMI was combined with gentle and aggressive braking maneuvers when the pedestrian intended to cross. The results of the interactions were obtained through questionnaires and measurable variables such as the distance to the vehicle when the pedestrian initiated the crossing action. The questionnaires show that pedestrians feel safer whenever the HMI is activated and that varying the braking maneuver has less influence on their perception of danger, while the measurable variables show that both HMI activation and the gentle braking maneuver cause the pedestrian to cross earlier.
    Comment: 26th IEEE International Conference on Intelligent Transportation Systems ITSC 202

    Towards trustworthy multi-modal motion prediction: Holistic evaluation and interpretability of outputs

    Predicting the motion of other road agents enables autonomous vehicles to perform safe and efficient path planning. This task is very complex, as the behaviour of road agents depends on many factors and the number of possible future trajectories can be considerable (multi-modal). Most prior approaches proposed to address multi-modal motion prediction are based on complex machine learning systems that have limited interpretability. Moreover, the metrics used in current benchmarks do not evaluate all aspects of the problem, such as the diversity and admissibility of the output. In this work, we aim to advance towards the design of trustworthy motion prediction systems, based on some of the requirements for the design of Trustworthy Artificial Intelligence. We focus on evaluation criteria, robustness, and interpretability of outputs. First, we comprehensively analyse the evaluation metrics, identify the main gaps in current benchmarks, and propose a new holistic evaluation framework. We then introduce a method for the assessment of spatial and temporal robustness by simulating noise in the perception system. To enhance the interpretability of the outputs and generate more balanced results in the proposed evaluation framework, we propose an intent prediction layer that can be attached to multi-modal motion prediction models. The effectiveness of this approach is assessed through a survey that explores different elements in the visualization of the multi-modal trajectories and intentions. The proposed approach and findings make a significant contribution to the development of trustworthy motion prediction systems for autonomous vehicles, advancing the field towards greater safety and reliability.
    Comment: 16 pages, 7 figures, 6 table
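    For context on the evaluation gap discussed above, the sketch below computes the minimum-over-modes displacement errors (minADE/minFDE) that most current multi-modal benchmarks report; distance-only metrics of this kind do not capture diversity or admissibility, which is part of what motivates a holistic framework. The array shapes and NumPy formulation are assumptions for illustration.

```python
import numpy as np

def min_ade_fde(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """pred: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth.

    Returns the average and final displacement error of the best mode.
    """
    dists = np.linalg.norm(pred - gt[None], axis=-1)  # (K, T) per-step errors
    ade = dists.mean(axis=1)                          # (K,) average error per mode
    fde = dists[:, -1]                                # (K,) final-step error per mode
    best = ade.argmin()                               # best mode by average error
    return float(ade[best]), float(fde[best])
```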

    Autonomous navigation and obstacle avoidance of a micro-bus: Regular paper

    At present, the topic of automated vehicles is one of the most promising research areas in the field of Intelligent Transportation Systems (ITS). The use of automated vehicles for public transportation also contributes to reductions in congestion levels and to improvements in traffic flow. Moreover, electric public autonomous vehicles are environmentally friendly, provide better air quality and contribute to energy conservation. The driverless public transportation systems currently operating in some airports and train stations are restricted to dedicated roads and have serious difficulty dynamically avoiding obstacles along their trajectory. In this paper, an electric autonomous mini-bus is presented. All datasets used in this article were collected during the experiments carried out in the demonstration event of the 2012 IEEE Intelligent Vehicles Symposium that took place in Alcalá de Henares (Spain). The demonstration consisted of a 725-metre-long route defined by a list of latitude-longitude points (waypoints). The mini-bus was capable of driving autonomously from one waypoint to another using a GPS sensor. Furthermore, the vehicle is equipped with a multi-beam Laser Imaging Detection and Ranging (LIDAR) sensor for surrounding reconstruction and obstacle detection. When an obstacle is detected in the planned path, the planned route is modified in order to avoid the obstacle and continue its way to the end of the mission. On the demonstration day, a total of 196 attendees had the opportunity to get a ride on the vehicles. A total of 28 laps were successfully completed in full autonomous mode on a private circuit located at the National Institute for Aerospace Research (INTA), Spain. In other words, the system completed 20.3 km of driverless navigation and obstacle avoidance.
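    The waypoint-to-waypoint navigation described above can be reduced, in its simplest form, to steering toward the bearing of the next waypoint and switching targets once the vehicle is within a tolerance radius. The sketch below is such a simplified planar follower; the gains, tolerance, and callback interfaces are assumptions and do not represent the mini-bus's actual controller or its obstacle-avoidance replanner.

```python
import math

def heading_error(pose_xy, heading, target_xy):
    """Signed angle (rad) between the vehicle heading and the bearing to the target."""
    bearing = math.atan2(target_xy[1] - pose_xy[1], target_xy[0] - pose_xy[0])
    err = bearing - heading
    return math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]

def follow_waypoints(waypoints, get_pose, send_cmd, tol=2.0, k_steer=1.5, speed=3.0):
    """Drive through waypoints given in metric (x, y); get_pose() -> (x, y, heading)."""
    for wp in waypoints:
        while True:
            x, y, heading = get_pose()
            if math.hypot(wp[0] - x, wp[1] - y) < tol:
                break  # waypoint reached, move on to the next one
            steer = k_steer * heading_error((x, y), heading, wp)
            send_cmd(speed, steer)  # an obstacle-avoidance layer would override here
```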

    Vision-based active safety system for automatic stopping

    Intelligent systems designed to reduce highway fatalities have been widely applied in the automotive sector in the last decade. Of all users of transport systems, pedestrians are the most vulnerable in crashes as they are unprotected. This paper deals with an autonomous intelligent emergency system designed to avoid collisions with pedestrians. The system consists of a fuzzy controller that generates a safe braking action based on the time-to-collision estimate (obtained via a vision-based system) and the wheel-locking probability (obtained via the vehicle's CAN bus). The system has been tested in a real car, a convertible Citroën C3 Pluriel, equipped with an automated electro-hydraulic braking system capable of working in parallel with the vehicle's original braking circuit. The system is used as a last resort in the case that an unexpected pedestrian is in the lane and all the warnings have failed to produce a response from the driver.
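    To make the controller's inputs concrete, the stand-in below derives a time-to-collision estimate and maps it, together with a wheel-locking probability, to a normalized brake demand using simple thresholds and a linear ramp instead of the paper's fuzzy rule base; the thresholds and the back-off factor are assumptions for illustration only.

```python
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """TTC = range / closing speed; effectively infinite when not closing."""
    return distance_m / closing_speed_mps if closing_speed_mps > 1e-3 else float("inf")

def braking_command(ttc_s: float, lock_prob: float,
                    ttc_full: float = 1.0, ttc_none: float = 3.0) -> float:
    """Return a normalized brake demand in [0, 1].

    Full braking below ttc_full, no braking above ttc_none, linear in between;
    the demand is backed off as the estimated wheel-locking probability rises.
    The real system encodes this trade-off with fuzzy rules instead.
    """
    if ttc_s >= ttc_none:
        demand = 0.0
    elif ttc_s <= ttc_full:
        demand = 1.0
    else:
        demand = (ttc_none - ttc_s) / (ttc_none - ttc_full)
    return demand * (1.0 - 0.5 * lock_prob)  # soften demand when lock-up is likely
```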