
    Towards trusted autonomous vehicles from vulnerable road users perspective


    Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS - a collection of Technical Notes Part 1

    This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (Tigars) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance overview and issues, Resilience and Safety Requirements, Open Systems Perspective, and Formal Verification and Static Analysis of ML Systems. Part 2 addresses: Simulation and Dynamic Testing, Defence in Depth and Diversity, Security-Informed Safety Analysis, and Standards and Guidelines.

    Acceptance, a Mandatory Requirement for the Transport of the Future

    Acceptance is an underestimated element in the adoption of new technologies, and users' actual needs become a requirement where transport is concerned. Public and private companies and agencies continue to invest in next-generation transport tools and infrastructure. These investments, in the form of emerging mobility services and automated vehicles (AVs), optimistically assume customer adoption. Yet it is public acceptance that can unlock the expected benefits of a world of connected and automated vehicles. While stakeholders expect to overturn established premises about traditional public transportation and vehicle ownership, the perceived usability of such advancements is judged by daily travelers. Consequently, this paper addresses the acceptance of connected automated vehicles by presenting a general view and a practical example from the Drive2TheFuture project, in which the needs and wants of future "drivers" are mandatory. The findings include relevant risks that affect users' acceptance and exemplary recommendations for an affective, persuasive, and trusted HMI.

    Intent prediction of vulnerable road users for trusted autonomous vehicles

    This study investigated how future autonomous vehicles could be better trusted by the vulnerable road users (such as pedestrians and cyclists) they would interact with in urban traffic environments. It focused on understanding the behaviour of such road users at a deeper level by predicting their future intentions based solely on vehicle-based sensors and AI techniques. The findings showed that the personal and body-language attributes of vulnerable road users, in addition to their past motion trajectories and the physical attributes of the environment, led to more accurate predictions of their intended actions.
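    The core claim above, that fusing body-language cues with motion history improves intent prediction, can be sketched as a simple feature-fusion classifier. Everything below is a hypothetical illustration on synthetic data: the feature names are invented, and a logistic classifier stands in for whatever model the study actually used.

    ```python
    # Hypothetical sketch of feature fusion for intent prediction: concatenate
    # past-trajectory features with body-language (pose) features before
    # classifying. All data are synthetic; the study used vehicle-based sensors.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 300

    # Trajectory features (e.g. recent mean speed, heading change) - invented names
    traj = rng.normal(size=(n, 2))
    # Body-language features (e.g. head orientation, gait cue) - invented names
    pose = rng.normal(size=(n, 2))

    # Synthetic ground truth: crossing intent depends on BOTH feature groups
    logits = 1.5 * traj[:, 0] + 1.5 * pose[:, 0]
    intent = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

    # Compare a trajectory-only model with the fused model
    traj_only = LogisticRegression().fit(traj, intent).score(traj, intent)
    fused_X = np.hstack([traj, pose])
    fused = LogisticRegression().fit(fused_X, intent).score(fused_X, intent)
    print(f"trajectory-only accuracy: {traj_only:.2f}, fused accuracy: {fused:.2f}")
    ```

    Because the synthetic labels depend on a pose feature, the fused model recovers signal the trajectory-only model cannot, mirroring the abstract's finding in miniature.
    
    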

    Computational interaction models for automated vehicles and cyclists

    Cyclists' safety is crucial for a sustainable transport system. Cyclists are considered vulnerable road users because they are not protected by a physical compartment around them. In recent years, passenger car occupants' share of fatalities has been decreasing, but that of cyclists has actually increased. Most of the conflicts between cyclists and motorized vehicles occur at crossings where they cross each other's path. Automated vehicles (AVs) are being developed to increase traffic safety and reduce human errors in driving tasks, including when they encounter cyclists at intersections. AVs use behavioral models to predict other road users' behaviors and then plan their path accordingly. Thus, there is a need to investigate how cyclists interact and communicate with motorized vehicles in conflict scenarios such as unsignalized intersections. This understanding will be used to develop accurate computational models of cyclists' behavior when they interact with motorized vehicles in conflict scenarios.

    The overall goal of this thesis is to investigate how cyclists communicate and interact with motorized vehicles in the specific conflict scenario of an unsignalized intersection. In the first of two studies, naturalistic data was used to model the cyclist's decision whether to yield to a passenger car at an unsignalized intersection. Interaction events were extracted from the trajectory dataset, and cyclists' behavioral cues were added from the sensory data. Both the cyclists' kinematics and visual cues were found to be significant in predicting who crossed the intersection first. The second study used a cycling simulator to acquire in-depth knowledge about cyclists' behavioral patterns as they interacted with an approaching vehicle at the unsignalized intersection. Two independent variables were manipulated across the trials: difference in time to arrival at the intersection (DTA) and visibility condition (field-of-view distance). Results from the mixed-effect logistic model showed that only DTA affected the cyclist's decision to cross before the vehicle. However, increasing the visibility at the intersection reduced the severity of the cyclists' braking profiles. Both studies contributed to the development of computational models of cyclist behavior that may be used to support safe automated driving.

    Future work aims to find differences in cyclists' interactions with different vehicle types, such as passenger cars, taxis, and trucks. In addition, the interaction process may also be evaluated from the driver's perspective by using a driving simulator instead of a riding simulator. This setup would allow us to investigate how drivers respond to cyclists at the same intersection. The resulting data will contribute to the development of accurate predictive models for AVs.
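    The second study's central analysis, predicting whether the cyclist crosses before the vehicle from the DTA, can be sketched as a minimal logistic fit. This is a simplified stand-in for the thesis's mixed-effect logistic model: the per-participant random effects are omitted, the data are purely synthetic, and the effect size is invented.

    ```python
    # Minimal sketch (not the thesis's model): plain logistic regression of the
    # cross-before-the-vehicle decision on DTA, using synthetic data. The real
    # analysis was a mixed-effect logistic model with participant random effects.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200
    # DTA in seconds; positive = cyclist would arrive at the intersection first
    dta = rng.uniform(-3.0, 3.0, n)
    # Synthetic rule: a larger arrival-time margin makes crossing first more likely
    p_cross = 1.0 / (1.0 + np.exp(-2.0 * dta))
    crossed_first = (rng.uniform(size=n) < p_cross).astype(int)

    model = LogisticRegression().fit(dta.reshape(-1, 1), crossed_first)
    # A positive coefficient recovers the qualitative finding: DTA drives the decision
    print(f"DTA coefficient: {model.coef_[0][0]:.2f}")
    ```

    In practice the random-effects structure matters (repeated trials per participant violate the independence assumption of plain logistic regression); statsmodels' Bayesian mixed GLM or an R `glmer` fit would be closer to the thesis's method.
    
    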

    User trust here and now but not necessarily there and then - A Design Perspective on Appropriate Trust in Automated Vehicles (AVs)

    Automation may carry out functions previously conducted only by humans. In the past, interaction with automation was primarily designed for, and used by, users with special training (for example, pilots in aviation or operators in the process industry), but as automation has developed and matured, it has also become available to users with no additional training on automation, such as users of automated vehicles (AVs). However, before we can reap the benefits of AV use, users must first trust the vehicles. According to earlier studies on trust in automation (TiA), user trust is a precondition for the use of automated systems, not only because it is essential to user acceptance but also because it is a prerequisite for a good user experience. Furthermore, user trust must be appropriate in relation to the actual performance of the AV; that is, user trust must be calibrated to the capabilities and limitations of the AV. Otherwise, it may lead to misuse or disuse of the AV.

    The issue of how to design for appropriate user trust was approached from a user-centred design perspective based on earlier TiA theories and was addressed in four user studies using mixed-method research designs. The four studies involved three types of AVs: an automated car, an automated public transport bus, and an automated delivery bot for last-mile deliveries (LMD) of parcels. The users ranged from ordinary car drivers and bus drivers to public transport commuters and logistics personnel.

    The findings show that user trust in the AVs was primarily affected by information relating to the performance of the AV: factors such as how predictable, reliable, and capable the AV was perceived to be in conducting a task, how appropriate the behaviour of the AV was perceived to be for that task, and whether or not the user understood why the AV behaved as it did when conducting the task.

    Secondly, contextual aspects were also found to influence user trust in AVs. These primarily related to the users' perception of risk, for themselves and for others, as well as their perception of task difficulty. That is, user trust was affected by the perceived risk to oneself but also by the possible risks the AV could impose on others, for example other road users. The perception of task difficulty influenced user trust in situations where a task was perceived as (too) easy, where the user could not judge the trustworthiness of the AV, or where the AV increased the task difficulty for the user, thus adding to negative outcomes. Therefore, AV-related trust factors and contextual aspects are important to consider when designing for appropriate user trust in different types of AVs operating in different domains.

    However, a more in-depth cross-study analysis and consequent synthesis found that, when designing for appropriate user trust, the earlier-mentioned factors and aspects should be considered but should not be the focus. They are effects, that is, the user's interpretation of information originating from the behaviour of the AV in a particular context, which in turn is the consequence of the following design variables and the interplay between them: (I) Who, i.e. the AV; (II) What the AV does; (III) by What Means the AV does it; (IV) When the AV does it; (V) Why the AV does it; and (VI) Where the AV does it. Furthermore, user trust was found to be affected by the interdependency between (II) What the AV does and (VI) Where the AV does it; these were always assessed together by the user, in turn affecting user trust. From these findings a tentative Framework of Trust Analysis & Design was developed. The framework can be used as a 'tool-for-thought' and accounts for the activity conducted by the AV, the context, and their interdependence, which ultimately affect user trust.