Trust in an autonomously driven simulator and vehicle performing maneuvers at a T-junction with and without other vehicles
Autonomous vehicle (AV) technology is developing rapidly. Level 3 automation assumes the user might need to respond to requests to retake control. Levels 4 (high automation) and 5 (full automation) do not require human monitoring of the driving task or systems [1]: the AV handles driving functions and makes decisions based on continuously updated information. A gradual shift in the role of the human within the vehicle from active controller to passive passenger comes with uncertainty in terms of trust, which will likely be a key barrier to acceptability, adoption and continued use [2]. Few studies have investigated trust in AVs, and these have tended to use driving simulators with Level 3 automation [3, 4]. The current study used both a driving simulator and an autonomous road vehicle. Both operated at Level 3 autonomy, although neither required intervention from the user, much like Level 4 systems. Forty-six participants completed road circuits (UK-based) with both platforms. Trust was measured immediately after different types of turns at a priority T-junction, increasing in complexity: e.g., turning left or right out of a T-junction; turning right into a T-junction; the presence of oncoming/crossing vehicles. Trust was high across platforms: higher in the simulator for some events and higher in the road AV for others. Generally, and often irrespective of platform, trust was higher for turns involving oncoming/crossing vehicles than for turns without traffic, possibly because the turn felt more controlled: the simulator and road AVs always yielded, resulting in a delayed maneuver. We also found multiple positive relationships between trust in automation and technology, and trust ratings for most T-junction turn events across platforms. The assessment of trust was successful, and the novel findings are important to those designing, developing and testing AVs with users in mind.
Undertaking a trial of this scale is complex, and caution should be exercised about over-generalizing the findings.
Theoretical considerations and development of a questionnaire to measure trust in automation
The increasing number of interactions with automated systems has sparked the interest of researchers in trust in automation because it predicts not only whether but also how an operator interacts with an automation. In this work, a theoretical model of trust in automation is established and the development and evaluation of a corresponding questionnaire (Trust in Automation, TiA) are described.
Building on the model of organizational trust by Mayer, Davis, and Schoorman (1995) and the theoretical account by Lee and See (2004), a model of trust in automation containing six underlying dimensions was established. Following a deductive approach, an initial set of 57 items was generated. In a first online study, these items were analyzed and, based on the criteria of item difficulty, standard deviation, item-total correlation, internal consistency, content overlap with other items, and response rate, 38 items were eliminated and two scales were merged, leaving six scales (Reliability/Competence, Understandability/Predictability, Propensity to Trust, Intention of Developers, Familiarity, and Trust in Automation) containing a total of 19 items.
The internal structure of the resulting questionnaire was analyzed in a subsequent second online study by means of an exploratory factor analysis. The results provide sufficient preliminary evidence for the proposed factor structure and demonstrate that further pursuit of the model is reasonable, although certain revisions may be necessary. The calculated omega coefficients indicated good to excellent reliability for all scales. The results also provide evidence for the questionnaire's criterion validity: consistent with expectations, an unreliable automated driving system received lower trust ratings than a reliably functioning system. In a subsequent empirical driving simulator study, trust ratings predicted reliance on an automated driving system and monitoring in the form of gaze behavior. Possible steps for revisions are discussed and recommendations for the application of the questionnaire are given.