    Modeling and "smart" prototyping human-in-the-loop interactions for AmI environments

    Autonomous capabilities are required in AmI environments in order to adapt systems to new environmental conditions and situations. However, keeping the human in the loop and in control of such systems is still necessary: the diversity of systems, domains, environments, context situations, and social and legal constraints makes full autonomy unrealistic in the short or medium term. Human-system integration introduces a significant number of challenges and problems that have to be solved. On the one hand, humans must be able to interact with systems even when the attentional, cognitive, and physical resources available for the interaction are limited. On the other hand, systems must avoid overwhelming the user with unnecessary actions. Therefore, appropriate user-centered methods for AmI development should be used to help designers analyze and design human-in-the-loop interactions in AmI environments. This paper presents a user-centered design method that defines a process, together with a set of tools and techniques supporting its steps, to systematically design, prototype, and validate human-in-the-loop (HiL) solutions. The process starts with the definition of the HiL design, which specifies how the system cooperates with the human. This HiL design is built using a conceptual framework focused on achieving human-system interactions that capture human attention while avoiding obtrusiveness. We then provide a software infrastructure that generates a prototype from the HiL design and validates it by having end users exercise it in a web simulator. The feedback data generated during this user validation is gathered and used by a machine learning tool that infers the user's needs and preferences. Finally, these inferences are used to automatically enhance the human-in-the-loop designs and prototypes. We have validated the proposed method from a twofold perspective: an experiment analyzing interaction designers' acceptance of the design method, and another experiment evaluating the usefulness of the "smart" prototyping technique. The results indicate that designers find the proposed method acceptable and that the "smart" prototyping technique provides useful adaptations toward a HiL design that fits users' preferences and needs. This work has been developed with the financial support of the Spanish State Research Agency and the Generalitat Valenciana under the projects TIN2017-84094-R and AICO/2019/009, and co-financed with ERDF.
    Gil, M.; Albert Albiol, M.; Fons Cors, J.; Pelechano Ferragud, V. (2022). Modeling and "smart" prototyping human-in-the-loop interactions for AmI environments. Personal and Ubiquitous Computing. 26:1413-1444. https://doi.org/10.1007/s00779-020-01508-x
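    The closing step of the abstract above describes a feedback loop: prototype usage data is fed to a machine learning tool that infers user preferences, which are then used to adapt the HiL design. The sketch below only illustrates that kind of loop; the feature names, labels, and classifier choice are assumptions, not the actual tool described in the paper.

```python
# Minimal sketch of a "smart" prototyping feedback loop of the kind described
# above. Feature names, labels, and the classifier are illustrative assumptions,
# not the tool from Gil et al. (2022).
from sklearn.tree import DecisionTreeClassifier

# Feedback rows logged by a (hypothetical) web simulator:
# [attention_level, cognitive_load, hands_free] -> preferred interaction mode
feedback_features = [
    [0.9, 0.2, 1.0],   # attentive, low load, hands free
    [0.3, 0.8, 0.0],   # distracted, high load, hands busy
    [0.5, 0.5, 1.0],
    [0.2, 0.9, 0.0],
]
preferred_mode = ["notify_now", "defer", "notify_now", "defer"]

# Infer a simple preference model from the validation feedback ...
model = DecisionTreeClassifier(max_depth=2).fit(feedback_features, preferred_mode)

# ... and use it to adapt the next HiL prototype: decide how obtrusive the
# system should be in a new context before asking the human to intervene.
new_context = [[0.4, 0.7, 0.0]]
print(model.predict(new_context))   # e.g. ['defer'] -> lower obtrusiveness
```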

    User expectations of partial driving automation capabilities and their effect on information design preferences in the vehicle

    Partially automated vehicles present interface design challenges in ensuring the driver remains alert should the vehicle need to hand back control at short notice, but without exposing the driver to cognitive overload. To date, little is known about driver expectations of partial driving automation and whether these affect the information they require inside the vehicle. Twenty-five participants were presented with five partially automated driving events in a driving simulator. After each event, a semi-structured interview was conducted. The interview data were coded and analysed using grounded theory. From the results, two groupings of driver expectations were identified: High Information Preference (HIP) and Low Information Preference (LIP) drivers; the information preferences of these two groups differed. LIP drivers did not want detailed information about the vehicle presented to them, but the definition of partial automation means that this kind of information is required for safe use. Hence, the results suggest that careful thought about how this information is presented is required for LIP drivers to use partial driving automation safely. Conversely, HIP drivers wanted detailed information about the system's status and driving and were found to be more willing to work with the partial automation and its current limitations. It was evident that the drivers' expectations of the partial automation capability differed, and this affected their information preferences. Hence, this study suggests that HMI designers must account for these differing expectations and preferences to create a safe, usable system that works for everyone. [Abstract copyright: Copyright © 2019 The Authors. Published by Elsevier Ltd. All rights reserved.]
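    As a purely illustrative note on the design implication above, the HIP/LIP split could be mapped to tiered HMI content, with the safety-critical subset always retained for LIP drivers. The item names below are invented for the sketch and are not taken from the study.

```python
# Illustrative sketch only: mapping the HIP/LIP grouping reported above to HMI
# information detail. The item names are invented; the paper itself argues that
# LIP drivers still need the safety-critical subset.
SAFETY_CRITICAL = ["automation_status", "takeover_request"]
DETAILED = SAFETY_CRITICAL + ["sensor_confidence", "detected_objects", "system_limits"]

def select_hmi_items(preference_group: str) -> list:
    """Return the information items to display for a driver preference group."""
    if preference_group == "HIP":      # High Information Preference
        return DETAILED
    # LIP drivers prefer less detail, but safe use still requires the critical items.
    return SAFETY_CRITICAL

print(select_hmi_items("LIP"))
```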

    Driver Trust in Automated Driving Systems

    Vehicle automation is a prominent example of safety-critical AI-based task automation. Recent digital innovations have led to the introduction of partial vehicle automation, which can already give vehicle drivers a sense of what fully automated driving would feel like. In the context of current, imperfect vehicle automation, establishing an appropriate level of driver trust in automated driving systems (ADS) is seen as a key factor for their safe use and long-term acceptance. This paper thoroughly reviews and synthesizes the literature on driver trust in ADS, covering a wide range of academic disciplines. Pulling together knowledge on trustful user interaction with ADS, it offers a first classification of the main trust calibrators. Guided by this analysis, the paper identifies a lack of studies on adaptive, contextual trust calibration, in contrast to the numerous studies that focus on general trust calibration.

    Analysis of Disengagements in Semi-Autonomous Vehicles: Drivers’ Takeover Performance and Operational Implications

    This report analyzes the reactions of human drivers placed in simulated Autonomous Technology disengagement scenarios. The study was executed in a human-in-the-loop setting, within a high-fidelity integrated car simulator capable of handling both manual and autonomous driving. A population of 40 individuals was tested, with control takeover quantified by: i) response times (considering inputs of steering, throttle, and braking); ii) vehicle drift from the lane centerline after takeover, as well as overall (integral) drift over an S-turn curve compared to a baseline obtained in manual driving; and iii) accuracy metrics to quantify human factors associated with the simulation experiment. Independent variables considered for the study were the age of the driver, the speed at the time of disengagement, and the time at which the disengagement occurred (i.e., how long automation had been engaged). The study shows that changes in vehicle speed significantly affect all the variables investigated, pointing to the importance of setting thresholds for the maximum operational speed of vehicles driven in autonomous mode when the human driver serves as back-up. The results show that establishing such an operational threshold could reduce the maximum drift and lead to better control during takeover, perhaps warranting a lower speed limit than for conventional vehicles. With regard to the age variable, neither the response-time analysis nor the drift analysis provides support for any claim to limit the age of drivers of semi-autonomous vehicles.
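    For concreteness, the takeover metrics listed above (response time, maximum drift from the lane centerline, and overall integral drift) can be computed from a logged simulator trace roughly as in the sketch below. The sampling rate, signal layout, input threshold, and synthetic signals are assumptions, not the study's actual pipeline.

```python
# Rough sketch of takeover metrics of the kind used above, computed from a
# logged trace. Sampling rate, input threshold, and signals are assumptions.
import numpy as np

dt = 0.01                                          # 100 Hz simulator log (assumed)
t = np.arange(0, 5, dt)                            # seconds after disengagement
steering = np.where(t > 1.2, 0.15, 0.0)            # driver input (synthetic example)
lateral_offset = 0.4 * np.exp(-t) * np.sin(2 * t)  # metres from lane centerline

# i) response time: first sample where driver input exceeds a threshold
threshold = 0.05
response_time = t[np.argmax(np.abs(steering) > threshold)]

# ii) drift metrics: peak offset and overall (integral) drift over the segment
max_drift = np.max(np.abs(lateral_offset))
integral_drift = np.sum(np.abs(lateral_offset)) * dt   # simple rectangle rule

print(f"response time {response_time:.2f} s, "
      f"max drift {max_drift:.2f} m, integral drift {integral_drift:.2f} m*s")
```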

    A proposed psychological model of driving automation

    This paper considers psychological variables pertinent to driving automation. It is anticipated that driving with automated systems is likely to have a major impact on drivers, and a multiplicity of factors needs to be taken into account. A systems analysis of the driver, vehicle, and automation served as the basis for eliciting psychological factors. The main variables considered were: feedback, locus of control, mental workload, driver stress, situational awareness, and mental representations. It is expected that anticipating the effects of vehicle automation on the driver could lead to improved design strategies. Based on research evidence in the literature, the psychological factors were assembled into a model for further investigation.

    Automated driving: A literature review of the take over request in conditional automation

    This article belongs to the Special Issue Autonomous Vehicles Technology. In conditional automation (level 3), human drivers can hand over the Dynamic Driving Task (DDT) to the Automated Driving System (ADS) and need only be ready to resume control in emergency situations, allowing them to engage in non-driving related tasks (NDRT) whilst the vehicle operates within its Operational Design Domain (ODD). Outside the ODD, a safe transition from the ADS-engaged mode to manual driving should be initiated by the system through the issue of an appropriate Take Over Request (TOR). In this case, the driver's state plays a fundamental role, as a low attention level might increase the driver's reaction time when taking over control of the vehicle. This paper summarizes and analyzes previously published works in the field of conditional automation and the TOR process. It introduces the topic in the appropriate context, describing a variety of concerns associated with the TOR. It also provides theoretical foundations on implemented designs and reports on concrete examples targeted at designers and the general public. Moreover, it compiles guidelines and standards related to automation in driving, highlights the research gaps that need to be addressed in future research, discusses approaches and limitations, and provides conclusions. This work was funded by the Austrian Ministry for Climate Action, Environment, Energy, Mobility, Innovation, and Technology (BMK) Endowed Professorship for Sustainable Transport Logistics 4.0; the Spanish Ministry of Economy, Industry and Competitiveness under the TRA201563708-R and TRA2016-78886-C3-1-R projects; and open access funding by the Johannes Kepler University Linz.
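    The ODD exit and TOR transition summarized above can be pictured as a small state machine. The states, time budget, and fallback action in this sketch are simplified assumptions rather than a normative implementation of level 3 behaviour.

```python
# Minimal sketch of the level-3 transition logic summarised above.
# States, timing budget, and the fallback action are simplified assumptions.
from enum import Enum, auto

class Mode(Enum):
    ADS_ENGAGED = auto()        # system performs the DDT inside its ODD
    TOR_ISSUED = auto()         # take over request active, driver must respond
    MANUAL = auto()             # driver has resumed control
    MINIMAL_RISK = auto()       # fallback if the driver does not respond in time

TOR_BUDGET_S = 10.0             # assumed time budget for the driver to respond

def step(mode: Mode, inside_odd: bool, driver_has_control: bool,
         seconds_since_tor: float) -> Mode:
    if mode is Mode.ADS_ENGAGED and not inside_odd:
        return Mode.TOR_ISSUED
    if mode is Mode.TOR_ISSUED:
        if driver_has_control:
            return Mode.MANUAL
        if seconds_since_tor > TOR_BUDGET_S:
            return Mode.MINIMAL_RISK
    return mode

print(step(Mode.ADS_ENGAGED, inside_odd=False, driver_has_control=False,
           seconds_since_tor=0.0))   # -> Mode.TOR_ISSUED
```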

    How can humans understand their automated cars? HMI principles, problems and solutions

    As long as vehicles do not provide full automation, the design and function of the Human Machine Interface (HMI) is crucial for ensuring that the human “driver” and the vehicle-based automated systems collaborate in a safe manner. When the driver is decoupled from active control, the design of the HMI becomes even more critical. Without mutual understanding, the two agents (human and vehicle) will fail to accurately comprehend each other’s intentions and actions. This paper proposes a set of design principles for in-vehicle HMI and reviews some current HMI designs in the light of those principles. We argue that in many respects, current designs fall short of best practice and have the potential to confuse the driver. This can lead to a mismatch between how well the automation is actually handling the current external situation and the driver’s awareness of how well it is being handled. A model illustrating how the various principles are interrelated is proposed. Finally, recommendations are made on how, building on each principle, HMI design solutions can be adopted to address these challenges.
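    As a minimal illustration of the mutual-understanding argument above, one of the simplest checks an HMI can make is whether the mode it displays still matches the mode the automation is actually in. The state names and the check itself are assumptions for the sketch, not one of the paper's principles verbatim.

```python
# Illustrative sketch of mode awareness: flag a mismatch between what the
# automation is actually doing and what the HMI shows the driver.
def mode_mismatch(actual_state: str, displayed_state: str) -> bool:
    """True when the HMI shows a different automation mode than is active."""
    return actual_state != displayed_state

# Example: automation has silently dropped to assisted mode, but the display
# still shows full lateral + longitudinal control engaged.
if mode_mismatch("assisted", "engaged"):
    print("HMI alert: displayed automation mode is out of date")
```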