
    What's happening right now? Passenger Understanding of Highly Automated Shuttle's Minimal Risk Maneuvers by Internal Human-Machine Interfaces

    Remote Operation (RO) is a promising technology that could close the gap between current vehicle automation functionalities and their expected capabilities, especially with regard to the "Unknowable Unknowns" problem of hidden operational design domain (ODD) borders in AI-based highly automated vehicles (HAVs) (Koopman & Wagner, 2017). It is unlikely that vehicle automation systems capable of solving every possible situation they are confronted with will be created in the foreseeable future (Schneider et al., 2023). A possible way to overcome these technological limitations is to incorporate a human operator and benefit from their problem-solving skills in novel situations (Cummings et al., 2020). New communication technologies enable this human support by allowing operators to interact with vehicles remotely and therefore to support numerous vehicles at once (Zhang et al., 2021). On the other hand, unlike in manually driven vehicles, no human driver will then be inside the shuttle to support and inform passengers if necessary (Meurer et al., 2020). The absence of a human driver who can reassure and comfort passengers in these automated systems could lead to novel patterns of passenger insecurity (Meurer et al., 2020). This is particularly relevant in unfamiliar situations with a high level of uncertainty, such as minimal risk maneuvers (MRMs). MRMs are maneuvers in which the vehicle tries to minimize risk, for example by stopping in a safe manner. These controlled stopping maneuvers can be triggered by "Unknowable Unknowns" outside the vehicle's ODD and are likely to lead to passenger confusion and uncertainty (Koo et al., 2015). This confusion may further increase due to the opaque nature of AI-based automation systems, where it is not always clear what the reasoning behind the vehicle's/AI's actions is (Cysneiros et al., 2018).
The difficulty of understanding complicated AI-based systems is in part the result of increasingly complex algorithms (Eschenbach, 2021). One possible scenario is that passengers in automated shuttles will be confronted with a human-machine interface (HMI) that does not depict the system's reasoning behind its actions well enough. As a result, passengers will not be able to explain HAV behavior, especially during MRMs, which will result in lower trust in and acceptance of HAVs in general. In order to better understand a HAV's behavior, the reasoning of its algorithms needs to be made more explainable to the vehicle's users (Schmidt et al., 2021). In part, this may be achieved by giving certain information about the AI's decision making (Guidotti et al., 2018) or by giving examples as explanations for certain behavioral patterns (Cai et al., 2019). To deal with this confusion and uncertainty, HMIs could be utilized to give information regarding these MRMs and to reduce insecurities by increasing the passengers' understanding of the HAV system (Koo et al., 2015; L.F. Penin et al., 2000). Although detailed information about specific algorithms is not central for the individual user, communicating the existence of an AI system together with relevant key information might be sufficient for an informed user and should be incorporated in the HMI (Dahl, 2018). The present study investigates how systemic explanatory transparency via different approaches to the onboard HMI of an automated shuttle bus (ASB) can reduce this uncertainty and lead to a better understanding of the reasoning of the vehicle's AI and of its behavior during an MRM, which might result in higher trust, understanding, and subjective safety (Oliveira et al., 2020).
For this purpose, we designed several interfaces for communication between the HAV and its passengers, with varying degrees of (exemplary) information concerning the situation that led to the MRM and the vehicle's interpretation of that situation. The MRM information consisted of vehicle status, delay times, specific MRM information, and the involvement of a teleoperator. The involvement of a teleoperator was explained as a multi-step process for supporting the vehicle's automation. The MRM information was incorporated into a basic map-based interface that gave information about the ASB's route, the passengers' destinations, the time, and the vehicle's operational status. The resulting HMI variants were presented as pictures in an online questionnaire study and evaluated with regard to understandability and usability. In addition to the varying amounts of given information, different design choices were evaluated as well. The results of the study aim to provide insights into the informational needs of ASB passengers during the performance of MRMs. This research aims to improve future designs of HAV HMIs and to support passenger experiences while using highly automated shuttle buses.

    Chapter 15 Matching Brain–Machine Interface Performance to Space Applications

    A brain-machine interface (BMI) is a particular class of human-machine interface (HMI). BMIs have so far been studied mostly as a communication means for people who have little or no voluntary control of muscle activity. For able-bodied users, such as astronauts, a BMI would only be practical if conceived as an augmenting interface. A method is presented for identifying effective combinations of HMIs and applications of robotics and automation in space. Latency and throughput are selected as performance measures for a hybrid bionic system (HBS), that is, the combination of a user, a device, and an HMI. We classify and briefly describe HMIs and space applications and then compare the performance of classes of interfaces with the requirements of classes of applications, in terms of both latency and throughput. Regions of overlap correspond to effective combinations. Devices requiring simpler control, such as a rover, a robotic camera, or environmental controls, are suitable to be driven by means of BMI technology. Free flyers and other devices with six degrees of freedom can be controlled, but only at low interactivity levels. More demanding applications require conventional interfaces, although they could be controlled by BMIs once the same levels of performance as currently recorded in animal experiments are attained. Robotic arms and manipulators could be the next frontier for noninvasive BMIs. Integrating smart controllers into HBSs could improve interactivity and boost the use of BMI technology in space applications. © 2009 Elsevier Inc. All rights reserved.
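    The overlap-based matching described above can be sketched as a simple filter: each interface class offers a latency and a throughput, each application class sets bounds on both, and a pair is effective when both bounds are met. All numbers and class names below are illustrative assumptions, not the chapter's measured values:

```python
# Hypothetical performance figures: latency in seconds, throughput in bits/s.
interfaces = {
    "noninvasive BMI": {"latency": 1.0, "throughput": 5},
    "invasive BMI": {"latency": 0.2, "throughput": 20},
    "conventional joystick": {"latency": 0.05, "throughput": 100},
}
# Hypothetical application requirements: an upper bound on latency and a
# lower bound on throughput.
applications = {
    "environmental controls": {"max_latency": 2.0, "min_throughput": 2},
    "rover driving": {"max_latency": 1.5, "min_throughput": 4},
    "robotic manipulator": {"max_latency": 0.3, "min_throughput": 15},
}

def effective_pairs(interfaces, applications):
    """Return (interface, application) pairs whose performance regions overlap,
    i.e. the interface meets both the latency and the throughput requirement."""
    return [
        (i, a)
        for i, perf in interfaces.items()
        for a, req in applications.items()
        if perf["latency"] <= req["max_latency"]
        and perf["throughput"] >= req["min_throughput"]
    ]
```

    With these assumed numbers, a noninvasive BMI qualifies for environmental controls but not for a robotic manipulator, mirroring the chapter's qualitative conclusion.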

    Teleoperation [of robots]: techniques, applications, sensory environment, and intelligent teleoperation

    In this work we focus on teleoperated robotic systems, in particular systems teleoperated over the Internet. We present a classification of teleoperation methodologies and of the different control systems, and give an overview of the state of the art in this field.

    Simultaneous Capture and Detumble of a Resident Space Object by a Free-Flying Spacecraft-Manipulator System

    A maneuver to capture and detumble an orbiting space object using a chaser spacecraft equipped with a robotic manipulator is presented. In the proposed maneuver, the capture and detumble objectives are integrated into a unified set of terminal constraints. Terminal constraints on the end-effector's position and velocity ensure a successful capture, and a terminal constraint on the chaser's momenta ensures a post-capture chaser-target system with zero angular momentum. The manipulator motion required to achieve a smooth, impact-free grasp is gradually stopped after capture, equalizing the momenta across all bodies, rigidly connecting the two vehicles, and completing the detumble of the newly formed chaser-target system without further actuation. To guide this maneuver, an optimization-based approach that enforces the capture and detumble terminal constraints, avoids collisions, and satisfies actuation limits is used. The solution to the guidance problem is obtained by solving a collection of convex programming problems, making the proposed guidance approach suitable for onboard implementation and real-time use. This simultaneous capture and detumble maneuver is evaluated through numerical simulations and hardware-in-the-loop experiments. Videos of the numerically simulated and experimentally demonstrated maneuvers are included as Supplementary Material.
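    The idea of encoding the guidance objective as terminal constraints can be illustrated on a toy problem: a 1-D double integrator driven to a prescribed terminal position and velocity by the minimum-energy control sequence, the closed-form solution of the simplest convex instance of such a problem. The dynamics, horizon, and function names below are illustrative simplifications, not the paper's spacecraft-manipulator model, which additionally enforces collision avoidance and actuation limits:

```python
import numpy as np

def terminal_constrained_guidance(x0, v0, xf, vf, N=50, dt=0.1):
    """Minimum-energy control driving a 1-D double integrator from (x0, v0)
    to the terminal constraints (xf, vf) in N steps of length dt."""
    k = np.arange(N)
    # With x_{k+1} = x_k + dt*v_k + 0.5*dt^2*u_k and v_{k+1} = v_k + dt*u_k,
    # the terminal state is linear in the control sequence u:
    #   x_N = x0 + N*dt*v0 + dt^2 * sum_k (N - k - 0.5) * u_k
    #   v_N = v0 + dt * sum_k u_k
    A = np.vstack([dt**2 * (N - k - 0.5), dt * np.ones(N)])
    b = np.array([xf - x0 - N * dt * v0, vf - v0])
    # For this underdetermined system, lstsq returns the minimum-norm u,
    # i.e. the minimum-energy control meeting both terminal constraints.
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u

def simulate(x0, v0, u, dt=0.1):
    """Roll out the double-integrator dynamics under control sequence u."""
    x, v = x0, v0
    for uk in u:
        x += dt * v + 0.5 * dt**2 * uk
        v += dt * uk
    return x, v
```

    Driving the system from rest at x = 0 to rest at x = 1 and simulating the resulting control sequence recovers the target position and velocity, which is the toy analogue of satisfying the paper's capture and detumble constraints simultaneously.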

    Neural Network-Based Control of Networked Trilateral Teleoperation With Geometrically Unknown Constraints
