
    Implications of Personality on Cognitive Workload, Affect, and Task Performance in Remote Robot Control

This paper explores how the personality traits of robot operators influence their task performance during remote robot control. It is essential to examine how personal dispositions affect information processing, both directly and indirectly, when operators work with robots on specific tasks. To investigate this relationship, we use the open-access multi-modal dataset MOCAS to examine operators' personality traits, affect, cognitive load, and task performance. Our objective is to determine whether personality traits have a total effect, comprising both direct and indirect effects, that significantly impacts operator performance. Specifically, we examine the relationship between task performance and personality traits such as extroversion, conscientiousness, and agreeableness. We conduct a correlation analysis between cognitive load, self-ratings of workload and affect, and quantified individual personality traits along with their experimental scores. The findings show that personality traits do not have a total effect on task performance.
    Comment: 8 pages, 6 figures, accepted to IROS 2023. A link to a supplementary video is in the abstract.
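    For readers who want to reproduce this style of analysis, here is a minimal sketch of a trait-by-outcome correlation pass in Python, assuming a hypothetical CSV export of per-participant MOCAS scores; the file name and column names are illustrative, not the dataset's actual schema.

```python
# A minimal sketch of a correlation analysis between personality traits and
# task/workload outcomes. The CSV file and column names are assumptions made
# for illustration, not the MOCAS dataset's actual schema.
import pandas as pd
from scipy import stats

df = pd.read_csv("mocas_participants.csv")  # hypothetical per-participant export

trait_cols = ["extroversion", "conscientiousness", "agreeableness"]
outcome_cols = ["cognitive_load", "workload_self_rating", "task_performance"]

for trait in trait_cols:
    for outcome in outcome_cols:
        r, p = stats.pearsonr(df[trait], df[outcome])
        print(f"{trait} vs {outcome}: r={r:.2f}, p={p:.3f}")
```

    A pass like this yields one correlation coefficient and p-value per trait-outcome pair, which is the kind of evidence the paper uses to argue that no total effect on task performance is present.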

    Beacon-based Distributed Structure Formation in Multi-agent Systems

Autonomous shape and structure formation is an important problem in the domain of large-scale multi-agent systems. In this paper, we propose a 3D structure representation method and a distributed structure formation strategy in which settled agents guide free-moving agents to prescribed locations where they settle into the structure. Agents at the structure formation frontier that are looking for neighbors to settle act as beacons, generating a surface gradient that settled agents propagate throughout the formed structure. Free-moving agents follow the surface gradient along the structure's surface to the formation frontier, where they reach the closest beacon and settle to continue the structure formation, following a local bidding process. Agent behavior is governed by a finite state machine implementation together with potential field-based motion control laws. We also discuss appropriate rules for recovering from stagnation points. Simulation experiments show planar and 3D structure formations with continuous and discontinuous boundaries/surfaces, validating the proposed strategy, followed by a scalability analysis.
    Comment: 8 pages, 6 figures, accepted for publication in IROS 2023. A link to the simulation videos is provided under the Validation section.
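    As an illustration of the agent behavior described above, the following is a minimal sketch of a possible finite state machine; the state names, transition rules, and boolean sensing inputs are assumptions for illustration, not the authors' implementation.

```python
# An illustrative sketch of an agent FSM for beacon-guided structure
# formation. States and transitions are assumed for illustration only.
from enum import Enum, auto

class State(Enum):
    FREE_MOVING = auto()  # follow the surface gradient toward the frontier
    BIDDING = auto()      # compete in the local bidding process at a beacon
    SETTLED = auto()      # part of the structure; propagate the gradient
    BEACON = auto()       # frontier agent recruiting free-moving neighbors

def step(state: State, near_beacon: bool, won_bid: bool, at_frontier: bool) -> State:
    """One FSM update; the boolean inputs stand in for local sensing."""
    if state is State.FREE_MOVING and near_beacon:
        return State.BIDDING
    if state is State.BIDDING:
        return State.SETTLED if won_bid else State.FREE_MOVING
    if state is State.SETTLED and at_frontier:
        return State.BEACON
    return state
```

    In this reading, the potential field-based motion control laws would drive the FREE_MOVING behavior, while the FSM decides when an agent switches from moving to bidding, settling, and beaconing.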

    Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition

Human state recognition is a critical topic with pervasive and important applications in human-machine systems. Multi-modal fusion, the combination of metrics from multiple data sources, has been shown to be a sound method for improving recognition performance. However, while recent multi-modal-based models have reported promising results, they generally fail to leverage sophisticated fusion strategies that model sufficient cross-modal interactions when producing the fusion representation; instead, current methods rely on lengthy and inconsistent data preprocessing and feature crafting. To address this limitation, we propose an end-to-end multi-modal transformer framework for multi-modal human state recognition called Husformer. Specifically, we propose to use cross-modal transformers, which inspire one modality to reinforce itself through directly attending to latent relevance revealed in other modalities, to fuse different modalities while ensuring sufficient awareness of the cross-modal interactions introduced. Subsequently, we utilize a self-attention transformer to further prioritize contextual information in the fusion representation. Using these two attention mechanisms enables effective and adaptive adjustment to noise and interruptions in multi-modal signals during the fusion process and with respect to high-level features. Extensive experiments on two human emotion corpora (DEAP and WESAD) and two cognitive workload datasets (MOCAS and CogLoad) demonstrate that our Husformer outperforms both state-of-the-art multi-modal baselines and the use of a single modality by a large margin in human state recognition, especially when dealing with raw multi-modal signals. We also conduct an ablation study to show the benefits of each component in Husformer.
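    To make the cross-modal attention idea concrete, here is a minimal PyTorch sketch in which one modality's representation is reinforced by attending to another; the dimensions, layer layout, and modality names are assumptions for illustration, not the released Husformer code.

```python
# An illustrative cross-modal attention block: queries come from the target
# modality, keys/values from the source, so the target is updated with latent
# relevance found in the source. A sketch, not the authors' implementation.
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(query=target, key=source, value=source)
        return self.norm(target + fused)  # residual keeps the target's content

# Two hypothetical physiological streams, batch of 8, 50 timesteps, dim 64.
eeg = torch.randn(8, 50, 64)
gsr = torch.randn(8, 50, 64)
fused = CrossModalBlock()(eeg, gsr)  # EEG representation reinforced by GSR
```

    A self-attention transformer over the concatenated fused representations would then play the role of the second stage the abstract describes, prioritizing contextual information before classification.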

    Toward Personalized Tour-Guide Robot: Adaptive Content Planner based on Visitor’s Engagement

In the evolving landscape of human-robot interaction, tour-guide robots are increasingly being integrated into various settings. However, the existing paradigm for these robots relies heavily on pre-recorded content, which limits effective engagement with visitors. We propose to address this issue by transforming tour-guide robots into dynamic, adaptable companions that cater to individual visitor needs and preferences. Our primary objective is to enhance visitor engagement during tours through a robotic system capable of assessing and reacting to visitor preferences and engagement. Leveraging these data, the system can calibrate and adapt the tour-guide robot's content in real time to match individual visitor preferences. Through this research, we aim to increase tour-guide robots' impact in delivering engaging, personalized visitor experiences by providing an adaptive tour-guide robot that can learn visitors' preferences and adapt its behavior on its own.
    Peer Reviewed. Published version: http://deepblue.lib.umich.edu/bitstream/2027.42/192072/1/Lin et al. 2024.pdf
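    As a toy illustration of the adaptive content-planning loop, the sketch below selects the next tour segment whose depth best matches an estimated engagement score; the scoring rule and content fields are invented for illustration and are not the authors' learned policy.

```python
# A toy content planner: low engagement favors short, light segments; high
# engagement favors deeper content. The fields and rule are assumptions.
def next_segment(segments: list[dict], engagement: float) -> dict:
    """Pick the segment whose 'depth' (0..1) is closest to engagement (0..1)."""
    return min(segments, key=lambda s: abs(s["depth"] - engagement))

segments = [
    {"title": "Quick highlight", "depth": 0.2},
    {"title": "Story behind the artifact", "depth": 0.5},
    {"title": "Technical deep dive", "depth": 0.9},
]
print(next_segment(segments, engagement=0.3)["title"])  # -> Quick highlight
```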

    Spot Report: An Open-Source and Real-Time Secondary Task for Human-Robot Interaction User Experiments

The human-robot interaction (HRI) community is interested in a range of research questions, many of which are investigated through user experiments. Robots that only occasionally require human input allow humans to engage in secondary tasks. However, few secondary tasks transmit data in real time and are openly available, which hinders interaction with the primary task and limits the community's ability to build upon others' research. Subject matter experts also identified the need for a secondary task relevant to the military. To address these concerns, this paper presents the spot report task as an open-source secondary task with real-time communication for use in HRI experiments. The spot report task requires counting target objects in static images. This paper includes details of the spot report task and its real-time communication with a primary task. We developed the spot report task with the military domain in mind, but the software architecture is domain-independent. We hope others can leverage the spot report task in their own user experiments.
    Supported by the Automotive Research Center (ARC) under Cooperative Agreement W56HZV-19-2-0001 with the U.S. Army DEVCOM Ground Vehicle Systems Center (GVSC), Warren, MI.
    Peer Reviewed. Final paper: http://deepblue.lib.umich.edu/bitstream/2027.42/192466/1/HRI_LBR_2024_FINAL (1).pdf
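    To illustrate what real-time secondary-to-primary communication can look like, here is a minimal sketch that sends each spot report count as JSON over UDP; the message fields, port, and transport are assumptions rather than the released software's actual protocol.

```python
# An illustrative real-time link from a secondary task to a primary task:
# each counted image produces one JSON datagram. Endpoint, fields, and the
# choice of UDP are assumptions, not the spot report task's real protocol.
import json
import socket
import time

PRIMARY_TASK_ADDR = ("127.0.0.1", 9870)  # hypothetical primary-task endpoint

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def report(image_id: str, target_count: int) -> None:
    """Send one spot report to the primary task as it is entered."""
    msg = {"image": image_id, "count": target_count, "t": time.time()}
    sock.sendto(json.dumps(msg).encode(), PRIMARY_TASK_ADDR)

report("scene_017.png", target_count=4)  # operator counted 4 targets
```

    A connectionless transport like this keeps the secondary task from blocking the primary task, which matters when the same operator is juggling both.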