4,748 research outputs found

    A Classification Model for Sensing Human Trust in Machines Using EEG and GSR

    Today, intelligent machines interact and collaborate with humans in a way that demands a greater level of trust between human and machine. A first step towards building intelligent machines that are capable of building and maintaining trust with humans is the design of a sensor that will enable machines to estimate human trust level in real-time. In this paper, two approaches for developing classifier-based empirical trust sensor models are presented that specifically use electroencephalography (EEG) and galvanic skin response (GSR) measurements. Human subject data collected from 45 participants is used for feature extraction, feature selection, classifier training, and model validation. The first approach considers a general set of psychophysiological features across all participants as the input variables and trains a classifier-based model for each participant, resulting in a trust sensor model based on the general feature set (i.e., a "general trust sensor model"). The second approach considers a customized feature set for each individual and trains a classifier-based model using that feature set, resulting in improved mean accuracy but at the expense of an increase in training time. This work represents the first use of real-time psychophysiological measurements for the development of a human trust sensor. Implications of the work, in the context of trust management algorithm design for intelligent machines, are also discussed.
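
    As a rough illustration of the two approaches described in this abstract, the sketch below trains a per-participant classifier once on a full (general) feature set and once with per-participant feature selection. The synthetic data, feature layout, and choice of classifier are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: per-participant trust classifiers trained on
# hypothetical EEG/GSR features, loosely following the two approaches in the
# abstract. The synthetic data, feature layout, and classifier choice are
# assumptions, not the authors' implementation.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def general_model_accuracy(X, y):
    """Approach 1: the same (general) feature set for every participant."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(model, X, y, cv=5).mean()

def customized_model_accuracy(X, y, k=10):
    """Approach 2: per-participant feature selection before training."""
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=k),
                          SVC(kernel="rbf"))
    return cross_val_score(model, X, y, cv=5).mean()

# Synthetic stand-in for one participant's epoch-level features
# (e.g., EEG band powers and GSR statistics) with binary trust labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 2, size=200)
print(general_model_accuracy(X, y), customized_model_accuracy(X, y))
```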

    Towards the Safety of Human-in-the-Loop Robotics: Challenges and Opportunities for Safety Assurance of Robotic Co-Workers

    The success of the human-robot co-worker team in a flexible manufacturing environment where robots learn from demonstration heavily relies on the correct and safe operation of the robot. How this can be achieved is a challenge that requires addressing both technical and human-centric research questions. In this paper, we discuss the state of the art in safety assurance, existing as well as emerging standards in this area, and the need for new approaches to safety assurance in the context of learning machines. We then focus on robotic learning from demonstration, discuss the challenges these techniques pose to safety assurance, and indicate opportunities to integrate safety considerations into algorithms "by design". Finally, from a human-centric perspective, we stipulate that, to achieve high levels of safety and ultimately trust, the robotic co-worker must meet the innate expectations of the humans it works with. It is our aim to stimulate a discussion focused on the safety aspects of human-in-the-loop robotics, and to foster multidisciplinary collaboration to address the research challenges identified

    Biocybernetic Adaptation Strategies: Machine awareness of human state for improved operational performance

    Human operators interacting with machines or computers continually adapt to the needs of the system, ideally resulting in optimal performance. In some cases, however, the outcome is deteriorated performance. Adaptation to the situation is a strength expected of the human operator, often accomplished through self-regulation of mental state. Adaptation is at the core of the human operator's activity, and research has demonstrated that the implementation of a feedback loop can enhance this natural skill to improve training and human/machine interaction. Biocybernetic adaptation involves a "loop upon a loop," which may be visualized as a superimposed loop that senses a physiological signal and influences the operator's task at some point. Biocybernetic adaptation, in physiologically adaptive automation for example, employs the "steering" sense of "cybernetic" and serves a transitory adaptive purpose: to better serve the human operator by more fully representing their responses to the system. The adaptation process usually makes use of an assessment of transient cognitive state to steer a functional aspect of a system that is external to the operator's physiology from which the state assessment is derived. Therefore, the objective of this paper is to detail the structure of biocybernetic systems regarding the level of engagement of interest for adaptive systems, their processing pipeline, and the adaptation strategies employed for training purposes, in an effort to pave the way towards machine awareness of human state for self-regulation and improved operational performance
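
    A minimal sketch of such a superimposed loop is given below, assuming a hypothetical engagement index computed from EEG band powers (the widely cited beta / (alpha + theta) ratio) and a task whose difficulty can be adjusted; the thresholds, step size, and band-power values are illustrative only.

```python
# Minimal sketch of a biocybernetic ("loop upon a loop") adaptation cycle.
# The engagement index beta / (alpha + theta) is a commonly cited EEG ratio;
# the thresholds, step size, and band-power values are illustrative only.
from dataclasses import dataclass

@dataclass
class Task:
    difficulty: float = 0.5  # 0 = easiest, 1 = hardest

def engagement_index(alpha: float, beta: float, theta: float) -> float:
    """Engagement ratio computed from EEG band powers."""
    return beta / (alpha + theta + 1e-9)

def adapt(task: Task, index: float, low: float = 0.4, high: float = 1.2,
          step: float = 0.05) -> None:
    """Superimposed loop: steer task difficulty from the operator's state."""
    if index < low:      # low engagement -> make the task more demanding
        task.difficulty = min(1.0, task.difficulty + step)
    elif index > high:   # high engagement/effort -> reduce task demand
        task.difficulty = max(0.0, task.difficulty - step)

task = Task()
for alpha, beta, theta in [(6.0, 2.0, 5.0), (4.0, 7.0, 2.0), (5.0, 3.0, 4.0)]:
    adapt(task, engagement_index(alpha, beta, theta))
    print(round(task.difficulty, 2))
```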

    Model-Based Assessment of Adaptive Automation's Unintended Consequences

    Recent technological advances require the development of human-centered principles for their inclusion in complex systems. While such programs incorporate revolutionary hardware and software advances, there is a necessary space for human operator design considerations, such as cognitive workload. As technologies mature, it is essential to understand the impacts that these emerging systems will have on cognitive workload. Adaptive automation is a solution that seeks to manage cognitive workload at optimal levels. Human performance modeling shows potential for modeling the effects of adaptive automation on cognitive workload. However, the introduction of adaptive automation into a system can also present unintended negative consequences for an operator. This dissertation investigated potential negative unintended consequences of adaptive automation through the development of human performance models of a multi-tasking simulation. One hundred twenty participants were enrolled in three human-in-the-loop experimental studies (forty participants each) that collected objective and subjective surrogate measures of cognitive workload to validate the models. Results from this research indicate that there are residual increases in operator workload after transitions between manual and automatic control of a task, and that these need to be included in human performance models and in system design considerations.
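
    The residual-workload finding can be pictured with a toy model (not the dissertation's model) in which each switch between manual and automatic control adds a transient component on top of the mode's baseline workload; the baselines, residual magnitude, and decay constant below are assumed values.

```python
# Toy illustration (not the dissertation's model) of a residual workload
# component left by a transition between manual and automatic control.
# Baselines, residual size, and decay constant are assumed values.
import math

BASELINE = {"manual": 60.0, "auto": 30.0}  # nominal workload, arbitrary units
RESIDUAL = 15.0                            # extra workload added at a switch
DECAY_S = 20.0                             # residual time constant, seconds

def workload(t: float, mode: str, last_switch: float) -> float:
    """Baseline for the current mode plus an exponentially decaying residual."""
    return BASELINE[mode] + RESIDUAL * math.exp(-(t - last_switch) / DECAY_S)

# Operator switches from automatic back to manual control at t = 100 s.
for t in (100, 110, 140, 200):
    print(t, round(workload(t, "manual", last_switch=100.0), 1))
```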

    Incorporating Cognitive Neuroscience Techniques to Enhance User Experience Research Practices

    User Experience (UX) involves every interaction that customers have with products, and it plays a crucial role in determining the success of a product in the market. While there are numerous methods available in the literature for assessing UX, they often overlook the emotional aspect of the user's experience. As a result, cognitive neuroscience methods are gaining popularity, but they have certain limitations, such as difficulty in collecting neurophysiological data, potential for errors, and lengthy procedures. This article aims to examine the most effective research practices using cognitive neuroscience techniques and develop a standardized procedure for conducting UX research. To achieve this objective, the study conducts a comprehensive review of UX research published between 2017 and 2022 that employs cognitive neuroscience methods

    Disentangling Emotional and Cognitive Factors of Escalation of Commitment: Evidence for a Psychophysiological Link

    Escalation of Commitment (EoC) - the tendency to persist with failing courses of action - can determine whether a distressed Information Systems (IS) project can be turned around. To disentangle the emotional and cognitive factors that give rise to EoC, we conducted a between-subjects randomized controlled laboratory experiment with 75 Master's, MBA, and Ph.D. students, with data triangulation between neurophysiological and behavioral measures. This study successfully replicates the EoC bias in the context of IS project distress, provides evidence for a psychophysiological link, supports the predictions of self-justification theory over coping theory regarding the role of negative and complex emotional states, and adds to a better understanding of how escalation tendency changes over time due to learning effects. Our findings contribute to enhancing decision-making in uncertain environments by using cognitive and emotional markers and thereby provide the foundation for developing neuro-adaptive de-escalation strategies
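
    Purely as an illustration of how such markers might feed a neuro-adaptive de-escalation aid, the sketch below flags escalation risk when a hypothetical sunk-cost ratio coincides with an elevated arousal marker; the signal names and thresholds are assumptions, not the study's method.

```python
# Purely illustrative sketch, not the study's method: flag potential
# escalation-of-commitment risk when a hypothetical sunk-cost signal
# coincides with an elevated GSR-based arousal marker.
from dataclasses import dataclass

@dataclass
class DecisionContext:
    sunk_cost_ratio: float  # spent budget / total budget (assumed input)
    arousal_z: float        # standardized arousal marker, e.g. from GSR

def deescalation_prompt(ctx: DecisionContext,
                        cost_thresh: float = 0.7,
                        arousal_thresh: float = 1.5) -> bool:
    """Suggest a de-escalation intervention when high sunk cost coincides
    with an elevated emotional-arousal marker."""
    return ctx.sunk_cost_ratio > cost_thresh and ctx.arousal_z > arousal_thresh

print(deescalation_prompt(DecisionContext(sunk_cost_ratio=0.8, arousal_z=2.1)))
```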