    Using myoelectric signals for gesture detection: a feasibility study

    Farshid Amirabdollahian, Michael Walter, Rory Heffernan, Sarah Fletcher, and Phil Webb, ‘Using myoelectric signals for gesture detection: a feasibility study’. Paper presented at the Ergonomics and Human Factors 2017 Conference, 25–27 April 2017, Daventry, United Kingdom. Abstract: The purpose of this study was to assess the feasibility of using myoelectric signals acquired with an off-the-shelf device, the Myo armband from Thalmic Labs. Background: With technological advances in sensing human motion, and its potential to drive and control mechanical interfaces remotely, a multitude of input mechanisms are used to link actions between the human and the robot. In this study we explored the feasibility of using the human arm’s myoelectric signals to identify a number of gestures automatically. Material and methods: Participants (n = 26) took part in a study assessing gesture detection accuracy using myoelectric signals. The Myo armband was worn on the forearm. Each session was divided into three phases: familiarisation, in which participants learned how to use the armband; training, in which participants reproduced a number of requested gestures to train our machine learning algorithm; and recognition, in which gestures presented on screen were reproduced by participants and simultaneously recognised by the machine learning routines. Results: One participant did not complete the study due to technical errors during the session. The remaining participants (n = 25) completed the study, allowing us to calculate individual accuracy for grasp detection using this medium. Our overall accuracy was 65.06%, with the cylindrical grasp achieving the highest accuracy of around 7.20% and the tripod grasp achieving the lowest recognition accuracy of 60.15%. Discussion: The recognition accuracy for the grasps performed is significantly lower than in our earlier work, where a mechatronic device was used.
    This could be due to the choice of grasps for this study, which is not ideal given the placement of the armband. While the tripod, cylindrical, and lateral grasps have different finger and wrist articulations, their demands on the supporting forearm muscles (mainly biceps and triceps) are less distinct, and therefore so are their myoelectric signals. Furthermore, the drop in accuracy could be caused by the fact that human muscles, and consequently the myoelectric signals, vary substantially over time. Muscles change their relative intensity based on the speed of the produced gesture. In our earlier study, gesture production speed was damped by the worn orthosis, which normalised the speed of gestures, whereas in the current study hand motion was not restricted. Despite this, the recognition accuracy is still significant. Future work: Questions remain about the feasibility of using myoelectric signals as an input to a remotely controlled robot on a factory floor, as it is anticipated that such a system would enhance control and efficiency in production processes. These questions require further investigation into the usability of the armband in its intended context, to ensure users can effectively control and manipulate the robot using the myoelectric system and enjoy a positive user experience. Future studies will focus on the choice of gestures, so that they are distinct and better identifiable, and also on other key human factors and system design features that enhance performance, in compliance with relevant standards such as ISO 9241-210:2010 (ergonomic design principles for human-system interaction).
    Furthermore, whether a machine learning algorithm should use individually learned events to recognise an individual’s gestures, or whether a normative representation built from a substantial set of learnt events can achieve higher accuracy, remains an interesting area for our future work.
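    The three-phase pipeline the abstract describes (train on repeated gestures, then recognise live ones from forearm EMG) can be sketched in outline. This is a hypothetical illustration, not the authors' implementation: the windowed RMS features, the nearest-centroid classifier, and the synthetic per-grasp channel-intensity profiles are all assumptions standing in for the paper's unspecified machine-learning routine and for real Myo data.

    ```python
    # Hypothetical sketch of an EMG gesture-recognition pipeline: windowed RMS
    # features over 8 EMG channels, a nearest-centroid classifier, and overall
    # recognition accuracy. All data here is synthetic; no Myo SDK is used.
    import math
    import random

    N_CHANNELS = 8   # the Myo armband exposes 8 surface-EMG channels
    WINDOW = 50      # samples per analysis window (Myo streams EMG at 200 Hz)

    def rms_features(window):
        """Root-mean-square amplitude per channel -> 8-element feature vector."""
        return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

    def train(examples):
        """examples: {gesture: [feature_vector, ...]} -> one centroid per gesture."""
        return {g: [sum(col) / len(vecs) for col in zip(*vecs)]
                for g, vecs in examples.items()}

    def classify(centroids, vec):
        """Assign the gesture whose centroid is nearest in Euclidean distance."""
        return min(centroids, key=lambda g: math.dist(centroids[g], vec))

    def synthetic_window(profile):
        """Fake 8-channel EMG window whose per-channel intensity follows a profile."""
        return [[random.gauss(0, profile[c]) for _ in range(WINDOW)]
                for c in range(N_CHANNELS)]

    random.seed(0)
    # Assumed per-channel intensity profiles, loosely standing in for the three
    # grasps in the study (real grasps would be far less cleanly separable).
    profiles = {
        "cylindrical": [3, 1, 1, 1, 3, 1, 1, 1],
        "tripod":      [1, 3, 1, 1, 1, 3, 1, 1],
        "lateral":     [1, 1, 3, 1, 1, 1, 3, 1],
    }

    # Training phase: 20 repetitions of each gesture.
    train_set = {g: [rms_features(synthetic_window(p)) for _ in range(20)]
                 for g, p in profiles.items()}
    model = train(train_set)

    # Recognition phase: classify fresh windows and tally accuracy.
    correct = total = 0
    for g, p in profiles.items():
        for _ in range(20):
            total += 1
            correct += classify(model, rms_features(synthetic_window(p))) == g
    accuracy = correct / total
    ```

    Per-gesture accuracies (as reported for the cylindrical and tripod grasps) would follow by tallying `correct` separately for each key of `profiles`.
    
    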

    Affordances and Safe Design of Assistance Wearable Virtual Environment of Gesture

    Safety and reliability are the main issues in designing wearable virtual environments that assist technical gestures in aerospace or health application domains. This requires integrating, within the same isomorphic engineering framework, human requirements, system requirements, and the rationale of their relation to the natural and artifactual environment. To explore coupling integration and the design of the functional organization of systems supporting technical gestures, ecological psychology first provides us with a heuristic concept: the affordance. The mathematical theory of integrative physiology, in turn, provides us with scientific concepts: the stabilizing auto-association principle and functional interaction. After demonstrating the epistemological consistency of these concepts, we define an isomorphic framework to describe and model human-systems integration dedicated to human-in-the-loop systems engineering. We present an experimental approach to the safe design of wearable virtual environments for gesture assistance, based on laboratory and parabolic-flight experiments. From the results, we discuss the relevance of our conceptual approach and its applications to the engineering of future gesture-assisting wearable systems.

    Navigation and interaction in a real-scale digital mock-up using natural language and user gesture

    This paper presents a new real-scale 3D system and sums up first-hand results concerning multi-modal navigation and interaction interfaces. This work is part of the CALLISTO-SARI collaborative project, which aims at constructing an immersive room and developing a set of software tools and navigation/interaction interfaces. Two sets of interfaces are introduced here: 1) interaction devices, and 2) natural language (speech processing) and user gesture. The evaluation of this system using subjective observation (the Simulator Sickness Questionnaire, SSQ) and objective measurements (Center of Gravity, COG) shows that natural-language and gesture-based interfaces induced less cyber-sickness than device-based interfaces. Gesture-based interfaces are therefore more efficient than device-based interfaces.

    Exploring the Affective Loop

    Research in psychology and neurology shows that both body and mind are involved when experiencing emotions (Damasio 1994, Davidson et al. 2003). People are also very physical when they try to communicate their emotions. Somewhere in between being consciously and unconsciously aware of it ourselves, we produce both verbal and physical signs to make other people understand how we feel. Simultaneously, this production of signs involves us in a stronger personal experience of the emotions we express. Emotions are also communicated in the digital world, but there is little focus on users' personal as well as physical experience of emotions in the available digital media. In order to explore whether and how we can expand existing media, we have designed, implemented and evaluated /eMoto/, a mobile service for sending affective messages to others. With eMoto, we explicitly aim to address both cognitive and physical experiences of human emotions. Through combining affective gestures for input with affective expressions that make use of colors, shapes and animations for the background of messages, the interaction "pulls" the user into an /affective loop/. In this thesis we define what we mean by affective loop and present a user-centered design approach expressed through four design principles inspired by previous work within Human Computer Interaction (HCI) but adjusted to our purposes: /embodiment/ (Dourish 2001) as a means to address how people communicate emotions in real life, /flow/ (Csikszentmihalyi 1990) to reach a state of involvement that goes further than the current context, /ambiguity/ of the designed expressions (Gaver et al. 2003) to allow for open-ended interpretation by the end-users instead of simplistic, one-emotion one-expression pairs, and /natural but designed expressions/ to address people's natural couplings between cognitively and physically experienced emotions.
    We also present results from an end-user study of eMoto indicating that subjects became both physically and emotionally involved in the interaction, and that the designed "openness" and ambiguity of the expressions were appreciated and understood by our subjects. Through the user study, we identified four potential design problems that must be tackled to achieve an affective loop effect: the extent to which users /feel in control/ of the interaction; /harmony and coherence/ between cognitive and physical expressions; /timing/ of expressions and feedback in a communicational setting; and the effects of users' /personality/ on their emotional expressions and experiences of the interaction.