216 research outputs found

    Timing and correction of stepping movements with a virtual reality avatar

    Research into the ability to coordinate one’s movements with external cues has focussed on the use of simple rhythmic, auditory and visual stimuli, or on interpersonal coordination with another person. Coordinating movements with a virtual avatar has not been explored in the context of responses to temporal cues. To determine whether cueing movements with a virtual avatar is effective, people’s ability to coordinate accurately with the stimuli needs to be investigated. Here we focus on temporal cues, as timing studies show that visual cues can be difficult to follow in a timing context. Real stepping movements were mapped onto an avatar using motion capture data. Healthy participants were then motion captured whilst stepping in time with the avatar’s movements, as viewed through a virtual reality headset. The timing of one of the avatar’s step cycles was accelerated or decelerated by 15% to create a temporal perturbation, for which participants would need to correct in order to remain in time. Participants’ step-onset times relative to the corresponding avatar step onsets were used to measure the timing errors (asynchronies) between them. Participants completed either a Visual-Only condition or an Auditory-Visual condition with footstep sounds included, at two stepping tempi (Fast: 400 ms interval; Slow: 800 ms interval). Participants’ asynchronies exhibited slow drift in the Visual-Only condition, but became stable in the Auditory-Visual condition. Moreover, we observed a clear corrective response to the phase perturbation at both tempi in the Auditory-Visual condition. We conclude that an avatar’s movements can be used to influence a person’s own motion, but should be accompanied by auditory cues congruent with the movement to ensure a suitable level of entrainment is achieved. This approach has applications in physiotherapy, where virtual avatars present an opportunity to provide guidance that assists patients in adhering to prescribed exercises.
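
    As a rough illustration of the asynchrony measure described above (a sketch, not code from the paper), the snippet below subtracts avatar step onsets from the corresponding participant step onsets; the tempo, perturbation position, and constant 30 ms lag are invented for the example.

```python
import numpy as np

def asynchronies(participant_onsets, avatar_onsets):
    """Signed timing errors (s): participant step onsets minus the
    corresponding avatar step onsets (negative = participant early)."""
    return np.asarray(participant_onsets) - np.asarray(avatar_onsets)

# Hypothetical slow-tempo trial: 800 ms inter-step interval, with the
# sixth step cycle accelerated by 15% (the phase perturbation).
isi = 0.8
avatar = np.cumsum([isi] * 5 + [isi * 0.85] + [isi] * 5)
participant = avatar + 0.03  # a constant 30 ms lag, for illustration only
print(asynchronies(participant, avatar))
```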

    Virtual reality obstacle crossing: adaptation, retention and transfer to the physical world

    Virtual reality (VR) paradigms are increasingly used in movement and exercise sciences with the aim of enhancing motor function and stimulating motor adaptation in healthy and pathological conditions. VR-based locomotor training may be promising for motor skill learning, with transfer of VR skills to the physical world in turn required to benefit functional activities of daily life. This PhD project aims to examine locomotor adaptations to repeated VR obstacle crossing in healthy young adults, as well as transfer to the untrained limb and the physical world, and retention of the learned skills. To this end, the thesis comprises three studies using controlled VR obstacle crossing interventions during treadmill walking. In the first and second studies we investigated adaptation to crossing unexpectedly appearing virtual obstacles, with and without feedback about crossing performance, and its transfer to the untrained leg. In the third study we investigated transfer of virtual obstacle crossing to physical obstacles of similar size to the virtual ones that appeared at the same point in the gait cycle. We also investigated whether the learned skills were retained in each environment over one week. In all studies participants were asked to walk on a treadmill while wearing a VR headset that represented their body as an avatar via real-time synchronised optical motion capture. Participants had to cross virtual and/or physical obstacles with and without feedback about their crossing performance. Where applicable, feedback was provided from motion capture immediately after virtual obstacle crossing. Toe clearance, margin of stability, and lower extremity joint angles in the sagittal plane were calculated for the crossing legs to analyse adaptation, transfer, and retention of obstacle crossing performance. The main outcomes of the first and second studies were that crossing multiple virtual obstacles increased participants’ dynamic stability and led to a nonlinear adaptation of toe clearance that was enhanced by visual feedback about crossing performance. However, independent of the use of feedback, no transfer to the untrained leg was detected. Moreover, despite significant and rapid adaptive changes in locomotor kinematics with repeated VR obstacle crossing, results of the third study revealed limited transfer of the learned skills from virtual to physical obstacles. Lastly, despite full retention over one week in the virtual environment, we found only partial retention when crossing a physical obstacle while walking on the treadmill. In summary, the findings of this PhD project confirm that repeated VR obstacle perturbations can effectively stimulate locomotor skill adaptation. However, these adaptations are not transferable to the untrained limb, irrespective of enhanced awareness and feedback. Moreover, the current data provide evidence that, despite significant adaptive changes in locomotion kinematics with repeated practice of obstacle crossing under VR conditions, transfer to and retention in the physical environment is limited. It may be that perception-action coupling, and thus sensorimotor coordination, in the virtual environment differs from the physical world, potentially inhibiting transfer between the two conditions and its retention. Accordingly, VR-based locomotor skill training paradigms need to be considered carefully if they are to replace training in the physical world.
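
    The abstract does not spell out its outcome formulas; as one illustration, the margin of stability is commonly computed following Hof et al. (2005) from the extrapolated centre of mass. A minimal sketch, assuming an anteroposterior analysis; the function name and example numbers are hypothetical.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def margin_of_stability(com_pos, com_vel, bos_edge, leg_length):
    """Anteroposterior margin of stability after Hof et al. (2005):
    MoS = base-of-support boundary - XcoM, where the extrapolated centre
    of mass XcoM = CoM position + CoM velocity / omega0, omega0 = sqrt(g/l).
    Negative values mean the XcoM lies outside the base of support."""
    omega0 = np.sqrt(G / leg_length)
    xcom = com_pos + com_vel / omega0
    return bos_edge - xcom

# Example: CoM at the origin, moving forward at 1.2 m/s, lead toe 0.10 m
# ahead, leg length 0.95 m (all values invented for illustration).
print(margin_of_stability(com_pos=0.0, com_vel=1.2, bos_edge=0.10, leg_length=0.95))
```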

    Creating and evaluating embodied interactive experiences: case studies of full-body, sonic and tactile enaction.

    This thesis contributes to the field of embodied and multimodal interaction by presenting the development of several original interactive systems. Using a constructive approach, a variety of real-time user interaction situations were designed and tested: two cases of human-virtual character bodily interaction, two interactive sonifications of trampoline jumping, collaborative interaction in mobile music performance, and tangible and tactile interaction with virtual sounds. While diverse in application, all the explored interaction techniques belong to the context of augmentation and are grounded in the theory of embodiment and in strategies for natural human-computer interaction (HCI). The cases are contextualized under the umbrella of enaction, a paradigm of cognitive science that addresses the user as an embodied agent situated in an environment and coupled to it through sensorimotor activity. This activity of sensing and action is studied through different modalities: auditory, tactile and visual, and combinations of these. The designed applications aim at natural interaction with the system, being full-body, tangible and spatially aware. Sonic interaction in particular has been explored in the contexts of music creation, sports and auditory display. These technology-mediated scenarios are evaluated in order to understand what the adopted interaction techniques can bring to the user experience and how they modify impressions and enjoyment. The publications also discuss the enabling technologies used for the development, including motion tracking and programmed hardware for the tactile-sonic and sonic-tangible interactions. Results show that combining full-body interaction with auditory augmentation and sonic interaction can modify perception, observed behavior and emotion during the experience. Using spatial interaction together with tangible interaction or tactile feedback provides a multimodal experience of exploring a mixed-reality environment where audio can be accessed and manipulated through natural interaction. Embodied and spatial interaction brings playfulness to mobile music improvisation, shifting the focus of the experience from music-making towards movement-based gaming. Finally, two novel implementations of full-body interaction based on the enactive paradigm are presented. In these designed scenarios of enaction the participant is motion tracked and a virtual character, rendered as a stick figure, is displayed in front of them on a screen. Results from the user studies show how the involvement of the body is crucial to understanding the behavior of a virtual character or a digital representation of the self in a gaming scenario.

    A Multi-Modal, Modified-Feedback and Self-Paced Brain-Computer Interface (BCI) to Control an Embodied Avatar's Gait

    Brain-computer interfaces (BCIs) have been used to control the gait of a virtual self-avatar with the aim of being used in gait rehabilitation. A BCI decodes the brain signals representing a desire to do something and transforms them into a control command for external devices. The feelings described by participants when they control a self-avatar in an immersive virtual environment (VE) demonstrate that humans can become embodied in the surrogate body of an avatar (the ownership illusion). It has recently been shown that inducing the ownership illusion and then manipulating the movements of one’s self-avatar can lead to compensatory motor control strategies. To maximize this effect, a method is needed that measures and monitors the embodiment levels of participants immersed in virtual reality (VR), so as to induce and maintain a strong ownership illusion. This is particularly true given that high BCI performance and strong embodiment are interconnected: reaching one requires reaching the other. Several limitations hinder the adoption of many existing systems for neurorehabilitation: (1) some use motor imagery (MI) of movements other than gait; (2) most systems allow the user to take single steps or to walk, but not both, which prevents users from progressing from steps to gait; (3) most function in a single BCI mode (cue-paced or self-paced), which prevents users from progressing from machine-dependent to machine-independent walking. These limitations can be overcome by combining different control modes and options in a single system. However, this would have a negative impact on BCI performance, diminishing its usefulness as a potential rehabilitation tool, so BCI performance would then need to be enhanced. For this purpose, many techniques have been used in the literature, such as providing modified feedback (whereby the presented feedback is not consistent with the user’s MI) and sequential training (recalibrating the classifier as more data become available). This thesis was developed over three studies. The objective of study 1 was to investigate the possibility of measuring the level of embodiment of an immersive self-avatar, during the performance, observation and imagination of gait, using electroencephalography (EEG), by presenting visual feedback that conflicts with the desired movement of embodied participants. The objective of study 2 was to develop and validate a BCI to control single steps and forward walking of an immersive VR self-avatar, using mental imagery of these actions, in cue-paced and self-paced modes. Different performance-enhancement strategies were implemented to increase BCI performance. The data of these two studies were then used in study 3 to construct a generic classifier that could eliminate offline calibration for future users and shorten training time. Twenty different healthy participants took part in studies 1 and 2. In study 1, participants wore an EEG cap and motion capture markers, with an avatar displayed in a head-mounted display (HMD) from a first-person perspective (1PP). They were cued to either perform, watch or imagine a single step forward or the initiation of walking on a treadmill. For some of the trials, the avatar took a step with the contralateral limb or stopped walking before the participant stopped (modified feedback).
In study 2, participants completed a 4-day sequential training to control the gait of an avatar in both BCI modes. In cue-paced mode, they were cued to imagine a single step forward, using their right or left foot, or to walk forward. In self-paced mode, they were instructed to reach a target using MI of multiple steps (switch control mode) or by maintaining MI of forward walking (continuous control mode). The avatar moved in response to two calibrated regularized linear discriminant analysis (RLDA) classifiers that used the μ-band (8-12 Hz) power spectral density (PSD) over the foot area of the motor cortex as features. The classifiers were retrained after every session. During training, and for some of the trials, positive modified feedback was presented to half of the participants, whereby the avatar moved correctly regardless of the participant’s real performance. In both studies, the participants’ subjective experience was analyzed using a questionnaire. Results of study 1 show that subjective levels of embodiment correlate strongly with the power differences of the event-related synchronization (ERS) within the μ frequency band, over the motor and pre-motor cortices, between the modified and regular feedback trials. Results of study 2 show that all participants were able to operate the cue-paced BCI and the self-paced BCI in both modes. For the cue-paced BCI, the average offline performance (classification rate) was 67±6.1% on day 1 and 86±6.1% on day 3, showing that recalibrating the classifiers enhanced the offline performance of the BCI (p < 0.01). The average online performance was 85.9±8.4% for the modified feedback group (range 77-97%) versus 75% for the non-modified feedback group. For the self-paced BCI, the average performance was 83% in switch control mode and 92% in continuous control mode, with a maximum of 12 seconds of control. Modified feedback enhanced BCI performance (p = 0.001). Finally, results of study 3 show that the constructed generic models performed as well as models obtained from participant-specific offline data. These results show that it is possible to design a participant-independent, zero-training BCI.
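
    The thesis’ actual pipeline is not reproduced here; as a minimal sketch, assuming scikit-learn and synthetic stand-in data, a shrinkage-regularized LDA over μ-band PSD features of the kind described above might look like this (the feature count, trial count, and labels are hypothetical).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical features: one row per trial, columns are mu-band (8-12 Hz)
# PSD values from electrodes over the foot area of the motor cortex.
X = rng.normal(size=(120, 8))        # 120 trials x 8 PSD features
y = rng.integers(0, 2, size=120)     # 0 = rest, 1 = step/walking imagery

# "Regularized LDA": LDA with Ledoit-Wolf shrinkage of the covariance
# estimate, a common choice when trials are few relative to features.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```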

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
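
    As an illustration of this radial-shift manipulation (a sketch, not the authors' code), the following moves a stimulus along the spoke joining it to central fixation by a signed number of degrees of visual angle; the pixels-per-degree factor and the example coordinates are hypothetical.

```python
import numpy as np

def shift_along_spoke(x, y, shift_deg, px_per_deg):
    """Move a point radially along the spoke from central fixation (0, 0)
    by shift_deg degrees of visual angle (positive = outward)."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    r_new = r + shift_deg * px_per_deg
    return r_new * np.cos(theta), r_new * np.sin(theta)

# Example: a rectangle centred 4 deg right of fixation on a 60 px/deg
# display, shifted 1 deg outward for the second presentation.
print(shift_along_spoke(240.0, 0.0, +1.0, 60.0))  # -> (300.0, 0.0)
```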

    Sonic interactions in virtual environments

    This book tackles the design of 3D spatial interactions from an audio-centered and audio-first perspective, providing the fundamental notions related to the creation and evaluation of immersive sonic experiences. The key elements that enhance the sensation of place in a virtual environment (VE) are:
    - Immersive audio: the computational aspects of the acoustical-space properties of virtual reality (VR) technologies
    - Sonic interaction: the human-computer interplay through auditory feedback in VEs
    - VR systems: natural support for multimodal integration, with impact across different application domains
    Sonic Interactions in Virtual Environments will feature state-of-the-art research on real-time auralization, sonic interaction design in VR, quality of the experience in multimodal scenarios, and applications. Contributors and editors include interdisciplinary experts from the fields of computer science, engineering, acoustics, psychology, design, humanities, and beyond. Their mission is to shape an emerging new field of study at the intersection of sonic interaction design and immersive media, embracing an archipelago of existing research spread across different audio communities, and to increase awareness among VR communities, researchers, and practitioners of the importance of sonic elements when designing immersive environments.

    A perspective review on integrating VR/AR with haptics into STEM education for multi-sensory learning

    As a result of several governments closing educational facilities in reaction to the COVID-19 pandemic in 2020, almost 80% of the world’s students were not in school for several weeks. Schools and universities are thus increasing their efforts to leverage educational resources and provide possibilities for remote learning. A variety of educational programs, platforms, and technologies are now accessible to support student learning; while these tools are important for society, they are primarily concerned with the dissemination of theoretical material. There is a lack of support for hands-on laboratory work and practical experience. This is particularly important for the disciplines related to science, technology, engineering, and mathematics (STEM), where labs and pedagogical assets must be continuously enhanced in order to provide effective study programs. In this study, we describe a unique perspective on achieving multi-sensory learning through the integration of virtual and augmented reality (VR/AR) with haptic wearables in STEM education. We address the implications of this novel viewpoint for established pedagogical notions. We want to encourage worldwide efforts to make fully immersive, open, and remote laboratory learning a reality. This work was supported by the European Union through the Erasmus+ Program under Grant 2020-1-NO01-KA203-076540, project title Integrating virtual and AUGMENTED reality with WEARable technology into engineering EDUcation (AugmentedWearEdu), https://augmentedwearedu.uia.no/ (accessed on 27 March 2022), and by the Top Research Centre Mechatronics (TRCM), University of Agder (UiA), Norway.
