    Levels of control during a collaborative carrying task

    Three experiments investigated the effect of implementing low-level aspects of motor control for a collaborative carrying task within a VE interface, leaving participants free to devote their cognitive resources to the higher-level components of the task. In the task, participants collaborated with an autonomous virtual human in an immersive virtual environment (VE) to carry an object along a predefined path. In experiment 1, participants took up to three times longer to perform the task with a conventional VE interface, in which they had to explicitly coordinate their hand and body movements, than with an interface that controlled the low-level tasks of grasping and holding onto the virtual object. Experiments 2 and 3 extended the study to the task of carrying an object along a path that contained obstacles to movement. By allowing participants' virtual arms to stretch slightly, the interface software was able to take over some aspects of obstacle avoidance (another low-level task), which led to further significant reductions in the time participants took to perform the carrying task. Performance also improved when participants used a tethered viewpoint to control their movements, because they could see their immediate surroundings in the VEs. This latter finding demonstrates the superiority of a tethered-view perspective over a conventional human's-eye perspective for this type of task.
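
    The tethered viewpoint described above places the camera behind and above the participant's avatar so that the immediate surroundings stay visible. Below is a minimal sketch of such a camera, assuming a simple avatar state (position plus heading); the offsets and names are illustrative assumptions, not the study's actual interface values.

```python
# Hedged sketch of a tethered (third-person) viewpoint; offsets and names
# are illustrative, not taken from the paper.
import numpy as np

def tethered_camera(avatar_pos, heading_rad, back=2.0, up=1.5):
    """Place the camera behind and above the avatar so the carried object,
    the virtual human, and nearby obstacles remain in view."""
    forward = np.array([np.cos(heading_rad), np.sin(heading_rad), 0.0])
    cam_pos = avatar_pos - back * forward + np.array([0.0, 0.0, up])
    look_at = avatar_pos                     # keep the avatar centred in frame
    return cam_pos, look_at

cam_pos, look_at = tethered_camera(np.array([4.0, 1.0, 0.0]), heading_rad=0.0)
```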

    An experiment design: investigating VR locomotion & virtual object interaction mechanics

    In this paper, we describe an experiment outline for investigating design- and user-experience-related aspects of several virtual reality locomotion and virtual object interaction mechanics. These mechanics will be based on consumer hardware such as a common game controller, an infrared hand- and finger-tracking device, VR hand controllers, and an omnidirectional treadmill. Corresponding related work will contextualize and motivate this research. The projected experimental study will be based on user test sessions with a purpose-built first-person VR puzzle horror game called Gooze. A hybrid approach of self-assessment, in-game parameter tracking, and session observations will be proposed for the investigation. Statistical analysis methods will be suggested to evaluate the results. Furthermore, this paper will give an overview of the game and elaborate on design, gameplay, and user-experience insights from informal pre-studies already conducted with it.
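
    The hybrid evaluation approach mentioned above combines self-assessment, in-game parameter tracking, and session observations. Below is a rough sketch of what the in-game tracking component could look like, assuming a simple JSON-lines log; the field names and events are hypothetical, not the authors' actual pipeline for Gooze.

```python
# Hypothetical in-game parameter tracker writing one JSON record per event.
import json, time

class SessionLogger:
    def __init__(self, path):
        self.f = open(path, "a", encoding="utf-8")

    def log(self, event, **params):
        record = {"t": time.time(), "event": event, **params}
        self.f.write(json.dumps(record) + "\n")
        self.f.flush()

logger = SessionLogger("gooze_session.jsonl")
logger.log("locomotion", mode="treadmill", distance_m=1.8)
logger.log("interaction", mechanic="hand_tracking", object_id="key_01", success=True)
```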

    Temporary Nerve Block at Selected Digits Revealed Hand Motor Deficits in Grasping Tasks

    Peripheral sensory feedback plays a crucial role in ensuring correct motor execution throughout hand grasp control. Previous studies used local anesthesia to deprive the digits or hand of somatosensory feedback and observed sensorimotor deficits at both the corticospinal and peripheral levels. However, it remains unanswered how the disturbed and intact sensory inputs integrate and interact with each other to assist execution of the motor program, and whether motor coordination, assessed through motor-output variability between affected and non-affected elements (e.g., digits), is disrupted by the local sensory deficiency. The current study investigates the effect of peripheral deafferentation, through digital nerve blocks at selected digits, on motor performance and motor coordination in grasp control. Our results suggest that the absence of somatosensory information induced motor deficits in hand grasp control, as evidenced by reduced maximal force production ability in both local and non-local digits, impaired force and moment control during object lift and hold, and attenuated motor synergies stabilizing the task performance variables, namely the tangential force and moment of force. These findings imply that sensory input from individual digits is shared across all digits and that a disturbed signal from local sensory channel(s) has a broad impact on motor output execution within the sensorimotor integration process. Additionally, a feedback control mechanism with a sensation-based component appears to underlie the formation of the motor covariation structure.
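
    The two task performance variables named above (tangential force and moment of force) and the synergy analysis can be illustrated schematically. The sketch below uses made-up per-digit numbers and a commonly used uncontrolled-manifold-style synergy index; it is not the study's analysis code.

```python
# Schematic calculation of net tangential force and moment of force across
# digits, plus a common synergy index; all numbers are illustrative.
import numpy as np

tangential_f = np.array([3.1, 2.4, 1.8, 1.2, 4.9])              # thumb..little, N
moment_arm   = np.array([0.045, 0.030, 0.010, -0.015, -0.040])  # m, about grip centre

net_tangential = tangential_f.sum()                 # should offset object weight
net_moment     = (tangential_f * moment_arm).sum()  # should stay near zero

def synergy_index(v_ucm, v_ort, dof_ucm, dof_ort):
    """Compare variance that leaves the task variable unchanged (v_ucm) with
    variance that changes it (v_ort), each normalized by degrees of freedom."""
    v_tot = (v_ucm + v_ort) / (dof_ucm + dof_ort)
    return (v_ucm / dof_ucm - v_ort / dof_ort) / v_tot
```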

    CHARACTERISTICS OF HEAD MOUNTED DISPLAYS AND THEIR EFFECTS ON SIMULATOR SICKNESS

    Characteristics of head-mounted displays (HMDs) and their effects on simulator sickness (SS) and presence were investigated. Update delay and wide fields of view (FOV) have often been thought to elicit SS. With the exception of Draper et al. (2001), previous research examining FOV has failed to consider image scale factor, the ratio between the physical FOV of the HMD display and the geometric field of view (GFOV) of the virtual environment (VE). The current study investigated the effects of update delay, image scale factor, and peripheral vision on SS and presence when viewing a real-world scene. Participants donned an HMD and performed active head movements to search for objects located throughout the laboratory. Seven of the first 28 participants withdrew from the study due to extreme responses, experiencing faint-like symptoms, confusion, ataxia, nausea, and tunnel vision. Thereafter, a handrail was provided to give participants something to grasp while performing the experimental task. A 2 × 2 × 2 ANOVA revealed a main effect of peripheral vision, F(1,72) = 6.90, p = .01, indicating that peak Simulator Sickness Questionnaire (SSQ) scores were significantly higher when peripheral vision was occluded than when it was included. No main or interaction effects were found on Presence Questionnaire (PQ version 4.0) scores. However, a significant negative correlation between peak SSQ scores and PQ scores was found, r(77) = -.28, p = .013. Participants were also placed into 'sick' and 'not-sick' groups based on a median split of SSQ scores. A chi-square analysis revealed that participants exposed to an additional update delay of ~200 ms were significantly more likely to be in the 'sick' group than those exposed to no additional update delay. To reduce the occurrence of SS, a degree of peripheral vision of the external world should be included and attempts to reduce update delay should continue. Furthermore, participants should be provided with something to grasp while in an HMD VE. Future studies should seek to determine the critical amounts of peripheral vision and update delay necessary to elicit SS.
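
    Image scale factor, as defined above, is the ratio between the HMD's physical FOV and the GFOV used to render the scene, and the sickness grouping was a median split of peak SSQ scores. Below is a hedged sketch of both quantities, with illustrative numbers rather than the study's data.

```python
# Illustrative sketch: image scale factor (physical FOV / GFOV, per the
# definition above) and a median split of peak SSQ scores into groups.
import numpy as np

def image_scale_factor(physical_fov_deg, geometric_fov_deg):
    return physical_fov_deg / geometric_fov_deg   # 1.0 = matched, no minification

print(image_scale_factor(48.0, 48.0))   # matched FOVs -> 1.0
print(image_scale_factor(48.0, 96.0))   # rendered scene minified -> 0.5

ssq_peak = np.array([12.0, 3.7, 45.2, 7.5, 22.4, 60.1])   # made-up scores
sick_group = ssq_peak > np.median(ssq_peak)               # 'sick' vs 'not-sick'
```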

    VALIDATION OF A MODEL OF SENSORIMOTOR INTEGRATION WITH CLINICAL BENEFITS

    Healthy sensorimotor integration – or how our sense of touch influences our movements – is critical for interacting efficiently with our environment. Yet many aspects of this process are still poorly understood. Importantly, several movement disorders are often considered to originate from purely motor impairments, although a sensory origin could lead to a similar set of symptoms. To address these issues, we propose a novel biologically based model of the sensorimotor loop, known as the SMILE model. After describing both the functional and the corresponding neuroanatomical versions of SMILE, we tested several aspects of its motor component through functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS). Both experimental studies produced outcomes consistent with the SMILE predictions, and they also yielded novel scientific findings on topics as broad as the sub-phases of motor imagery, the neural processing of bodily representations, and the extent of the role of the extrastriate body area. In the final sections of this manuscript, we describe potential clinical applications of SMILE. The first identifies plausible neuroanatomical origins of focal hand dystonia, a still poorly understood sensorimotor disorder. The last chapter covers possible improvements to brain-machine interfaces, driven by a better understanding of the sensorimotor system.

    Object Manipulation in Virtual Reality Under Increasing Levels of Translational Gain

    Room-scale Virtual Reality (VR) has become an affordable consumer reality, with applications ranging from entertainment to productivity. However, the limited physical space available for room-scale VR in the typical home or office environment poses a significant problem. To address this, physical spaces can be extended by amplifying the mapping of physical to virtual movement (translational gain). Although amplified movement has been used since the earliest days of VR, little is known about how it influences reach-based interactions with virtual objects, now a standard feature of consumer VR. Consequently, this paper explores the picking and placing of virtual objects in VR for the first time, with translational gains between 1x (a one-to-one mapping of a 3.5 m × 3.5 m virtual space to a physical space of the same size) and 3x (a 10.5 m × 10.5 m virtual space mapped to a 3.5 m × 3.5 m physical space). Results show that reaching accuracy is maintained for gains of up to 2x; beyond this, however, accuracy diminishes and simulator sickness and perceived workload increase. We suggest that gain levels of 1.5x to 1.75x can be used without compromising the usability of a VR task, significantly expanding the bounds of interactive room-scale VR.
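
    Translational gain as described above scales the participant's physical displacement before mapping it into the virtual space. Below is a minimal sketch under the assumption of a fixed tracking origin; the function and parameter names are illustrative, not the paper's implementation.

```python
# Minimal sketch of translational gain: physical displacement from the
# tracking origin is amplified before being applied in the virtual scene.
import numpy as np

def apply_translational_gain(physical_pos, origin, gain):
    """With gain=3.0, 3.5 m of physical travel maps to 10.5 m virtually."""
    return origin + gain * (physical_pos - origin)

origin = np.zeros(3)
tracked = np.array([1.0, 0.0, 1.2])                    # metres from room centre
print(apply_translational_gain(tracked, origin, gain=1.75))
```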
    • 

    corecore