
    From ‘hands up’ to ‘hands on’: harnessing the kinaesthetic potential of educational gaming

    Traditional approaches to distance learning and the student learning journey have focused on closing the gap between the experience of off-campus students and their on-campus peers. While many initiatives have sought to embed a sense of community, create virtual learning environments and even build collaborative spaces for team-based assessment and presentations, they are constrained by the available technology in the types of learning styles they can support and develop. Mainstream gaming development – such as the Xbox Kinect and Nintendo Wii – has a strong element of kinaesthetic learning, from early attempts to simulate impact, recoil, velocity and other environmental factors to more sophisticated movement-based games which create a sense of almost total immersion and allow untethered (in a technical sense) interaction with the games’ objects, characters and other players. Likewise, the gamification of learning has become a critical focus for learner engagement and commercialisation, especially through products such as Wii Fit. As this technology matures, there are strong opportunities for universities to use gaming consoles to embed kinaesthetic learning in the student experience – a learning style that has been largely neglected in the distance education sector. This paper explores the potential impact of these technologies and broadly imagines the possibilities for future innovation in higher education.

    Expectations and Beliefs in Immersive Virtual Reality Environments: Managing of Body Perception

    Real and Perceived Feet Orientation Under Fatiguing and Non-Fatiguing Conditions in an Immersive Virtual Reality Environment
    ABSTRACT: Lower limb position sense is a complex yet poorly understood mechanism, influenced by many factors. We therefore investigated the position sense of the lower limbs through feet orientation, using Immersive Virtual Reality (IVR). Participants had to indicate how they perceived the real orientation of their feet by orienting a virtual representation of the feet shown in an IVR scenario. We calculated the angle between the two virtual feet (α-VR) after a high-knee step-in-place task and simultaneously recorded the real angle between the two feet (α-R) (T1). We then assessed whether acute fatigue affected position sense: the same procedure was repeated after inducing muscle fatigue (T2) and 10 minutes after T2 (T3). Finally, we also recorded the time needed to confirm the perceived position before and after the acute fatigue protocol. Thirty healthy adults (27.5 ± 3.8 years; 57% female, 43% male) were immersed in an IVR scenario with a representation of two feet. We found a mean difference between α-VR and α-R of 20.89° [95% CI: 14.67°, 27.10°] at T1, 16.76° [9.57°, 23.94°] at T2, and 16.34° [10.00°, 22.68°] at T3. Participants spent 12.59, 17.50 and 17.95 seconds confirming the perceived position of their feet at T1, T2 and T3, respectively. Participants indicated their feet as parallel and pointing forward even though they were actually divergent, showing a mismatch in the perceived position of the feet. Fatigue did not appear to affect position sense, but it delayed the time needed to accomplish the task.

    The Effect of Context on Eye-Height Estimation in Immersive Virtual Reality: a Cross-Sectional Study
    ABSTRACT: Eye-height spatial perception provides a reference for scaling the surrounding environment. It results from the integration of visual and postural information, and when these stimuli are discordant, the perceived spatial parameters are distorted. Previous studies in immersive virtual reality (IVR) showed that spatial perception is influenced by the visual context of the environment. This study therefore explored how manipulating the context in IVR affects individuals’ eye-height estimation. Two groups of twenty participants each were immersed in two different IVR environments: a closed room (Wall – W) and an open field (No Wall – NW). Under these two conditions, participants had to adjust their virtual perspective, estimating their eye height. We calculated the perceived visual offset as the difference between virtual and real eye height, to assess whether the scenarios and the presence of virtual shoes (Feet, No Feet) influenced participants’ estimates at three initial offsets (+100 cm, 0 cm, -100 cm). We found a mean difference between the visual offsets registered in trials that started with the +100 cm and 0 cm offsets (17.24 cm [8.78, 25.69]) and between the +100 cm and -100 cm offsets (22.35 cm [15.65, 29.05]). Furthermore, a noticeable mean difference was found between the visual offsets recorded in group W depending on the presence or absence of the virtual shoes (Feet vs No Feet: -6.12 cm [-10.29, -1.95]). These findings indicate that different contexts influenced eye-height perception.
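    The central statistic in the two studies above is a mean paired difference with a 95% confidence interval (perceived α-VR vs real α-R angles; virtual vs real eye height). The sketch below is illustrative only – the data and variable names are made up, and it assumes paired per-participant measurements with a t-based interval.

        # Illustrative sketch (hypothetical data): mean paired difference with a 95% CI,
        # the kind of statistic reported for alpha-VR vs alpha-R and for eye-height offsets.
        import numpy as np
        from scipy import stats

        def mean_diff_ci(a, b, confidence=0.95):
            """Mean of paired differences (a - b) with a t-based confidence interval."""
            d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
            mean = d.mean()
            sem = stats.sem(d)  # standard error of the mean difference
            half_width = sem * stats.t.ppf((1 + confidence) / 2, df=len(d) - 1)
            return mean, (mean - half_width, mean + half_width)

        # Example with made-up angles (degrees): perceived (alpha-VR) vs measured (alpha-R)
        alpha_vr = [28.0, 31.5, 25.2, 40.1, 33.7]
        alpha_r  = [10.3, 12.8,  7.9, 15.6, 11.2]
        mean, ci = mean_diff_ci(alpha_vr, alpha_r)
        print(f"mean difference = {mean:.2f} deg, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")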
    Positive Expectations led to Motor Improvement: an Immersive Virtual Reality Pilot Study
    ABSTRACT: This pilot study tested the feasibility of an experimental protocol that evaluated the effect of different positive expectations (verbal and visual-haptic) on anterior trunk flexion. Thirty-six participants were assigned to 3 groups (G0, G+ and G++) that received a sham manoeuvre while immersed in Immersive Virtual Reality (IVR). In G0, the manoeuvre was paired with a neutral verbal statement. In G+ and G++, the manoeuvre was paired with a positive verbal statement, but only G++ received a visual-haptic illusion. The illusion consisted of lifting a movable tile placed in front of the participants and using its height to raise the floor level in virtual reality; in this way, participants experienced the perception of touching the floor through both tactile and virtual visual afference. The distance between fingertips and the floor was measured before, immediately after, and 5 minutes after the different manoeuvres. A larger change in anterior trunk flexion was found for G++ than for the other groups, although the difference was only significant compared with G0. This result supports the feasibility of the protocol for future research on people with mobility limitations (e.g., low back pain or kinesiophobia) and highlights the potential role of a visual-haptic illusion in modifying the performance of trunk flexion.
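    The visual-haptic illusion above rests on a simple offset: the rendered floor is raised by the height of the real tile, so fingertip contact with the tile is seen as contact with the virtual floor. The sketch below illustrates that logic only; the tile height and function names are assumptions, not the study's implementation.

        # Illustrative sketch (hypothetical values): raise the virtual floor by the tile
        # height so touching the real tile is rendered as touching the virtual floor.
        TILE_HEIGHT_M = 0.15  # assumed height of the movable tile, in metres

        def virtual_floor_height(real_floor_y: float, tile_height: float = TILE_HEIGHT_M) -> float:
            """Rendered floor level in the illusion condition (G++)."""
            return real_floor_y + tile_height

        def renders_floor_contact(fingertip_y: float, real_floor_y: float) -> bool:
            """Contact with the raised virtual floor coincides with contact with the real tile."""
            return fingertip_y <= virtual_floor_height(real_floor_y)

        # Example: the fingertip reaches the top of the tile (0.15 m above the real floor)
        print(renders_floor_contact(fingertip_y=0.15, real_floor_y=0.0))  # True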

    Principles of human movement augmentation and the challenges in making it a reality

    Augmenting the body with artificial limbs controlled concurrently with one's natural limbs has long appeared in science fiction, but recent technological and neuroscientific advances have begun to make this possible. By allowing individuals to achieve otherwise impossible actions, movement augmentation could revolutionize medical and industrial applications and profoundly change the way humans interact with the environment. Here, we construct a movement augmentation taxonomy through what is augmented and how it is achieved. With this framework, we analyze augmentation that extends the number of degrees of freedom, discuss critical features of effective augmentation – such as physiological control signals, sensory feedback and learning – as well as application scenarios, and propose a vision for the field.
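    As an illustrative aside, the "what is augmented / how it is achieved" structure of such a taxonomy can be sketched as a small data model. The category names and example below are assumptions for illustration only, not the paper's definitive terms.

        # Illustrative sketch: a minimal data model for a movement-augmentation taxonomy
        # organised by *what* is augmented and *how* it is achieved. All names are
        # assumptions for illustration.
        from dataclasses import dataclass
        from enum import Enum

        class WhatIsAugmented(Enum):
            DEGREES_OF_FREEDOM = "extra degrees of freedom (e.g., supernumerary limbs)"
            POWER = "force or power"
            WORKSPACE = "reachable workspace"

        class HowAchieved(Enum):
            PHYSIOLOGICAL_SIGNALS = "physiological control signals (e.g., muscle or neural activity)"
            SENSORY_FEEDBACK = "augmented sensory feedback"
            LEARNED_CONTROL = "learning-based control sharing"

        @dataclass
        class AugmentationScheme:
            what: WhatIsAugmented
            how: tuple[HowAchieved, ...]
            example: str

        supernumerary_finger = AugmentationScheme(
            what=WhatIsAugmented.DEGREES_OF_FREEDOM,
            how=(HowAchieved.PHYSIOLOGICAL_SIGNALS, HowAchieved.SENSORY_FEEDBACK),
            example="robotic sixth finger driven by residual muscle activity",
        )
        print(supernumerary_finger.what.name, [h.name for h in supernumerary_finger.how])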

    Direct Manipulation Of Virtual Objects

    Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities – proprioception, haptics, and audition – and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by VE type and object interaction technique. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified.

    An experimental test bed was designed to provide the highest attainable accuracy for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. The experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum – the Immersive Virtual Environment (IVE) and the Reality Environment (RE). This validated, linked, and extended several previous research findings using one common test bed and participant pool, and the results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments, providing rich data and key insights into the effect of each type of environment and each modality on the accuracy and timeliness of virtual object interaction.

    The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual, and mean task completion time was less than one second. Key to the high accuracy and quick task performance observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance already near optimal when accurate visual cues were presented, adding proprioceptive, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.
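    The full-factorial experiments above report per-condition mean error and completion time. The sketch below shows how such a summary might be computed; the trial records, condition labels and column layout are hypothetical, not the dissertation's actual data.

        # Illustrative sketch (hypothetical data and labels): per-condition mean placement
        # error and completion time for a full-factorial environment x modality design.
        from statistics import mean
        from collections import defaultdict

        # Each trial: (environment, modality, error_mm, completion_time_s)
        trials = [
            ("RE",  "visual only",     2.9, 0.81),
            ("IVE", "visual only",     3.6, 0.94),
            ("MR",  "visual only",     3.2, 0.88),
            ("MR",  "visual + haptic", 3.1, 0.86),
        ]

        by_condition = defaultdict(list)
        for env, modality, error_mm, time_s in trials:
            by_condition[(env, modality)].append((error_mm, time_s))

        for (env, modality), rows in by_condition.items():
            errors = [r[0] for r in rows]
            times = [r[1] for r in rows]
            print(f"{env:>3} | {modality:<15} | mean error {mean(errors):.1f} mm | "
                  f"mean time {mean(times):.2f} s")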