MetaSpace II: Object and full-body tracking for interaction and navigation in social VR
MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple
users can not only see and hear but also interact with each other, grasp and
manipulate objects, walk around in space, and get tactile feedback. MS2 allows
walking in physical space by tracking each user's skeleton in real-time and
allows users to feel objects by employing passive haptics, i.e., when users touch or
manipulate an object in the virtual world, they simultaneously also touch or
manipulate a corresponding object in the physical world. To enable these
elements in VR, MS2 creates a correspondence in spatial layout and object
placement by building the virtual world on top of a 3D scan of the real world.
Through the association between the real and virtual world, users are able to
walk freely while wearing a head-mounted device, avoid obstacles like walls and
furniture, and interact with people and objects. Most current virtual reality
(VR) environments are designed for a single user experience where interactions
with virtual objects are mediated by hand-held input devices or hand gestures.
Additionally, users are only shown a representation of their hands in VR
floating in front of the camera as seen from a first-person perspective. We
believe that representing each user as a full-body avatar that is controlled by
the natural movements of the person in the real world (see Figure 1d) can greatly
enhance believability and a user's sense of immersion in VR.
Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
Evaluating rules of interaction for object manipulation in cluttered virtual environments
A set of rules is presented for the design of interfaces that allow virtual objects to be manipulated in 3D virtual environments (VEs). The rules differ from other interaction techniques because they focus on the problems of manipulating objects in cluttered spaces rather than open spaces. Two experiments are described that were used to evaluate the effect of different interaction rules on participants' performance when they performed a task known as "the piano mover's problem." This task involved participants in moving a virtual human through parts of a virtual building while simultaneously manipulating a large virtual object held in the virtual human's hands, resembling the simulation of manual materials handling in a VE for ergonomic design. Throughout, participants viewed the VE on a large monitor, using an "over-the-shoulder" perspective. In the most cluttered VEs, the time that participants took to complete the task varied by up to 76% with different combinations of rules, thus indicating the need for flexible forms of interaction in such environments.
The benefits of using a walking interface to navigate virtual environments
Navigation is the most common interactive task performed in three-dimensional virtual environments (VEs), but it is also a task that users often find difficult. We investigated how body-based information about the translational and rotational components of movement helped participants to perform a navigational search task (finding targets hidden inside boxes in a room-sized space). When participants physically walked around the VE while viewing it on a head-mounted display (HMD), they performed 90% of trials perfectly, comparable to participants who had performed an equivalent task in the real world during a previous study. By contrast, participants performed less than 50% of trials perfectly if they used a tethered HMD (moving by physically turning but pressing a button to translate) or a desktop display (no body-based information). This is the most complex navigational task in which a real-world level of performance has been achieved in a VE. Behavioral data indicate that both translational and rotational body-based information are required to accurately update one's position during navigation, and participants who walked tended to avoid obstacles, even though collision detection was not implemented and no feedback was provided. A walking interface would bring immediate benefits to a number of VE applications.
Regulating stepping during fixed-speed and self-paced treadmill walking
Background: Treadmill walking should closely simulate overground walking for research validation and optimal skill transfer. Traditional fixed-speed treadmill (FS) walking may not simulate natural walking because of the fixed belt speed and lack of visual cues. Self-paced (SP) treadmill walking, especially feedback-controlled SP treadmill walking, enables close-to-real-time belt speed changes in response to users' speed changes. Different sensitivity levels of the SP feedback determine how fast the treadmill responds to the user's speed changes. Few studies have examined the differences between FS and SP treadmill walking, or the difference between sensitivity levels of SP treadmills, and their methods were limited because they averaged kinematic and kinetic parameters and failed to directly examine treadmill and subject speed data. This study compared FS with two SP modes, using the variation of treadmill speed and user speed as dependent variables. Method: Thirteen young healthy subjects participated. Subjects walked on a motorized split-belt treadmill under FS, high-sensitivity SP (SP-H), and low-sensitivity SP (SP-L) conditions at normal walking speed. Root mean square error (RMSE) values for the subject's pelvis global speed (Vpg), pelvis speed with respect to the treadmill (Vpt), and treadmill speed (Vtg) were computed for all trials. Results: Significant condition effects were found between FS and the two SP modes in all RMSE values (p < 0.001). The two sensitivity levels of SP had similar speed patterns. Large subject × condition interaction effects were found for all variables (p < 0.001). Only small subject effects were found. Conclusions: The results of the study reveal different walking patterns between FS and SP. However, the two sensitivity levels did not differ much. More habituation time may be needed for subjects to learn to optimally respond to the SP algorithm.
Future work should include training subjects for more natural responses, applying a feed-forward algorithm, and testing the effect of optic flow on FS and SP speed variation.
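The RMSE measure used above can be computed directly from paired speed traces. A minimal sketch, assuming uniformly sampled speed series in m/s (the function name and the example values are illustrative, not taken from the study):

```python
import math

def rmse(series_a, series_b):
    """Root mean square error between two equal-length speed traces (m/s).

    For the study's variables this would pair, e.g., pelvis global speed
    (Vpg) against treadmill belt speed (Vtg), sample by sample.
    """
    if len(series_a) != len(series_b) or not series_a:
        raise ValueError("traces must be non-empty and equal length")
    return math.sqrt(
        sum((a - b) ** 2 for a, b in zip(series_a, series_b)) / len(series_a)
    )

# Hypothetical traces: a subject's pelvis speed vs. a fixed-speed belt.
vpg = [1.30, 1.35, 1.28, 1.40]   # assumed pelvis global speed samples
vtg = [1.34, 1.34, 1.34, 1.34]   # assumed constant belt speed (FS mode)
error = rmse(vpg, vtg)           # larger values = looser speed matching
```

Under this formulation, a self-paced belt that tracks the pelvis well would drive the Vpg-vs-Vtg RMSE toward zero, which is why RMSE separates the FS and SP conditions.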
Assessing the feasibility of online SSVEP decoding in human walking using a consumer EEG headset.
Background: Bridging the gap between laboratory brain-computer interface (BCI) demonstrations and real-life applications has gained increasing attention in translational neuroscience. An urgent need is to explore the feasibility of using a low-cost, easy-to-use electroencephalogram (EEG) headset for monitoring individuals' EEG signals in their natural head/body positions and movements. This study aimed to assess the feasibility of using a consumer-level EEG headset to realize an online steady-state visual-evoked potential (SSVEP)-based BCI during human walking. Methods: This study adopted a 14-channel Emotiv EEG headset to implement a four-target online SSVEP decoding system, and included treadmill walking at speeds of 0.45, 0.89, and 1.34 meters per second (m/s) to initiate walking locomotion. Seventeen participants were instructed to perform the online BCI tasks while standing or walking on the treadmill. To maintain a constant viewing distance to the visual targets, participants held the hand-grip of the treadmill during the experiment. Along with online BCI performance, the concurrent SSVEP signals were recorded for offline assessment. Results: Despite walking-related attenuation of SSVEPs, the online BCI obtained an information transfer rate (ITR) over 12 bits/min during slow walking (below 0.89 m/s). Conclusions: SSVEP-based BCI systems are deployable to users walking on a treadmill, which mimics natural walking rather than highly controlled laboratory settings. This study considerably promotes the use of a consumer-level EEG headset toward real-life BCI applications.
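The ITR figure reported above is conventionally computed with the Wolpaw formula, which combines the number of targets, classification accuracy, and trial duration. A minimal sketch (the function name and the example accuracy/trial-length values are assumptions, not figures from the paper):

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_secs):
    """Wolpaw information transfer rate for an N-class BCI.

    Assumes uniform target priors and symmetric errors, the standard
    simplifications behind the Wolpaw ITR formula.
    """
    if n_targets < 2 or trial_secs <= 0 or not 0.0 <= accuracy <= 1.0:
        raise ValueError("invalid BCI parameters")
    p = accuracy
    if p <= 1.0 / n_targets:        # at or below chance: no information
        bits_per_trial = 0.0
    elif p == 1.0:                  # perfect accuracy: log2(N) bits/trial
        bits_per_trial = math.log2(n_targets)
    else:
        bits_per_trial = (math.log2(n_targets)
                          + p * math.log2(p)
                          + (1 - p) * math.log2((1 - p) / (n_targets - 1)))
    return bits_per_trial * 60.0 / trial_secs

# Hypothetical example: a 4-target SSVEP speller with perfect selection
# accuracy and 10-second trials yields 12 bits/min, matching the order
# of the ITR reported for slow walking.
rate = itr_bits_per_min(4, 1.0, 10.0)
```

This makes concrete why walking-related SSVEP attenuation matters: any drop in per-trial accuracy, or any lengthening of the trial needed to reach a decision, directly lowers the ITR.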