Combining multi-sensory stimuli in virtual worlds - a progress report
Fröhlich J, Wachsmuth I. Combining multi-sensory stimuli in virtual worlds - a progress report. In: Shumaker R, ed. Virtual, Augmented and Mixed Reality: Designing and Developing Augmented and Virtual Environments. Lecture Notes in Computer Science, vol 8525. Cham: Springer International Publishing; 2014: 44-54.

In order to take a significant step towards more realistic virtual experiences, we created a multi-sensory stimulus display for a CAVE-like environment. It comprises graphics, sound, tactile feedback, wind, and warmth. In the present report we discuss the possibilities and constraints tied to such an enhancement. Using a multi-modal display properly requires many considerations, including safety requirements, hardware devices, and software integration. For each stimulus, different options are reviewed with regard to their assets and drawbacks. Finally, the resulting setup realized in our lab is described, to our knowledge one of the most comprehensive such systems. Technical evaluations as well as user studies accompanied the development and pointed to both necessities and opportunities.
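The coordination problem the abstract hints at in "software integration" - driving graphics, sound, wind, and warmth from one event stream even though the actuators respond at very different speeds - can be illustrated with a minimal sketch. The device names and latency figures below are hypothetical; the paper does not publish its software architecture:

```python
# Minimal sketch of multi-modal cue scheduling with per-device latency
# compensation. Device names and latency values are illustrative assumptions,
# not taken from the paper.
import heapq

# Assumed actuation latencies in seconds: wind fans and heat lamps respond
# far more slowly than the audio or graphics pipeline.
DEVICE_LATENCY = {"graphics": 0.016, "audio": 0.005, "tactile": 0.020,
                  "wind": 0.500, "warmth": 2.000}

class CueScheduler:
    """Orders per-modality commands so all stimuli peak at the same moment."""
    def __init__(self):
        self._queue = []  # min-heap of (fire_time, modality, payload)

    def schedule(self, event_time, cues):
        # Start slow devices early so their effect coincides with fast ones.
        for modality, payload in cues.items():
            fire_time = event_time - DEVICE_LATENCY[modality]
            heapq.heappush(self._queue, (fire_time, modality, payload))

    def due(self, now):
        """Pop every command whose latency-compensated time has arrived."""
        out = []
        while self._queue and self._queue[0][0] <= now:
            out.append(heapq.heappop(self._queue)[1:])
        return out

# Usage: an explosion at t = 10 s should be seen, heard, and felt together.
sched = CueScheduler()
sched.schedule(10.0, {"graphics": "flash", "audio": "boom",
                      "wind": "gust", "warmth": "pulse"})
print(sched.due(9.5))  # warmth and wind commands are already due
```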
MetaSpace II: Object and full-body tracking for interaction and navigation in social VR
MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time, and allows users to feel by employing passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual world, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures, and users are only shown a representation of their hands floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar, controlled by the natural movements of the person in the real world (see Figure 1d), can greatly enhance believability and a user's sense of immersion in VR.

Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
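The core passive-haptics idea - each virtual object inherits the pose of its scanned physical counterpart, so touching one means touching the other - can be sketched as follows. The class name and the 2 cm tolerance are illustrative assumptions; the paper does not publish its registration code:

```python
# Sketch of passive-haptics registration: a virtual object is initialized at
# the pose recovered from the 3D room scan, and a drift check flags objects
# whose virtual pose no longer matches the tracked physical one.
# Names and the tolerance value are illustrative assumptions.
import numpy as np

class PassiveHapticProxy:
    def __init__(self, name, scanned_position):
        self.name = name
        # Virtual pose comes directly from the scan, so the virtual and
        # physical objects coincide in the shared coordinate frame.
        self.virtual_position = np.asarray(scanned_position, dtype=float)

    def registration_error(self, tracked_physical_position):
        """Distance between where VR draws the object and where it really is."""
        return float(np.linalg.norm(
            self.virtual_position - np.asarray(tracked_physical_position)))

TOLERANCE_M = 0.02  # assumed: re-register when drift exceeds 2 cm

table = PassiveHapticProxy("table", scanned_position=[1.20, 0.75, -0.40])
drift = table.registration_error([1.21, 0.75, -0.41])
if drift > TOLERANCE_M:
    print(f"re-scan needed for {table.name}: drift {drift:.3f} m")
else:
    print(f"{table.name} registered within tolerance ({drift:.3f} m)")
```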
Sensory Communication
Contains table of contents for Section 2, an introduction, and reports on fourteen research projects.

Funding: National Institutes of Health Grant RO1 DC00117; National Institutes of Health Grant RO1 DC02032; National Institutes of Health/National Institute on Deafness and Other Communication Disorders Grant R01 DC00126; National Institutes of Health Grant R01 DC00270; National Institutes of Health Contract N01 DC52107; U.S. Navy - Office of Naval Research/Naval Air Warfare Center Contract N61339-95-K-0014; U.S. Navy - Office of Naval Research/Naval Air Warfare Center Contract N61339-96-K-0003; U.S. Navy - Office of Naval Research Grant N00014-96-1-0379; U.S. Air Force - Office of Scientific Research Grant F49620-95-1-0176; U.S. Air Force - Office of Scientific Research Grant F49620-96-1-0202; U.S. Navy - Office of Naval Research Subcontract 40167; U.S. Navy - Office of Naval Research/Naval Air Warfare Center Contract N61339-96-K-0002; National Institutes of Health Grant R01-NS33778; U.S. Navy - Office of Naval Research Grant N00014-92-J-184
Virtual acoustics displays
This report describes the real-time acoustic display capabilities developed for the Virtual Environment Workstation (VIEW) Project at NASA Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.
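Localizing a cue over headphones rests on interaural differences: a source off to one side arrives slightly earlier and louder at the nearer ear. A minimal sketch using the textbook Woodworth spherical-head approximation, not the VIEW system's actual (HRTF-based) rendering:

```python
# Minimal spatial-audio sketch: interaural time difference (ITD) and a crude
# interaural level difference for a source at a given azimuth. This is the
# standard Woodworth model, assumed here purely for illustration.
import math

HEAD_RADIUS_M = 0.0875   # average adult head radius
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def itd_seconds(azimuth_deg):
    """Woodworth ITD: (r / c) * (theta + sin(theta)), theta in radians.

    azimuth_deg: 0 = straight ahead, +90 = directly to the right.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

def ild_gains(azimuth_deg):
    """Constant-power pan as a stand-in for frequency-dependent ILD."""
    theta = math.radians(azimuth_deg)
    left = math.cos((theta + math.pi / 2) / 2)
    right = math.sin((theta + math.pi / 2) / 2)
    return left, right

# A source 90 degrees to the right arrives ~0.66 ms earlier at the right ear.
print(f"ITD at 90 deg: {itd_seconds(90) * 1e3:.2f} ms")
print("gains at 90 deg (L, R): %.2f, %.2f" % ild_gains(90))
```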
Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future
Clark offers a powerful description of the brain as a prediction machine, an idea that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
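The mechanism this summary names - feedback carrying predictions downward while only the residual prediction error propagates upward to update the internal model - can be shown with a minimal sketch. The one-level linear model and learning rate are illustrative assumptions, not Clark's formulation:

```python
# Minimal predictive-coding sketch: a one-level predictor updates an internal
# estimate from prediction errors, so top-down feedback comes to anticipate
# the expected input. Model form and learning rate are assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_signal = 2.0        # the regularity hidden in the sensory stream
estimate = 0.0           # the internal model's current belief
LEARNING_RATE = 0.1

for step in range(200):
    sensory_input = true_signal + rng.normal(scale=0.5)  # noisy observation
    prediction = estimate                # top-down feedback: expected input
    error = sensory_input - prediction   # bottom-up residual (prediction error)
    estimate += LEARNING_RATE * error    # update belief to reduce future error

print(f"belief after 200 steps: {estimate:.2f} (true value {true_signal})")
```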
A Human-Centric Metaverse Enabled by Brain-Computer Interface: A Survey
The growing interest in the Metaverse has generated momentum for members of academia and industry to innovate toward realizing the Metaverse world. The Metaverse is a unique, continuous, and shared virtual world where humans embody a digital form within an online platform. Through a digital avatar, Metaverse users should have a perceptual presence within the environment and can interact with and control the virtual world around them. Thus, a human-centric design is a crucial element of the Metaverse. The human users are not only the central entity but also the source of multi-sensory data that can be used to enrich the Metaverse ecosystem. In this survey, we study the potential applications of Brain-Computer Interface (BCI) technologies that can enhance the experience of Metaverse users. By directly communicating with the human brain, the most complex organ in the human body, BCI technologies hold the potential for the most intuitive human-machine system, operating at the speed of thought. Through this neural pathway, BCI technologies can enable various innovative applications for the Metaverse, such as user cognitive state monitoring, digital avatar control, virtual interactions, and imagined speech communication. This survey first outlines the fundamental background of the Metaverse and BCI technologies. We then discuss current challenges of the Metaverse that could potentially be addressed by BCI, such as motion sickness when users experience virtual environments, or negative emotional states of users in immersive virtual applications. After that, we propose and discuss a new research direction called the Human Digital Twin, in which digital twins can create an intelligent and interactive avatar from the user's brain signals. We also present the challenges and potential solutions in synchronizing and communicating between virtual and physical entities in the Metaverse.
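A typical route from raw brain signals to an avatar command - band-pass filtering, band-power feature extraction, classification - can be sketched as follows. The band edges, window length, threshold, and command labels are illustrative assumptions; the survey reviews many concrete pipelines rather than prescribing this one:

```python
# Sketch of a minimal EEG-to-command pipeline: band-pass filter a window of
# samples, extract band-power features, and map them to an avatar command.
# All numeric choices and names here are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate in Hz (typical of consumer EEG headsets)

def band_power(window, low_hz, high_hz):
    """Mean squared amplitude of the signal within one frequency band."""
    b, a = butter(4, [low_hz / (FS / 2), high_hz / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, window)
    return float(np.mean(filtered ** 2))

def classify(window):
    """Toy rule: strong mu-band (8-12 Hz) power means the user is at rest."""
    mu = band_power(window, 8.0, 12.0)     # mu rhythm over motor cortex
    beta = band_power(window, 18.0, 26.0)  # beta band
    # Motor imagery suppresses mu power (event-related desynchronization),
    # so a low mu/beta ratio is read here as intent to move the avatar.
    return "rest" if mu / (beta + 1e-12) > 1.0 else "move_avatar"

# Synthetic 2-second window dominated by a 10 Hz rhythm: classified as rest.
t = np.arange(2 * FS) / FS
window = np.sin(2 * np.pi * 10 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(classify(window))  # -> "rest"
```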
Multimodality in VR: A survey
Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR, and its role and benefits in user experience, together with different applications that leverage multimodality across many disciplines. These works encompass several fields of research and demonstrate that multimodality plays a fundamental role in VR: enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.
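How users integrate multimodal sensory information into a "unified perception" is commonly modeled as reliability-weighted cue combination: each cue is weighted by its inverse variance. A minimal sketch of that standard maximum-likelihood model follows; the visual and auditory numbers are illustrative only, and the survey reviews this literature rather than prescribing the formula:

```python
# Standard maximum-likelihood cue combination: the fused estimate weights
# each sensory cue by its reliability (inverse variance). Example values
# below are illustrative assumptions.
def fuse(cues):
    """cues: list of (estimate, variance) pairs for one physical quantity."""
    weights = [1.0 / var for _, var in cues]
    total = sum(weights)
    estimate = sum(w * est for (est, _), w in zip(cues, weights)) / total
    variance = 1.0 / total  # fused variance is lower than any single cue's
    return estimate, variance

# Perceived azimuth of an event (degrees): vision is precise, audition is
# noisy, so the fused percept sits close to the visual estimate - the
# classic ventriloquism effect.
visual = (10.0, 1.0)    # estimate 10 deg, variance 1
auditory = (16.0, 4.0)  # estimate 16 deg, variance 4
est, var = fuse([visual, auditory])
print(f"fused estimate: {est:.1f} deg, variance: {var:.2f}")  # 11.2 deg, 0.80
```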