
    Design of an annotation system for taking notes in virtual reality

    Industry uses immersive virtual environments for testing engineering solutions. Annotation systems allow capturing the insights that arise during those virtual reality sessions. However, those annotations remain in the virtual environment: users are required to return to virtual reality to access them. We propose a new annotation system for VR. The design of this system has two important aspects. First, the digital representation of the annotations enables access to them in both the virtual and the physical world. Second, the interaction technique for taking notes in VR is designed to enhance the feeling of bringing the annotations from the physical world into the virtual one and vice versa. We also propose a first implementation of this design.
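    As a rough illustration of the first design aspect, the sketch below stores each note with a scene anchor and exports it as plain text so the same record can be read both inside the VR session and afterwards outside it. All type and field names are illustrative assumptions, not taken from the paper.

```cpp
// Minimal sketch of a world-anchored annotation record that can be
// serialized to plain text, so a note taken in VR remains readable
// on a desktop after the session. Names are illustrative only.
#include <array>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

struct Annotation {
    std::string text;                  // note content entered in VR
    std::array<float, 3> position;     // anchor point in the virtual scene
    std::string author;
    long long timestampMs;             // capture time, for later review

    // Export as a simple tab-separated record usable outside the VR client.
    std::string toRecord() const {
        return author + "\t" + std::to_string(timestampMs) + "\t" +
               std::to_string(position[0]) + "," +
               std::to_string(position[1]) + "," +
               std::to_string(position[2]) + "\t" + text;
    }
};

int main() {
    std::vector<Annotation> notes = {
        {"Check clearance between pipe and bracket", {1.2f, 0.8f, -2.5f},
         "reviewer1", 1700000000000LL}
    };
    std::ofstream out("annotations.tsv");
    for (const auto& n : notes) out << n.toRecord() << "\n";
    std::cout << "Exported " << notes.size() << " annotation(s)\n";
    return 0;
}
```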

    Meetings and Meeting Modeling in Smart Environments

    In this paper we survey our research on smart meeting rooms and its relevance for augmented reality meeting support and for virtual reality generation of meetings, in real time or off-line. The research reported here forms part of the European 5th and 6th framework programme projects Multi-Modal Meeting Manager (M4) and Augmented Multi-party Interaction (AMI). Both projects aim at building a smart meeting environment that is able to collect multimodal captures of the activities and discussions in a meeting room, with the aim of using this information as input to tools that allow real-time support, browsing, retrieval and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow generation, e.g. in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also allows us to look at tools that provide support during a meeting and at tools that allow those not able to be physically present to take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants and (semi-)autonomous virtual participants disappear.

    Mixed reality participants in smart meeting rooms and smart home environments

    Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in multimodal interaction with a smart environment the user also displays behavior, not necessarily conscious, that verbally and nonverbally provides the environment with useful input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment itself, smart objects (e.g., mobile robots, smart furniture) and the human participants in it. It is therefore useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, discuss how remote meeting participants can take part in meeting activities, and offer some observations on translating research results to smart home environments.
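    As a rough data-structure sketch of the kind of extended profile the abstract describes, conventional preference data could sit alongside a captured physical representation of the user. The field names below are illustrative assumptions, not taken from the paper.

```cpp
// Minimal sketch: a classic user profile extended with multimodally
// captured physical behaviour. Names and fields are illustrative only.
#include <map>
#include <string>
#include <vector>

struct PoseSample {                      // one multimodal capture frame
    double timestamp;
    std::vector<float> jointPositions;   // e.g. from a body tracker
    std::string gazeTarget;              // object the user is looking at
};

struct UserProfile {
    std::string userId;
    std::map<std::string, std::string> preferences;   // conventional profile data
    std::vector<std::string> interests;
    std::vector<PoseSample> physicalRepresentation;    // nonverbal behaviour over time
};

int main() {
    UserProfile p{"participant-07"};
    (void)p;  // placeholder: a real system would fill this from sensors
    return 0;
}
```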

    Smart Exposition Rooms: The Ambient Intelligence View

    We introduce our research on smart environments, in particular on smart meeting rooms, and investigate how the research approaches used there can be applied in the context of smart museum environments. We distinguish the identification of domain knowledge, its use in sensory perception, and its use in the interpretation and modeling of events and acts in smart environments, and we offer some observations on off-line browsing and on-line remote participation in events in smart environments. It is argued that large-scale European research in the area of ambient intelligence will be an impetus to the research and development of smart galleries and museum spaces.

    Developing serious games for cultural heritage: a state-of-the-art review

    Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning or enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to the state-of-the-art in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality, and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as lying in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods, and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.

    VR-Notes: A Perspective-Based, Multimedia Annotation System in Virtual Reality

    Virtual reality (VR) has begun to emerge as a new technology in the commercial and research space, and many people have begun to use VR technologies in their workflows. To improve user productivity in these scenarios, annotation systems in VR allow users to capture insights and observations during VR sessions. In the digital, 3D world of VR, we can design annotation systems that take advantage of these capabilities to provide a richer annotation viewing experience. I propose VR-Notes, a design for a new annotation system in VR that focuses on capturing the annotator's perspective for both doodle annotations and audio annotations, along with various features that improve the viewing experience of these annotations at a later time. Early results from my experiment showed that the VR-Notes doodle method required 53%, 44%, and 51% less movement and 42%, 41%, and 45% less rotation (for the head, left controller, and right controller, respectively) compared to a popular 3D freehand drawing method. Additionally, users preferred and scored the VR-Notes doodle method higher than the freehand drawing method.
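    As a rough sketch of what "capturing the annotator's perspective" could mean in practice, each doodle or audio note might store the head pose at capture time so a later viewer can be returned to that viewpoint. The structure below is an illustrative assumption, not the paper's actual implementation.

```cpp
// Minimal sketch of a perspective-based annotation record: each note keeps
// the annotator's head pose so the viewpoint can be restored on playback.
// All names are illustrative only.
#include <array>
#include <string>
#include <vector>

struct Pose {
    std::array<float, 3> position;     // head position when the note was taken
    std::array<float, 4> orientation;  // quaternion (x, y, z, w)
};

struct PerspectiveAnnotation {
    Pose viewpoint;                           // restored when the note is reviewed
    std::vector<std::array<float, 2>> doodle; // 2D strokes drawn from that viewpoint
    std::string audioFile;                    // optional recorded voice note
};

// On playback, a viewer would move the camera to `viewpoint` before
// overlaying the doodle, instead of asking the reviewer to search the
// scene for the annotated spot.
int main() {
    PerspectiveAnnotation note{{{0, 1.7f, 0}, {0, 0, 0, 1}},
                               {{0.1f, 0.2f}},
                               "note01.wav"};
    (void)note;
    return 0;
}
```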

    Enabling collaboration in virtual reality navigators

    In this paper we characterize a feature superset for Collaborative Virtual Reality Environments (CVRE) and derive a component framework to transform stand-alone VR navigators into full-fledged multithreaded collaborative environments. The contributions of our approach rely on a cost-effective and extensible technique for loading software components into separate POSIX threads for rendering, user interaction and network communications, and on adding a top layer for managing session collaboration. The framework recasts a VR navigator under a distributed peer-to-peer topology for scene and object sharing, using callback hooks for broadcasting remote events and multi-camera perspective sharing with avatar interaction. We validate the framework by applying it to our own ALICE VR Navigator. Experimental results show that our approach has good performance in the collaborative inspection of complex models.
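    As a rough illustration of the thread layout the abstract describes, the sketch below starts separate POSIX threads for rendering, user interaction and network communication, with a top-level loop standing in for the session-management layer. The loop bodies are placeholders; the structure is an assumption about the general pattern, not the framework's actual code.

```cpp
// Minimal sketch: one POSIX thread each for rendering, interaction and
// networking, shut down by a simple session-management layer on top.
#include <pthread.h>
#include <unistd.h>
#include <atomic>
#include <cstdio>

static std::atomic<bool> running{true};

void* renderLoop(void*)      { while (running) { /* draw shared scene */      usleep(16000); } return nullptr; }
void* interactionLoop(void*) { while (running) { /* poll input devices */     usleep(10000); } return nullptr; }
void* networkLoop(void*)     { while (running) { /* broadcast remote events */ usleep(20000); } return nullptr; }

int main() {
    pthread_t render, input, net;
    pthread_create(&render, nullptr, renderLoop, nullptr);
    pthread_create(&input,  nullptr, interactionLoop, nullptr);
    pthread_create(&net,    nullptr, networkLoop, nullptr);

    sleep(1);            // stand-in for the collaborative session
    running = false;     // session layer signals the loops to stop

    pthread_join(render, nullptr);
    pthread_join(input,  nullptr);
    pthread_join(net,    nullptr);
    std::puts("session closed");
    return 0;
}
```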