6 research outputs found

    Semantic Virtual Environments with Adaptive Multimodal Interfaces

    We present a system for real-time configuration of multimodal interfaces to Virtual Environments (VE). The flexibility of our tool is supported by a semantics-based representation of VEs. Semantic descriptors are used to define interaction devices and the virtual entities under control. We use portable (XML) descriptors to define the I/O channels of a variety of interaction devices. The semantic description of virtual objects turns them into reactive entities with which the user can communicate in multiple ways. This article gives details on the semantics-based representation and presents some examples of multimodal interfaces created with our system, including gesture-based and PDA-based interfaces, among others.
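    The abstract mentions portable XML descriptors that define a device's I/O channels. As a rough illustration only, the sketch below parses a hypothetical descriptor; the element and attribute names (device, channel, direction) are assumptions, not the schema used by the authors.

        # Minimal sketch of parsing a hypothetical XML device descriptor.
        # The element/attribute names (<device>, <channel>, direction) are
        # illustrative assumptions, not the descriptors used in the paper.
        import xml.etree.ElementTree as ET

        DESCRIPTOR = """
        <device name="data-glove">
          <channel id="thumb_flex" direction="input"  type="float"/>
          <channel id="index_flex" direction="input"  type="float"/>
          <channel id="vibration"  direction="output" type="float"/>
        </device>
        """

        def load_device(xml_text: str) -> dict:
            """Return a simple mapping of the device's I/O channels."""
            root = ET.fromstring(xml_text)
            return {
                "name": root.get("name"),
                "inputs":  [c.get("id") for c in root if c.get("direction") == "input"],
                "outputs": [c.get("id") for c in root if c.get("direction") == "output"],
            }

        if __name__ == "__main__":
            print(load_device(DESCRIPTOR))
            # -> {'name': 'data-glove', 'inputs': ['thumb_flex', 'index_flex'], 'outputs': ['vibration']}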

    The “Caddie Paradigm”: A Free-Locomotion Interface for Teleoperation

    This paper presents the “caddie paradigm”, a new interface for the teleoperation of mobile robots. We present a prototype based on the use of a treadmill as a free-locomotion interface. The “caddie paradigm” allows a real robot to be controlled by “pushing” its virtual representation displayed on a large projection screen. The system lets the operator walk as if right behind the robot, which is situated in a remote location. A discussion of the advantages of the interface is presented. The implemented prototype shows the feasibility of using this interaction paradigm to control robots remotely.
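    As a rough illustration of the interaction described above, the sketch below maps a treadmill speed and the lateral offset of the pushed virtual handle to robot velocity commands; all names, gains and limits are assumed for illustration and are not taken from the paper.

        # Sketch of the kind of mapping a "caddie"-style interface could use:
        # treadmill speed and lateral handle offset are turned into linear and
        # angular velocity commands. Constants are illustrative assumptions.

        MAX_LINEAR = 1.0    # m/s, assumed robot limit
        MAX_ANGULAR = 0.8   # rad/s, assumed robot limit
        HANDLE_GAIN = 2.0   # rad/s per metre of lateral handle offset

        def caddie_command(treadmill_speed: float, handle_offset: float) -> tuple[float, float]:
            """Map treadmill speed (m/s) and lateral handle offset (m) to (v, w)."""
            v = max(0.0, min(treadmill_speed, MAX_LINEAR))   # walk forward only
            w = max(-MAX_ANGULAR, min(handle_offset * HANDLE_GAIN, MAX_ANGULAR))
            return v, w

        # Example: walking at 0.6 m/s while pushing the handle 0.2 m to one side
        print(caddie_command(0.6, 0.2))   # -> (0.6, 0.4)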

    Advanced Mixed Reality Technologies for Surveillance and Risk Prevention Applications

    We present a system that exploits advanced Mixed and Virtual Reality technologies to create a surveillance and security system that could also be extended to define emergency prevention plans in crowded environments. Surveillance cameras are carried by a mini blimp which is tele-operated using an innovative Virtual Reality interface with haptic feedback. An interactive control room (CAVE) receives multiple video streams from airborne and fixed cameras. Eye-tracking technology turns the user’s gaze into the main interaction mechanism; the user in charge can examine, zoom and select specific views by looking at them. Video streams selected at the control room can be redirected to agents equipped with a PDA. On-field agents can examine the video sent by the control center and locate the actual position of the airborne cameras on a GPS-driven map. The aerial video would be augmented with real-time 3D crowds to create more realistic risk and emergency prevention plans. The prototype we present shows the added value of integrating AR/VR technologies into a complex application and opens up several research directions in the areas of tele-operation, multimodal interfaces, simulation, and risk and emergency prevention planning.
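    To make the gaze-based selection mechanism concrete, the following sketch picks the video stream whose on-screen tile contains the current gaze point; the tile layout, stream names and normalised coordinates are illustrative assumptions, not details of the actual system.

        # Minimal sketch of gaze-driven stream selection: the stream whose
        # on-screen tile contains the gaze point is selected. Names and
        # coordinates are assumptions for illustration.
        from dataclasses import dataclass

        @dataclass
        class Tile:
            stream_id: str
            x: float   # left edge, normalised screen coords (0..1)
            y: float   # top edge
            w: float
            h: float

            def contains(self, gx: float, gy: float) -> bool:
                return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h

        def select_stream(tiles: list[Tile], gaze: tuple[float, float]) -> str | None:
            """Return the stream under the user's gaze, if any."""
            gx, gy = gaze
            for tile in tiles:
                if tile.contains(gx, gy):
                    return tile.stream_id
            return None

        tiles = [Tile("blimp-cam", 0.0, 0.0, 0.5, 0.5), Tile("fixed-cam-3", 0.5, 0.0, 0.5, 0.5)]
        print(select_stream(tiles, (0.7, 0.2)))   # -> "fixed-cam-3"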

    An Ontology of Virtual Humans: incorporating semantics into human shapes

    Most of the efforts concerning graphical representations of humans (Virtual Humans) have focused on synthesizing geometry for static or animated shapes. The next step is to consider a human body not only as a 3D shape, but as an active semantic entity with features, functionalities, interaction skills, etc. In the framework of the AIM@SHAPE Network of Excellence we are currently working on an ontology-based approach to make Virtual Humans more active and understandable both for humans and machines. The ontology for Virtual Humans we are defining will provide the “semantic layer” required to reconstruct, store, retrieve and reuse content and knowledge related to Virtual Humans. The connection between the semantic and the graphical data is achieved through an intermediate layer based on anatomical features extracted from morphological shape analysis. The resulting shape descriptors can be used to derive higher-level descriptors from the raw geometric data. High-level descriptors can then be used to control human models.
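    As a loose illustration of the layered representation described above (raw geometry, anatomical features, high-level descriptors), the sketch below models a virtual human whose semantic entry points to feature regions of a mesh; all class and field names are hypothetical, not taken from the ontology itself.

        # Sketch of the layered idea in the abstract: a semantic entry for a
        # virtual human references anatomical feature descriptors, which in
        # turn reference raw geometry. Names are illustrative assumptions.
        from dataclasses import dataclass, field

        @dataclass
        class AnatomicalFeature:
            name: str              # e.g. "left_elbow"
            vertex_ids: list[int]  # mesh region the feature was extracted from

        @dataclass
        class VirtualHuman:
            identifier: str
            mesh_file: str                                        # raw geometric data
            features: dict[str, AnatomicalFeature] = field(default_factory=dict)
            capabilities: list[str] = field(default_factory=list)  # high-level descriptors

            def can(self, action: str) -> bool:
                return action in self.capabilities

        vh = VirtualHuman("vh_001", "vh_001.obj",
                          features={"left_elbow": AnatomicalFeature("left_elbow", [120, 121, 122])},
                          capabilities=["walk", "grasp"])
        print(vh.can("grasp"))   # -> True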
