    Controlling a camera in a virtual environment.

    This paper presents an original solution to the camera control problem in a virtual environment. Our objective is to present a general framework that allows the automatic control of a camera in a dynamic environment. The proposed method is based on the image-based control, or visual servoing, approach: it consists of positioning a camera according to the information perceived in the image, which makes it a very intuitive approach to animation. To be able to react automatically to modifications of the environment, we also consider the introduction of constraints into the control. This approach is thus adapted to highly reactive contexts (virtual reality, video games). Numerous examples dealing with classic problems in animation are considered within this framework and presented in this paper.
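The abstract names the technique but not the control law. A minimal sketch of a classic image-based visual servoing step (the standard law v = -λ L⁺ (s - s*)), with the interaction matrix of a single normalized image point; the numeric values are invented for illustration:

```python
import numpy as np

def visual_servo_step(s, s_star, L, gain=0.5):
    """One iteration of image-based visual servoing.

    s      : current image features (e.g. 2D point coordinates), shape (k,)
    s_star : desired image features, shape (k,)
    L      : interaction (image Jacobian) matrix, shape (k, 6)
    Returns a 6-DOF camera velocity screw (vx, vy, vz, wx, wy, wz).
    """
    error = s - s_star
    # Classic control law: v = -gain * pinv(L) @ e drives the feature
    # error exponentially toward zero.
    return -gain * np.linalg.pinv(L) @ error

# Toy example: one image point (x, y) observed at depth Z, with the
# canonical interaction matrix for a normalized perspective camera.
x, y, Z = 0.2, -0.1, 2.0
L = np.array([
    [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
    [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
])
v = visual_servo_step(np.array([x, y]), np.array([0.0, 0.0]), L)
```

Integrating the feature motion `s + L @ v * dt` for a small time step shrinks the feature error, which is the exponential-decay behaviour the control law is designed for.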

    A motion control method for a differential drive robot based on human walking for immersive telepresence

    Abstract. This thesis introduces an interface for controlling Differential Drive Robots (DDRs) for telepresence applications. Our goal is to enhance the immersive experience while reducing user discomfort when using Head Mounted Displays (HMDs) and body trackers. The robot is equipped with a 360° camera that captures the Robot Environment (RE). Users wear an HMD and use body trackers to navigate within a Local Environment (LE). Through a live video stream from the robot-mounted camera, users perceive the RE within a virtual sphere known as the Virtual Environment (VE). A proportional controller was employed to facilitate control of the robot, enabling it to replicate the movements of the user. The proposed method uses a chest tracker to control the telepresence robot and focuses on minimizing vection and rotations induced by the robot’s motion by modifying the VE, for example by rotating and translating it. Experimental results demonstrate the accuracy of the robot in reaching target positions when controlled through the body-tracker interface. The results also reveal an optimal VE size that effectively reduces VR sickness and enhances the sense of presence.
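The abstract mentions a proportional controller but not its form. A sketch of a standard proportional go-to-goal law for a differential drive robot, assuming the tracked chest position has already been mapped into the robot's world frame (the gains and the function name are invented for illustration):

```python
import math

def ddr_proportional_control(robot_pose, target, k_lin=0.8, k_ang=1.5):
    """Proportional controller driving a differential drive robot
    toward a target position (e.g. the user's tracked chest position
    mapped from the local environment into the robot's frame).

    robot_pose : (x, y, theta) in metres / radians
    target     : (x, y) goal position
    Returns (v, w): linear and angular velocity commands.
    """
    x, y, theta = robot_pose
    dx, dy = target[0] - x, target[1] - y
    distance = math.hypot(dx, dy)
    # Heading error wrapped to [-pi, pi] so the robot always turns
    # the short way toward the goal.
    heading_error = math.atan2(dy, dx) - theta
    heading_error = math.atan2(math.sin(heading_error),
                               math.cos(heading_error))
    return k_lin * distance, k_ang * heading_error
```

Both commands shrink proportionally as the robot approaches the goal, which gives the smooth deceleration that proportional control is typically chosen for.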

    Design of Participatory Virtual Reality System for visualizing an intelligent adaptive cyberspace

    The concept of 'Virtual Intelligence' is proposed as an intelligent adaptive interaction between the simulated 3-D dynamic environment and the 3-D dynamic virtual image of the participant in the cyberspace created by a virtual reality system. A system design for such interaction is realised utilising only a stereoscopic optical head-mounted LCD display with an ultrasonic head tracker, a pair of gesture-controlled fibre optic gloves and a speech recognition and synthesiser device, which are all connected to a Pentium computer. A 3-D dynamic environment is created by physically-based modelling and rendering in real-time and modification of existing object description files by a fractals-based Morph software. It is supported by an extensive library of audio and video functions, and functions characterising the dynamics of various objects. The multimedia database files so created are retrieved or manipulated by intelligent hypermedia navigation and intelligent integration with existing information. Speech commands control the dynamics of the environment and the corresponding multimedia databases. The concept of a virtual camera developed by Zeltzer as well as Thalmann and Thalmann, as automated by Noma and Okada, can be applied for dynamically relating the orientation and actions of the virtual image of the participant with respect to the simulated environment. Utilising the fibre optic gloves, gesture-based commands are given by the participant for controlling his 3-D virtual image using a gesture language. Optimal estimation methods and dataflow techniques enable synchronisation between the commands of the participant expressed through the gesture language and his 3-D dynamic virtual image. Utilising a framework, developed earlier by the author, for adaptive computational control of distributed multimedia systems, the data access required for the environment as well as the virtual image of the participant can be endowed with adaptive capability.

    Measuring user Quality of Experience in social VR systems

    Virtual Reality (VR) is a computer-generated experience that can simulate physical presence in real or imagined environments [7]. A social VR system is an application that allows multiple users to join a collaborative Virtual Environment (VE), such as a computer-generated 3D scene or a 360-degree natural scene captured by an omnidirectional camera, and communicate with each other, usually by means of visual and audio cues. Each user is represented in the VE as a computer-generated avatar [3] or, in recently proposed systems, with a virtual representation based on live captures [1]. Depending on the system, the user’s virtual representation can also interact with the virtual environment, for example by manipulating virtual objects, controlling the appearance of the VE, or controlling the playout of additional media in the VE. Interest in social Virtual Reality (VR) systems dates back to the late 90s [4, 8] but has recently increased [2, 5, 6] due to the availability of affordable head-mounted displays on the consumer market and to the appearance of new applications, such as Facebook Spaces, YouTube VR, and Hulu VR, which explicitly aim at including social features in existing VR platforms for multimedia delivery. In this talk, we will address the problem of measuring user Quality of Experience (QoE) in social VR systems. We will review the studies that have analysed how different features of a social VR system design, such as avatar appearance and behavioural realism, can affect users’ experience, and propose a comparison of the objective and subjective measures used in the literature to quantify user QoE in social VR. Finally, we will discuss the use case of watching movies together in VR and present the results of one of our recent studies focusing on this scenario, designed and performed in the framework of the European project VRTogether (http://vrtogether.eu). In particular, we show an analysis of the correlation between the objective and subjective measurements collected during our study, to provide guidelines toward the design of a unified methodology to monitor and quantify users’ QoE in social VR systems. The open questions to be addressed in the future in order to achieve this goal are also discussed.
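The abstract does not specify how the objective and subjective measurements are correlated; one common choice is the Pearson coefficient over paired per-participant scores. A minimal sketch, with entirely invented data standing in for, e.g., an objective interaction metric and a subjective presence rating:

```python
import numpy as np

# Hypothetical paired measurements, one pair per participant: an
# objective metric (e.g. seconds of speech activity) and a subjective
# presence score on a 1-5 scale. All values are illustrative.
objective = np.array([34.0, 51.0, 12.0, 47.0, 29.0, 60.0])
subjective = np.array([3.0, 4.0, 2.0, 4.0, 3.0, 5.0])

# Pearson correlation quantifies how well the objective metric tracks
# the subjective rating; values near +/-1 indicate a strong linear link.
r = np.corrcoef(objective, subjective)[0, 1]
```

A strong, consistent correlation across studies is what would justify using the cheaper objective metric as a proxy for subjective QoE, which is the kind of guideline the talk's unified methodology aims at.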

    Interaction and Expressivity in Video Games: Harnessing the Rhetoric of Film

    The film-maker uses the camera and editing creatively, not simply to present the action of the film but also to set up a particular relation between the action and the viewer. In 3D video games with action controlled by the player, the pseudo-camera is usually less creatively controlled and has less effect on the player’s appreciation of and engagement with the game. This paper discusses methods of controlling games through easy and intuitive interfaces, and the use of an automated virtual camera, to increase the appeal of games for users.

    Refining personal and social presence in virtual meetings

    Virtual worlds show promise for conducting meetings and conferences without the need for physical travel. Current experience suggests the major limitation to the more widespread adoption and acceptance of virtual conferences is the failure of existing environments to provide a sense of immersion and engagement, or of ‘being there’. These limitations are largely related to the appearance and control of avatars, and to the absence of means to convey non-verbal cues of facial expression and body language. This paper reports on a study involving the use of a mass-market motion sensor (Kinectℱ) and the mapping of participant action in the real world to avatar behaviour in the virtual world. This is coupled with full-motion video representation of participants’ faces on their avatars to resolve both identity and facial expression issues. The outcomes of a small-group trial meeting based on this technology show a very positive reaction from participants, and the potential for further exploration of these concepts.

    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition software application, a set of gestures to be processed by a scripting application, and a navigation and selection application that is controlled by the gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
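The pipeline described above (recognised gesture → scripting layer → navigation/selection action) can be sketched as a simple dispatch table; the gesture names and actions below are invented for illustration, not taken from the paper:

```python
# Hypothetical mapping from recognised gestures to navigation/selection
# commands, standing in for the scripting layer of the pipeline.
GESTURE_ACTIONS = {
    "swipe_left": "previous_item",
    "swipe_right": "next_item",
    "push": "select",
}

def handle_gesture(gesture, actions=GESTURE_ACTIONS):
    """Map a recognised gesture to a command, returning None for an
    unrecognised gesture (audio feedback could then signal whether
    the gesture was accepted, since there is no GUI to look at)."""
    return actions.get(gesture)
```

Keeping the mapping in data rather than code mirrors the paper's separation of concerns: the vision recogniser and the controlled application stay fixed while the scripting layer redefines what each gesture means.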

    Effects of automation on situation awareness in controlling robot teams

    Declines in situation awareness (SA) often accompany automation. Some of these effects have been characterized as out-of-the-loop behaviour, complacency, and automation bias. Increasing autonomy in multi-robot control might be expected to produce similar declines in operators’ SA. In this paper we review a series of experiments in which automation is introduced into the control of robot teams. Automating path planning in a foraging task improved both target detection and localization, which are closely tied to SA. Timing data, however, suggested small declines in SA for robot location and pose. Automation of image acquisition, by contrast, led to poorer localization. Findings are discussed and alternative explanations involving shifts in strategy are proposed.
    • 

    corecore