43 research outputs found

    Organic shape modeling through haptic devices

    This paper presents a sketching system for 3D organic shape modeling and animation using virtual reality devices. On the hardware side, it is based on the Haptic Workstation™, which conveys force feedback to the user's arms (upper-body limbs), and a head-mounted display that presents the generated 3D images. On the software side, we use implicit surface modeling techniques such as metaballs. Designers feel comfortable with this kind of primitive because of its suitability for creating organic shapes such as virtual humans. The proposed system provides an efficient alternative for advanced 3D shape sketching.
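
    To illustrate the kind of primitive the abstract refers to, the sketch below shows a minimal metaball field function: each ball contributes a falloff term, and the blended surface is the set of points where the summed field reaches a threshold. This is a generic illustration, not the paper's implementation; the falloff formula, ball placement, and threshold are assumptions.

```python
# Minimal metaball field sketch (illustrative only; not the paper's system).
# The implicit surface is the set of points where the summed field equals a threshold.

def metaball_field(point, balls):
    """Sum of inverse-square falloff contributions from each (center, radius) ball."""
    total = 0.0
    for (cx, cy, cz), radius in balls:
        d2 = (point[0] - cx) ** 2 + (point[1] - cy) ** 2 + (point[2] - cz) ** 2
        if d2 > 1e-9:
            total += (radius * radius) / d2
    return total

# Two overlapping metaballs blend smoothly into one organic blob.
balls = [((0.0, 0.0, 0.0), 1.0), ((1.2, 0.0, 0.0), 0.8)]
threshold = 1.0
print(metaball_field((0.6, 0.0, 0.0), balls) >= threshold)  # True: inside the blended surface
```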

    Controlling Virtual Humans Using PDAs

    The new breed of Personal Digital Assistants (PDAs) and mobile phones has enough computing power to display 3D graphics. These new mobile devices (handhelds) also offer interesting communication and interaction possibilities. In this paper we explore potential applications of 3D virtual humans on mobile devices and the use of such handhelds as control interfaces to drive the virtual humans and navigate through their virtual environments.

    Intelligent switch: An algorithm to provide the best third-person perspective in augmented reality

    Augmented reality (AR) environments suffer from a limited workspace. In addition, registration issues are aggravated by the use of a mobile camera worn by the user, which provides a first-person perspective (1PP). Using several fixed cameras reduces the registration issues and, depending on their location, can also enlarge the workspace. In such an extended workspace, it has been shown that a third-person perspective (3PP) is sometimes preferred by the user. Based on these hypotheses, we developed a system using several fixed cameras that provides a 3PP to a user wearing a video see-through HMD. Our system uses an "intelligent switch" to propose the "best view" to the user, i.e. avoiding marker occlusion and taking user displacement into account. In this paper we present the system, its decision algorithm, and a discussion of the results obtained, which appear very promising for the AR domain.
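
    The abstract names a decision algorithm that selects the "best view" while avoiding marker occlusion and accounting for user displacement. The sketch below shows one plausible shape for such a selection loop; the scoring weights, the hysteresis margin, and the camera data fields are assumptions for illustration, not the paper's actual criteria.

```python
# Hedged sketch of a "best view" selection loop, assuming each fixed camera reports
# how many fiducial markers it currently sees and its distance to the tracked user.

def best_camera(cameras, current_id, hysteresis=0.1):
    """cameras: list of dicts with 'id', 'visible_markers', 'distance_to_user'."""
    def score(cam):
        # Favour views that see more markers and keep the user reasonably close.
        return cam["visible_markers"] - 0.05 * cam["distance_to_user"]

    best = max(cameras, key=score)
    current = next((c for c in cameras if c["id"] == current_id), None)
    # Only switch when the new view is clearly better, to avoid flickering viewpoints.
    if current and score(best) - score(current) < hysteresis:
        return current_id
    return best["id"]

cameras = [
    {"id": "cam_A", "visible_markers": 3, "distance_to_user": 2.5},
    {"id": "cam_B", "visible_markers": 5, "distance_to_user": 4.0},
]
print(best_camera(cameras, current_id="cam_A"))  # "cam_B": more visible markers wins
```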

    Wearable Mixed Reality System In Less Than 1 Pound

    We have designed a wearable Mixed Reality (MR) framework that renders game-like 3D scenes in real time on see-through head-mounted displays (see-through HMDs) and localizes the user within a known wireless network area. Our equipment weighs less than 1 pound (0.45 kg). The information visualized on the mobile device can be sent on demand from a remote server and rendered onboard in real time. We present our PDA-based platform as a valid alternative for wearable MR contexts with fewer mobility and encumbrance constraints: our approach eliminates the backpack with a laptop, a GPS antenna, and a heavy HMD usually required in such cases. A discussion of our results and of user experiences with handheld 3D rendering is presented as well.
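
    The abstract mentions localizing the user within a known wireless area. One simple way such coarse localization is often done is to assign the user to the zone of the strongest visible access point; the sketch below shows that idea only as an assumption, since the paper does not detail its localization method, and the access point names and signal values are invented.

```python
# Illustrative sketch of coarse Wi-Fi localization within a known wireless area:
# the user is assigned to the zone of the strongest visible access point.

def locate(scan_results, ap_zones):
    """scan_results: {ap_id: rssi_dbm}; ap_zones: {ap_id: zone_name}."""
    known = {ap: rssi for ap, rssi in scan_results.items() if ap in ap_zones}
    if not known:
        return None
    strongest_ap = max(known, key=known.get)  # higher (less negative) dBm = stronger signal
    return ap_zones[strongest_ap]

ap_zones = {"ap_lobby": "lobby", "ap_lab": "laboratory"}
print(locate({"ap_lobby": -70, "ap_lab": -48}, ap_zones))  # "laboratory"
```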

    A MPEG-4 virtual human animation engine for interactive web based applications

    This paper presents a novel, MPEG-4-compliant animation engine (body player). It was designed to synthesize virtual human full-body animations in interactive multimedia applications for the web. We believe that a full-body player can provide a more expressive and interesting interface than animated faces alone (talking heads). This is one of the first implementations of an MPEG-4 animation engine with deformable models (it uses the MPEG-4 Body Definition Parameters and Deformation Tables). Several potential applications are overviewed. This software tool was developed within the framework of the IST-INTERFACE European project.
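
    To make the idea of a "body player" concrete, the sketch below applies a stream of per-frame joint rotations to a skeleton, in the spirit of MPEG-4 body animation. The joint names, units, and data layout are assumptions for illustration and do not reproduce the actual MPEG-4 parameter set or the engine described in the paper.

```python
# Simplified sketch of how an MPEG-4-style body player might apply a frame of
# joint rotations to a skeleton (illustrative; not the paper's engine).

skeleton = {"l_shoulder": 0.0, "l_elbow": 0.0, "r_shoulder": 0.0, "r_elbow": 0.0}

def apply_frame(skeleton, frame):
    """frame: {joint_name: angle_radians}; joints not in the skeleton are ignored."""
    for joint, angle in frame.items():
        if joint in skeleton:
            skeleton[joint] = angle
    return skeleton

# A two-frame stream: raise the left arm, then bend the elbow.
stream = [{"l_shoulder": 1.2}, {"l_elbow": 0.8}]
for frame in stream:
    apply_frame(skeleton, frame)
print(skeleton)  # {'l_shoulder': 1.2, 'l_elbow': 0.8, 'r_shoulder': 0.0, 'r_elbow': 0.0}
```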

    The Benefits of Third-Person Perspective in Virtual and Augmented Reality?

    Unlike in reality, where you can always see your own limbs, in virtual reality simulations it is sometimes disturbing not to be able to see your own body. This seems to create an issue with the proprioception of users, who do not feel fully integrated in the environment, so providing a view of the user's own body should be beneficial. We propose to let people switch between the first-person and the third-person perspective, as in video games (e.g. GTA). Since gamers prefer the third-person perspective for movement actions and the first-person view for fine operations, we will verify whether this behaviour extends to simulations in augmented and virtual reality.

    Reflex movements for a virtual human: a biology inspired approach

    This paper presents the results of a method for producing autonomous animation of virtual humans. In particular, the proposed methodology focuses on the autonomous synthesis of non-voluntary gestures such as reflexes and subtle movements, which give a noticeable impression of realism and naturalness. The final goal of this technique is to produce virtual humans with a more spontaneous, non-preprogrammed behaviour. For the moment, the technique is applied to the synthesis of reflex movements of the arm in reaction to thermal stimuli. Nevertheless, a general architecture is outlined.
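
    The core idea of a stimulus-triggered, non-voluntary movement can be sketched as a simple threshold rule: when the thermal stimulus exceeds a pain threshold, the reflex overrides the voluntary pose and withdraws the arm. The threshold value and joint targets below are invented for illustration; the paper's biology-inspired model is not reproduced here.

```python
# Hedged sketch of a threshold-triggered withdrawal reflex in reaction to a
# thermal stimulus on the arm (illustrative only; values are assumptions).

PAIN_THRESHOLD_C = 45.0  # assumed skin temperature above which the reflex fires

def reflex_response(skin_temperature_c, arm_pose):
    """Return a new arm pose: withdraw (flex elbow, retract shoulder) when too hot."""
    if skin_temperature_c < PAIN_THRESHOLD_C:
        return arm_pose  # no reflex: voluntary animation stays in control
    withdrawn = dict(arm_pose)
    withdrawn["elbow_flexion"] = max(arm_pose["elbow_flexion"], 1.5)      # pull the forearm back
    withdrawn["shoulder_abduction"] = arm_pose["shoulder_abduction"] - 0.3  # retract the shoulder
    return withdrawn

pose = {"elbow_flexion": 0.2, "shoulder_abduction": 0.5}
print(reflex_response(60.0, pose))  # elbow snaps to 1.5, shoulder retracts
```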