    VRMoViAn - An Immersive Data Annotation Tool for Visual Analysis of Human Interactions in VR

    Understanding human behavior in virtual reality (VR) is a key component for developing intelligent systems to enhance human-focused VR experiences. The ability to annotate human motion data proves to be a very useful way to analyze and understand human behavior. However, due to the complexity and multi-dimensionality of human activity data, it is necessary to develop software that can display the data in a comprehensible way and can support intuitive data annotation for developing machine learning models able to recognize and assist human motions in VR (e.g., remote physical therapy). Although past research has been done to improve VR data visualization, no emphasis has been put on VR data annotation specifically for future machine learning applications. To fill this gap, we have developed a data annotation tool capable of displaying complex VR data in an expressive 3D animated format as well as providing an easily understandable user interface that allows users to annotate and label human activity efficiently. Specifically, it can convert multiple motion data files into a watchable 3D video and effectively demonstrate body motion, including the player's eye tracking in VR, using animations, as well as showcasing hand-object interactions with level-of-detail visualization features. The graphical user interface allows the user to interact with and annotate VR data just as they do with other video playback tools. Our next step is to develop and integrate machine learning-based clustering to automate data annotation. A user study is being planned to evaluate the tool in terms of user-friendliness and effectiveness in assisting with visualizing and analyzing human behavior, along with the ability to easily and accurately annotate real-world datasets.
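    The core of such video-playback-style labelling is an interval annotation over motion-capture frames. A minimal sketch of that idea, assuming an illustrative data model (the class and method names below are not the tool's actual API):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Annotation:
        start_frame: int
        end_frame: int  # inclusive
        label: str

    class AnnotationTrack:
        """Holds labelled frame spans, like clips marked in a video player."""

        def __init__(self):
            self.annotations = []

        def add(self, start, end, label):
            if start > end:
                raise ValueError("start must not exceed end")
            self.annotations.append(Annotation(start, end, label))

        def labels_at(self, frame):
            """Return all labels whose span covers the given frame."""
            return [a.label for a in self.annotations
                    if a.start_frame <= frame <= a.end_frame]

    track = AnnotationTrack()
    track.add(0, 120, "reach")
    track.add(100, 250, "grasp")
    print(track.labels_at(110))  # frames 100-120 carry both labels
    ```

    Overlapping spans are allowed on purpose: real motions (e.g., reaching while starting to grasp) are not mutually exclusive, and downstream machine learning models can treat the labels as multi-label targets.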

    Sketch-based virtual human modelling and animation

    Animated virtual humans created by skilled artists play a remarkable role in today’s public entertainment. However, ordinary users are still treated as audiences due to the lack of appropriate expertise, equipment, and computer skills. We developed a new method and a novel sketching interface, which enable anyone who can draw to “sketch out” 3D virtual humans and animation. We devised a “Stick Figure → Fleshing-out → Skin Mapping” graphical pipeline, which decomposes the complexity of figure drawing and considerably boosts modelling and animation efficiency. We developed a gesture-based method for 3D pose reconstruction from 2D stick figure drawings. We investigated a “Creative Model-based Method”, which performs a human perception process to transfer users’ 2D freehand sketches into 3D human bodies of various body sizes, shapes, and fat distributions. Our current system supports character animation in various forms, including articulated figure animation, 3D mesh model animation, and 2D contour/NPR animation with personalised drawing styles. Moreover, this interface also supports sketch-based crowd animation and 2D storyboarding of 3D multiple-character interactions. A preliminary user study was conducted to support the overall system design. Our system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
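    The three-stage pipeline can be pictured as successive transforms: a 2D stick-figure drawing is lifted to a 3D skeleton, the skeleton is “fleshed out” into a volumetric body, and a skin is mapped onto that body. The sketch below is purely illustrative; the stage logic and data types are placeholder assumptions, not the paper’s implementation:

    ```python
    def reconstruct_pose(stick_figure_2d):
        """Lift 2D joints to 3D; here depth is trivially set to zero."""
        return [(x, y, 0.0) for (x, y) in stick_figure_2d]

    def flesh_out(skeleton_3d, girth=0.1):
        """Attach a simple radius to each joint to suggest body volume."""
        return [{"joint": j, "radius": girth} for j in skeleton_3d]

    def map_skin(body):
        """Wrap each fleshed-out part with a named skin patch."""
        return [f"skin@{part['joint']}" for part in body]

    # A toy three-joint drawing: head, torso, hips.
    drawing = [(0.0, 1.0), (0.0, 0.5), (0.0, 0.0)]
    animation_ready = map_skin(flesh_out(reconstruct_pose(drawing)))
    ```

    The value of staging the pipeline this way is that each step has a narrow contract, so the difficulty of figure drawing is broken into three independently solvable problems.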

    Natural Virtual Reality User Interface to Define Assembly Sequences for Digital Human Models

    Digital human models (DHMs) are virtual representations of human beings. They are used to conduct, among other things, ergonomic assessments in factory layout planning. DHM software tools are challenging to use and thus require a high amount of training for engineers. In this paper, we present a virtual reality (VR) application that enables engineers to work with DHMs easily. Since VR systems with head-mounted displays (HMDs) are less expensive than CAVE systems, HMDs can be integrated more extensively into the product development process. Our application provides a reality-based interface and allows users to conduct an assembly task in VR and thus to manipulate the virtual scene with their real hands. These manipulations are used as input for the DHM to simulate human ergonomics on that basis. To this end, we introduce a software and hardware architecture, the VATS (virtual action tracking system). This paper furthermore presents the results of a user study in which the VATS was compared to the existing WIMP (Windows, Icons, Menus and Pointer) interface. The results show that the VATS enables users to conduct tasks significantly faster.
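    The idea behind this workflow is that real-hand manipulations in VR are recorded as an ordered assembly sequence, which then drives the DHM simulation. A minimal sketch under that assumption (the event fields and names are hypothetical, not the VATS interface):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class AssemblyStep:
        part: str
        target_position: tuple  # (x, y, z) where the hand placed the part

    @dataclass
    class AssemblySequence:
        steps: list = field(default_factory=list)

        def record(self, part, position):
            """Log one hand manipulation observed in the VR scene."""
            self.steps.append(AssemblyStep(part, position))

        def to_dhm_input(self):
            """Serialise the recorded sequence as numbered steps for the DHM."""
            return [(i, s.part, s.target_position)
                    for i, s in enumerate(self.steps, start=1)]

    seq = AssemblySequence()
    seq.record("bracket", (0.2, 1.1, 0.4))
    seq.record("bolt", (0.21, 1.1, 0.4))
    print(seq.to_dhm_input())
    ```

    Recording the sequence rather than scripting it by hand is what removes the WIMP-style menu work: the engineer simply performs the assembly once, and the DHM replays it for the ergonomic assessment.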

    Sketching-out virtual humans: From 2d storyboarding to immediate 3d character animation

    Virtual beings play a remarkable role in today’s public entertainment, while ordinary users are still treated as audiences due to the lack of appropriate expertise, equipment, and computer skills. In this paper, we present a fast and intuitive storyboarding interface, which enables users to sketch out 3D virtual humans, 2D/3D animations, and character intercommunication. We devised an intuitive “stick figure → fleshing-out → skin mapping” graphical animation pipeline, which realises the whole process of key framing, 3D pose reconstruction, virtual human modelling, motion path/timing control, and the final animation synthesis by almost pure 2D sketching. A “creative model-based method” is developed, which emulates a human perception process, to generate 3D human bodies of various sizes, shapes, and fat distributions. Meanwhile, our current system also supports sketch-based crowd animation and the storyboarding of 3D multiple-character intercommunication. This system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.