14 research outputs found

    Populating 3D Cities: a True Challenge

    In this paper, we describe how we can model crowds in real time using dynamic meshes, static meshes and impostors. Techniques to introduce variety in crowds, including colors, shapes, textures, individual animation, individualized path-planning, and simple and complex accessories, are explained. We also present a hybrid architecture to handle the path planning of thousands of pedestrians in real time while ensuring dynamic collision avoidance. Several behavioral aspects are presented, such as gaze control and group behaviour, as well as the specific technique of crowd patches.
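The three crowd representations the abstract names (dynamic meshes, static meshes, impostors) are typically selected per character by distance from the camera. A minimal sketch of such level-of-detail selection follows; the thresholds and names are illustrative assumptions, not values from the paper:

```python
# Hedged sketch of distance-based level-of-detail (LOD) selection for
# crowd rendering. Thresholds and identifiers are illustrative only.

DYNAMIC_MESH, STATIC_MESH, IMPOSTOR = "dynamic", "static", "impostor"

def select_lod(distance_to_camera, near=10.0, far=50.0):
    """Pick a representation: full skeletal mesh up close, baked
    static mesh at mid range, billboard impostor far away."""
    if distance_to_camera < near:
        return DYNAMIC_MESH
    if distance_to_camera < far:
        return STATIC_MESH
    return IMPOSTOR

crowd_distances = [5.0, 25.0, 120.0]
print([select_lod(d) for d in crowd_distances])
# → ['dynamic', 'static', 'impostor']
```

In practice the far characters dominate the count, so pushing most of the crowd into the impostor tier is what makes thousands of pedestrians affordable in real time.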

    Synchronized partial-body motion graphs

    Authors: William W. L. Ng, Clifford S. T. Choy, Daniel P. K. Lun. Refereed conference paper (2010-2011), Version of Record, published.

    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and with other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
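Combining motions "on different body parts", as the abstract describes, is commonly realized as a per-joint weighted blend with a body-part mask. The sketch below illustrates the idea under simplifying assumptions (1-D joint rotations, invented joint names); it is not the paper's actual mechanism:

```python
# Illustrative sketch of per-body-part motion combination: a walk clip
# drives the lower body while a wave clip takes over the arm. Joint
# names and scalar "rotation" values are simplified placeholders.

def blend_pose(pose_a, pose_b, weights):
    """Per-joint linear blend: weight 0.0 keeps pose_a, 1.0 takes pose_b."""
    return {j: (1 - weights.get(j, 0.0)) * pose_a[j]
               + weights.get(j, 0.0) * pose_b[j]
            for j in pose_a}

walk = {"hip": 10.0, "knee": 30.0, "shoulder": 0.0}
wave = {"hip": 10.0, "knee": 30.0, "shoulder": 80.0}
mask = {"shoulder": 1.0}  # the wave clip controls only the arm

print(blend_pose(walk, wave, mask))
# → {'hip': 10.0, 'knee': 30.0, 'shoulder': 80.0}
```

Intermediate weights give a cross-fade, which is also how concatenation of clips is usually smoothed at the transition.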

    Comparing and Evaluating Real Time Character Engines for Virtual Environments

    As animated characters increasingly become vital parts of virtual environments, the engines that drive these characters increasingly become vital parts of virtual environment software. This paper gives an overview of the state of the art in character engines and proposes a taxonomy of the features commonly found in them. This taxonomy can be used as a tool for comparison and evaluation of different engines. To demonstrate this, we use it to compare three engines. The first is Cal3D, the most commonly used open source engine. We also introduce two engines created by the authors, Piavca and HALCA. The paper ends with a brief discussion of some other popular engines.
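A taxonomy used as a comparison tool amounts to scoring each engine against a common feature list. The sketch below shows the shape of such a comparison matrix; the feature names and ticks are invented placeholders, not the paper's actual taxonomy or evaluation results:

```python
# Hedged sketch of a taxonomy-driven engine comparison matrix.
# Features and capabilities below are illustrative, not the paper's data.

FEATURES = ["skeletal animation", "blending", "facial animation", "scripting"]

ENGINES = {
    "Cal3D":  {"skeletal animation", "blending"},
    "Piavca": {"skeletal animation", "blending", "scripting"},
    "HALCA":  {"skeletal animation", "blending", "facial animation"},
}

def compare(engines, features):
    """Build a matrix: engine name -> yes/no per taxonomy feature."""
    return {name: [f in caps for f in features]
            for name, caps in engines.items()}

matrix = compare(ENGINES, FEATURES)
for name in sorted(matrix):
    print(name.ljust(8), "".join("Y" if x else "." for x in matrix[name]))
```

Laying engines out against one fixed feature list is what makes the comparison systematic rather than anecdotal.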

    E-Drama: Facilitating Online Role-play using an AI Actor and Emotionally Expressive Characters.

    This paper describes a multi-user role-playing environment, e-drama, which enables groups of people to converse online in scenario-driven virtual environments. The starting point of this research, edrama, is a 2D graphical environment in which users are represented by static cartoon figures. An application has been developed to integrate the existing edrama tool with several new components that support avatars with emotionally expressive behaviours, rendered in a 3D environment. The functionality includes the extraction of affect from open-ended improvisational text. The results of the affective analysis are then used to: (a) control an automated improvisational AI actor, EMMA (emotion, metaphor and affect), which operates a bit-part character in the improvisation; (b) drive the animations of avatars using the Demeanour framework in the user interface so that they react bodily in ways consistent with the affect they are expressing. Finally, we describe user trials demonstrating that these changes improve the quality of social interaction and users’ sense of presence. Moreover, our system has the potential to enhance normal classroom education for young people with or without learning disabilities by providing 24/7, efficient, personalised social-skill, language and career training via role-play, and by offering automatic monitoring.
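The pipeline the abstract describes, affect extracted from free text and then mapped to expressive avatar animation, can be sketched as below. The keyword lexicon, labels, and animation names are invented stand-ins, not EMMA's or Demeanour's actual interfaces:

```python
# Hedged sketch of text -> affect -> animation, the pipeline the
# abstract describes. The lexicon and animation names are placeholders;
# the paper's affect analysis handles open-ended improvisational text
# and is far more sophisticated than keyword spotting.

AFFECT_LEXICON = {
    "angry": "anger", "furious": "anger",
    "happy": "joy", "wonderful": "joy",
    "sad": "sadness", "sorry": "sadness",
}

ANIMATION_FOR_AFFECT = {
    "anger": "clenched_fists",
    "joy": "open_arms",
    "sadness": "slumped_shoulders",
    "neutral": "idle",
}

def extract_affect(utterance):
    """Naive keyword-spotting stand-in for the affective analysis."""
    for word in utterance.lower().split():
        word = word.strip(".,!?")
        if word in AFFECT_LEXICON:
            return AFFECT_LEXICON[word]
    return "neutral"

def drive_avatar(utterance):
    """Map the detected affect to a body animation for the avatar."""
    return ANIMATION_FOR_AFFECT[extract_affect(utterance)]

print(drive_avatar("I am so happy to see you!"))  # → open_arms
```

The same affect label can feed both consumers in the paper: the AI actor's dialogue choices and the avatar's bodily reaction.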

    A Sensory-Motor Linguistic Framework for Human Activity Understanding

    We empirically discovered that the space of human actions has a linguistic structure. This is a sensory-motor space consisting of the evolution of joint angles of the human body in movement. The space of human activity has its own phonemes, morphemes, and sentences. We present a Human Activity Language (HAL) for symbolic, non-arbitrary representation of sensory and motor information of human activity. This language was learned from large amounts of motion capture data. Kinetology, the phonology of human movement, finds basic primitives for human motion (segmentation) and associates them with symbols (symbolization). In this way, kinetology provides a symbolic representation for human movement that allows synthesis, analysis, and symbolic manipulation. We introduce a kinetological system and propose five basic principles on which such a system should be based: compactness, view-invariance, reproducibility, selectivity, and reconstructivity. We demonstrate the kinetological properties of our sensory-motor primitives, and evaluate them further with experiments on compression and decompression of motion data. The morphology of a human action relates to the inference of essential parts of movement (morpho-kinetology) and its structure (morpho-syntax). To learn morphemes and their structure, we present a grammatical inference methodology and introduce a parallel learning algorithm to induce a grammar system representing a single action. The algorithm infers the components of the grammar system as a subset of essential actuators, a context-free grammar for the language of each component representing the motion pattern performed by a single actuator, and synchronization rules modeling coordination among actuators. The syntax of human activities involves the construction of sentences using action morphemes. A sentence may range from a single action morpheme (nuclear syntax) to a sequence of sets of morphemes. A single morpheme is decomposed into analogs of lexical categories: nouns, adjectives, verbs, and adverbs. The sets of morphemes represent simultaneous actions (parallel syntax), and a sequence of movements is related to the concatenation of activities (sequential syntax). We demonstrate this linguistic framework on real motion capture data from a large-scale database containing around 200 different actions corresponding to English verbs associated with voluntary, meaningful, observable movement.
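The segmentation and symbolization steps that kinetology performs on joint-angle trajectories can be illustrated with a toy example: cut a trajectory wherever the angular velocity reverses sign, then label each segment by its direction. This is a deliberate simplification for illustration, not the paper's actual algorithm:

```python
# Hedged sketch of kinetology's two steps on a joint-angle sequence:
# segmentation (cut at direction reversals) and symbolization (label
# each primitive). An illustrative simplification of the idea only.

def segment(angles):
    """Split a joint-angle sequence wherever velocity changes sign."""
    segments, start = [], 0
    for i in range(1, len(angles) - 1):
        prev_v = angles[i] - angles[i - 1]
        next_v = angles[i + 1] - angles[i]
        if prev_v * next_v < 0:  # direction reversal: segment boundary
            segments.append(angles[start:i + 1])
            start = i
    segments.append(angles[start:])
    return segments

def symbolize(seg):
    """Map a segment to a 'phoneme': U (up), D (down), or H (hold)."""
    delta = seg[-1] - seg[0]
    return "U" if delta > 0 else "D" if delta < 0 else "H"

trajectory = [0, 10, 20, 15, 5, 10, 20]
segs = segment(trajectory)
print("".join(symbolize(s) for s in segs))  # → UDU
```

Strings of such symbols, one stream per actuator, are what the grammatical inference stage then learns morphemes and synchronization rules from.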