
    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combinations of different animation paradigms to enhance both naturalness and control
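As an illustration of the combination mechanisms this abstract mentions, here is a minimal Python sketch of blending motions on different body parts and concatenating clips with a cross-fade. The dictionary representation, joint names, and linear-blend simplification are my assumptions for illustration, not the paper's actual pipeline:

```python
import numpy as np

def blend_by_body_part(motion_a, motion_b, upper_body):
    """Take upper-body joints from motion_b, the rest from motion_a.

    motion_a, motion_b: dicts mapping joint name -> (frames, 3) arrays
    of joint angles; upper_body: set of joint names driven by motion_b.
    """
    return {joint: (motion_b if joint in upper_body else motion_a)[joint]
            for joint in motion_a}

def concatenate(clip_a, clip_b, fade=10):
    """Concatenate two clips, cross-fading `fade` frames at the seam."""
    out = {}
    for joint in clip_a:
        a, b = clip_a[joint], clip_b[joint]
        w = np.linspace(0.0, 1.0, fade)[:, None]       # fade-in weights
        seam = (1 - w) * a[-fade:] + w * b[:fade]      # linear cross-fade
        out[joint] = np.concatenate([a[:-fade], seam, b[fade:]])
    return out
```

A real system would blend rotations on the quaternion manifold rather than linearly in angle space; the linear version is kept only to make the per-body-part and concatenation structure visible.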

    Psychological Model for Animating Crowded Pedestrians

    This paper proposes a psychological model for simulating pedestrian behaviors in a crowded space. Our decision-making scheme controls plausible avoidance behavior depending on the positional relations among surrounding persons, on the basis of a two-stage personal space and a virtual memory structure as proposed in social psychology. Our system determines pedestrian walking speed from the crowd density to imitate measured data in urban engineering, and automatically generates plausible motions of the individual pedestrian by composing a locomotion graph with motion capture data. Our approach, based on psychology and a variety of actual measurements, can increase the accuracy of simulation at both the micro and macro levels.
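The density-dependent walking speed described above can be sketched as a simple fundamental-diagram-style relation. The free speed, density threshold, and linear fall-off below are illustrative constants, not values from the paper:

```python
def walking_speed(density, v_free=1.34, d_free=0.5, d_jam=5.0):
    """Return walking speed (m/s) for a local crowd density (persons/m^2).

    Below d_free the pedestrian walks at free speed v_free; speed then
    falls off linearly and reaches zero at jam density d_jam.
    """
    if density <= d_free:
        return v_free
    if density >= d_jam:
        return 0.0
    return v_free * (d_jam - density) / (d_jam - d_free)
```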

    BUILDING A BETTER TRAINING IMAGE WITH DIGITAL OUTCROP MODELS: THESE GO TO ELEVEN

    Current standard geostatistical approaches to subsurface heterogeneity studies may not capture realistic facies geometries and fluid flow paths. Multiple-point statistics (MPS) has shown promise in portraying complex geometries realistically; however, realizations are limited by the reliability of the model of heterogeneity upon which MPS relies, that is, the Training Image (TI). To increase the realism captured in TIs, a quantitative outcrop analog-based approach utilizing terrestrial lidar and high-resolution, calibrated digital photography is combined with lithofacies analysis to produce TIs. Terrestrial lidar scans and high-resolution digital imagery were acquired of a Westwater Canyon Member, Morrison Formation outcrop in the Ojito Wilderness, New Mexico, USA. The resulting point cloud was used to develop a centimeter-scale mesh. Digital images of the outcrop were processed through a series of photogrammetric techniques to delineate different facies and sedimentary structures. The classified images were projected onto the high-resolution mesh, creating a physically plausible Digital Outcrop Model (DOM), portions of which were used to build MPS TIs. The resulting MPS realization appears to capture realistic geometries of the deposit and empirically honors facies distributions.
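To make the role of the Training Image concrete, here is a minimal sketch of the pattern-counting step at the heart of MPS, assuming the TI is a 2D integer facies grid (the TIs in this work are built from classified outcrop imagery; the grid and pattern size here are invented for illustration):

```python
import numpy as np

def pattern_counts(training_image, size=2):
    """Count occurrences of every size x size facies pattern in a TI.

    MPS simulation draws on these pattern statistics to reproduce the
    TI's facies geometries in realizations.
    """
    counts = {}
    rows, cols = training_image.shape
    for i in range(rows - size + 1):
        for j in range(cols - size + 1):
            key = tuple(training_image[i:i + size, j:j + size].ravel())
            counts[key] = counts.get(key, 0) + 1
    return counts
```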

    Data-driven techniques for animating virtual characters

    One of the key goals of current research in data-driven computer animation is the synthesis of new motion sequences from existing motion data. This thesis presents three novel techniques for synthesising the motion of a virtual character from existing motion data and develops a framework of solutions to key character animation problems. The first motion synthesis technique is based on the character's locomotion composition process. It examines the ability to synthesise a variety of locomotion behaviours while easily specified constraints (footprints) are placed in three-dimensional space. This is achieved by analysing existing motion data and by assigning the locomotion behaviour transition process to transition graphs that provide information about this process. However, virtual characters should also be able to animate with different style variations. Therefore, a second technique is presented to synthesise real-time style variations of a character's motion. This novel technique uses the correlation between two different motion styles and, by assigning the motion synthesis process to a parameterised maximum a posteriori (MAP) framework, retrieves the desired style content of the input motion in real time, enhancing the realism of the newly synthesised motion sequence. The third technique synthesises the motion of the character's fingers either off-line or in real time during the performance capture process. The advantage of both techniques is their ability to assign the motion searching process to motion features. The presented technique estimates and synthesises a valid motion of the character's fingers, enhancing the realism of the input motion. To conclude, this thesis demonstrates that these three novel techniques combine into a framework that enables realistic synthesis of virtual character movements, eliminating post-processing and enabling fast synthesis of the required motion.
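The transition graphs mentioned for the first technique can be illustrated with a small sketch: each locomotion behaviour maps to the behaviours it may transition into, and a sequence is generated by walking the graph. The behaviour names and graph structure are invented for illustration; the thesis derives its graphs from analysed motion data:

```python
import random

# Hypothetical locomotion transition graph: behaviour -> allowed successors.
TRANSITIONS = {
    "walk": ["walk", "run", "stop"],
    "run":  ["run", "walk"],
    "stop": ["stop", "walk"],
}

def plan_locomotion(start, steps, seed=0):
    """Generate a behaviour sequence by walking the transition graph."""
    rng = random.Random(seed)
    sequence = [start]
    for _ in range(steps):
        sequence.append(rng.choice(TRANSITIONS[sequence[-1]]))
    return sequence
```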

    Real-time biped character stepping

    PhD Thesis. A rudimentary biped activity that is essential in interactive virtual worlds, such as video games and training simulations, is stepping. For example, stepping is fundamental in everyday terrestrial activities that include walking and balance recovery. Therefore an effective 3D stepping control algorithm that is computationally fast and easy to implement is extremely valuable to character animation research. This thesis focuses on generating real-time controllable stepping motions on-the-fly, without key-framed data, that are responsive and robust (e.g., the character can remain upright and balanced under a variety of conditions, such as pushes and dynamically changing terrain). In our approach, we control the character's direction and speed by varying the step position and duration. Our lightweight stepping model is used to create coordinated full-body motions, which produce directable steps to guide the character towards specific goals (e.g., following a particular path while placing feet at viable locations). We also create protective steps in response to random disturbances (e.g., pushes), whereby the system automatically calculates where and when to place the foot to remedy the disruption. In conclusion, the inverted pendulum has a number of limitations that we address and resolve to produce an improved lightweight technique that provides better control and stability using approximate feature enhancements, for instance ankle torque and an elongated body.
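The foot-placement calculation for protective stepping can be sketched with the basic linear inverted pendulum "capture point": the ground position where stepping would absorb the current momentum. This point-mass simplification is only the starting model the thesis improves on (with ankle torque and an elongated body), and the variable names are my own:

```python
import math

def capture_point(com_x, com_vel, com_height, g=9.81):
    """Ground position where a step would bring the centre of mass to rest.

    Linear inverted pendulum result: x_cp = x + v * sqrt(h / g), where x
    and v are the horizontal centre-of-mass position and velocity and h
    is the (constant) pendulum height.
    """
    return com_x + com_vel * math.sqrt(com_height / g)
```

After a push increases `com_vel`, the controller would place the swing foot at (or near) this point at the next touchdown.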

    Road distance and travel time for spatial urban modelling

    Interactions within and between urban environments include the price of houses, the flow of traffic and the intensity of noise pollution, all of which can be restricted by various physical, regulatory and customary barriers. Examples of such restrictions include buildings, one-way systems and pedestrian crossings. These constrictive features create challenges for predictive modelling in urban space, which are not fully captured when proximity-based models rely on the typically used Euclidean (straight line) distance metric. Over the course of this thesis, I ask three key questions in an attempt to identify how to improve spatial models in restricted urban areas. These are: (1) which distance function best models real world spatial interactions in an urban setting? (2) when, if ever, are non-Euclidean distance functions valid for urban spatial models? and (3) what is the best way to estimate the generalisation performance of urban models utilising spatial data? This thesis answers each of these questions through three contributions supporting the interdisciplinary domain of Urban Sciences. These contributions are: (1) the provision of an improved approximation of road distance and travel time networks to model urban spatial interactions; (2) the approximation of valid distance metrics from non-Euclidean inputs for improved spatial predictions; and (3) the presentation of a road distance and travel time cross-validation metric to improve the estimation of urban model generalisation. Each of these contributions provides an improvement over the current state-of-the-art. Throughout, all experiments utilise real world datasets in England and Wales, which contain information on restricted roads, travel times, house sales and traffic counts. With these datasets, I present a number of case studies which show up to a 32% improvement in model accuracy over Euclidean distances and, in some cases, a 90% improvement in the estimation of model generalisation performance.
    Combined, the contributions improve the way that proximity-based urban models perform and also provide a more accurate estimate of generalisation performance for predictive models in urban space. The main implication of these contributions for Urban Science is the ability to better model the challenges within a city, based on how they interact with themselves and each other, using an improved function of urban mobility compared with the current state-of-the-art. Such challenges may include selecting the optimal locations for emergency services, identifying the causes of traffic incidents or estimating the density of air pollution. Additionally, the key implication of this research for geostatistics is that it provides the motivation and means to undertake non-Euclidean research for non-urban applications, for example predicting with alternative, non-road-based mobility patterns such as migrating animals, rivers and coastlines. Finally, the implication of my research for the real estate industry is significant: one can now improve the accuracy of the industry's state-of-the-art nationwide house price predictor, whilst also more appropriately presenting its accuracy estimates for robustness.
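The thesis's core contrast, Euclidean versus road-network distance, can be sketched with a toy weighted graph and Dijkstra's algorithm; one-way edges stand in for the routing restrictions described above. The network, weights and coordinates are invented for illustration, not the thesis's datasets:

```python
import heapq
import math

def euclidean(p, q):
    """Straight-line distance between two coordinate pairs."""
    return math.dist(p, q)

def road_distance(graph, src, dst):
    """Shortest path length over a weighted, directed road graph (Dijkstra).

    graph: dict mapping node -> list of (neighbour, edge_length) pairs.
    Returns math.inf when no route exists (e.g., blocked by one-way rules).
    """
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, math.inf):
            continue  # stale queue entry
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, math.inf):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return math.inf
```

In a proximity-based model, substituting `road_distance` (or a travel-time analogue) for `euclidean` is exactly the change whose effect the thesis quantifies.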

    Learning Finite State Machine Controllers from Motion Capture Data

    With characters in computer games and interactive media increasingly being based on real actors, the individuality of an actor's performance should be reflected not only in the appearance and animation of the character but also in the Artificial Intelligence that governs the character's behavior and interactions with the environment. Machine learning methods applied to motion capture data provide a way of doing this. This paper presents a method for learning the parameters of a Finite State Machine controller. The method learns both the transition probabilities of the Finite State Machine and how to select animations based on the current state.
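The transition-probability part of such a learner can be sketched by counting state-to-state transitions in labelled motion-capture sequences and normalising the counts. The state labels are invented for illustration, and this sketch covers only the probability estimation, not the animation-selection learning:

```python
from collections import Counter, defaultdict

def learn_transitions(sequences):
    """Estimate {state: {next_state: probability}} from state sequences.

    Each sequence is a list of behaviour labels observed in order; the
    maximum-likelihood transition probability is the normalised count of
    each observed state-to-state transition.
    """
    counts = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    return {state: {nxt: n / sum(c.values()) for nxt, n in c.items()}
            for state, c in counts.items()}
```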