Footstep parameterized motion blending using barycentric coordinates
This paper presents a real-time animation system for fully embodied virtual humans that satisfies accurate foot placement constraints for different human walking and running styles. Our method offers a fine balance between motion fidelity and character control, and can efficiently animate over sixty agents in real time (25 FPS) and over a hundred characters at 13 FPS. Given a point cloud of reachable support foot configurations extracted from the set of available animation clips, we compute the Delaunay triangulation. At runtime, the triangulation is queried to obtain the simplex containing the next footstep, which is used to compute the barycentric blending weights of the animation clips. Our method synthesizes animations to accurately follow footsteps, and a simple IK solver adjusts small offsets, foot orientation, and handles uneven terrain. To incorporate root velocity fidelity, the method is further extended to include the parametric space of root movement and combine it with footstep based interpolation. The presented method is evaluated on a variety of test cases and error measurements are calculated to offer a quantitative analysis of the results achieved.
Peer Reviewed. Postprint (author's final draft).
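The runtime step described above (locating the simplex that contains the next footstep and deriving blend weights from its vertices) can be sketched in the 2-D case. This is a minimal illustration with invented triangle data, not the paper's implementation:

```python
def barycentric_weights(triangle, p):
    """Barycentric coordinates of point p with respect to a 2-D triangle.

    The weights always sum to 1; they are all non-negative exactly when p
    lies inside the triangle, which doubles as the containment test used
    when searching a triangulation for the simplex holding a footstep.
    """
    (x1, y1), (x2, y2), (x3, y3) = triangle
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    return (w1, w2, 1.0 - w1 - w2)

# Three support-foot configurations, one per animation clip (toy data).
clip_footsteps = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

# Weights for blending the three clips at a queried footstep.
weights = barycentric_weights(clip_footsteps, (0.25, 0.25))  # (0.5, 0.25, 0.25)
```

A next footstep at a triangle vertex yields weight 1 for that clip and 0 for the others, so the blend degenerates to playing a single clip, as expected.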
An automatic tool to facilitate authoring animation blending in game engines
Achieving realistic virtual humans is crucial in virtual reality applications and video games. Nowadays there are software and game development tools that are of great help in generating and simulating characters. They offer easy-to-use GUIs to create characters by dragging and dropping features and making small modifications. Similarly, there are tools to create animation graphs and set blending parameters, among others. Unfortunately, even though these tools are relatively user friendly, achieving natural animation transitions is not straightforward, and thus non-expert users tend to spend a large amount of time generating animations that are not completely free of artefacts. In this paper we present a method to automatically generate animation blend spaces in Unreal Engine, which offers two advantages: first, it provides a tool to evaluate the quality of an animation set; second, the resulting graph does not depend on user skills and is thus not prone to user errors.
Peer Reviewed. Postprint (author's final draft).
Synthesising character animation for real time crowd simulation systems in Unreal Engine
In this master thesis, we have developed a framework to automate the process of generating animation synthesis graphs. The thesis includes a tool to ease animation configuration, a method to add behaviours to the characters, and a method to automatically generate the graphs.
Authoring virtual crowds: a survey
Recent advancements in crowd simulation unravel a wide range of functionalities for virtual agents, delivering highly realistic, natural virtual crowds. Such systems are of particular importance to a variety of applications in fields such as: entertainment (e.g., movies, computer games); architectural and urban planning; and simulations for sports and training. However, providing their capabilities to untrained users necessitates the development of authoring frameworks. Authoring virtual crowds is a complex and multi-level task, varying from assuming control and assisting users to realise their creative intents, to delivering intuitive and easy-to-use interfaces facilitating such control. In this paper, we present a categorisation of the authorable crowd simulation components, ranging from high-level behaviours and path-planning to local movements, as well as animation and visualisation. We provide a review of the most relevant methods in each area, emphasising the amount and nature of influence that the users have over the final result. Moreover, we discuss the currently available authoring tools (e.g., graphical user interfaces, drag-and-drop), identifying the trends of early and recent work. Finally, we suggest promising directions for future research that mainly stem from the rise of learning-based methods and the need for a unified authoring framework.
This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860768 (CLIPE project). This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No 739578 and the Government of the Republic of Cyprus through the Deputy Ministry of Research, Innovation and Digital Policy.
Peer Reviewed. Postprint (author's final draft).
Relationship descriptors for interactive motion adaptation
In this thesis we present an interactive motion adaptation scheme for close
interactions between skeletal characters and mesh structures, such as navigating
restricted environments and manipulating tools.
We propose a new spatial-relationship based representation to encode
character-object interactions describing the kinematics of the body parts by the
weighted sum of vectors relative to descriptor points selectively sampled over the
scene. In contrast to previous discrete representations that either only handle
static spatial relationships, or require offline, costly optimization processes, our
continuous framework smoothly adapts the motion of a character to deformations
in the objects and character morphologies in real-time whilst preserving the
original context and style of the scene.
We demonstrate the strength of working in our relationship-descriptor
space in tackling the issue of motion editing under large environment
deformations by integrating procedural animation techniques such as
repositioning contacts in an interaction whilst preserving the context and style of
the original animation.
Furthermore we propose a method that can be used to adapt animations
from template objects to novel ones by solving for mappings between the two in
our relationship-descriptor space effectively transferring an entire motion from
one object to a new one of different geometry whilst ensuring continuity across
all frames of the animation, as opposed to mapping static poses only as is
traditionally achieved.
The experimental results show that our method can be used for a wide
range of applications, including motion retargeting for dynamically changing
scenes, multi-character interactions, and interactive character control and
deformation transfer for scenes that involve close interactions. We further
demonstrate a key use case in retargeting locomotion to uneven terrains and
curving paths convincingly for bipeds and quadrupeds.
Our framework is useful for artists who need to design animated scenes
interactively, and modern computer games that allow users to design their own
virtual characters, objects and environments, such that they can recycle existing
motion data for a large variety of different configurations without the need to
manually reconfigure motion from scratch or store expensive combinations of
animation in memory. Most importantly, all of this is achieved in real time.
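The core representation described above (a body part encoded as a weighted sum of vectors relative to descriptor points sampled over the scene) can be illustrated with a deliberately simplified sketch. The descriptor points, weights, and joint position below are illustrative stand-ins, not the thesis's sampling or optimisation scheme:

```python
def encode(joint, descriptors):
    """Store one offset vector from each scene descriptor point to the joint."""
    return [tuple(j - d for j, d in zip(joint, dpt)) for dpt in descriptors]

def decode(offsets, descriptors, weights):
    """Reconstruct the joint as the weighted sum of (descriptor + offset).

    Because the weights sum to 1, the reconstructed joint follows the
    descriptor points when the object they were sampled on moves or deforms,
    which is what lets the encoded motion adapt continuously.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    out = [0.0, 0.0, 0.0]
    for w, dpt, off in zip(weights, descriptors, offsets):
        for k in range(3):
            out[k] += w * (dpt[k] + off[k])
    return tuple(out)

# Encode a wrist position against three descriptor points on an object...
descriptors = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
offsets = encode((0.5, 0.5, 0.0), descriptors)

# ...then move the object up by one unit: the reconstructed wrist follows.
moved = [(x, y, z + 1.0) for x, y, z in descriptors]
adapted = decode(offsets, moved, weights=(0.3, 0.3, 0.4))  # (0.5, 0.5, 1.0)
```

With unchanged descriptors the decode step reproduces the original joint exactly, so the representation is lossless for the undeformed scene.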
Modelling and Animation using Partial Differential Equations. Geometric modelling and computer animation of virtual characters using elliptic partial differential equations.
This work addresses various applications pertaining to the design, modelling and animation of parametric surfaces using elliptic Partial Differential Equations (PDEs), which are produced via the PDE method. Compared with traditional surface generation techniques, the PDE method is an effective technique that can represent complex three-dimensional (3D) geometries in terms of a relatively small set of parameters. A PDE-based surface can be produced from a set of pre-configured curves that are used as the boundary conditions to solve a number of PDEs. An important advantage of using this method is that most of the information required to define a surface is contained at its boundary. Thus, complex surfaces can be computed using only a small set of design parameters.
In order to exploit the advantages of this methodology, various applications were developed, ranging from the interactive design of aircraft configurations to the animation of facial expressions in a computer-human interaction system that utilizes an artificial intelligence (AI) bot for real-time conversation. Additional applications are presented for generating cyclic motions for a PDE-based human character integrated in a Computer-Aided Design (CAD) package, as well as techniques to describe a given mesh geometry by the set of boundary conditions required to evaluate the PDE method. Each methodology presents a novel approach for interacting with parametric surfaces obtained by the PDE method, owing to the several advantages this surface generation technique has to offer. Additionally, each application developed in this thesis focuses on a specific target that efficiently delivers various operations in the design, modelling and animation of such surfaces.
The project files will not be available online.
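The boundary-driven idea above (a full surface recovered from curves at its boundary) can be sketched numerically. As a deliberate simplification, the following relaxes Laplace's equation rather than the fourth-order elliptic PDE used in the PDE method, and the two circular boundary curves are invented toy data:

```python
import math

N = 16  # grid resolution in u (between the curves) and v (around them)

def boundary_curve(u, v):
    """Two boundary curves: a wide circle at u=0, a narrow raised one at u=N."""
    t = 2.0 * math.pi * v / N
    radius, height = (1.0, 0.0) if u == 0 else (0.4, 1.0)
    return [radius * math.cos(t), radius * math.sin(t), height]

# Fix the boundary rows; start the interior at zero.
surf = [[boundary_curve(u, v) if u in (0, N) else [0.0, 0.0, 0.0]
         for v in range(N)] for u in range(N + 1)]

# Jacobi relaxation toward a discrete harmonic surface (v is periodic):
# every interior sample becomes the average of its four neighbours, so the
# interior is determined entirely by the boundary curves.
for _ in range(1000):
    nxt = [[pt[:] for pt in row] for row in surf]
    for u in range(1, N):
        for v in range(N):
            for k in range(3):
                nxt[u][v][k] = 0.25 * (surf[u - 1][v][k] + surf[u + 1][v][k]
                                       + surf[u][(v - 1) % N][k]
                                       + surf[u][(v + 1) % N][k])
    surf = nxt
```

After relaxation the interior smoothly blends the two curves; since the height is 0 on one curve and 1 on the other, the mid-row height converges to roughly 0.5.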
Development and Application of Computer Graphics Techniques for the Visualization of Large Geo-Related Data-Sets
The goal of this work was to develop and improve algorithms that allow large geographic and other geo-related data sets to be visualized using computer graphics techniques. One focus was the development of new camera-adaptive data structures for digital elevation models and raster images. The thesis first defines a novel multiresolution model for height fields. This model requires very little additional memory and is suitable for guaranteeing interactive adaptation rates. Furthermore, approaches for quickly determining the visible and occluded parts of a computer graphics scene are discussed, in order to accelerate navigation through large, extended scenes such as city models or buildings. Subsequently, several problems related to texture mapping are examined; for example, a new viewer-dependent data structure for texture data and a new approach to texture filtering are presented. Most of these algorithms and techniques were integrated into an interactive terrain visualization system, code-named 'FlyAway', which is described in the final chapter of the thesis.
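The camera-adaptive multiresolution idea for height fields can be illustrated with a quadtree refinement sketch. The split criterion here (tile size over camera distance) is a crude stand-in for the screen-space error metrics such systems actually use, and all sizes and positions are toy values:

```python
import math

def refine(cx, cz, size, cam, level=0, max_level=6, tau=1.0, out=None):
    """Camera-adaptive quadtree over a terrain tile centred at (cx, cz).

    A tile keeps splitting into four children while its size-to-camera-distance
    ratio exceeds tau, so resolution concentrates near the viewer while the
    total covered ground area is preserved.
    """
    if out is None:
        out = []
    dist = math.hypot(cx - cam[0], cz - cam[1])
    if level < max_level and size / max(dist, 1e-6) > tau:
        half = size / 2.0
        for dx in (-0.25, 0.25):
            for dz in (-0.25, 0.25):
                refine(cx + dx * size, cz + dz * size, half, cam,
                       level + 1, max_level, tau, out)
    else:
        out.append((cx, cz, size, level))
    return out

# A 1024 m terrain tile viewed from a camera near its centre: small tiles
# appear close to the camera, large ones far away.
tiles = refine(0.0, 0.0, 1024.0, cam=(10.0, 0.0))
```

Because every split replaces one tile of area size² with four tiles of area (size/2)² each, the leaf tiles always partition the original terrain exactly, whatever the camera position.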
Proceedings of the 7th Sound and Music Computing Conference
Proceedings of SMC2010, the 7th Sound and Music Computing Conference, July 21-24, 2010.