4 research outputs found

    Apparent biological motion in first and third person perspective

    Apparent biological motion is the perception of plausible movement when two alternating images, depicting the initial and final phases of an action, are presented at specific stimulus onset asynchronies (SOAs). Here, we show weaker subjective apparent biological motion perception when actions are observed from a first-person rather than a third-person visual perspective. These findings are discussed in the context of sensorimotor contributions to body ownership.
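The stimulus protocol described above, two action frames alternating at a fixed stimulus onset asynchrony, can be sketched as a simple timing schedule. This is a minimal, hypothetical illustration; the function name and parameters are assumptions, not taken from the paper:

```python
# Hypothetical sketch of an apparent-motion trial schedule:
# two frames ("initial" and "final" action phases) alternate,
# each onset separated by the stimulus onset asynchrony (SOA).
def trial_schedule(soa_ms, n_cycles=4):
    """Return (onset_ms, frame_label) pairs for one trial."""
    events = []
    t = 0.0
    for i in range(n_cycles * 2):
        events.append((t, "initial" if i % 2 == 0 else "final"))
        t += soa_ms  # SOA is measured onset-to-onset
    return events
```

Varying `soa_ms` across trials is what would let an experimenter probe the SOA range in which plausible motion is perceived.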

    Performance-driven puppets using low-cost optical motion capture devices

    A Demo of a Dynamic Facial UI for Digital Artists

    Character facial animation is difficult because the face of a character assumes many complex expressions. To achieve convincing visual results in animation, 3D digital artists need to prepare their characters with sophisticated control structures. One of the most important techniques for achieving good facial animation is the use of facial control interfaces, also called facial user interfaces, or facial UIs. But facial UIs are usually dull and often confusing, with limited user interaction and no flexibility. We developed a concept and a working prototype of a dynamic facial UI inside the Blender [1] open-source software to allow its large community of digital artists to better control and organize the facial animation of a character. Our interactive system runs stably in the latest version of Blender, and we have started to build a full-face dynamic UI to demonstrate its interactive potential on a character's face.

    Easy Generation of Facial Animation Using Motion Graphs

    Facial animation is a time-consuming and cumbersome task that requires years of experience and/or a complex and expensive set-up. This becomes an issue especially when animating the multitude of secondary characters required in, e.g., films or video games. We address this problem with a novel technique that relies on motion graphs to represent a landmarked database. Separate graphs are created for different facial regions, allowing a reduced memory footprint compared to the original data. Common poses are identified using a Euclidean-based similarity metric and merged into the same node. This process traditionally requires a manually chosen threshold; we simplify it by optimizing for the desired graph compression instead. Motion synthesis occurs by traversing the graph using Dijkstra's algorithm, and coherent noise is introduced by swapping some path nodes with their neighbours. Expression labels, extracted from the database, provide the control mechanism for animation. We present a way of creating facial animation from reduced input that automatically controls timing and pose detail. Our technique fits easily within video-game and crowd-animation contexts, allowing characters to be more expressive with less effort. Furthermore, it provides a starting point for content creators aiming to bring more life into their characters.
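As a rough illustration of the pipeline this abstract describes (merging similar poses under a Euclidean metric into graph nodes, then synthesizing motion by Dijkstra traversal), here is a minimal sketch. All names, the list-of-tuples pose representation, and the adjacency structure are assumptions for illustration, not the authors' implementation:

```python
import heapq
import math

def build_motion_graph(poses, merge_threshold):
    """Merge similar poses (Euclidean distance <= threshold) into nodes;
    add a directed edge for each transition between consecutive frames."""
    nodes = []          # one representative pose per node
    frame_to_node = []  # node index assigned to each input frame
    for pose in poses:
        for j, rep in enumerate(nodes):
            if math.dist(pose, rep) <= merge_threshold:  # similarity metric
                frame_to_node.append(j)
                break
        else:
            nodes.append(pose)
            frame_to_node.append(len(nodes) - 1)
    edges = {}  # adjacency: node -> {neighbour: transition cost}
    for a, b in zip(frame_to_node, frame_to_node[1:]):
        if a != b:
            cost = math.dist(nodes[a], nodes[b])
            edges.setdefault(a, {})
            edges[a][b] = min(edges[a].get(b, float("inf")), cost)
    return nodes, edges

def shortest_path(edges, start, goal):
    """Dijkstra traversal; the returned node sequence is the synthesized motion."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == goal:  # reconstruct the path back to the start
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return path[::-1]
        for v, w in edges.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return None  # goal unreachable
```

In the paper's method the merge threshold is chosen automatically by optimizing for a target graph compression, and separate graphs are kept per facial region; the sketch above uses a fixed threshold and a single graph for brevity.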