184 research outputs found

    Space-time sketching of character animation

    We present a space-time abstraction for the sketch-based design of character animation. It allows animators to draft a full coordinated motion using a single stroke called the space-time curve (STC). From the STC we compute a dynamic line of action (DLOA) that drives the motion of a 3D character through projective constraints. Our dynamic models for the line's motion are entirely geometric, require no pre-existing data, and allow full artistic control. The resulting DLOA can be refined by over-sketching strokes along the space-time curve, or by composing another DLOA on top, leading to control over complex motions with few strokes. Additionally, the resulting dynamic line of action can be applied to arbitrary body parts or characters. To match a 3D character to the 2D line over time, we introduce a robust matching algorithm based on closed-form solutions, yielding a tight match while allowing squash and stretch of the character's skeleton. Our experiments show that space-time sketching has the potential of bringing animation design within the reach of beginners while saving time for skilled artists.
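    As a rough, hypothetical illustration of the space-time curve idea (not the paper's projective-constraint formulation), the sketch below slides a fixed-length window along a timed 2D stroke to extract a line of action for any query time; the function name and parameterization are assumptions.

    ```python
    import numpy as np

    def dloa_from_stc(stc_points, stc_times, t, body_length):
        """Toy extraction of a 2D line of action at time t from a space-time curve.

        stc_points: (N, 2) stroke points; stc_times: (N,) increasing times.
        The line of action is the portion of the stroke of arc length
        `body_length` ending at the point reached at time t.
        """
        # Cumulative arc length along the stroke.
        seg = np.linalg.norm(np.diff(stc_points, axis=0), axis=1)
        arclen = np.concatenate([[0.0], np.cumsum(seg)])
        # Arc-length position reached at time t (simple linear time mapping).
        s_end = np.interp(t, stc_times, arclen)
        s_start = max(0.0, s_end - body_length)
        # Resample the stroke between s_start and s_end to get the line of action.
        samples = np.linspace(s_start, s_end, 32)
        x = np.interp(samples, arclen, stc_points[:, 0])
        y = np.interp(samples, arclen, stc_points[:, 1])
        return np.stack([x, y], axis=1)
    ```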

    gMotion: A spatio-temporal grammar for the procedural generation of motion graphics

    Creating compelling 2D animations that choreograph several groups of shapes by hand requires a large number of manual edits. We present a method to procedurally generate motion graphics with timeslice grammars. Timeslice grammars are to time what split grammars are to space. We use this grammar to formally model motion graphics, manipulating them in both their temporal and spatial components. We combine both aspects by representing animations as sets of affine transformations sampled uniformly in both space and time. Rules and operators in the grammar manipulate all spatio-temporal matrices as a whole, allowing us to expressively construct animations with few rules. The grammar animates shapes, which are represented as highly tessellated polygons, by applying the affine transforms to each shape vertex given the vertex position and the animation time. We introduce a small set of operators and show how we can produce 2D animations of geometric objects by combining the expressive power of the grammar model, the composability of the operators with themselves, and the capabilities that derive from using a unified spatio-temporal representation for animation data. Throughout the paper, we show how timeslice grammars can produce a wide variety of animations that would take artists hours of tedious and time-consuming work. In particular, in cases where shapes change very frequently, our grammar can add motion detail to large collections of shapes, offering greater control over per-shape animations together with a compact rule structure.
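    The core representation described above, affine transformations sampled over time and applied to tessellated shape vertices, can be sketched in a few lines. The code below is a minimal, hypothetical rendering of that representation (not the gMotion grammar itself); all helper names are invented.

    ```python
    import numpy as np

    def translate(dx, dy):
        # Time-varying affine transform: 3x3 matrix for animation time t in [0, 1].
        return lambda t: np.array([[1, 0, dx * t], [0, 1, dy * t], [0, 0, 1.0]])

    def rotate(total_angle):
        def xf(t):
            a = total_angle * t
            return np.array([[np.cos(a), -np.sin(a), 0.0],
                             [np.sin(a),  np.cos(a), 0.0],
                             [0.0, 0.0, 1.0]])
        return xf

    def compose(*xfs):
        # Operator combining several time-varying transforms into one (matrix product).
        return lambda t: np.linalg.multi_dot([xf(t) for xf in xfs])

    def animate_shape(vertices, xf, times):
        # Apply a time-varying affine transform to every vertex of a tessellated polygon.
        v = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous coordinates
        return [(v @ xf(t).T)[:, :2] for t in times]

    # Usage: a square that spins while drifting right over uniformly sampled times.
    square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    frames = animate_shape(square,
                           compose(translate(3, 0), rotate(np.pi)),
                           np.linspace(0.0, 1.0, 24))
    ```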

    Investigating User Experience Using Gesture-based and Immersive-based Interfaces on Animation Learners

    Creating animation is a very exciting activity; however, the long and laborious process can be extremely challenging. Keyframe animation is a complex technique that takes a long time to complete, as the procedure involves changing the poses of characters by modifying the time and space of an action, a process known as frame-by-frame animation. This involves the laborious, repetitive process of constantly reviewing the results of the animation to make sure the movement timing is accurate. A new approach to animation is required to provide a more intuitive animating experience. With the evolution of interaction design and the Natural User Interface (NUI) becoming widespread in recent years, a NUI-based animation system is expected to offer better usability and efficiency that would benefit animation. This thesis investigates the effectiveness of gesture-based and immersive interfaces as part of animation systems. The practice-based element of this research is a prototype hand-gesture interface, created from the insights of reflective practice. An experimental design is employed to investigate the usability and efficiency of gesture-based and immersive interfaces in comparison with a conventional GUI/WIMP application. The findings show that gesture-based and immersive interfaces appeal to animators in terms of system efficiency, but there was no difference in usability preference between the two interfaces. Most participants were comfortable with NUI interfaces and the new technologies used in the animation process, but for detailed work and fine control of the application, the conventional GUI/WIMP remained preferable. Despite the awkwardness of devising gesture-based and immersive interfaces for animation, the concept showed potential for a faster animation process, an enjoyable learning system, and stimulating interest in a kinaesthetic learning experience.

    Enriching spatial keyframe animations with motion capture

    While motion capture (mocap) achieves realistic character animation at great cost, keyframing is capable of producing less realistic but more controllable animations. In this work we show how to combine the Spatial Keyframing (SK) framework of Igarashi et al. [1] with multidimensional projection techniques to reuse mocap data in several ways. Additionally, we show that multidimensional projection can also be used for visualization and motion analysis. We also propose a method for mocap compaction with the help of SK's pose reconstruction (backprojection) algorithm. Finally, we present a novel multidimensional projection optimization technique that significantly enhances SK-based reconstruction and can also be applied to other contexts where a backprojection algorithm is available.
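    Spatial keyframing reconstructs a pose from a 2D control position by interpolating key poses placed in that space. The sketch below is only a simplified stand-in for that reconstruction step, using an inverse-distance blend instead of the original radial-basis-function interpolation; the array shapes and function name are assumptions.

    ```python
    import numpy as np

    def spatial_keyframe_pose(markers, key_poses, p, eps=1e-8):
        """Toy spatial-keyframing backprojection: blend key poses by distance to 2D markers.

        markers: (K, 2) marker positions in the control space,
        key_poses: (K, J) joint-angle vectors, p: (2,) query position.
        Returns an inverse-distance-weighted blend of the key poses.
        """
        d = np.linalg.norm(markers - p, axis=1)
        w = 1.0 / (d + eps) ** 2
        w /= w.sum()
        return w @ key_poses
    ```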

    A Motion Control Scheme for Animating Expressive Arm Movements

    Current methods for figure animation involve a tradeoff between the level of realism captured in the movements and the ease of generating the animations. We introduce a motion control paradigm that circumvents this tradeoff: it provides the ability to generate a wide range of natural-looking movements with minimal user labor. Effort, which is one part of Rudolf Laban's system for observing and analyzing movement, describes the qualitative aspects of movement. Our motion control paradigm simplifies the generation of expressive movements by proceduralizing these qualitative aspects to hide the non-intuitive, quantitative aspects of movement. We build a model of Effort using a set of kinematic movement parameters that defines how a figure moves between goal keypoints. Our motion control scheme provides control through Effort's four-dimensional system of textual descriptors, providing a level of control thus far missing from behavioral animation systems and offering novel specification and editing capabilities on top of traditional keyframing and inverse kinematics methods. Since our Effort model is computationally inexpensive, Effort-based motion control systems can work in real time. We demonstrate our motion control scheme by implementing EMOTE (Expressive MOTion Engine), a character animation module for expressive arm movements. EMOTE works with inverse kinematics to control the qualitative aspects of end-effector-specified movements. The user specifies general movements by entering a sequence of goal positions for each hand, then expresses the essence of the movement by adjusting sliders for the Effort motion factors: Space, Weight, Time, and Flow. EMOTE produces a wide range of expressive movements, provides an easy-to-use interface (more intuitive than joint-angle interpolation curves or physical parameters), features interactive editing, and generates motion in real time.
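    Purely as an illustrative assumption (the constants and parameter names below are not EMOTE's actual model), the sketch shows how slider values for the four Effort factors might be proceduralized into low-level kinematic parameters that an IK-driven arm controller could consume.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Effort:
        # Each factor in [-1, 1]: Space (indirect..direct), Weight (light..strong),
        # Time (sustained..sudden), Flow (free..bound).
        space: float = 0.0
        weight: float = 0.0
        time: float = 0.0
        flow: float = 0.0

    def effort_to_kinematics(e: Effort):
        """Hypothetical mapping from Effort factors to movement parameters.

        The returned values shape timing and trajectories between goal keypoints;
        all constants here are illustrative, not taken from EMOTE.
        """
        return {
            # Sudden movements reach peak velocity earlier; sustained ones later.
            "peak_velocity_phase": 0.5 - 0.3 * e.time,
            # Strong weight increases acceleration magnitude.
            "acceleration_scale": 1.0 + 0.8 * e.weight,
            # Direct space straightens the end-effector path; indirect space curves it.
            "path_curvature": 0.5 * (1.0 - e.space),
            # Bound flow adds damping / continuity between successive goal keypoints.
            "inter_key_damping": 0.5 * (1.0 + e.flow),
        }
    ```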

    Supplementing Frequency Domain Interpolation Methods for Character Animation

    The animation of human characters entails difficulties exceeding those met when simulating objects, machines, or plants. A person's gait is a product of nature, affected by mood and physical condition, and small deviations from natural movement are perceived with ease by an unforgiving audience. Motion capture technology is frequently employed to record human movement. Subsequent playback on a skeleton underlying the character being animated conveys many of the subtleties of the original motion. Played-back recordings are of limited value, however, when integration in a virtual environment requires movements beyond those in the motion library, creating a need for the synthesis of new motion from pre-recorded sequences. An existing approach involves interpolation between motions in the frequency domain, with a blending space defined by a triangle network whose vertices represent input motions. It is this branch of character animation that the methods presented in this thesis supplement, with work undertaken in three distinct areas. The first is a streamlined approach to previous work. It provides benefits including an efficiency gain in certain contexts and a very different perspective on triangle network construction, in which the networks become adjustable, intuitive user-interface devices whose increased flexibility allows a greater range of motions to be blended than was possible with previous networks. Interpolation-based synthesis can never exhibit the same motion variety as animation methods based on the playback of rearranged frame sequences. Limitations such as this were addressed by the second phase of work, with the creation of hybrid networks. These novel structures use properties of frequency-domain triangle blending networks to seamlessly integrate playback-based animation within them. The third area of focus was distortion found in both frequency- and time-domain blending. A new technique, single-source harmonic switching, was devised that greatly reduces this distortion and adds to the benefits of blending in the frequency domain.
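    As a rough, assumed sketch of the core idea of frequency-domain blending (not the thesis's full method, which also addresses the distortion and harmonic issues it describes), joint-angle curves from the input motions can be resampled to a common length, transformed with an FFT, blended per harmonic with barycentric weights from the triangle network, and inverted.

    ```python
    import numpy as np

    def blend_motions_frequency_domain(motions, weights, n_frames):
        """Toy frequency-domain blend of joint-angle curves from several motions.

        motions: list of (F_i, J) joint-angle arrays; weights: barycentric weights
        (e.g. from a triangle blending network) summing to 1. Harmonic magnitudes
        and phases are blended separately before inverting the transform.
        Phase wrap-around is ignored here, which a real implementation must handle.
        """
        mags, phases = [], []
        for m, w in zip(motions, weights):
            # Resample each joint curve to a common frame count before the FFT.
            src = np.linspace(0.0, 1.0, len(m))
            dst = np.linspace(0.0, 1.0, n_frames)
            resampled = np.stack([np.interp(dst, src, m[:, j])
                                  for j in range(m.shape[1])], axis=1)
            spec = np.fft.rfft(resampled, axis=0)
            mags.append(w * np.abs(spec))
            phases.append(w * np.angle(spec))
        blended = sum(mags) * np.exp(1j * sum(phases))
        return np.fft.irfft(blended, n=n_frames, axis=0)
    ```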

    Automatic rigging and animation of 3D characters

    Animating an articulated 3D character currently requires manual rigging to specify its internal skeletal structure and to define how the input motion deforms its surface. We present a method for animating characters automatically. Given a static character mesh and a generic skeleton, our method adapts the skeleton to the character and attaches it to the surface, allowing skeletal motion data to animate the character. Because a single skeleton can be used with a wide range of characters, our method, in conjunction with a library of motions for a few skeletons, enables a user-friendly animation system for novices and children. Our prototype implementation, called Pinocchio, typically takes under a minute to rig a character on a modern midrange PC.
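    The sketch below is only a crude stand-in for the "attach the skeleton to the surface" step, using inverse-distance weights to bone segments; Pinocchio itself solves a discrete skeleton-embedding optimization and heat-diffusion skinning, which this does not reproduce, and all names are hypothetical.

    ```python
    import numpy as np

    def point_to_segment_distance(p, a, b):
        # Distance from vertex p to the bone segment (a, b).
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def skinning_weights(vertices, bones, power=2.0):
        """Crude automatic skinning: inverse-distance weights from vertices to bones.

        vertices: (V, 3) mesh vertices; bones: list of (head, tail) 3D point pairs.
        Returns a (V, B) row-normalized weight matrix for linear blend skinning.
        """
        V, B = len(vertices), len(bones)
        w = np.zeros((V, B))
        for i, v in enumerate(vertices):
            d = np.array([point_to_segment_distance(np.asarray(v), np.asarray(a), np.asarray(b))
                          for a, b in bones])
            w[i] = 1.0 / (d + 1e-6) ** power
            w[i] /= w[i].sum()
        return w
    ```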

    Vector Graphics Animation with Time-Varying Topology

    We introduce the Vector Animation Complex (VAC), a novel data structure for vector graphics animation, designed to support the modeling of time-continuous topological events. This allows features of a connected drawing to merge, split, appear, or disappear at desired times via keyframes that introduce the desired topological change. Because the resulting space-time complex directly captures the time-varying topological structure, features are readily edited in both space and time in a way that reflects the intent of the drawing. A formal description of the data structure is provided, along with topological and geometric invariants. We illustrate our modeling paradigm with experimental results on various examples.
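    As a loose, assumed illustration of "cells with lifespans" (far simpler than the VAC's actual space-time cell complex with key and inbetween cells), the sketch below stores vertices and edges with birth and death times and answers which of them exist at a given time; all names are hypothetical.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class KeyVertex:
        # A vertex that exists over a time interval; topological events occur at its bounds.
        vid: int
        birth: float
        death: float
        positions: dict = field(default_factory=dict)  # time -> (x, y) keyframed position

    @dataclass
    class KeyEdge:
        # An edge between two vertices; it can only exist while both endpoints do.
        start: int
        end: int
        birth: float
        death: float

    @dataclass
    class ToyAnimationComplex:
        vertices: dict = field(default_factory=dict)   # vid -> KeyVertex
        edges: list = field(default_factory=list)

        def alive_at(self, t):
            # Query the drawing's topology at time t: which cells currently exist.
            vs = {vid for vid, v in self.vertices.items() if v.birth <= t <= v.death}
            es = [e for e in self.edges
                  if e.birth <= t <= e.death and e.start in vs and e.end in vs]
            return vs, es
    ```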

    Software systems for modeling articulated figures

    Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet ensure that future tools have a consistent and friendly user interface. Jack is a system that provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.