
    gMotion: A spatio-temporal grammar for the procedural generation of motion graphics

    Creating compelling 2D animations that choreograph several groups of shapes by hand requires a large number of manual edits. We present a method to procedurally generate motion graphics with timeslice grammars. Timeslice grammars are to time what split grammars are to space. We use this grammar to formally model motion graphics, manipulating them in both their temporal and spatial components. We combine both aspects by representing animations as sets of affine transformations sampled uniformly in both space and time. Rules and operators in the grammar manipulate all spatio-temporal matrices as a whole, allowing us to expressively construct animations with few rules. The grammar animates shapes, represented as highly tessellated polygons, by applying the affine transforms to each shape vertex given the vertex position and the animation time. We introduce a small set of operators and show how we can produce 2D animations of geometric objects by combining the expressive power of the grammar model, the composability of the operators, and the capabilities that derive from using a unified spatio-temporal representation for animation data. Throughout the paper, we show how timeslice grammars can produce a wide variety of animations that would otherwise take artists hours of tedious work. In particular, in cases where shapes change frequently, our grammar can add motion detail to large collections of shapes, with greater control over per-shape animations and a compact rule structure.
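
    As a rough illustration of the unified spatio-temporal representation described above, here is a minimal Python sketch that stores an animation as affine matrices sampled uniformly in time and applies the sampled transform to every polygon vertex. The helper names (`sample_rotation_track`, `apply_track`) are invented for illustration; the paper's actual grammar rules and operators are not reproduced here.

```python
import numpy as np

def sample_rotation_track(n_samples, total_angle):
    """Sample a rotation animation uniformly in time as 3x3 affine matrices."""
    track = []
    for i in range(n_samples):
        a = total_angle * i / (n_samples - 1)
        c, s = np.cos(a), np.sin(a)
        track.append(np.array([[c, -s, 0.0],
                               [s,  c, 0.0],
                               [0.0, 0.0, 1.0]]))
    return track

def apply_track(track, vertices, t):
    """Apply the affine transform sampled at normalized time t to each vertex."""
    idx = min(int(t * (len(track) - 1)), len(track) - 1)
    M = track[idx]
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homogeneous @ M.T)[:, :2]

# A coarse square; the paper uses highly tessellated polygons so that
# per-vertex transforms can bend shapes smoothly over time.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
track = sample_rotation_track(n_samples=60, total_angle=np.pi / 2)
print(apply_track(track, square, t=0.5))  # pose halfway through the animation
```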

    September 2020 Academic Affairs Minutes


    Shadow Puppets Performance of Yogyakarta through its Visual Language

    The shadow puppet performance conveys its story to viewers through its visual language. The aim of this research is to reveal the visual language of the gesture effects in the shadow images of the shadow puppet performance of Yogyakarta. Visual language refers to representative images that resemble the original objects and to the way they are drawn. In the case of the shadow puppet performance of Yogyakarta, Central Java, Indonesia, the images are drawn according to the space-time-plane system, which is characterized by multiple angles, distances, and moments. The results of this research show that each movement of the main puppet character determines its own body language.

    SkillVis: A Visualization Tool for Boxing Skill Assessment

    Motion analysis and visualization are crucial in sports science for training and performance evaluation. While basic computational methods have been proposed for simple analyses such as postures and movements, few can evaluate high-level qualities of athletes such as their skill levels and strategies. We propose a visualization tool to help visualize boxers' motions and assess their skill levels. Our system automatically builds a graph-based representation from motion capture data and reduces the dimensionality of the graph to a 3D space so that it can be easily visualized and understood. In particular, our system makes it easy to understand a boxer's behaviours, preferred actions, and potential strengths and weaknesses. We demonstrate the effectiveness of our system on different boxers' motions. Our system not only serves as a visualization tool but also provides intuitive motion analysis that can be used beyond sports science.
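
    A minimal sketch of the graph-building step, assuming the motion capture stream has already been segmented and labelled with discrete actions; the action labels and normalization below are invented for illustration, and the actual system additionally reduces the graph to a 3D embedding for display.

```python
from collections import defaultdict

def build_action_graph(action_sequence):
    """Count transitions between consecutive actions to form a weighted digraph."""
    edges = defaultdict(int)
    for a, b in zip(action_sequence, action_sequence[1:]):
        edges[(a, b)] += 1
    return edges

def transition_probabilities(edges):
    """Normalize outgoing edge weights so each node's edges sum to one."""
    totals = defaultdict(int)
    for (a, _), w in edges.items():
        totals[a] += w
    return {(a, b): w / totals[a] for (a, b), w in edges.items()}

# Labelled punches from a hypothetical boxing session.
session = ["jab", "jab", "cross", "jab", "hook", "cross", "jab", "cross"]
probs = transition_probabilities(build_action_graph(session))
for (a, b), p in sorted(probs.items()):
    print(f"{a} -> {b}: {p:.2f}")  # preferred follow-up actions stand out
```

    In such a graph, heavily weighted edges expose a boxer's preferred combinations, which is the kind of behavioural pattern the visualization is meant to surface.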

    Synesthetic art through 3-D projection: The requirements of a computer-based supermedium

    A computer-based form of multimedia art is proposed that uses the computer to fuse aspects of painting, sculpture, dance, music, film, and other media into a one-to-one synesthesia of image and sound for spatially synchronous 3-D projection. Called synesthetic art, this conversion of many varied media into an aesthetically unitary experience determines the character and requirements of the system and its software. During the start-up phase, computer stereographic systems are unsuitable for software development. Eventually, a new type of illusory-projective supermedium will be required to achieve the needed combination of large-format projection and convincing real-life presence, and to handle the vast amount of 3-D visual and acoustic information involved. The influence of the concept on the author's research and creative work is illustrated through two examples.
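
    To make the idea of a one-to-one pairing of image and sound concrete, here is a toy mapping from a 3-D scene point to sound parameters. It is entirely illustrative and not taken from the paper, which argues for a far richer supermedium.

```python
def point_to_sound(x, y, z, brightness):
    """Map one 3-D scene point to sound parameters for a one-to-one pairing."""
    pitch_hz = 220.0 * (2.0 ** (y / 2.0))   # height controls pitch, in octaves
    pan = max(-1.0, min(1.0, x))            # horizontal position controls pan
    loudness = brightness / (1.0 + z * z)   # nearer, brighter points are louder
    return pitch_hz, pan, loudness

print(point_to_sound(x=0.3, y=1.0, z=0.5, brightness=0.8))
```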

    Semantic Analysis for Human Motion Generation

    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2017. Advisor: Jehee Lee.
    One of the main goals of computer-generated character animation is to reduce the cost of creating animated scenes. Using human motion data makes it easier to animate characters, so motion capture technology is used as a standard technique. However, it is difficult to obtain the desired motion because capture requires a large space, high-performance cameras, actors, and a significant amount of post-processing work. Data-driven character animation comprises a set of techniques that make effective use of captured motion data. In this thesis, I introduce methods that analyze the semantics of motion data to enhance the data's utility. To accomplish this, techniques from several other fields are integrated so that we can understand the semantics of a unit motion clip, the implicit structure of a motion sequence, and a natural-language description of movements. Based upon that understanding, we can build new animation systems. The first animation system in this thesis allows the user to generate an animation of basketball play from a tactics board. To handle the complex basketball rules that players must follow, we use context-free grammars for motion representation. Our motion grammar enables the user to define implicit and explicit rules of human behavior and generates valid movement for basketball players. Interactions between players, or between players and the environment, are represented with semantic rules, which results in plausible animation. When we compose motion sequences, we rely on a motion corpus storing prepared motion clips and the transitions between them. Constructing a good motion corpus is essential for creating natural and rich animations, but doing so normally requires expert effort. We introduce a semi-supervised learning technique for the automatic generation of a motion corpus: stacked autoencoders find latent features in large amounts of motion capture data, and these features are used to effectively discover worthwhile motion clips. The last animation system uses natural language processing to understand the meaning of the animated scene the user wants to make. Specifically, the script of an animated scene is used to synthesize the movements of characters. Like a sketch interface, scripts are a very sparse input source; understanding motion allows the system to interpret abstract user input and generate scenes that meet the user's needs.
    Table of contents: 1 Introduction; 2 Background (2.1 Representation of Human Movements; 2.2 Motion Annotation; 2.3 Motion Grammars; 2.4 Natural Language Processing); 3 Motion Grammar (3.1 Overview; 3.2 Motion Grammar, with 3.2.1 Instantiation, Semantics, and Plausibility and 3.2.2 A Simple Example; 3.3 Basketball Tactics Board; 3.4 Motion Synthesis; 3.5 Results; 3.6 Discussion); 4 Motion Embedding (4.1 Overview; 4.2 Motion Data; 4.3 Autoencoders, with 4.3.1 Stacked Autoencoders; 4.4 Motion Corpus, with 4.4.1 Training and 4.4.2 Finding Motion Clips; 4.5 Results; 4.6 Discussion); 5 Text to Animation (5.1 Overview; 5.2 Understanding Semantics; 5.3 Action Chains, with 5.3.1 Word Embedding and 5.3.2 Motion Plausibility; 5.4 Scene Generation; 5.5 Results; 5.6 Discussion); 6 Conclusion; Bibliography; Abstract in Korean.
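
    As a toy illustration of the motion-grammar idea from Chapter 3, the sketch below expands a context-free rule set into a sequence of motion clips. The rules, clip names, and recursion cap are invented for illustration and are not the thesis's actual grammar.

```python
import random

# Nonterminals expand into sequences of nonterminals or terminal motion clips.
RULES = {
    "PLAY":    [["DRIBBLE", "PASS", "PLAY"], ["DRIBBLE", "SHOOT"]],
    "DRIBBLE": [["dribble_clip"]],
    "PASS":    [["pass_clip", "catch_clip"]],
    "SHOOT":   [["jump_shot_clip"]],
}

def expand(symbol, rng, depth=0):
    """Recursively expand a symbol into a list of terminal motion clips."""
    if symbol not in RULES:                  # terminal: an actual motion clip
        return [symbol]
    # Cap the recursion depth so a sampled play always terminates.
    options = RULES[symbol] if depth < 5 else [RULES[symbol][-1]]
    production = rng.choice(options)
    clips = []
    for s in production:
        clips.extend(expand(s, rng, depth + 1))
    return clips

print(expand("PLAY", random.Random(7)))
```

    The semantic rules described in the abstract go further than this sketch, constraining which expansions are valid given the game state so that generated player movement respects basketball rules.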

    Real time multimodal interaction with animated virtual human

    This paper describes the design and implementation of a real-time animation framework in which an animated virtual human is capable of multimodal interaction with a human user. The animation system consists of several functional components, namely perception, behaviour generation, and motion generation. The virtual human agent in the system has a complex underlying geometric structure with multiple degrees of freedom (DOFs). It relies on a virtual perception system to capture information from its environment and responds to the human user's commands with a combination of non-verbal behaviours, including co-verbal gestures, posture, body motions, and simple utterances. A language processing module is incorporated to interpret the user's commands. In particular, an efficient motion generation method has been developed that combines motion capture data with parameterized actions generated in real time to produce variations in the agent's behaviour depending on its momentary emotional state.
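
    A minimal sketch of the kind of combination described above, assuming poses are stored as per-joint angles; `blend_pose` and the weighting scheme are illustrative rather than the paper's actual motion generation method.

```python
def blend_pose(captured_pose, procedural_offset, weight):
    """Blend per-joint angles from captured data with a runtime offset.

    captured_pose and procedural_offset map joint names to angles (radians);
    weight in [0, 1] shifts the result toward the procedural variation.
    """
    return {joint: angle + weight * procedural_offset.get(joint, 0.0)
            for joint, angle in captured_pose.items()}

# A captured waving keyframe, varied by an emotion-driven parameterized action.
wave_frame = {"shoulder": 1.2, "elbow": 0.6, "wrist": 0.1}
excited_offset = {"elbow": 0.3, "wrist": 0.2}   # larger, livelier gesture
print(blend_pose(wave_frame, excited_offset, weight=0.7))
```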

    Simulating collective transport of virtual ants

    This paper simulates the behaviour of collective transport, in which a group of ants transports an object cooperatively. Unlike humans, ants coordinate collective transport not through direct communication between group members, but through indirect information transmitted via the mechanical movements of the object. This paper proposes a stochastic probability model of the decision-making procedure of group members and trains a neural network via reinforcement learning to represent the force policy. Our method scales to different numbers of individuals and adapts to user input, including the transport trajectory, object shape, and external interventions. It can reproduce the characteristic strategies of ants, such as realigning and repositioning. The simulations show that with the repositioning strategy, the ants can avoid deadlock during collective transport.
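
    A toy version of such a stochastic decision rule is sketched below; the action set and weights are invented for illustration, and the paper trains the actual force policy with reinforcement learning rather than hand-tuning it.

```python
import math
import random

def choose_action(pull_dir, object_motion_dir, rng):
    """Stochastically pick a transport behaviour from the alignment between
    an ant's pulling direction and the object's motion (angles in radians)."""
    alignment = math.cos(pull_dir - object_motion_dir)  # 1 aligned, -1 opposed
    # Well-aligned ants tend to keep pulling; poorly aligned ants tend to
    # realign their grip or reposition along the object's rim.
    weights = {
        "pull":       max(alignment, 0.0) + 0.1,
        "realign":    1.0 - abs(alignment),
        "reposition": max(-alignment, 0.0) + 0.1,
    }
    actions, w = zip(*weights.items())
    return rng.choices(actions, weights=w, k=1)[0]

rng = random.Random(0)
print([choose_action(0.2, 2.8, rng) for _ in range(5)])  # mostly reposition
```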