
    Publishing Time Dependent Oceanographic Visualizations using VRML

    Oceanographic simulations generate time-dependent data; thus, visualizations of this data should include and realize the variable `time'. Moreover, oceanographers are located across the world and wish to conveniently communicate and exchange these temporal realizations. This publication of material may be achieved using different methods and languages. VRML provides one convenient publication medium that allows the visualizations to be easily viewed and exchanged between users. Using VRML as the implementation language, we describe five categories of operation. The strategies are determined by the level of calculation that is performed at the generation stage compared to the playing of the animation. We name the methods: 2D movie, 3D spatial, 3D flipbook, key frame deformation, and visualization program.
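    The "key frame deformation" strategy described above stores geometry only at selected key times and interpolates in between at playback. A minimal sketch of that interpolation step, assuming scalar per-key values; the function name and signature are illustrative, not taken from the paper:

    ```python
    def interpolate_keyframes(key_times, key_values, t):
        """Linearly interpolate a value at time t from sorted
        (time, value) key frames, clamping outside the key range."""
        if t <= key_times[0]:
            return key_values[0]
        if t >= key_times[-1]:
            return key_values[-1]
        # find the pair of key frames surrounding t
        for i in range(len(key_times) - 1):
            t0, t1 = key_times[i], key_times[i + 1]
            if t0 <= t <= t1:
                alpha = (t - t0) / (t1 - t0)
                return key_values[i] + alpha * (key_values[i + 1] - key_values[i])
    ```

    In VRML itself this role is played by interpolator nodes driven by a TimeSensor; the sketch only shows the arithmetic they perform per field value.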

    Specifying and Generating Editing Environments for Interactive Animated Visual Models

    The behavior of a dynamic system is most easily understood if it is illustrated by a visual model that is animated over time. Graphs are a widely accepted approach for representing such dynamic models in an abstract way. System behavior and, therefore, model behavior corresponds to modifications of its representing graph over time. Graph transformations are an obvious choice for specifying these graph modifications and, hence, model behavior. Existing approaches use a graph to represent the static state of a model, whereas modifications of this graph are described by graph transformations that happen instantaneously but whose durations are stretched over time in order to allow for smooth animations. However, long-running and simultaneous animations of different parts of a model, as well as interactions during animations, are difficult to specify and realize that way. This paper describes a different approach. A graph does not necessarily represent the static aspect of a model, but rather represents the currently changing model. Graph transformations, when triggered at specific points of time, modify such graphs and thus start, change, or stop animations. Several concurrent animations may simultaneously take place in a model. Graph transformations can easily describe interactions within the model, or between user and model, too. This approach has been integrated into the DiaMeta framework, which now allows for specifying and generating editing environments for interactive animated visual models. The approach is demonstrated using the game Avalanche, where many parallel and interacting movements take place.
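    The core idea above is that a transformation rule, fired at a point in time, rewrites the graph and thereby starts or stops an animation. A minimal sketch of one such rule over an attributed graph; the `Graph` class, rule name, and attributes are illustrative assumptions, not the DiaMeta API:

    ```python
    class Graph:
        """A trivially attributed graph: node id -> attribute dict."""
        def __init__(self):
            self.nodes = {}

        def add_node(self, nid, **attrs):
            self.nodes[nid] = dict(attrs)

    def start_move_rule(graph, now):
        """Transformation rule: every node matching the pattern
        state == 'idle' is rewritten to state 'moving', recording
        the trigger time so an animation can run from `now` on.
        Returns the ids of the rewritten nodes."""
        matched = [nid for nid, a in graph.nodes.items()
                   if a.get("state") == "idle"]
        for nid in matched:
            graph.nodes[nid]["state"] = "moving"
            graph.nodes[nid]["anim_start"] = now
        return matched
    ```

    Several such rules firing at different times on disjoint parts of the graph would model the concurrent animations the abstract describes.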

    From the Behavior Model of an Animated Visual Language to its Editing Environment Based on Graph Transformation

    Animated visual models are a reasonable means for illustrating system behavior. However, implementing animated visual languages and their editing environments is difficult. Therefore, guidelines, specification methods, and tool support are necessary. A flexible approach for specifying model states and behavior is to use graphs and graph transformations. Thereby, a graph can also represent dynamic aspects of a model, like animations, and graph transformations are triggered over time to control the behavior, like starting, modifying, and stopping animations or adding and removing elements. These concepts had already been added to DiaMeta, a framework for generating editing environments, but they provide only low-level support for specifying and implementing animated visual languages; specifying complex dynamic languages was still a challenging task. This paper proposes the Animation Modeling Language (AML), which allows modeling behavior and animations on a higher level of abstraction. AML models are then translated into low-level specifications based on graph transformations. The approach is demonstrated using a traffic simulation.

    Simple MoCap System for Home Usage

    Nowadays many MoCap systems exist. Generating 3D facial animation of characters is currently realized by using motion capture data (MoCap data), which is obtained by tracking facial markers on an actor/actress. In general this is a professional solution that is sophisticated and costly. This paper presents an inexpensive alternative: a new easy-to-use system for home usage with which we create character animation. In its implementation we paid attention to eliminating the errors of previous solutions. The authors describe a method for motion-capturing characters on a treadmill, as well as their own Java application that processes the video for further use in Cinema 4D. The paper describes the implementation of this sensing technology in such a way that the animated character authentically imitates human movement on a treadmill.
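    The marker-tracking step mentioned above reduces each video frame to a set of marker positions. A minimal sketch of that reduction for one grayscale frame, assuming bright markers against a darker background; this is an illustrative blob-centroid approach, not the paper's Java implementation:

    ```python
    def find_markers(frame, threshold=200):
        """Locate bright marker blobs in a grayscale frame (a list of
        pixel rows) and return one (x, y) centroid per blob, grouping
        pixels with a 4-connected flood fill."""
        h, w = len(frame), len(frame[0])
        seen = [[False] * w for _ in range(h)]
        centroids = []
        for y in range(h):
            for x in range(w):
                if frame[y][x] >= threshold and not seen[y][x]:
                    # flood-fill one blob of above-threshold pixels
                    stack, pixels = [(x, y)], []
                    seen[y][x] = True
                    while stack:
                        px, py = stack.pop()
                        pixels.append((px, py))
                        for nx, ny in ((px + 1, py), (px - 1, py),
                                       (px, py + 1), (px, py - 1)):
                            if (0 <= nx < w and 0 <= ny < h
                                    and not seen[ny][nx]
                                    and frame[ny][nx] >= threshold):
                                seen[ny][nx] = True
                                stack.append((nx, ny))
                    cx = sum(p[0] for p in pixels) / len(pixels)
                    cy = sum(p[1] for p in pixels) / len(pixels)
                    centroids.append((cx, cy))
        return centroids
    ```

    Tracking the centroids from frame to frame then yields the per-marker trajectories that drive the character rig.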

    Computer-Aided Teaching Using Animations for Engineering Curricula: A Case Study for Automotive Engineering Modules

    © 2021 Crown Copyright. This is the accepted manuscript version of an article which has been published in final form at https://doi.org/10.1109/TE.2021.3100471. One-dimensional (1-D) demonstrations, e.g. black-box systems, have become popular in teaching materials for engineering modules due to the high complexity of a system’s multi-dimensional (e.g. 2-D and 3-D) identities. The need for multi-dimensional explanations of how multi-physics equations and systems work is vital for engineering students, whose learning experience must build a cognitive understanding for applying such multi-physics-focused equations in a pragmatic dimension. The lack of knowledge and expertise in creating animations for visualizing sequential processes and operations in academia can result in an ineffective learning experience for engineering students. This study explores the benefits of animation, which can eventually improve the teaching and student learning experiences. In this paper, the use of computer-aided animation tools is evaluated based on their capabilities, and the study offers insights for selecting among the investigated tools based on their strengths and weaknesses. To verify the effectiveness of animations in teaching and learning, a survey was conducted among undergraduate and postgraduate cohorts and automotive engineering academics; analysis of the survey data provides quantitative results and discussion. Analysis of historic data (2012-2020) validates the animations' efficacy: the average mark of both modules significantly improved, with a reduced rate of failure.

    MOG 2007: Workshop on Multimodal Output Generation: CTIT Proceedings

    This volume brings together a wide variety of work offering different perspectives on multimodal generation. Two different strands of work can be distinguished: half of the gathered papers present current work on embodied conversational agents (ECAs), while the other half presents current work on multimedia applications. Two general research questions are shared by all: what output modalities are most suitable in which situation, and how should different output modalities be combined?