
    Shape Animation with Combined Captured and Simulated Dynamics

    We present a novel volumetric animation framework for creating new types of animations from raw 3D surface or point cloud sequences of captured real performances. The framework takes as input time-incoherent 3D observations of a moving shape, and is thus particularly suitable for the output of performance capture platforms. In our system, a virtual representation of the actor is built from real captures in a way that allows seamless combination and simulation with virtual external forces and objects, so that the original captured actor can be reshaped, disassembled, or reassembled under user-specified virtual physics. Instead of the dominant surface-based geometric representation of the capture, which is less suitable for volumetric effects, our pipeline exploits Centroidal Voronoi tessellation decompositions as a unified volumetric representation of the real captured actor, which we show can serve seamlessly as a building block for all processing stages, from capture and tracking to virtual physics simulation. The representation makes no human-specific assumptions and can be used to capture and re-simulate the actor with props or other moving scenery elements. We demonstrate the potential of this pipeline for virtual reanimation of a real captured event with various unprecedented volumetric visual effects, such as volumetric distortion, erosion, morphing, gravity pull, or collisions.
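    The centroidal Voronoi tessellation at the core of this pipeline can be approximated with Lloyd's algorithm. The sketch below is a generic 2D illustration of that idea on a point cloud, not the paper's actual implementation:

```python
import numpy as np

def lloyd_cvt(points, k, iterations=50, seed=0):
    """Approximate a centroidal Voronoi tessellation of a point cloud
    with Lloyd's algorithm: alternate nearest-site assignment and
    centroid updates until the sites stabilise."""
    rng = np.random.default_rng(seed)
    sites = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iterations):
        # Assign every sample to its nearest site (discrete Voronoi cells).
        d = np.linalg.norm(points[:, None, :] - sites[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each site to the centroid of its cell (skip empty cells).
        for i in range(k):
            cell = points[labels == i]
            if len(cell):
                sites[i] = cell.mean(axis=0)
    return sites, labels
```

    In the paper's setting the cells decompose the actor's volume, so the same cell structure can carry both tracking and physics state.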

    Discrete event simulation and virtual reality use in industry: new opportunities and future trends

    This paper reviews the area of combined discrete event simulation (DES) and virtual reality (VR) use within industry. While establishing a state of the art for progress in this area, this paper makes the case for VR DES as the vehicle of choice for complex data analysis through interactive simulation models, highlighting both its advantages and current limitations. This paper reviews active research topics such as VR and DES real-time integration, communication protocols, system design considerations, model validation, and applications of VR and DES. While summarizing future research directions for this technology combination, the case is made for smart factory adoption of VR DES as a new platform for scenario testing and decision making. The case is also made that, in order for VR DES to fully meet the visualization requirements of both the Industry 4.0 and Industrial Internet visions of digital manufacturing, further research is required in the areas of lower-latency image processing, DES delivery as a service, gesture recognition for VR DES interaction, and linkage of DES to real-time data streams and Big Data sets.

    From the Behavior Model of an Animated Visual Language to its Editing Environment Based on Graph Transformation

    Animated visual models are a reasonable means for illustrating system behavior. However, implementing animated visual languages and their editing environments is difficult. Therefore, guidelines, specification methods, and tool support are necessary. A flexible approach for specifying model states and behavior is to use graphs and graph transformations. Thereby, a graph can also represent dynamic aspects of a model, like animations, and graph transformations are triggered over time to control the behavior, like starting, modifying, and stopping animations or adding and removing elements. These concepts had already been added to DiaMeta, a framework for generating editing environments, but that support remained low-level; specifying complex dynamic languages was still a challenging task. This paper proposes the Animation Modeling Language (AML), which allows modeling behavior and animations on a higher level of abstraction. AML models are then translated into low-level specifications based on graph transformations. The approach is demonstrated using a traffic simulation.

    A General Framework and Communication Protocol for the Real-Time Transmission of Interactive Media

    In this paper we present a general framework for the real-time transmission of interactive media, i.e. media involving user interaction. Examples of interactive media are shared whiteboards, Java animations and VRML worlds. By identifying and supporting the common aspects of this media class, the framework allows the development of generic services for network sessions involving the transmission of interactive media. Examples are mechanisms for late join and session recording. The proposed framework is based on the Real-Time Transport Protocol (RTP), which is widely used in the Internet for the real-time transmission of audio and video. Using the experience gained through the framework for audio and video, our work consists of three important parts: the definition of a protocol profile, the instantiation of this profile for specific media, and the development of generic services. The profile captures those aspects that are common to the class of interactive media. A single medium must instantiate this profile by providing media-specific information in the form of a payload type definition. Based on the profile, generic services can be developed for all interactive media. In this paper we focus on the description of the profile for the real-time transmission of interactive media. We then present the main ideas behind a generic recording service. Finally we show how multi-user VRML and distributed interactive Java animations can instantiate the profile.
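    The profile/payload split can be illustrated with a hypothetical sketch: a generic header shared by all interactive media, plus a per-medium payload codec registered under an RTP-style payload type number. All field names here are assumptions for illustration, not the profile's actual definition:

```python
import struct
from dataclasses import dataclass

@dataclass
class InteractiveMediaPacket:
    """Generic part of the (assumed) profile, shared by every medium."""
    payload_type: int   # identifies the medium (e.g. whiteboard, VRML)
    timestamp: int      # media time of the event, as in RTP
    state_id: int       # state the event applies to (supports late join)
    payload: bytes      # media-specific event encoding

    def pack(self) -> bytes:
        header = struct.pack("!BII", self.payload_type,
                             self.timestamp, self.state_id)
        return header + self.payload

CODECS = {}  # payload type -> media-specific decode function

def register_codec(payload_type, decode):
    """A medium instantiates the profile by supplying its own codec."""
    CODECS[payload_type] = decode

def unpack(data: bytes):
    pt, ts, sid = struct.unpack("!BII", data[:9])
    decode = CODECS.get(pt, lambda b: b)
    return pt, ts, sid, decode(data[9:])
```

    Generic services such as recording only need the shared header, while each medium interprets its own payload bytes.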

    Modelling and Refinement in CODA

    This paper provides an overview of the CODA framework for modelling and refinement of component-based embedded systems. CODA is an extension of Event-B and UML-B and is supported by a plug-in for the Rodin toolset. CODA augments Event-B with constructs for component-based modelling including components, communications ports, port connectors, timed communications and timing triggers. Component behaviour is specified through a combination of UML-B state machines and Event-B. CODA communications and timing are given an Event-B semantics through translation rules. Refinement is based on Event-B refinement and allows layered construction of CODA models in a consistent way. Comment: In Proceedings Refine 2013, arXiv:1305.563

    Design of an Application to Collect Data and Create Animations from Visual Algorithm Simulation Exercises

    Visual Algorithm Simulation (VAS) exercises are commonly used in Computer Science education to help learners understand the logic behind the abstractions used in programming. These exercises also present problems common in the daily work of Computer Science graduates. Aalto University uses the JSAV library to create VAS exercises and evaluate the solutions submitted by students. The evaluation process counts the number of correct steps given by the user during the exercise. However, because more detailed data is not collected, teachers currently cannot recreate and analyse submitted solutions in more depth. This thesis presents the design, development and evaluation of an application prototype that can be easily integrated into existing VAS exercises created with the JSAV library. The prototype is called Player Application, and it is designed as a service that can be easily integrated into other systems while still remaining independent. The Player Application consists of two main independent components: the Exercise Recorder and the Exercise Player. A third important contribution is the new JSON-based Algorithm Animation Language, which is designed to describe, structure and store the data collected from the VAS exercises. The prototype was successfully tested in an online environment by importing the Exercise Recorder into existing exercises and replaying the submitted solutions in the Exercise Player. The tests showed that its design and architecture were valid. Next, the aim is to create a mature application that can be used at Aalto University and other institutions; the prototype still needs further development to support more VAS exercise types.
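    The idea of recording every simulated step in a JSON-based language so a player can replay it can be sketched as follows. This is a hypothetical structure for illustration only; the thesis's actual Algorithm Animation Language is not reproduced here:

```python
import json

def record_step(recording, operation, target, grading=None):
    """Append one simulated step to a recording so a player component
    can later replay the submitted solution step by step.
    Field names are assumptions, not the real language's schema."""
    recording["steps"].append({
        "index": len(recording["steps"]),
        "operation": operation,   # e.g. "swap", "insert", "dequeue"
        "target": target,         # data-structure element(s) touched
        "grading": grading,       # correct / incorrect, if graded
    })
    return recording

# Record two steps of a hypothetical sorting exercise and serialize them.
recording = {"exercise": "insertion-sort", "steps": []}
record_step(recording, "swap", [3, 4], grading="correct")
record_step(recording, "swap", [1, 2], grading="incorrect")
serialized = json.dumps(recording)
```

    Because the recording is plain JSON, a recorder and a player can stay independent components that share only this data format, mirroring the Exercise Recorder / Exercise Player split.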

    Specifying and Generating Editing Environments for Interactive Animated Visual Models

    The behavior of a dynamic system is most easily understood if it is illustrated by a visual model that is animated over time. Graphs are a widely accepted approach for representing such dynamic models in an abstract way. System behavior and, therefore, model behavior corresponds to modifications of the representing graph over time. Graph transformations are an obvious choice for specifying these graph modifications and, hence, model behavior. Existing approaches use a graph to represent the static state of a model, whereas modifications of this graph are described by graph transformations that happen instantaneously, but whose durations are stretched over time in order to allow for smooth animations. However, long-running and simultaneous animations of different parts of a model, as well as interactions during animations, are difficult to specify and realize that way. This paper describes a different approach. A graph does not necessarily represent the static aspect of a model, but rather represents the currently changing model. Graph transformations, when triggered at specific points of time, modify such graphs and thus start, change, or stop animations. Several concurrent animations may simultaneously take place in a model. Graph transformations can easily describe interactions within the model or between user and model, too. This approach has been integrated into the DiaMeta framework, which now allows for specifying and generating editing environments for interactive animated visual models. The approach is demonstrated using the game Avalanche, where many parallel and interacting movements take place.
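    The core mechanism, transformation rules triggered at specific points in time that rewrite the graph and thereby start or stop animations, can be sketched minimally. This is an illustrative toy, not DiaMeta's actual API:

```python
import heapq

def simulate(graph, schedule, until):
    """Apply scheduled (time, seq, rule) triples in time order; each rule
    rewrites the graph in place, e.g. toggling a node's animation state.
    The seq field only breaks ties between rules scheduled at the same time."""
    heapq.heapify(schedule)
    log = []
    while schedule and schedule[0][0] <= until:
        t, _, rule = heapq.heappop(schedule)
        rule(graph)
        log.append((t, graph.copy()))
    return log

def start_anim(node):
    def rule(g): g[node] = "animating"
    return rule

def stop_anim(node):
    def rule(g): g[node] = "idle"
    return rule

# Two nodes animate concurrently; one animation stops while the other runs.
g = {"ball": "idle", "rock": "idle"}
events = [(0, 0, start_anim("ball")),
          (1, 1, start_anim("rock")),
          (2, 2, stop_anim("ball"))]
log = simulate(g, events, until=10)
```

    Because each rule only rewrites its own part of the graph, several animations can be live at once, which is the point the paper makes against stretching single instantaneous transformations over time.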

    Model-based engineering of animated interactive systems for the interactive television environment

    Graphical user interfaces used to be mostly static, representing one software state after the other. However, animated transitions between these static states are an integral part of modern user interfaces, and the processes for both their design and implementation remain a challenge for designers and developers. This thesis proposes a process for designing interactive systems focusing on animations, along with an architecture for the definition and implementation of animations in user interfaces. The architecture proposes a two-level approach: a high-level view of an animation (focusing on animated objects, the properties to be animated, and the composition of animations) and a low-level view dealing with detailed aspects of animations such as timing and optimization. For the formal specification of these two levels, we use an approach based on object-oriented Petri nets to support the design, implementation and validation of animated user interfaces, providing a complete and unambiguous description of the entire user interface, including animations. Finally, we describe the application of the presented process, exemplified by a case study of a high-fidelity prototype of a user interface for the interactive television domain. This process leads to a detailed formal specification of the interactive system, including animations, using object-oriented Petri nets (designed with the PetShop CASE tool).
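    The Petri-net style of specification underlying this approach can be sketched with a toy marking/firing interpreter. This is an assumed simplification for illustration, not the PetShop tool's object-oriented Petri net semantics:

```python
def enabled(marking, transition):
    """A transition is enabled when all of its input places hold a token."""
    inputs, _ = transition
    return all(marking.get(p, 0) > 0 for p in inputs)

def fire(marking, transition):
    """Firing consumes one token per input place and produces one per
    output place, yielding the next marking (i.e. the next UI state)."""
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# Two-state animation model: a token moves from 'static' to 'animating'
# when startAnim fires, and back when endAnim fires.
start_anim = (["static"], ["animating"])
end_anim = (["animating"], ["static"])

m0 = {"static": 1, "animating": 0}
m1 = fire(m0, start_anim) if enabled(m0, start_anim) else m0
```

    Even this toy net makes the states and their legal transitions explicit and unambiguous, which is what the formal specification buys over ad hoc animation code.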