
    Création automatique des animations 3D

    ABSTRACT The traditional production of 3D animations for a video game or an animated film is a cumbersome process. Animators need several years of practice and excellent skills using Digital Content Creation (DCC) software to successfully create 3D animations. This is due to the complexity of both the software and the task. Creating a realistic walk cycle in a complex scene requires many low-level details to achieve a high level of realism.
This thesis proposes a high-level view of the automatic creation of 3D animations to simplify the overall process of animation production. To address this problem, the general objective of the research was to develop a software prototype able to automatically generate 3D animations that represent the meaning of a simple sentence. This project was an integral part of the GITAN project in computer graphics. GITAN proposed a solution for generating 3D animations from text. The solution proposed in this thesis is mainly the graphics module that generates the animated 3D scene representing the input sentence. With this system, the complexity of building the animated scene is greatly reduced, since we use a textual representation to describe the animation and the various objects in the scene. The literature review suggested that similar systems that automatically generate 3D animations from text are often tied to a specific application domain, such as automobile accidents or the behaviour and interactions of characters. The automation of scene generation in these systems is often based on scripting languages or formalisms oriented towards that application domain. In addition, we wanted to generate the animation using a 3D exchange format instead of directly displaying the animation. We believe that using a 3D exchange format allows us to generate the 3D scene properly, since a good intermediate exchange format allows animations to be defined as building blocks and provides the tools to use them. For this reason, we used COLLADA as the 3D format to represent our animations. From these observations, we formulated three research hypotheses. The first assumed that it was possible to create a formalism able to describe an animated scene from a simple sentence; the formalism allows us to describe the animated scene using nodes, constraints, and keyframes. The second hypothesis assumed that it is possible to translate the script that describes the scene into a COLLADA file.
We proposed a software system that translates the script into a COLLADA file containing the 3D animation. Finally, the third hypothesis assumed that the animation generated by the system communicates the meaning of the original sentence; the system must be able to communicate the message of the sentence describing the scene to observers. To test these hypotheses, the methodology we adopted consists, first, in creating the formalism for describing the 3D scene. We proposed an XML schema for declaring nodes, predefined animations, constraints, and keyframes that describe the scene. Subsequently, we proposed a modular software architecture that translates the script into the COLLADA file. The system uses algorithms to position objects correctly in the scene and to synchronize the animations. Finally, we conducted a survey to validate the communication of the message contained in the generated 3D scenes. The results of the survey allow us to analyze how the message is understood by observers and how the environment of the 3D scene influences the message, and thus determine whether it is possible to convey the meaning of the original sentence with a 3D animation. The results we obtained are very satisfactory. We were able to describe the scenes with the proposed script language. In addition, the software system generates well-structured COLLADA files and is capable of generating two types of scenes: static scenes and animated scenes. Finally, analysis of the survey results shows that animated scenes communicate messages better than static scenes, but the proper use of both types of scenes according to the sentence communicates the message effectively. Indeed, sentences containing state verbs are better represented by static scenes, while 3D animations more adequately represent sentences containing action verbs.
Furthermore, the analysis of the influence of the environment showed that it offers no improvement in communicating the message. These results revealed that the system is able to automatically generate 3D animations that convey the meaning of a simple sentence, thereby simplifying the traditional 3D animation production process.
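The pipeline summarised above (an XML scene script of nodes, constraints, and keyframes, translated into a COLLADA animation) could be sketched roughly as follows. The element and attribute names in this scene script, and the parsing step, are illustrative assumptions, not the thesis's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical scene script in the spirit of the thesis's XML formalism:
# nodes declare scene objects, constraints relate them, and keyframes
# drive the animation channels. Names are illustrative only.
SCRIPT = """
<scene>
  <node id="dog" mesh="dog.dae"/>
  <node id="ball" mesh="ball.dae"/>
  <constraint type="on_ground" node="dog"/>
  <animation target="ball" property="translate.y">
    <keyframe time="0.0" value="0.0"/>
    <keyframe time="0.5" value="1.0"/>
    <keyframe time="1.0" value="0.0"/>
  </animation>
</scene>
"""

def parse_scene(xml_text):
    """Extract node ids and keyframed animation channels from the script."""
    root = ET.fromstring(xml_text)
    nodes = [n.attrib["id"] for n in root.findall("node")]
    channels = {}
    for anim in root.findall("animation"):
        key = (anim.attrib["target"], anim.attrib["property"])
        channels[key] = [(float(k.attrib["time"]), float(k.attrib["value"]))
                         for k in anim.findall("keyframe")]
    return nodes, channels

nodes, channels = parse_scene(SCRIPT)
print(nodes)  # ['dog', 'ball']
print(channels[("ball", "translate.y")])
```

A translator module would then map each parsed node to a COLLADA `<node>` and each channel to COLLADA `<animation>` samplers, which is where the positioning and synchronization algorithms described above would apply.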

    Contributions to Big Geospatial Data Rendering and Visualisations

    Current geographical information systems lack features and components which are commonly found within rendering and game engines. When combined with computer game technologies, a modern geographical information system capable of advanced rendering and data visualisation becomes achievable. We have investigated the combination of big geospatial data and computer game engines to create a modern geographical information system framework capable of visualising densely populated real-world scenes using advanced rendering algorithms. The pipeline imports raw geospatial data in the form of Ordnance Survey data provided by the UK government, LiDAR data provided by a private company, and the global open mapping project OpenStreetMap. The data sets are combined, using interpolated Ordnance Survey data to produce terrain data wherever the high-resolution LiDAR sources have gaps in coverage. Once a high-resolution terrain data set with complete coverage has been generated, sub-datasets can be extracted from the LiDAR using OSM boundary data as a perimeter. The boundaries in OSM represent buildings or assets, so data such as building heights can be extracted and used to update the OSM database. Using a novel adjacency matrix extraction technique, 3D model mesh objects can be generated from both LiDAR and OSM information. The generation of model mesh objects from OSM data uses procedural content generation techniques, enabling the generation of GIS-based 3D real-world scenes. Although LiDAR and Ordnance Survey data are available only for the UK, restricting that generation to UK borders, the system is able to procedurally generate any place in the world covered by OSM using OSM data alone.
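The gap-filling step described above, falling back to interpolated coarse terrain where the fine LiDAR grid has holes, might look something like this minimal sketch. The grid sizes, the 2:1 resolution ratio, and the use of None to mark missing LiDAR cells are assumptions for illustration, not the thesis's actual implementation:

```python
# Fill missing cells of a fine LiDAR height grid with values bilinearly
# interpolated from a coarser Ordnance Survey terrain grid.

def bilinear(grid, x, y):
    """Bilinearly interpolate a grid value at fractional coordinates (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def fill_gaps(lidar, coarse, ratio=2.0):
    """Replace missing (None) LiDAR cells with interpolated coarse heights."""
    out = []
    for j, row in enumerate(lidar):
        out.append([bilinear(coarse, i / ratio, j / ratio) if h is None else h
                    for i, h in enumerate(row)])
    return out

coarse = [[0.0, 2.0], [4.0, 6.0]]        # 2x2 coarse OS terrain tile
lidar = [[1.0, None, 1.2, None],
         [None, 1.1, None, 1.3],
         [0.9, None, 1.0, None],
         [None, 0.8, None, 1.1]]         # 4x4 LiDAR tile with holes
filled = fill_gaps(lidar, coarse)
```

The same fallback serves both cases the abstract mentions: missing terrain cells and missing LiDAR cells are filled from the interpolated Ordnance Survey grid, yielding complete coverage before sub-dataset extraction.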
In this research, to manage the large amounts of data, a novel scenegraph structure has been developed to spatially separate OSM data according to OS coordinates, splitting the UK into 1 km² tiles and categorising OSM assets such as buildings, highways, and amenities. Once spatially organised and categorised as assets of importance, the novel scenegraph allows for data dispersal through an entire scene in real time. The 3D real-world scenes visualised within the runtime simulator can be manipulated in four main aspects:
• Viewing at any angle or location through the use of a 3D and 2D camera system.
• Modifying the effects or effect parameters applied to the 3D model mesh objects to visualise user-defined data, by use of our novel algorithms and unique lighting data-structure effect file with accompanying material interface.
• Procedurally generating animations which can be applied to the spatial parameters of objects, or to their visual properties.
• Applying the Indexed Array Shader Function and taking advantage of the novel big geospatial scenegraph structure to exploit better rendering techniques in the context of a modern geographical information system, which has not been done before, to the best of our knowledge.
Combined with the novel scenegraph structure layout, the user can view and manipulate real-world procedurally generated worlds, with additional user-generated content, in a number of ways unseen in current geographical information system implementations. We evaluate multiple functionalities and aspects of the framework. We evaluate the performance of the system by stress testing, measuring frame rates with maps of multiple sizes, as well as evaluating the benefits of the novel scenegraph structure for categorising, separating, manoeuvring, and data dispersal, with uniform scaling by n² of scenegraph nodes containing no model mesh data, procedurally generated model data, and user-generated model data.
The experiments compared runtime parameters and memory consumption. We have compared the technical features of the framework against those of related real-world commercial projects: Google Maps, OSM2World, OSM-3D, OSM-Buildings, OpenStreetMap, ArcGIS, Sustainability Assessment Visualisation and Enhancement (SAVE), and Autonomous Learning Agents for Decentralised Data and Information (ALLADIN). We conclude that, compared to related research, the framework produces data-sets suitable for visualising geospatial assets from combined real-world data-sets, capable of being used by a multitude of external game engines, applications, and geographical information systems. The ability to manipulate the production of these data-sets at pre-compile time, provided by the pre-processor, aids processing speed for runtime simulation. The added benefit is that users can manipulate the spatial and visual parameters in a number of ways with minimal domain knowledge. The ability to attach procedural animations to each of the spatial parameters and visual shading parameters allows users to view and encode their own representations of scenes, which is unavailable in all of the products stated. Each of the alternative projects has similar features, but none allows full animation of all parameters of an asset, whether spatially, visually, or both. We also evaluated the framework on its implemented features, implementing the necessary algorithms and novelties as problems arose during development. An example is the algorithm for combining our multiple terrain data-sets (Ordnance Survey terrain data, and Light Detection and Ranging Digital Surface Model and Digital Terrain Model data) in a justifiable way to produce maps with no missing data values for further analysis and visualisation.
A majority of visualisations are rendered using an Indexed Array Shader Function effect file, structured as a novel design to encapsulate common rendering effects found in commercial computer games and apply them to the rendering of real-world assets for a modern geographical information system. Maps varying in size, dimensions, polygonal density, asset counts, and memory consumption prove successful against real-time rendering parameters; that is, the visualisation of maps does not create a processing bottleneck. The visualised scenes allow users to view large, dense environments which include terrain models together with procedural and user-generated buildings, highways, amenities, and boundaries. The novel scenegraph structure allows fast iteration and search from user-defined dynamic queries. Interaction with the framework is provided through a novel Interactive Visualisation Interface; using the interface, a user can apply procedurally generated animations to both the spatial and visual properties of any node or model mesh within the scene. We conclude that the framework has been a success. We have completed what we set out to develop and create: we have combined multiple data-sets to create improved terrain data-sets for further research and development, and we have created a framework which combines the real-world data of Ordnance Survey, LiDAR, and OpenStreetMap, and implemented algorithms to create procedural assets of buildings, highways, terrain, amenities, model meshes, and boundaries for visualisation, with features that allow users to search and manipulate a city's worth of data on a per-object basis or in user-defined combinations. The successful framework has been built through the cross-domain specialism needed for such a project.
We have combined the areas of computer games technology, engine and framework development, procedural generation techniques and algorithms, use of real-world data-sets, geographical information system development, data parsing, big-data algorithmic reduction techniques, and visualisation using shader techniques.
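The tile-based spatial index at the heart of the scenegraph described above, bucketing assets into 1 km squares by Ordnance Survey easting/northing and by category, can be sketched as follows. The asset records, categories, and coordinates are illustrative assumptions, not the framework's actual data:

```python
from collections import defaultdict

class TileScenegraph:
    """Spatial index bucketing assets into 1 km OS grid tiles by category."""

    def __init__(self, tile_size=1000):
        self.tile_size = tile_size
        # tile key -> category -> list of assets
        self.tiles = defaultdict(lambda: defaultdict(list))

    def tile_key(self, easting, northing):
        """Identify the 1 km tile covering an OS coordinate."""
        return (int(easting) // self.tile_size, int(northing) // self.tile_size)

    def insert(self, easting, northing, category, asset):
        self.tiles[self.tile_key(easting, northing)][category].append(asset)

    def query(self, easting, northing, category):
        """All assets of a category within the tile covering the point."""
        return self.tiles[self.tile_key(easting, northing)][category]

sg = TileScenegraph()
sg.insert(530124, 179873, "building", "St Pancras")
sg.insert(530980, 179001, "building", "British Library")
sg.insert(532500, 181200, "highway", "A501")
print(sg.query(530500, 179500, "building"))  # both buildings share tile (530, 179)
```

Keying lookups by tile and category is what makes the per-object and user-defined combination queries cheap at runtime: a query touches only the tiles and categories it names rather than scanning the whole scene.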

    Proceedings. 9th 3DGeoInfo Conference 2014, [11-13 November 2014, Dubai]

    It is known that scientific disciplines such as geology, geophysics, and reservoir exploration intrinsically use 3D geo-information in their models and simulations. However, 3D geo-information is also urgently needed in many traditional 2D planning areas such as civil engineering, city and infrastructure modeling, architecture, environmental planning, etc. Altogether, 3DGeoInfo is an emerging technology that will greatly influence the market within the next few decades. The 9th International 3DGeoInfo Conference aims at bringing together international state-of-the-art researchers and practitioners, facilitating the dialogue on emerging topics in the field of 3D geo-information. The conference in Dubai offers an interdisciplinary forum for sub- and above-surface 3D geo-information researchers and practitioners dealing with data acquisition, modeling, management, maintenance, visualization, and analysis of 3D geo-information.