
    A 3D Pipeline for 2D Pixel Art Animation

    This document presents a comprehensive report on a project aimed at developing an automated process for creating 2D animations from 3D models using Blender. The project's main goal is to improve upon existing techniques and to reduce the need for artists to perform repetitive, clerical tasks in the animation production process. The project involves the design and development of a plugin for Blender, written in Python, which was developed to be efficient and to reduce the time-intensive tasks that usually characterise some stages of the animation process. The plugin supports three specific animation styles: pixel art, cel shading, and cel shading with outlines, and it can be expanded to support a wider range of styles. The plugin is also open source, allowing for greater collaboration and potential contributions from the community. Despite the challenges faced, the project was successful in achieving its goals, and the results show that the plugin can achieve results similar to those obtained with comparable tools and with traditional animation. Future work includes keeping the plugin up to date with the latest versions of Blender, publishing it on GitHub and on Blender plugin marketplaces, and adding new art styles.
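    The plugin's own code is not shown in this abstract. As a minimal sketch, assuming a recent Blender version, the following illustrates the kind of repetitive per-scene setup such a plugin would automate for the pixel-art style: rendering at sprite resolution with the pixel filter effectively disabled so edges stay hard. The function name setup_pixel_art_render and the chosen resolution are illustrative, not the plugin's API.

    import bpy  # Blender's Python API; run inside Blender

    def setup_pixel_art_render(scene, width=128, height=128):
        """Configure a scene for a crisp low-resolution 'pixel art' render.
        Property names follow recent Blender releases."""
        render = scene.render
        render.resolution_x = width          # render at sprite resolution...
        render.resolution_y = height
        render.resolution_percentage = 100
        render.filter_size = 0.01            # ...with (almost) no pixel
                                             # filtering, so edges stay hard
        render.film_transparent = True       # transparent background for sprites

    setup_pixel_art_render(bpy.context.scene)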

    The physics of streamer discharge phenomena

    In this review we describe a transient type of gas discharge which is commonly called a streamer discharge, as well as a few related phenomena in pulsed discharges. Streamers are propagating ionization fronts with self-organized field enhancement at their tips that can appear in gases at (or close to) atmospheric pressure. They are the precursors of other discharges like sparks and lightning, but they also occur, for example, in corona reactors or plasma jets, which are used for a variety of plasma-chemical purposes. When enough space is available, streamers can also form at much lower pressures, as in the case of sprite discharges high up in the atmosphere. We explain the structure and basic underlying physics of streamer discharges, and how they scale with gas density. We discuss the chemistry and applications of streamers, and describe their two main stages in detail: inception and propagation. We also look at some other topics, like interaction with flow and heat, related pulsed discharges, and electron runaway and high-energy radiation. Finally, we discuss streamer simulations and diagnostics in some detail. This review is written with two purposes in mind: first, we describe recent results on the physics of streamer discharges, with a focus on the work performed in our groups, as well as recent developments in diagnostics and simulations of streamers. Second, we provide background information on the above-mentioned aspects of streamers. This review can therefore be used as a tutorial by researchers starting to work in the field of streamer physics.
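    The review's treatment of density scaling is not reproduced in this abstract; as an orienting aside (a textbook similarity result, not the review's own formulation), streamer lengths and times shrink with gas number density N while fields grow with it, so the reduced field E/N is the density-invariant quantity:

    % Similarity (Townsend) scaling with gas number density N:
    \[
      \ell \propto \frac{1}{N}, \qquad
      t \propto \frac{1}{N}, \qquad
      E \propto N, \qquad
      \frac{E}{N} = \text{invariant}.
    \]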

    Support tools for 3D game creation

    Nowadays, tools for developing video games are a very important part of the development process in the game industry. Such tools assist game developers in their tasks, allowing them to create functional games while writing few lines of code: for example, they let users import content into the game, set up the game logic, or produce and compile the source code. Developing a video game without specialized tools is a complex and time-consuming process, and such software is commonly used both by hobbyists and by professionals seeking to optimize their development process. Several tasks and components in video game development become unproductive if they are not automated and/or optimized. For example, programming events or dialogs can consume too much time in the development cycle, besides being a tedious and repetitive task for the programmer. For this reason, tools that support these tasks can be very important for increasing productivity and helping with the maintenance of the various processes involved in video game development. This dissertation aims to demonstrate the advantages of using this kind of tool during game development, presenting a case study involving the development of a serious game entitled Clean World. In Clean World, certain tasks proved too repetitive and tedious when programmed entirely by hand, such as adding, modifying, or removing components like dialogs, quests, or items. To address this problem, a set of tools was created to increase productivity during development, turning repetitive and tedious tasks into simple and intuitive processes: Item Manager, Quest Manager, Dialog Manager, and Terrain Creator.
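    The dissertation's actual tools are not shown in this abstract. As a minimal, hypothetical Python sketch of the underlying idea, a dialog manager moves dialog content out of code and into data, so adding or editing a dialog no longer means writing code. The names DialogManager, DialogLine, and dialogs.json are illustrative only.

    import json
    from dataclasses import dataclass

    @dataclass
    class DialogLine:
        speaker: str
        text: str

    class DialogManager:
        """Loads dialogs from a data file so designers can add or edit
        them without touching game code (hypothetical sketch)."""

        def __init__(self, path):
            with open(path, encoding="utf-8") as f:
                raw = json.load(f)
            # expected shape: {"intro": [{"speaker": "NPC", "text": "Hi"}], ...}
            self.dialogs = {
                name: [DialogLine(**line) for line in lines]
                for name, lines in raw.items()
            }

        def play(self, name):
            for line in self.dialogs[name]:
                print(f"{line.speaker}: {line.text}")

    # manager = DialogManager("dialogs.json"); manager.play("intro")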

    Automatic video segmentation employing object/camera modeling techniques

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not reflected in the technical system, making it difficult to manipulate the video at the object level. The realization of object-based manipulation would introduce many new possibilities for working with videos, like composing new scenes from pre-existing video objects or enabling user interaction with the scene. Moreover, object-based video compression, as defined in the MPEG-4 standard, can provide high compression ratios because the foreground objects can be sent independently from the background. In the case that the scene background is static, the background views can even be combined into a large panoramic sprite image, from which the current camera view is extracted. This results in a higher compression ratio since the sprite image for each scene only has to be sent once.

    A prerequisite for employing object-based video processing is automatic (or at least user-assisted semi-automatic) segmentation of the input video into semantic units, the video objects. This segmentation is a difficult problem because the computer does not have the vast amount of pre-knowledge that humans subconsciously use for object detection. Thus, even the simple definition of the desired output of a segmentation system is difficult. The subject of this thesis is to provide algorithms for segmentation that are applicable to common video material and that are computationally efficient. The thesis is conceptually separated into three parts. In Part I, an automatic segmentation system for general video content is described in detail. Part II introduces object models as a tool to incorporate user-defined knowledge about the objects to be extracted into the segmentation process. Part III concentrates on the modeling of camera motion in order to relate the observed camera motion to real-world camera parameters.

    The segmentation system described in Part I is based on a background-subtraction technique. The pure background image that is required for this technique is synthesized from the input video itself. Sequences that contain rotational camera motion can also be processed, since the camera motion is estimated and the input images are aligned into a panoramic scene background. This approach is fully compatible with the MPEG-4 video-encoding framework, such that the segmentation system can easily be combined with an object-based MPEG-4 video codec. After an introduction to the theory of projective geometry in Chapter 2, which is required for the derivation of camera-motion models, the estimation of camera motion is discussed in Chapters 3 and 4. It is important that the camera-motion estimation is not influenced by foreground object motion; at the same time, the estimation should provide motion parameters accurate enough that all input frames can be combined seamlessly into a background image. The core motion estimation follows a feature-based approach in which the motion parameters are determined with a robust-estimation algorithm (RANSAC), in order to distinguish the camera motion from simultaneously visible object motion. Our experiments showed that the robustness of the original RANSAC algorithm in practice does not reach the theoretically predicted performance. An analysis of the problem revealed that this is caused by numerical instabilities, which can be significantly reduced by a modification that we describe in Chapter 4.
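    Neither the thesis's camera-motion model nor its Chapter 4 modification is reproduced in this abstract. The following is a minimal sketch of the standard feature-based RANSAC loop it builds on, deliberately using a trivial translation-only motion model so that one matched point suffices as the minimal sample; a real camera-motion estimator would fit a projective model (homography) from four-point samples. All names are illustrative.

    import numpy as np

    def ransac_translation(src, dst, iters=500, thresh=2.0, seed=0):
        """Robustly estimate a 2D translation mapping src -> dst (both
        N x 2 arrays of matched feature points), ignoring outlier
        matches such as those on moving foreground objects."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(src), dtype=bool)
        for _ in range(iters):
            i = rng.integers(len(src))           # minimal sample: one match
            t = dst[i] - src[i]                  # candidate camera motion
            err = np.linalg.norm(src + t - dst, axis=1)
            inliers = err < thresh               # matches consistent with t
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # refine on all inliers (least squares reduces to the mean here)
        t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
        return t, best_inliers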
    The synthesis of static background images is discussed in Chapter 5. In particular, we present a new algorithm for the removal of the foreground objects from the background image, such that a pure scene background remains. The proposed algorithm is optimized to synthesize the background even for difficult scenes in which the background is only visible for short periods of time. The problem is solved by clustering the image content for each region over time, such that each cluster comprises static content. Furthermore, the algorithm exploits the observation that the times at which foreground objects appear in an image region are similar to the corresponding times of neighboring image areas.

    The reconstructed background could be used directly as the sprite image in an MPEG-4 video coder. However, we have discovered that the counterintuitive approach of splitting the background into several independent parts can reduce the overall amount of data; in the case of general camera motion, the construction of a single sprite image is even impossible. In Chapter 6, a multi-sprite partitioning algorithm is presented, which separates the video sequence into a number of segments, for which independent sprites are synthesized. The partitioning is computed in such a way that the total area of the resulting sprites is minimized, while simultaneously satisfying additional constraints. These include a limited sprite-buffer size at the decoder, and the restriction that the image resolution in the sprite should never fall below the input-image resolution. The described multi-sprite approach is fully compatible with the MPEG-4 standard, but provides three advantages: first, arbitrary rotational camera motion can be processed; second, the coding cost for transmitting the sprite images is lower; and finally, the quality of the decoded sprite images is better than in previously proposed sprite-generation algorithms.

    Segmentation masks for the foreground objects are computed with a change-detection algorithm that compares the pure background image with the input images. A special effect that occurs in the change detection is the problem of image misregistration. Since the change detection compares co-located image pixels in the camera-motion-compensated images, a small error in the motion estimation can introduce segmentation errors because non-corresponding pixels are compared. We approach this problem in Chapter 7 by integrating risk maps into the segmentation algorithm, which identify pixels for which misregistration would probably result in errors. For these image areas, the change-detection algorithm is modified to disregard the difference values for the pixels marked in the risk map. This modification significantly reduces the number of false object detections in fine-textured image areas. The algorithmic building blocks described above can be combined into a segmentation system in various ways, depending on whether camera motion has to be considered or whether real-time execution is required. These different systems and example applications are discussed in Chapter 8.
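    The clustering-based synthesis of Chapter 5 is not detailed in this abstract. As a much simpler stand-in that captures the same intuition (static content dominates each pixel over time), the sketch below computes a per-pixel temporal median over aligned frames and then performs threshold-based change detection with an optional risk map, as described for Chapter 7. All names and the threshold value are illustrative.

    import numpy as np

    def synthesize_background(frames):
        """Per-pixel temporal median over aligned frames (T x H x W).
        If the background is visible at each pixel most of the time,
        the median suppresses the moving foreground."""
        return np.median(frames, axis=0)

    def change_detection(frame, background, thresh=25.0, risk_map=None):
        """Binary foreground mask by background subtraction. Pixels
        marked in an (optional) boolean risk map, e.g. fine-textured
        areas prone to misregistration, are never marked foreground."""
        diff = np.abs(frame.astype(float) - background)
        mask = diff > thresh
        if risk_map is not None:
            mask &= ~risk_map       # disregard high-risk pixels
        return mask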
    Part II of the thesis extends the described segmentation system to consider object models in the analysis. Object models allow the user to specify which objects should be extracted from the video. In Chapters 9 and 10, a graph-based object model is presented in which the features of the main object regions are summarized in the graph nodes, and the spatial relations between these regions are expressed with the graph edges. The segmentation algorithm is extended by an object-detection algorithm that searches the input image for the user-defined object model. We provide two object-detection algorithms: the first is specific to cartoon sequences and uses an efficient sub-graph matching algorithm, whereas the second processes natural video sequences. With the object-model extension, the segmentation system can be controlled to extract individual objects, even if the input sequence comprises many objects.

    Chapter 11 proposes an alternative approach to incorporating object models into a segmentation algorithm. The chapter describes a semi-automatic segmentation algorithm, in which the user coarsely marks the object and the computer refines this to the exact object boundary; afterwards, the object is tracked automatically through the sequence. In this algorithm, the object model is defined as the texture along the object contour. This texture is extracted in the first frame and then used during the object tracking to localize the original object. The core of the algorithm uses a graph representation of the image and a newly developed algorithm for computing shortest circular paths in planar graphs. The proposed algorithm is faster than the currently known algorithms for this problem, and it can also be applied to many alternative problems, like shape matching.
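    The thesis's faster circular-path algorithm is not given in this abstract; the sketch below shows only the ordinary Dijkstra building block on which such contour extraction rests (a standard, simpler scheme would cut the planar graph and run searches like this between the two sides of the cut). The adjacency-list format is illustrative.

    import heapq

    def dijkstra(adj, source):
        """Shortest-path distances from source in a graph given as
        adj[u] = [(v, weight), ...]."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue            # stale queue entry
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # dijkstra({0: [(1, 1.0)], 1: [(2, 2.0)], 2: []}, 0) -> {0: 0.0, 1: 1.0, 2: 3.0}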
    Part III of the thesis elaborates on different techniques to derive information about the physical 3-D world from the camera motion. In the segmentation system, we employ camera-motion estimation, but the obtained parameters have no direct physical meaning. Chapter 12 discusses an extension to the camera-motion estimation that factorizes the motion parameters into physically meaningful parameters (rotation angles, focal length) using camera autocalibration techniques. The speciality of the algorithm is that it can process camera motion that spans several sprites, by employing the above multi-sprite technique; consequently, the algorithm can be applied to arbitrary rotational camera motion.

    For the analysis of video sequences, it is often required to determine and follow the position of the objects. Clearly, the object position in image coordinates provides little information if the viewing direction of the camera is not known. Chapter 13 provides a new algorithm to deduce the transformation between the image coordinates and the real-world coordinates for the special application of sport-video analysis. In sport videos, the camera view can be derived from markings on the playing field. For this reason, we employ a model of the playing field that describes the arrangement of lines. After detecting significant lines in the input image, a combinatorial search is carried out to establish correspondences between lines in the input image and lines in the model. The algorithm requires no information about the specific color of the playing field, and it is very robust to occlusions or poor lighting conditions. Moreover, the algorithm is generic in the sense that it can be applied to any type of sport by simply exchanging the model of the playing field.

    In Chapter 14, we again consider panoramic background images and particularly focus on their visualization. Apart from the planar background sprites discussed previously, a frequently used visualization technique for panoramic images is projection onto a cylinder surface, which is unwrapped into a rectangular image. The disadvantage of this approach, however, is that the viewer has no good orientation in the panoramic image, because they look in all directions at the same time. In order to provide a more intuitive presentation of wide-angle views, we have developed a visualization technique specialized for the case of indoor environments. We present an algorithm to determine the 3-D shape of the room in which the image was captured or, more generally, to compute a complete floor plan if several panoramic images captured in each of the rooms are provided. Based on the obtained 3-D geometry, a graphical model of the rooms is constructed, in which the walls are displayed with textures that are extracted from the panoramic images. This representation enables virtual walk-throughs in the reconstructed rooms and therefore provides a better orientation for the user.

    Summarizing, we can conclude that all segmentation techniques employ some definition of foreground objects. These definitions are either explicit, using object models as in Part II of this thesis, or they are implicit, as in the background synthesis of Part I. The results of this thesis show that implicit descriptions, which extract their definition from the video content, work well when the sequence is long enough to extract this information reliably. However, high-level semantics are difficult to integrate into segmentation approaches that are based on implicit models; instead, those semantics should be added as post-processing steps. Explicit object models, on the other hand, apply semantic pre-knowledge at early stages of the segmentation. Moreover, they can be applied to short video sequences or even still pictures, since no background model has to be extracted from the video. The definition of a general object-modeling technique that is widely applicable and that also enables an accurate segmentation remains an important yet challenging problem for further research.

    MASCOT : metadata for advanced scalable video coding tools : final report

    The goal of the MASCOT project was to develop new video coding schemes and tools that provide both increased coding efficiency and extended scalability features compared to the technology available at the beginning of the project. Towards that goal, the following tools would be used:
    - metadata-based coding tools;
    - new spatiotemporal decompositions;
    - new prediction schemes.
    Although the initial goal was to develop one single codec architecture able to combine all the new coding tools foreseen when the project was formulated, it became clear that this would limit the selection of the new tools. The consortium therefore decided to develop two codec frameworks within the project, a standard hybrid DCT-based codec and a 3D wavelet-based codec, which together are able to accommodate all tools developed during the course of the project.

    Procedural Generation and Rendering of Trees and Landscapes in the Style of Eyvind Earle

    In this thesis I develop methods of generating digital 3D landscapes in the style of the artist Eyvind Earle, who is perhaps best known for his art direction and background paintings on Sleeping Beauty. I develop a variety of trees and other terrain elements, each tailored to match the graphic shapes and rendered accordingly to match the style of reference artwork. Creation of both terrain and trees can be highly generative in nature: complex in a way that lends itself to being defined by a logical, systematic approach. I provide procedural methods for matching the shapes of the objects, relying on noise, L-systems, and other constraints. In general, the process is divided into base-geometry generation and shading details. Shading methods include simple custom shaders and geometry-based stippling and linework. The various systems are implemented in Side Effects Software's Houdini, as its procedural capabilities allow the creation of many scenes with the same tools.
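    The thesis's actual L-system rules (built in Houdini) are not given in this abstract. For orientation, the sketch below shows the string-rewriting mechanism that L-systems name, with a textbook branching rule set rather than anything Earle-specific.

    def expand_lsystem(axiom, rules, iterations):
        """Iteratively rewrite every symbol by its production rule;
        symbols without a rule are copied unchanged."""
        s = axiom
        for _ in range(iterations):
            s = "".join(rules.get(c, c) for c in s)
        return s

    # Textbook branching example: F draws a segment, +/- turn the turtle,
    # [ and ] push/pop the turtle state to create branches.
    rules = {"F": "F[+F]F[-F]F"}
    print(expand_lsystem("F", rules, 2))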

    Exploration of Pervasive Games in Relation to Mobile Technologies

    The project is an exploration of pervasive games in relation to mobile technologies, with the intention of developing a pervasive game engine. Pervasive games are interactive games in which the participants drive the game play by playing the game in both the real world and a virtual environment. This is an area of gaming that has rapidly evolved over the last few years. The initial research involved establishing several key elements common to existing pervasive applications, defining real-world/virtual-world considerations for game play (both positive and negative), and identifying the technical requirements needed to implement play elements on a mobile device. After comparing several platforms, the Windows Phone 7 platform was selected for development purposes. The requirements for establishing a working development platform (with a delivery mechanism) were investigated and a working environment set up. A pervasive game engine was then developed in the form of 67 code stubs (coding solutions) that implement solutions to gaming elements required in the development of pervasive applications. In addition, two new helper classes were developed, containing solutions to topics related to run-time data storage (StorageUtils.cs) and generic gaming tasks (GameCode.cs). A pervasive game was implemented to test a cross-section of the engine's functionality. The basic principle behind the game was to overlay various layers (video, backgrounds, sprites, and text) to build up an immersive pervasive environment, with the player at the centre of the game imagery, the game domain, and the real world. The intention of the game was to see how the pervasive game experience could be reflected in the game mechanics and pervasive interaction, while utilising the engine's functionality.
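    The engine itself is C# (the .cs helper classes above) and its stubs are not reproduced here. As a language-neutral Python sketch of the layering idea the abstract describes, the painter's algorithm draws layers back to front so that later layers overlay earlier ones; the Layer class and depth values are hypothetical.

    class Layer:
        """One drawable stratum of the scene (hypothetical sketch)."""
        def __init__(self, name, depth):
            self.name, self.depth = name, depth

        def draw(self):
            print(f"drawing {self.name}")   # stand-in for real rendering

    def render(layers):
        """Painter's algorithm: draw deepest layers first, so shallower
        layers overlay them."""
        for layer in sorted(layers, key=lambda l: l.depth, reverse=True):
            layer.draw()

    # Back-to-front order described in the abstract:
    render([Layer("text", 0), Layer("sprites", 1),
            Layer("background", 2), Layer("video", 3)])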

    Real-time rendering of large surface-scanned range data natively on a GPU

    This thesis presents research carried out for the visualisation of surface anatomy data stored as large range images, such as those produced by stereo-photogrammetric and other triangulation-based capture devices. As part of this research, I explored the use of points, as opposed to polygons, as a rendering primitive, and the use of range images as the native data representation. Using points as a display primitive required the creation of a pipeline that solved problems associated with point-based rendering. The problems investigated were scattered-data interpolation (a common problem with point-based rendering), multi-view rendering, multi-resolution representations, anti-aliasing, and hidden-point removal. In addition, an efficient real-time implementation on the GPU was carried out.
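    The thesis's GPU pipeline is not given in this abstract. The following is a minimal CPU sketch of the core of point-based rendering with the simplest form of hidden-point handling: projecting 3D points through a pinhole camera and keeping only the nearest point per pixel in a depth buffer. A real pipeline would additionally splat points, interpolate scattered data, and anti-alias; all names are illustrative.

    import numpy as np

    def render_points(points, f, width, height):
        """Project 3D points (N x 3, camera looking down +z) with a
        pinhole camera of focal length f, resolving occlusion with a
        per-pixel depth buffer (nearest point wins)."""
        depth = np.full((height, width), np.inf)
        for x, y, z in points:
            if z <= 0:
                continue                      # behind the camera
            u = int(f * x / z + width / 2)    # perspective projection
            v = int(f * y / z + height / 2)
            if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
                depth[v, u] = z               # keep the nearest point
        return depth

    # depth = render_points(np.random.rand(1000, 3) + [0, 0, 2], 500, 640, 480)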