159 research outputs found

    Methods for Procedural Terrain Generation

    Get PDF
Procedural generation has long been used to produce data automatically. This automation has been applied both in the entertainment industry and in research in order to quickly produce large amounts of exactly the kind of data needed, for example, in system testing. In this thesis, we examine different ways to use procedural generation to produce synthetic terrains. First, we take a closer look at what procedural generation is, where it originated, and where it has been applied. We then look at how this technique is used in the creation of terrains and what is generally required of terrain visually. Next, we examine different ways to implement terrain generation. As part of this thesis, we selected three methods and wrote our own implementation of each for terrain generation. We examine the performance of these implementations and what a test group thinks of the resulting synthetic terrains. The results and their analysis are presented at the end of the thesis.
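
    The abstract does not name the three implemented methods, but fractal noise is one of the most common procedural terrain techniques; the following is a minimal, hypothetical sketch of a value-noise heightmap in that spirit, not a reconstruction of the thesis's implementations.

```python
# A minimal sketch (not from the thesis) of one common procedural terrain
# technique: value noise summed over octaves to form a heightmap.
import numpy as np

def value_noise(size, cell, rng):
    """Bilinearly interpolate a coarse grid of random values up to `size`."""
    grid = rng.random((size // cell + 2, size // cell + 2))
    ys, xs = np.mgrid[0:size, 0:size] / cell
    y0, x0 = ys.astype(int), xs.astype(int)
    ty, tx = ys - y0, xs - x0
    top = grid[y0, x0] * (1 - tx) + grid[y0, x0 + 1] * tx
    bot = grid[y0 + 1, x0] * (1 - tx) + grid[y0 + 1, x0 + 1] * tx
    return top * (1 - ty) + bot * ty

def fractal_heightmap(size=256, octaves=5, persistence=0.5, seed=0):
    rng = np.random.default_rng(seed)
    height = np.zeros((size, size))
    amplitude, cell = 1.0, size // 2
    for _ in range(octaves):
        height += amplitude * value_noise(size, max(cell, 1), rng)
        amplitude *= persistence   # each octave adds finer, weaker detail
        cell //= 2
    return height / height.max()   # normalize elevations to [0, 1]

terrain = fractal_heightmap()
print(terrain.shape, terrain.min(), terrain.max())
```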

    Discovering Regularity in Point Clouds of Urban Scenes

    Full text link
    Despite the apparent chaos of the urban environment, cities are actually replete with regularity. From the grid of streets laid out over the earth, to the lattice of windows thrown up into the sky, periodic regularity abounds in the urban scene. Just as salient, though less uniform, are the self-similar branching patterns of trees and vegetation that line streets and fill parks. We propose novel methods for discovering these regularities in 3D range scans acquired by a time-of-flight laser sensor. The applications of this regularity information are broad, and we present two original algorithms. The first exploits the efficiency of the Fourier transform for the real-time detection of periodicity in building facades. Periodic regularity is discovered online by doing a plane sweep across the scene and analyzing the frequency space of each column in the sweep. The simplicity and online nature of this algorithm allow it to be embedded in scanner hardware, making periodicity detection a built-in feature of future 3D cameras. We demonstrate the usefulness of periodicity in view registration, compression, segmentation, and facade reconstruction. The second algorithm leverages the hierarchical decomposition and locality in space of the wavelet transform to find stochastic parameters for procedural models that succinctly describe vegetation. These procedural models facilitate the generation of virtual worlds for architecture, gaming, and augmented reality. The self-similarity of vegetation can be inferred using multi-resolution analysis to discover the underlying branching patterns. We present a unified framework of these tools, enabling the modeling, transmission, and compression of high-resolution, accurate, and immersive 3D images
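
    As a rough illustration of the plane-sweep idea described above (an assumption-laden sketch, not the authors' implementation): each sweep column can be treated as a one-dimensional signal whose Fourier spectrum exposes a dominant spatial period when the facade is periodic.

```python
# A minimal sketch of detecting periodic structure in one sweep column by
# inspecting its frequency spectrum (thresholds and names are illustrative).
import numpy as np

def dominant_period(column, min_prominence=4.0):
    """Return the dominant spatial period of a 1-D signal, or None if no
    peak stands clearly above the mean non-DC magnitude."""
    signal = column - column.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0
    k = int(spectrum.argmax())               # strongest frequency bin
    if k == 0 or spectrum[k] < min_prominence * spectrum[1:].mean():
        return None                          # no salient periodicity
    return len(column) / k                   # period in samples

# Synthetic facade column: a window every 12 samples, plus noise.
rng = np.random.default_rng(1)
col = (np.arange(240) % 12 < 3).astype(float) + 0.1 * rng.standard_normal(240)
print(dominant_period(col))                  # expected to be about 12.0
```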

    Aeolus Ocean -- A simulation environment for the autonomous COLREG-compliant navigation of Unmanned Surface Vehicles using Deep Reinforcement Learning and Maritime Object Detection

    Full text link
Progress towards navigational autonomy for unmanned surface vehicles (USVs) in the maritime sector can fundamentally lead to safer waters as well as reduced operating costs, while also providing a range of exciting new capabilities for oceanic research, exploration and monitoring. However, achieving such a goal is challenging. USV control systems must, safely and reliably, be able to adhere to the international regulations for preventing collisions at sea (COLREGs) in encounters with other vessels as they navigate to a given waypoint while being affected by realistic weather conditions, either during the day or at night. To deal with the multitude of possible scenarios, it is critical to have a virtual environment that is able to replicate the realistic operating conditions USVs will encounter, before they can be implemented in the real world. Such "digital twins" form the foundations upon which Deep Reinforcement Learning (DRL) and Computer Vision (CV) algorithms can be used to develop and guide USV control systems. In this paper we describe the novel development of a COLREG-compliant, DRL-based collision-avoidance navigation system with CV-based awareness in a realistic ocean simulation environment. The performance of the trained autonomous agents resulting from this approach is evaluated in several successful navigations to set waypoints in both open-sea and coastal encounters with other vessels. A binary executable version of the simulator with trained agents is available at https://github.com/aavek/Aeolus-Ocean.
    Comment: 22 pages, last blank page, 17 figures, 1 table, color, high-resolution figures.
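
    As a purely hypothetical sketch (the abstract does not spell out the paper's reward terms), a DRL waypoint-navigation reward of this kind typically combines progress toward the goal with penalties for approaching or colliding with other vessels:

```python
# A hypothetical reward-shaping sketch for waypoint navigation with a
# collision-avoidance penalty; the paper's actual reward may differ.
def step_reward(prev_dist, dist, nearest_obstacle, goal_radius=25.0,
                safe_radius=100.0, collision_radius=10.0):
    """prev_dist/dist: distance to the waypoint before/after the step (m);
    nearest_obstacle: range to the closest other vessel (m)."""
    reward = prev_dist - dist                       # reward progress made
    if dist < goal_radius:
        reward += 100.0                             # reached the waypoint
    if nearest_obstacle < collision_radius:
        reward -= 200.0                             # collision: large penalty
    elif nearest_obstacle < safe_radius:
        # graded penalty for encroaching on another vessel's safety zone
        reward -= 5.0 * (1.0 - nearest_obstacle / safe_radius)
    return reward

print(step_reward(prev_dist=500.0, dist=495.0, nearest_obstacle=60.0))
```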

    Image-based procedural texture matching and transformation

    Get PDF
In this thesis, we present an approach to finding a procedural representation of a texture to replicate a given texture image, which we call image-based procedural texture matching. Procedural representations are frequently used for many aspects of computer generated imagery; however, the ability to use procedural textures is limited by the difficulty inherent in finding a suitable procedural representation to match a desired texture. More importantly, the process of determining an appropriate set of parameters necessary to approximate the sample texture is a difficult task for a graphic artist. The textural characteristics of many real world objects change over time; we are therefore interested in how textured objects in a graphical animation could also be made to change automatically. We would like this automatic texture transformation to be based on different texture samples in a time-dependent manner. This notion, which is a natural extension of procedural texture matching, involves the creation of a smoothly varying sequence of texture images, while allowing the graphic artist to control various characteristics of the texture sequence. Given a library of procedural textures, our approach uses a perceptually motivated texture similarity measure to identify which procedural textures in the library may produce a suitable match. Our work assumes that at least one procedural texture in the library is capable of approximating the desired texture. Because exhaustive search of all of the parameter combinations for each procedural texture is not computationally feasible, we perform a two-stage search on the candidate procedural textures. First, a global search is performed over pre-computed samples from the given procedural texture to locate promising parameter settings. Second, these parameter settings are optimised using a local search method to refine the match to the desired texture. The characteristics of a procedural texture generally do not vary uniformly for uniform parameter changes. That is, in some areas of the parameter domain of a procedural texture (the set of all valid parameter settings for the given procedural texture) small changes may produce large variations in the resulting texture, while in other areas the same changes may produce no variation at all. In this thesis, we present an adaptive random sampling algorithm which captures the texture range (the set of all images a procedural texture can produce) of a procedural texture by maintaining a sampling density which is consistent with the amount of change occurring in that region of the parameter domain. Texture transformations may not always be confined to a single procedural texture, and we therefore describe an approach to finding transitional points from one procedural texture to another. We present an algorithm for finding a path through the texture space formed by combining the texture ranges of the relevant procedural textures and their transitional points. Several examples of image-based texture matching and texture transformations are shown. Finally, potential limitations of this work as well as future directions are discussed.
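
    The two-stage search described above can be sketched roughly as follows. This is a minimal illustration in which a plain mean-squared-error distance stands in for the thesis's perceptual similarity measure; the function names and the toy sinusoidal "procedural texture" are illustrative assumptions.

```python
# Stage 1: global search over pre-computed coarse parameter samples.
# Stage 2: local refinement of the best coarse sample (random-restart style).
import numpy as np
from itertools import product

def two_stage_match(render, target, coarse_grid, refine_steps=50, step=0.05,
                    seed=0):
    """render(params) -> image array; coarse_grid: one list of values per
    parameter. Returns the refined parameter vector and its distance."""
    def distance(params):
        return float(np.mean((render(params) - target) ** 2))

    # Stage 1: scan a coarse sampling of the parameter domain.
    best = min((np.array(p, float) for p in product(*coarse_grid)),
               key=distance)
    # Stage 2: refine locally around the most promising coarse sample.
    rng = np.random.default_rng(seed)
    best_d = distance(best)
    for _ in range(refine_steps):
        candidate = best + rng.normal(0.0, step, size=best.shape)
        d = distance(candidate)
        if d < best_d:
            best, best_d = candidate, d
    return best, best_d

# Toy "procedural texture": a 1-D sinusoidal grating with frequency and phase.
xs = np.linspace(0, 1, 64)
grating = lambda p: np.sin(2 * np.pi * p[0] * xs + p[1])
target = grating([4.0, 0.3])
params, err = two_stage_match(grating, target, [[1, 2, 4, 8], [0.0, 0.5, 1.0]])
print(params, err)
```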

    An Active-Library Based Investigation into the Performance Optimisation of Linear Algebra and the Finite Element Method

    No full text
In this thesis, I explore an approach called "active libraries". These are libraries that take part in their own optimisation, enabling both high-performance code and the presentation of intuitive abstractions. I investigate the use of active libraries in two domains. The first is dense and sparse linear algebra, particularly the solution of linear systems of equations; the second is the specification and solution of finite element problems. Extending my earlier (MEng) thesis work, I describe the modifications to my linear algebra library "Desola" required to perform sparse-matrix code generation. I show that optimisations easily applied in the dense case using code transformation must be applied at a higher level of abstraction in the sparse case. I present performance results for sparse linear system solvers generated using Desola and compare against an implementation using the Intel Math Kernel Library. I also present improved dense linear-algebra performance results. Next, I explore the active-library approach by developing a finite element library that captures runtime representations of basis functions, variational forms and sequences of operations between discretised operators and fields. Using captured representations of variational forms and basis functions, I demonstrate optimisations to cell-local integral assembly that this approach enables, and compare against the state of the art. As part of my work on optimising local assembly, I extend the work of Hosangadi et al. on common sub-expression elimination and factorisation of polynomials. I improve the weight function presented by Hosangadi et al., increasing the number of factorisations found. I present an implementation of an optimised branch-and-bound algorithm inspired by reformulating the original matrix-covering problem as a maximal graph biclique search problem. I evaluate the algorithm's effectiveness on the expressions generated by our finite element solver.
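
    Active libraries of this kind typically capture operations lazily and only generate specialised code when a result is forced. The toy Python sketch below illustrates only that delayed-evaluation idea; it is not Desola's C++ design, and the class names are invented for illustration.

```python
# A toy illustration of delayed evaluation: operations build an expression
# tree, and work is performed only when the result is forced.
import numpy as np

class Expr:
    def __add__(self, other): return BinOp(np.add, self, other)
    def __mul__(self, other): return BinOp(np.multiply, self, other)
    def force(self):
        raise NotImplementedError

class Leaf(Expr):
    def __init__(self, data): self.data = np.asarray(data, float)
    def force(self): return self.data

class BinOp(Expr):
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def force(self):
        # A real active library would inspect the whole tree here and emit
        # fused, problem-specific code; this sketch evaluates recursively.
        return self.op(self.left.force(), self.right.force())

a, b, c = Leaf([1, 2, 3]), Leaf([4, 5, 6]), Leaf([7, 8, 9])
expr = (a + b) * c          # no computation yet, only a tree is built
print(expr.force())         # [35. 56. 81.]
```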

    Self-Supervised Shape and Appearance Modeling via Neural Differentiable Graphics

    Get PDF
Inferring 3D shape and appearance from natural images is a fundamental challenge in computer vision. Despite recent progress using deep learning methods, a key limitation is the availability of annotated training data, as acquisition is often very challenging and expensive, especially at a large scale. This thesis proposes to incorporate physical priors into neural networks that allow for self-supervised learning. As a result, easy-to-access unlabeled data can be used for model training. In particular, novel algorithms in the context of 3D reconstruction and texture/material synthesis are introduced, where only image data is available as a supervisory signal. First, a method is proposed that learns to reason about 3D shape and appearance solely from unstructured 2D images, achieved via differentiable rendering in an adversarial fashion. As shown next, learning from videos significantly improves 3D reconstruction quality. To this end, a novel ray-conditioned warp embedding is proposed that aggregates pixel-wise features from multiple source images. Turning to the challenging task of disentangling shape and appearance, a method is first presented that enables 3D texture synthesis independent of shape or resolution. For this purpose, 3D noise fields of different scales are transformed into stationary textures. The method is able to produce 3D textures despite only requiring 2D textures for training. Lastly, the surface characteristics of textures under different illumination conditions are modeled in the form of material parameters. To this end, a self-supervised approach is proposed that has access not to material parameters but only to flash images. Similar to the previous method, random noise fields are reshaped into material parameters, which are conditioned to replicate the visual appearance of the input under matching light.
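
    A conceptual, untrained sketch of the noise-to-texture idea follows; the sinusoidal placeholder noise, the tiny random network, and all names are illustrative assumptions rather than the thesis's model.

```python
# Sample multi-scale 3D noise at surface points and map it to colours with a
# small (untrained) network, giving a stationary, resolution-free texture.
import numpy as np

rng = np.random.default_rng(0)
scales = [1.0, 2.0, 4.0, 8.0]
dirs = rng.normal(size=(len(scales), 3))           # one direction per scale
phases = rng.uniform(0, 2 * np.pi, len(scales))    # random phase per scale

def noise_features(points):
    """points: (N, 3) surface positions -> (N, num_scales) noise samples."""
    return np.stack([np.sin(s * points @ d + p)
                     for s, d, p in zip(scales, dirs, phases)], axis=1)

# Tiny untrained "MLP" mapping noise features to RGB (weights are random).
w1, w2 = rng.normal(size=(len(scales), 16)), rng.normal(size=(16, 3))
def texture(points):
    hidden = np.maximum(noise_features(points) @ w1, 0)   # ReLU layer
    return 1 / (1 + np.exp(-hidden @ w2))                 # sigmoid to (0, 1)

pts = rng.uniform(-1, 1, size=(5, 3))        # arbitrary 3D query points
print(texture(pts))                           # (5, 3) RGB values
```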

    DAPHNE: An Open and Extensible System Infrastructure for Integrated Data Analysis Pipelines

    Get PDF
Integrated data analysis (IDA) pipelines, which combine data management (DM) and query processing, high-performance computing (HPC), and machine learning (ML) training and scoring, are becoming increasingly common in practice. Interestingly, systems from these areas share many compilation and runtime techniques, and the underlying, increasingly heterogeneous hardware infrastructure is converging as well. Yet, the programming paradigms, cluster resource management, data formats and representations, as well as execution strategies differ substantially. DAPHNE is an open and extensible system infrastructure for such IDA pipelines, including language abstractions, compilation and runtime techniques, multi-level scheduling, hardware (HW) accelerators, and computational storage for increasing productivity and eliminating unnecessary overheads. In this paper, we make a case for IDA pipelines, describe the overall DAPHNE system architecture, its key components, and the design of a vectorized execution engine for computational storage, HW accelerators, as well as local and distributed operations. Preliminary experiments that compare DAPHNE with MonetDB, Pandas, DuckDB, and TensorFlow show promising results.
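
    For context, a hypothetical example (plain Python, not DaphneDSL) of the kind of IDA pipeline that today spans separate DM and ML systems and that DAPHNE aims to express and optimise within one infrastructure:

```python
# Relational-style preprocessing followed by ML scoring, stitched together
# from separate libraries; the data and the linear model are made up.
import numpy as np
import pandas as pd

# Data management step: filter and aggregate raw measurements per sensor.
raw = pd.DataFrame({"sensor": [0, 0, 1, 1, 2],
                    "value":  [1.0, 2.0, 3.0, 5.0, 8.0]})
features = raw.groupby("sensor")["value"].agg(["mean", "max"]).to_numpy()

# ML scoring step: apply a (pretend pre-trained) linear model to the features.
weights = np.array([0.4, 0.6])
scores = features @ weights
print(scores)
```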

    Synthesis and evaluation of geometric textures

    Get PDF
Two-dimensional geometric textures are the geometric analogues of raster (pixel-based) textures and consist of planar distributions of discrete shapes with an inherent structure. These textures have many potential applications in art, computer graphics, and cartography. Synthesizing large textures by hand is generally a tedious task. In raster-based synthesis, many algorithms have been developed to limit the amount of manual effort required. These algorithms take in a small example as a reference and produce larger similar textures using a wide range of approaches. Recently, an increasing number of example-based geometric synthesis algorithms have been proposed. I refer to them in this dissertation as Geometric Texture Synthesis (GTS) algorithms. Analogous to their raster-based counterparts, GTS algorithms synthesize arrangements that ought to be judged by human viewers as “similar” to the example inputs. However, an absence of conventional evaluation procedures in current attempts demands an inquiry into the visual significance of synthesized results. In this dissertation, I present an investigation into GTS and report on my findings from three projects. I start by offering initial steps towards grounding texture synthesis techniques more firmly in our understanding of visual perception through two psychophysical studies. My observations throughout these studies reveal important visual cues used by people when generating and/or comparing the similarity of geometric arrangements, as well as a set of strategies adopted by participants when generating arrangements. Based on one of the generation strategies devised in these studies, I develop a new geometric synthesis algorithm that uses a tile-based approach to generate arrangements. Textures synthesized by this algorithm are comparable to the state of the art in GTS and provide an additional reference in subsequent evaluations. To conduct effective evaluations of GTS, I start by collecting a set of representative examples, use them to acquire arrangements from multiple sources, and then gather them into a dataset that acts as a standard for the GTS research community. I then utilize this dataset in a second set of psychophysical studies that define an effective methodology for comparing current and future geometric synthesis algorithms.
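
    A minimal sketch of what a tile-based arrangement generator might look like (an assumption-laden illustration, not the dissertation's algorithm): an example arrangement of shape positions is repeated over a grid of tiles with small jitter.

```python
# Repeat one example tile of point positions across a grid, with jitter.
import numpy as np

def tile_synthesis(example_points, tile_size, tiles_x, tiles_y, jitter=0.05,
                   seed=0):
    """example_points: (N, 2) positions inside one tile of side `tile_size`."""
    rng = np.random.default_rng(seed)
    out = []
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            offset = np.array([tx, ty]) * tile_size
            noise = rng.normal(0.0, jitter * tile_size, example_points.shape)
            out.append(example_points + offset + noise)   # perturbed copy
    return np.vstack(out)

example = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.7]])  # one small tile
arrangement = tile_synthesis(example, tile_size=1.0, tiles_x=4, tiles_y=3)
print(arrangement.shape)   # (36, 2)
```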