91 research outputs found

    Musical Behaviours: Algorithmic Composition Via Plug-ins

    The author’s recent software research addresses a deficiency in commercial musical composition software: the limited ability to apply algorithmic processes to the practice of musical composition. The remedy takes the form of a software plug-in design called “musical behaviours”: compositional algorithms of limited scope that can be applied cumulatively and in real time to MIDI performance data. The software runs on the author’s software composition platform, The Transformation Engine.
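The idea of cumulative, real-time behaviours can be sketched as a chain of small note transforms applied in sequence. This is an illustrative sketch under assumed names (`Behaviour`, `transpose`, `retrograde`, `apply_behaviours`), not the Transformation Engine's actual API.

```python
# Hypothetical sketch of "musical behaviours": small compositional
# algorithms applied cumulatively to a stream of MIDI note numbers.
from typing import Callable, List

Behaviour = Callable[[List[int]], List[int]]

def transpose(semitones: int) -> Behaviour:
    """Shift every note by a fixed interval."""
    return lambda notes: [n + semitones for n in notes]

def retrograde() -> Behaviour:
    """Play the phrase backwards."""
    return lambda notes: notes[::-1]

def apply_behaviours(notes: List[int], behaviours: List[Behaviour]) -> List[int]:
    """Apply each behaviour in order, mirroring cumulative plug-in chaining."""
    for b in behaviours:
        notes = b(notes)
    return notes

phrase = [60, 62, 64, 65]                # C, D, E, F as MIDI note numbers
result = apply_behaviours(phrase, [transpose(7), retrograde()])
# transpose up a fifth, then reverse: [72, 71, 69, 67]
```

Because each behaviour has the same list-in, list-out shape, any number of them can be stacked, which is what makes cumulative application possible.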

    Control mechanisms for the procedural generation of visual pattern designs


    Simulation of 3D Model, Shape, and Appearance Aging by Physical, Chemical, Biological, Environmental, and Weathering Effects

    Physical, chemical, biological, environmental, and weathering effects produce a range of 3D model, shape, and appearance changes. Time introduces an assortment of aging, weathering, and decay processes such as dust, mold, patina, and fractures. These time-varying imperfections provide the viewer with important visual cues for realism and age. Existing approaches that create realistic aging effects still require an excessive amount of time and effort by extremely skilled artists to tediously hand-fashion blemishes or simulate simple procedural rules. Most techniques do not scale well to large virtual environments. These limitations have prevented widespread utilization of many aging and weathering algorithms. We introduce a novel method for geometrically and visually simulating these processes in order to create visually realistic scenes. This work proposes the "mu-ton" system, a framework for scattering numerous mu-ton particles throughout an environment to mutate and age the world. We take a point-based representation to discretize both the decay effects and the underlying geometry. The mu-ton particles simulate interactions between multiple phenomena. This mutation process changes both the physical properties of the external surface layer and the internal volume substrate. The mutation may add or subtract imperfections in the environment as objects age. First we review related work in aging and weathering, and illustrate the limitations of current data-driven and physically based approaches. We provide a taxonomy of aging processes. We then describe the structure of our mu-ton framework, and we provide the user a short tutorial on how to set up different effects. The first application of the mu-ton system focuses on inorganic aging and decay. We demonstrate changing material properties on a variety of objects, and simulate their transformation. We show the application of our system to aging a simple city alley with different materials.
The second application of the mu-ton system focuses on organic aging. We provide details on simulating a variety of growth processes. We then evaluate and analyze the mu-ton framework and compare our results with gamma-ton tracing. Finally, we outline the contributions this thesis makes to computer-based aging and weathering simulation.
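The core loop of particle-based aging can be illustrated in a few lines: scatter particles over a point-sampled surface and mutate the material state of each point they strike. This is a toy sketch under stated assumptions (uniform particle landing, a single scalar "weathering level" per point, the names `age_surface` and `delta`); it is not the thesis's mu-ton implementation.

```python
# Toy particle-scattering ager: each particle lands on a random surface
# point and accumulates a small amount of decay there, clamped to [0, 1].
import random

def age_surface(points, n_particles, delta=0.1, seed=0):
    """points: dict point_id -> weathering level in [0, 1]."""
    rng = random.Random(seed)
    ids = list(points)
    for _ in range(n_particles):
        hit = rng.choice(ids)                        # particle lands on a point
        points[hit] = min(1.0, points[hit] + delta)  # mutate the local state
    return points

surface = {i: 0.0 for i in range(10)}   # a pristine point-sampled surface
aged = age_surface(surface, n_particles=50)
```

Running more particles, or varying `delta` per material, would correspond to longer exposure or faster-decaying substrates.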

    Transferrable learning from synthetic data: novel texture synthesis using Domain Randomization for visual scene understanding

    Modern supervised deep learning-based approaches typically rely on vast quantities of annotated data for training computer vision and robotics tasks. A key challenge is acquiring data that encompasses the diversity encountered in the real world. The use of synthetic or computer-generated data for solving these tasks has recently garnered attention for several reasons. The first is the efficiency of producing large amounts of annotated data in a fraction of the time required in reality, addressing the time expense of manual annotation. The second is avoiding the inaccuracies and mistakes that arise from the laborious task of manual annotation. The third is meeting the need for vast amounts of data typically required by data-driven state-of-the-art computer vision and robotics systems. Due to domain shift, models trained on synthetic data typically underperform those trained on real-world data when deployed in the real world. Domain Randomization is a data generation approach for the synthesis of artificial data. The Domain Randomization process can generate diverse synthetic images by randomizing rendering parameters in a simulator, such as the objects, their visual appearance, the lighting, and where they appear in the picture. This synthetic data can be used to train systems capable of performing well in reality. However, it is unclear how best to select Domain Randomization parameters such as the types of textures, object poses, or types of backgrounds. Furthermore, it is unclear how Domain Randomization generalizes across various vision tasks or whether there are potential improvements to the technique. This thesis explores novel Domain Randomization techniques to solve object localization, detection, and semantic segmentation in cluttered and occluded real-world scenarios.
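The randomization step described above amounts to drawing fresh rendering parameters for every synthetic frame. A minimal sketch, with parameter names and ranges chosen purely for illustration (not the thesis's generator):

```python
# Illustrative Domain Randomization: sample random rendering parameters
# per frame; a simulator would consume one dict per rendered image.
import random

def sample_scene_params(rng: random.Random) -> dict:
    return {
        "texture": rng.choice(["checker", "noise", "gradient", "photo_patch"]),
        "light_intensity": rng.uniform(0.2, 2.0),
        "object_pose": [rng.uniform(-1.0, 1.0) for _ in range(3)],
        "background": rng.choice(["plain", "cluttered", "photo"]),
    }

rng = random.Random(42)                    # seeded for reproducibility
frames = [sample_scene_params(rng) for _ in range(3)]
```

The open question the thesis addresses is precisely which of these distributions (texture families, pose ranges, backgrounds) matter, and how to choose them.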
In particular, the four main contributions of this dissertation are: (i) The first contribution proposes a novel method for quantifying the differences between Domain Randomized and realistic data distributions using a small number of samples. The approach ranks all commonly applied Domain Randomization texture techniques in the existing literature and finds that the ranking is reflected in the task-based performance of an object localization task. (ii) The second contribution introduces the SRDR dataset, a large domain randomized dataset containing 291K frames of household objects widely used in robotics and vision benchmarking [23]. SRDR builds on the YCB-M [67] dataset by generating synthetic versions of the images in YCB-M using a variety of domain randomized texture types in 5 unique environments with varying scene complexity. The SRDR dataset is highly beneficial for cross-domain training, evaluation, and comparison investigations. (iii) The third contribution presents a study evaluating Domain Randomization's generalizability and robustness in sim-to-real transfer in complex scenes for object detection and semantic segmentation. When models trained on Domain Randomized synthetic data are evaluated on real-world data, we find that the performance ranking is largely similar across the two tasks, indicating Domain Randomization performs similarly across multiple tasks. (iv) Finally, we present a fast, easy-to-execute, novel approach for conditionally generating domain randomized textures. The textures are generated by randomly sampling patches from real-world images to apply to objects of interest. This approach outperforms the most commonly used Domain Randomization texture method, improving from 13.157 AP to 21.287 AP in object detection and from 8.950 AP to 19.481 AP in semantic segmentation. The technique eliminates the need to manually define texture distributions from which Domain Randomized textures are sampled.
We propose a further improvement to address low texture diversity when using a small number of real-world images. We use a conditional GAN-based texture generator trained on a few real-world image patches to increase texture diversity, outperforming the most commonly applied Domain Randomization texture method and improving from 13.157 AP to 20.287 AP in object detection and from 8.950 AP to 17.636 AP in semantic segmentation.
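The patch-sampling idea behind contribution (iv) is simple enough to sketch: crop random fixed-size patches out of real photographs and reuse them as object textures. The sketch below represents an image as a plain nested list of pixels; the function name, patch size, and stand-in image are illustrative assumptions, not the dissertation's pipeline.

```python
# Crop n random square patches from a real image to use as textures.
import random

def sample_patches(image, patch, n, seed=0):
    """image: list of rows of pixels; returns n patch-sized crops."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    patches = []
    for _ in range(n):
        y = rng.randrange(h - patch + 1)   # top-left corner of the crop
        x = rng.randrange(w - patch + 1)
        patches.append([row[x:x + patch] for row in image[y:y + patch]])
    return patches

real = [[(y, x) for x in range(64)] for y in range(64)]  # stand-in photo
textures = sample_patches(real, patch=16, n=4)
```

Because the patches come directly from real imagery, no texture distribution has to be hand-designed, which is the point of the technique.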

    Non-interactive modeling tools and support environment for procedural geometry generation

    This research examines procedural modeling in the field of computer graphics. Procedural modeling automates the generation of objects by representing models as procedures that describe the process required to create the model. The problem we solve with this research is the creation of a procedural modeling environment that consists of a procedural modeling language and a set of non-interactive modeling tools. A goal of this research is to provide comparisons between 3D manual modeling and procedural modeling, focusing on the modeling strategies, tools, and model representations used by each modeling paradigm. A procedural modeling language is presented that has the same facilities and features as existing procedural modeling languages. In addition, features such as caching and a pseudo-random number generator are included, demonstrating the advantages of a procedural modeling paradigm. The non-interactive tools created within the procedural modeling framework are selection, extrusion, subdivision, curve shaping, and stitching. To demonstrate the usefulness of the procedural modeling framework, human and furniture models are created using this procedural modeling environment. Various techniques are presented to generate these objects, and they may be used to create a variety of other models. A detailed discussion of each technique is provided. Six experiments are conducted to test the procedural modeling benefits provided by this non-interactive modeling environment. The experiments test parameterisation, re-usability, base-shape independence, model complexity, the generation of reproducible random numbers, and caching. We prove that a number of distinct models can be generated from a single procedure through the use of parameterisation. Modeling procedures and sub-procedures are re-usable and can be applied to different models. Procedures can be base-shape independent.
The level of complexity of a model can be increased by repeatedly applying geometry to the model. The pseudo-random number generator is capable of generating reproducible random numbers. The caching facility reduces the time required to generate a model that uses repetitive geometry.
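Three of the benefits tested above (parameterisation, reproducible random numbers, and caching of repeated sub-geometry) can be illustrated in a toy procedure. The "model" here is just a tuple of labelled parts, and the names (`table`, `leg`, the jitter range) are illustrative assumptions, not the thesis's modeling language.

```python
# Toy procedural model: one parameterised procedure yields many models,
# a seeded PRNG makes results reproducible, and repeated sub-geometry
# (the legs) is cached so it is only built once.
import random
from functools import lru_cache

@lru_cache(maxsize=None)          # caching: identical legs are built once
def leg(height: float):
    return (("leg", height),)

def table(width: float, height: float, seed: int):
    rng = random.Random(seed)     # reproducible pseudo-random variation
    jitter = rng.uniform(-0.01, 0.01)
    top = (("top", width + jitter),)
    return top + leg(height) * 4  # four identical cached legs

a = table(1.0, 0.7, seed=1)
b = table(1.0, 0.7, seed=1)       # same parameters + seed -> same model
```

Changing `width`, `height`, or `seed` yields a distinct model from the same procedure, which is the parameterisation benefit in miniature.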

    Learning to Interpret Fluid Type Phenomena via Images

    Learning to interpret fluid-type phenomena via images is a long-standing challenging problem in computer vision. The problem becomes even more challenging when the fluid medium is highly dynamic and refractive due to its transparent nature. Here, we consider imaging through refractive fluid media such as water and air. For water, we design novel supervised learning-based algorithms to recover its 3D surface as well as highly distorted underwater patterns. For air, we design a state-of-the-art unsupervised learning algorithm to predict the distortion-free image given a short sequence of turbulent images. Specifically, we design a deep neural network that estimates the depth and normal maps of a fluid surface by analyzing the refractive distortion of a reference background pattern. To recover underwater images severely degraded by the refractive distortions caused by water surface fluctuations, we present the distortion-guided network (DG-Net) for restoring distortion-free underwater images. The key idea is to use a distortion map to guide network training. The distortion map models the pixel displacement caused by water refraction. Furthermore, we present a novel unsupervised network to recover the latent distortion-free image. The key idea is to model non-rigid distortions as deformable grids. Our network consists of a grid deformer that estimates the distortion field and an image generator that outputs the distortion-free image. By leveraging the positional encoding operator, we can simplify the network structure while maintaining fine spatial details in the recovered images. We also develop a combinational deep neural network that can simultaneously recover the latent distortion-free image and reconstruct the 3D shape of the transparent and dynamic fluid surface.
Through extensive experiments on simulated and real captured fluid images, we demonstrate that our proposed deep neural networks outperform the current state of the art on these tasks.
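The role a distortion map plays can be made concrete with a tiny warp-inversion sketch: if the per-pixel displacement caused by refraction is known, each output pixel can be looked up at its displaced source location. Nearest-neighbour sampling, integer displacements, and nested-list images are simplifying assumptions for illustration; the thesis's networks estimate the field rather than being given it.

```python
# Undo a known pixel-displacement field by reverse lookup.
def unwarp(image, flow):
    """flow[y][x] = (dy, dx): displacement introduced by refraction."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y][x]
            sy = min(max(y + dy, 0), h - 1)   # clamp to the image bounds
            sx = min(max(x + dx, 0), w - 1)
            out[y][x] = image[sy][sx]
    return out

img = [[4 * y + x for x in range(4)] for y in range(4)]
zero_flow = [[(0, 0)] * 4 for _ in range(4)]
restored = unwarp(img, zero_flow)   # a zero field leaves the image unchanged
```

Estimating `flow` from observations alone is the hard part, and is what the grid deformer and DG-Net's distortion-map guidance are for.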