61,284 research outputs found

    Generative theatre of totality

    Generative art can be used to create complex multisensory and multimedia experiences within predetermined aesthetic parameters, characteristic of the performing arts and remarkably well suited to Moholy-Nagy's Theatre of Totality vision. In generative artworks the artist usually takes on the role of an experience-framework designer, and the system evolves freely within that framework and its defined aesthetic boundaries. Most generative art concerns the visual arts, music and literature; there does not seem to be any relevant work exploring the cross-medium potential, and most generative art outcomes are abstract and either visual or audio. The goal of this article is to propose a model for the creation of generative performances within the Theatre of Totality's scope, derived from stochastic Lindenmayer systems, with mapping techniques proposed to address the seven variables identified by Moholy-Nagy: light, space, plane, form, motion, sound and man ("man" is replaced in this article with "human", except where quoting the author), with all their inherent complexities.
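    A stochastic Lindenmayer system, as referenced in the abstract, rewrites each symbol of a string using production rules chosen at random according to assigned probabilities. The following is a minimal sketch of that idea, not the article's model; the alphabet, rules and probabilities here are illustrative assumptions.

```python
import random

# Illustrative stochastic L-system: each symbol maps to a list of
# (probability, production) pairs; probabilities per symbol sum to 1.
RULES = {
    "A": [(0.7, "AB"), (0.3, "A")],
    "B": [(1.0, "A")],
}

def rewrite(s, rules):
    """Apply one stochastic rewriting pass to the string s."""
    out = []
    for ch in s:
        if ch in rules:
            r = random.random()
            acc = 0.0
            for p, prod in rules[ch]:
                acc += p
                if r < acc:
                    out.append(prod)
                    break
            else:
                # Floating-point fallback: keep the symbol unchanged.
                out.append(ch)
        else:
            out.append(ch)  # constants pass through unchanged
    return "".join(out)

def derive(axiom, rules, steps):
    """Run `steps` rewriting passes starting from the axiom."""
    s = axiom
    for _ in range(steps):
        s = rewrite(s, rules)
    return s

random.seed(1)  # fixed seed so runs are reproducible
result = derive("A", RULES, 5)
print(result)
```

    In a cross-medium setting, each derived symbol would then be mapped onto one of the seven performance variables (light, space, plane, form, motion, sound, human) rather than, as in classic L-system practice, onto turtle-graphics drawing commands.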

    Cyclical Flow: Spatial Synthesis Sound Toy as Multichannel Composition Tool

    This paper outlines and discusses an interactive system designed as a playful ‘sound toy’ for spatial composition. Proposed models of composition and design in this context are discussed. The design, functionality and application of the software system are then outlined and summarised. The paper concludes with observations from use and a discussion of future developments.

    Adversarially Tuned Scene Generation

    Generalization performance of trained computer vision systems that use computer graphics (CG) generated data is not yet effective due to the 'domain shift' between virtual and real data. Although simulated data augmented with a few real-world samples has been shown to mitigate domain shift and improve the transferability of trained models, guiding or bootstrapping the virtual data generation with distributions learnt from the target real-world domain is desirable, especially in fields where annotating even a few real images is laborious (such as semantic labeling and intrinsic images). To address this problem in an unsupervised manner, our work combines recent advances in CG (which aim to generate stochastic scene layouts coupled with large collections of 3D object models) and generative adversarial training (which aims to train generative models by measuring the discrepancy between generated and real data in terms of their separability in the space of a deep, discriminatively trained classifier). Our method iteratively estimates the posterior density of the prior distributions of a generative graphical model, within a rejection sampling framework. Initially, we assume uniform priors on the parameters of a scene described by the generative graphical model. As iterations proceed, the priors are updated toward the (unknown) distributions of the target data. We demonstrate the utility of adversarially tuned scene generation on two real-world benchmark datasets (CityScapes and CamVid) for traffic-scene semantic labeling with a deep convolutional net (DeepLab). We realized performance improvements of 2.28 and 3.14 points (IoU metric) between DeepLab models trained on simulated sets prepared from the scene generation models before and after tuning to CityScapes and CamVid, respectively. Comment: 9 pages, accepted at CVPR 201
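    The iterative prior-tuning loop described in the abstract can be illustrated with a toy one-dimensional sketch. This is a hypothetical stand-in, not the paper's code: the scene parameter is a single scalar, the trained discriminator is replaced by a fixed acceptance test, and the "prior" is a uniform interval refit to the accepted samples each iteration.

```python
import random

random.seed(0)  # reproducible toy run

TARGET_MEAN = 3.0  # stands in for the unknown real-data distribution

def discriminator_accepts(x):
    # Stand-in for a discriminatively trained classifier: it "accepts"
    # generated samples that look close to the (hidden) target data.
    return abs(x - TARGET_MEAN) < 1.0

def tune_prior(low, high, iters=5, n=2000):
    """Rejection-sampling loop: sample scene parameters from the current
    uniform prior, keep the samples the discriminator accepts, and refit
    the prior to the accepted set."""
    for _ in range(iters):
        accepted = [x for x in (random.uniform(low, high) for _ in range(n))
                    if discriminator_accepts(x)]
        if not accepted:
            break  # prior too far off; stop rather than collapse
        low, high = min(accepted), max(accepted)
    return low, high

# Start from a broad uniform prior, as in the paper's initial assumption.
low, high = tune_prior(0.0, 10.0)
```

    After a few iterations the interval [low, high] contracts from [0, 10] toward the region the discriminator favours, mirroring how the scene-generation priors are pulled toward the target-domain distributions before the simulated training set is rendered.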