Interactive Evolution and Exploration within Latent Level-Design Space of Generative Adversarial Networks
Generative Adversarial Networks (GANs) are an emerging form of indirect
encoding. The GAN is trained to induce a latent space on training data, and a
real-valued evolutionary algorithm can search that latent space. Such Latent
Variable Evolution (LVE) has recently been applied to game levels. However, it
is hard for objective scores to capture level features that are appealing to
players. Therefore, this paper introduces a tool for interactive LVE of
tile-based levels for games. The tool also allows for direct exploration of the
latent dimensions, and allows users to play discovered levels. The tool works
for a variety of GAN models trained for both Super Mario Bros. and The Legend
of Zelda, and is easily generalizable to other games. A user study shows that
both the evolution and latent space exploration features are appreciated, with
a slight preference for direct exploration, but combining these features allows
users to discover even better levels. User feedback also indicates how this
system could eventually grow into a commercial design tool, with the addition
of a few enhancements.
Comment: GECCO 202
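The latent variable evolution loop described above can be sketched in a few lines. Everything below is illustrative: the linear "generator" and the tile-density fitness are stand-ins for a trained GAN and for the interactive user ratings that the paper's tool uses instead of an objective score.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 32           # size of the GAN latent space (illustrative)
GRID = (14, 28)           # Mario-like tile grid (illustrative)

# Stand-in for a trained GAN generator: a fixed random linear map followed by
# thresholding into solid/empty tiles. A real tool calls the trained network.
W = rng.normal(size=(LATENT_DIM, GRID[0] * GRID[1]))

def generate_level(z):
    """Decode a latent vector into a binary tile grid."""
    return (z @ W > 0).reshape(GRID).astype(int)

def fitness(level):
    """Toy stand-in objective (fraction of solid tiles); the paper's point is
    that interactive user preference replaces scores like this."""
    return level.mean()

def evolve(pop_size=16, generations=20, sigma=0.3):
    """Truncation-selection evolution over real-valued latent vectors."""
    pop = rng.normal(size=(pop_size, LATENT_DIM))
    for _ in range(generations):
        scores = np.array([fitness(generate_level(z)) for z in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep best half
        children = parents + rng.normal(scale=sigma, size=parents.shape)
        pop = np.vstack([parents, children])                 # elitist
    scores = np.array([fitness(generate_level(z)) for z in pop])
    return pop[scores.argmax()]

best = evolve()
```

In the interactive setting, `fitness` is replaced by the user selecting preferred levels, and the same mutation step explores around those selections.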
Towards Co-Creative Generative Adversarial Networks for Fashion Designers
Originating from the premise that Generative Adversarial Networks (GANs)
enrich creative processes rather than diluting them, we describe an ongoing PhD
project that proposes to study GANs in a co-creative context. By asking "How can
GANs be applied in co-creation, and, in doing so, how can they contribute to
fashion design processes?", the project sets out to investigate co-creative GAN
applications and further develop them for the specific application area of
fashion design. We do so by drawing on the field of mixed-initiative
co-creation. Combined with the technical insight into GANs' functioning, we aim
to understand how their algorithmic properties translate into interactive
interfaces for co-creation and propose new interactions.
Comment: Published at GenAICHI, CHI 2022 Workshop
Fashion Style Generation: Evolutionary Search with Gaussian Mixture Models in the Latent Space
This paper presents a novel approach for guiding a Generative Adversarial
Network trained on the FashionGen dataset to generate designs corresponding to
target fashion styles. Finding the latent vectors in the generator's latent
space that correspond to a style is approached as an evolutionary search
problem. A Gaussian mixture model is applied to identify fashion styles based
on the higher-layer representations of outfits in a clothing-specific attribute
prediction model. Over generations, a genetic algorithm optimizes a population
of designs to increase their probability of belonging to one of the Gaussian
mixture components, i.e. styles. We show that the developed system can generate
maximum-fitness images that visually resemble certain styles, suggesting a
promising direction for guiding the search toward style-coherent designs.
Comment: To be published at: International Conference on Computational
Intelligence in Music, Sound, Art and Design (EvoMUSART 2022); typo corrected in abstract
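A minimal sketch of the evolutionary style search: spherical unit-variance components stand in for the fitted Gaussian mixture, and latent vectors are scored directly rather than through a generator plus attribute predictor as in the paper. All names and constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

LATENT_DIM = 8
N_STYLES = 3

# Component means stand in for a GMM fitted to attribute-model features of
# real outfits; covariances are assumed spherical with unit variance.
means = rng.normal(scale=2.0, size=(N_STYLES, LATENT_DIM))

def style_fitness(z, target):
    """Unnormalized log-density of z under one mixture component;
    higher means 'more of that style'."""
    diff = z - means[target]
    return -0.5 * diff @ diff

def evolve_toward_style(target, pop_size=20, generations=80, sigma=0.2):
    """Genetic algorithm pushing a latent population toward one style mode."""
    pop = rng.normal(size=(pop_size, LATENT_DIM))
    for _ in range(generations):
        scores = np.array([style_fitness(z, target) for z in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep best half
        children = parents + rng.normal(scale=sigma, size=parents.shape)
        pop = np.vstack([parents, children])                 # elitist
    scores = np.array([style_fitness(z, target) for z in pop])
    return pop[scores.argmax()]

best = evolve_toward_style(target=1)
```

In the full system, something like scikit-learn's `GaussianMixture.predict_proba` over attribute-predictor features would play the role of `style_fitness` here.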
Deep Fluids: A Generative Network for Parameterized Fluid Simulations
This paper presents a novel generative model to synthesize fluid simulations
from a set of reduced parameters. A convolutional neural network is trained on
a collection of discrete, parameterizable fluid simulation velocity fields. Due
to the capability of deep learning architectures to learn representative
features of the data, our generative model is able to accurately approximate
the training data set, while providing plausible interpolated in-betweens. The
proposed generative model is optimized for fluids by a novel loss function that
guarantees divergence-free velocity fields at all times. In addition, we
demonstrate that we can handle complex parameterizations in reduced spaces, and
advance simulations in time by integrating in the latent space with a second
network. Our method models a wide variety of fluid behaviors, thus enabling
applications such as fast construction of simulations, interpolation of fluids
with different parameters, time re-sampling, latent space simulations, and
compression of fluid simulation data. Reconstructed velocity fields are
generated up to 700x faster than re-simulating the data with the underlying CPU
solver, while achieving compression rates of up to 1300x.
Comment: Computer Graphics Forum (Proceedings of EUROGRAPHICS 2019); additional materials: http://www.byungsoo.me/project/deep-fluids
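The divergence-free guarantee rests on having the network output a stream function and taking its curl, since the divergence of a curl vanishes identically. A minimal 2-D numpy sketch, with a random smoothed field standing in for the network output:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64

# Stand-in for the network's output: a random field, smoothed a little so
# finite differences are well behaved.
psi = rng.normal(size=(N, N))
for _ in range(10):
    psi = 0.25 * (np.roll(psi, 1, 0) + np.roll(psi, -1, 0)
                  + np.roll(psi, 1, 1) + np.roll(psi, -1, 1))

def curl_2d(psi):
    """Velocity from a stream function: u = dpsi/dy, v = -dpsi/dx
    (central differences, periodic boundaries; axis 0 is x, axis 1 is y)."""
    u = (np.roll(psi, -1, axis=1) - np.roll(psi, 1, axis=1)) / 2.0
    v = -(np.roll(psi, -1, axis=0) - np.roll(psi, 1, axis=0)) / 2.0
    return u, v

def divergence(u, v):
    """du/dx + dv/dy with the same central-difference stencil."""
    du = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2.0
    dv = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / 2.0
    return du + dv

u, v = curl_2d(psi)
div = divergence(u, v)   # zero up to floating-point round-off
```

Because the mixed partial derivatives cancel term by term, the discrete divergence is zero for any stream function the network emits, which is what lets the loss "guarantee" incompressibility rather than merely penalize it.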
Deep learning for procedural content generation
Procedural content generation in video games has a long history. Existing procedural content generation methods, such as search-based, solver-based, rule-based and grammar-based methods, have been applied to various content types such as levels, maps, character models, and textures. A research field centered on content generation in games has existed for more than a decade. More recently, deep learning has powered a remarkable range of inventions in content production, which are applicable to games. While some cutting-edge deep learning methods are applied on their own, others are applied in combination with more traditional methods, or in an interactive setting. This article surveys the various deep learning methods that have been applied to generate game content directly or indirectly, discusses deep learning methods that could be used for content generation purposes but are rarely used today, and envisages some limitations and potential future directions of deep learning for procedural content generation.
Published in: Neural Computing and Applications
Deep Learning Techniques for Music Generation -- A Survey
This paper is a survey and an analysis of different ways of using deep
learning (deep artificial neural networks) to generate musical content. We
propose a methodology based on five dimensions for our analysis:
Objective - What musical content is to be generated? Examples are: melody,
polyphony, accompaniment or counterpoint. - For what destination and for what
use? To be performed by a human(s) (in the case of a musical score), or by a
machine (in the case of an audio file).
Representation - What are the concepts to be manipulated? Examples are:
waveform, spectrogram, note, chord, meter and beat. - What format is to be
used? Examples are: MIDI, piano roll or text. - How will the representation be
encoded? Examples are: scalar, one-hot or many-hot.
Architecture - What type(s) of deep neural network is (are) to be used?
Examples are: feedforward network, recurrent network, autoencoder or generative
adversarial networks.
Challenge - What are the limitations and open challenges? Examples are:
variability, interactivity and creativity.
Strategy - How do we model and control the process of generation? Examples
are: single-step feedforward, iterative feedforward, sampling or input
manipulation.
For each dimension, we conduct a comparative analysis of various models and
techniques, and we propose a tentative multidimensional typology. This
typology is bottom-up, based on the analysis of many existing deep-learning
based systems for music generation selected from the relevant literature. These
systems are described and are used to exemplify the various choices of
objective, representation, architecture, challenge and strategy. The last
section includes some discussion and some prospects.
Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
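The representation choices the survey catalogues (piano roll, one-hot encoding) can be made concrete with a short sketch; the melody and helper below are made up for illustration.

```python
import numpy as np

N_PITCHES = 128                     # MIDI pitch range; 60 is middle C
melody = [60, 62, 64, 65, 67]       # C D E F G, one note per time step

def to_piano_roll(pitches, n_pitches=N_PITCHES):
    """One-hot piano roll: rows are time steps, columns are pitches."""
    roll = np.zeros((len(pitches), n_pitches), dtype=np.int8)
    roll[np.arange(len(pitches)), pitches] = 1
    return roll

roll = to_piano_roll(melody)
```

A many-hot variant would simply set several pitches per row, encoding chords or polyphony rather than a single melody line.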