Tile Pattern KL-Divergence for Analysing and Evolving Game Levels
This paper provides a detailed investigation of using the Kullback-Leibler
(KL) divergence to compare and analyse game levels, and hence to use the
measure as the objective function of an evolutionary algorithm that evolves
new levels. We describe the benefits of its asymmetry for level analysis and
demonstrate that, unsurprisingly, the quality of the results depends on the
features used. Here we use tile patterns of various sizes as features.
When using the measure for evolution-based level generation, we demonstrate
that the choice of variation operator is critical in order to provide an
efficient search process, and introduce a novel convolutional mutation operator
to facilitate this. We compare the results with alternative generators,
including evolving in the latent space of generative adversarial networks, and
Wave Function Collapse. The results clearly show that the proposed method is
competitive, producing reasonable-quality levels with very fast training and
reasonably fast generation.
Comment: 8 pages plus references. Proceedings of GECCO 201
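A minimal sketch of the tile-pattern KL-divergence measure described above
(the function names, the 2x2 default window, and the epsilon smoothing for
patterns unseen in the second level are our illustrative choices, not the
paper's exact formulation):

```python
import numpy as np
from collections import Counter

def tile_pattern_counts(level, size=2):
    """Count all size x size tile patterns in a 2D level grid."""
    h, w = level.shape
    counts = Counter(
        tuple(level[r:r + size, c:c + size].flatten())
        for r in range(h - size + 1)
        for c in range(w - size + 1)
    )
    return counts, sum(counts.values())

def kl_divergence(p_level, q_level, size=2, eps=1e-5):
    """KL(P || Q) over tile-pattern distributions of two levels.

    eps smooths patterns present in P but absent from Q, so the
    divergence stays finite; the measure is asymmetric by design.
    """
    p_counts, p_total = tile_pattern_counts(p_level, size)
    q_counts, q_total = tile_pattern_counts(q_level, size)
    kl = 0.0
    for pattern, c in p_counts.items():
        p = c / p_total
        q = (q_counts.get(pattern, 0) + eps) / (q_total + eps * len(p_counts))
        kl += p * np.log(p / q)
    return kl
```

The asymmetry means KL(generated || target) and KL(target || generated)
penalise different failure modes, which is what makes the choice of
direction useful for level analysis.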
Unsupervised Domain Adaptation for Acoustic Scene Classification Using Band-Wise Statistics Matching
The performance of machine learning algorithms is known to be negatively
affected by possible mismatches between training (source) and test (target)
data distributions. This problem emerges whenever an acoustic scene
classification system that has been trained on data recorded by a given device
is applied to samples acquired under different acoustic conditions or captured
by mismatched recording devices. To address this issue, we propose an
unsupervised domain adaptation method that consists of aligning the first- and
second-order sample statistics of each frequency band of target-domain acoustic
scenes to the ones of the source-domain training dataset. This model-agnostic
approach is devised to adapt audio samples from unseen devices before they are
fed to a pre-trained classifier, thus avoiding any further learning phase.
Using the DCASE 2018 Task 1-B development dataset, we show that the proposed
method outperforms the state-of-the-art unsupervised methods found in the
literature in terms of both source- and target-domain classification accuracy.
Comment: 5 pages, 1 figure, 3 tables, submitted to EUSIPCO 202
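One way to realise the band-wise alignment of first- and second-order
statistics described above is to standardise each frequency band of the
target sample and rescale it to the source-domain statistics. A sketch
(the function name and the bands x frames spectrogram convention are
assumptions, not the paper's published code):

```python
import numpy as np

def band_wise_match(target_spec, src_means, src_stds, eps=1e-8):
    """Align each frequency band of a target-domain spectrogram
    (shape: bands x frames) to source-domain per-band statistics.

    First- and second-order sample statistics of every band are
    matched: subtract the band's own mean, divide by its std, then
    rescale to the source-domain mean/std for that band. No learning
    is involved, so this can wrap any pre-trained classifier.
    """
    t_mean = target_spec.mean(axis=1, keepdims=True)
    t_std = target_spec.std(axis=1, keepdims=True)
    normalized = (target_spec - t_mean) / (t_std + eps)
    return normalized * src_stds[:, None] + src_means[:, None]
```

Because the transform is model-agnostic, it is applied to samples from
unseen devices at inference time, before they reach the classifier.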
COEGAN: Evaluating the Coevolution Effect in Generative Adversarial Networks
Generative adversarial networks (GANs) achieve state-of-the-art results in
generating samples that follow the distribution of the input dataset. However,
GANs are difficult to train, and many aspects of the model must be designed
by hand in advance. Neuroevolution is a well-known technique for automating
the design of network architectures, and it has recently been extended to
deep neural networks. COEGAN is a model that uses neuroevolution
and coevolution in the GAN training algorithm to provide a more stable training
method and the automatic design of neural network architectures. COEGAN makes
use of the adversarial aspect of the GAN components to implement coevolutionary
strategies in the training algorithm. Our proposal was evaluated on the
Fashion-MNIST and MNIST datasets. We compare our results with a baseline based
on DCGAN and also with results from a random search algorithm. We show that our
method is able to discover efficient architectures in the Fashion-MNIST and
MNIST datasets. The results also suggest that COEGAN can be used as a training
algorithm for GANs to avoid common issues, such as the mode collapse problem.
Comment: Published in GECCO 2019. arXiv admin note: text overlap with
arXiv:1912.0617
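The adversarial pairing of the two populations can be sketched as an
all-vs-all coevolutionary evaluation, where each individual's fitness is its
average loss against every member of the opposing population. Everything
below is a toy stand-in (plain numbers and user-supplied loss callables
instead of real generator/discriminator networks), meant only to show the
shape of the coevolutionary loop:

```python
def evaluate_pairing(generators, discriminators, gen_loss, disc_loss):
    """All-vs-all coevolutionary evaluation.

    Each generator is scored against every discriminator and vice
    versa; the mean loss over all opponents becomes the individual's
    fitness (lower is fitter). The loss callables are placeholders for
    whatever adversarial signal the training algorithm uses.
    """
    gen_fitness = [
        sum(gen_loss(g, d) for d in discriminators) / len(discriminators)
        for g in generators
    ]
    disc_fitness = [
        sum(disc_loss(g, d) for g in generators) / len(generators)
        for d in discriminators
    ]
    return gen_fitness, disc_fitness
```

Selection and variation would then act on each population separately using
these fitness values, so that improvements in one population pressure the
other, which is the coevolutionary effect the paper evaluates.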