Spatial Evolutionary Generative Adversarial Networks
Generative adversarial networks (GANs) suffer from training pathologies such as
instability and mode collapse. These pathologies mainly arise from a lack of
diversity in their adversarial interactions. Evolutionary generative
adversarial networks apply the principles of evolutionary computation to
mitigate these problems. We hybridize two of these approaches that promote
training diversity. One, E-GAN, injects mutation diversity at each batch by
training the (replicated) generator with three independent objective functions
and then selecting the resulting best-performing generator for the next batch. The
other, Lipizzaner, injects population diversity by training a two-dimensional
grid of GANs with a distributed evolutionary algorithm that includes neighbor
exchanges of additional training adversaries, performance-based selection, and
population-based hyper-parameter tuning. We propose to combine the mutation and
population approaches to diversity improvement. We contribute a superior
evolutionary GAN training method, Mustangs, which eliminates the single loss
function used across Lipizzaner's grid: instead, in each training round, a loss
function is selected with equal probability from among the three that E-GAN uses.
Experimental analyses on standard benchmarks, MNIST and CelebA, demonstrate
that Mustangs provides a statistically faster training method and results in
more accurate networks.
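
For illustration, a minimal PyTorch sketch of the per-round loss selection
described above. The names (`mustangs_generator_step`, `generator`,
`discriminator`) are hypothetical, and the three objectives assumed here are
the standard minimax, non-saturating, and least-squares GAN generator losses
that E-GAN mutates between; this is a sketch, not the authors' implementation.

```python
import random

import torch
import torch.nn.functional as F

def minimax_loss(fake_logits):
    # Original minimax generator objective: maximize log D(G(z)).
    return -torch.mean(torch.log(torch.sigmoid(fake_logits) + 1e-8))

def heuristic_loss(fake_logits):
    # Non-saturating ("heuristic") objective: minimize -log D(G(z)),
    # via binary cross-entropy against the "real" label.
    return F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))

def least_squares_loss(fake_logits):
    # Least-squares GAN generator objective.
    return torch.mean((torch.sigmoid(fake_logits) - 1.0) ** 2)

LOSSES = [minimax_loss, heuristic_loss, least_squares_loss]

def mustangs_generator_step(generator, discriminator, z, optimizer):
    # Draw one of the three objectives with equal probability,
    # then take an ordinary gradient step with it.
    loss_fn = random.choice(LOSSES)
    fake_logits = discriminator(generator(z))
    loss = loss_fn(fake_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Each grid cell would run such a step independently, so neighboring cells
typically train against different objectives in the same round.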
Data augmentation for time series: traditional vs generative models on capacitive proximity time series
Supervised, data-based modelling often needs large quantities and
diversities of labeled training data. The data distribution should cover
a rich representation to support the generalizability of the trained
end-to-end inference model.
However, this is often hindered by limited labeled data and
an expensive data collection process, especially for human
activity recognition tasks, which require extensive manual
labeling. Data augmentation is thus a widely used
regularization method for deep learning; it is commonly
applied to image data to increase classification accuracy,
but it is less researched for time series. In this paper, we
investigate data augmentation on continuous capacitive
time series, using exercise recognition as an example. We
show that traditional data augmentation can enrich the
source distribution and thus make the trained inference
model generalize better. This increases the recognition
performance on unseen target data by around 21.4
percentage points compared to an inference model trained
without data augmentation. Generative models such as the
variational autoencoder and the conditional variational
autoencoder can further reduce the variance on the target
data.