A Hybrid Convolutional Variational Autoencoder for Text Generation
In this paper we explore the effect of architectural choices on learning a
Variational Autoencoder (VAE) for text generation. In contrast to the
previously introduced VAE model for text where both the encoder and decoder are
RNNs, we propose a novel hybrid architecture that blends fully feed-forward
convolutional and deconvolutional components with a recurrent language model.
Our architecture exhibits several attractive properties, such as faster run
time and convergence and the ability to better handle long sequences; more
importantly, it helps avoid some of the major difficulties posed by training
VAE models on textual data.
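The abstract refers to the standard VAE training objective, which combines a reconstruction term with a KL penalty on the approximate posterior, optimized via the reparameterization trick. A minimal NumPy sketch of those two ingredients (not the paper's hybrid architecture, and with hypothetical function names) might look like:

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior q."""
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps so the sampling step stays differentiable."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

rng = np.random.default_rng(0)
mu = np.zeros(16)      # encoder mean for one example (illustrative values)
logvar = np.zeros(16)  # encoder log-variance
z = reparameterize(mu, logvar, rng)
kl = kl_diag_gaussian(mu, logvar)  # 0 when q equals the N(0, I) prior
```

The "major difficulties" the abstract alludes to include the KL term collapsing to zero when a powerful recurrent decoder ignores the latent code, which is why architectural choices around this objective matter.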
Learning Latent Representations of Bank Customers With The Variational Autoencoder
Learning data representations that reflect the customers' creditworthiness
can improve marketing campaigns, customer relationship management, data and
process management, and credit risk assessment in retail banks. In this
research, we adopt the Variational Autoencoder (VAE), which has the ability to
learn latent representations that contain useful information. We show that it
is possible to steer the latent representations in the latent space of the VAE
using the Weight of Evidence and forming a specific grouping of the data that
reflects the customers' creditworthiness. Our proposed method learns a latent
representation of the data that shows a well-defined clustering structure
capturing the customers' creditworthiness. These clusters are well suited for
the aforementioned banks' activities. Further, our methodology generalizes to
new customers, captures high-dimensional and complex financial data, and scales
to large data sets.

Comment: arXiv admin note: substantial text overlap with arXiv:1806.0253
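The Weight of Evidence used here to steer the latent space is a standard credit-scoring statistic: for each bin of a feature, WoE = ln((share of non-defaulters in the bin) / (share of defaulters in the bin)). A minimal sketch with hypothetical bin counts (the paper's actual binning and data are not shown):

```python
import math

def weight_of_evidence(goods, bads):
    """Per-bin WoE: ln((goods_i / total_goods) / (bads_i / total_bads))."""
    total_goods, total_bads = sum(goods), sum(bads)
    return [math.log((g / total_goods) / (b / total_bads))
            for g, b in zip(goods, bads)]

# Hypothetical bins of a credit feature: counts of non-defaulters / defaulters
goods = [400, 350, 250]
bads = [20, 30, 50]
woe = weight_of_evidence(goods, bads)
```

Positive WoE marks bins dominated by creditworthy customers and negative WoE the reverse, which is what makes it a natural signal for grouping latent representations by creditworthiness.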