A Chatbot serves as a communication interface between a human user and a machine,
producing an appropriate answer to the user's input. More recent approaches
combine Natural Language Processing with sequential models to build generative
Chatbots. The main challenge of these models is their sequential nature, which
limits parallel computation and leads to less accurate results. To tackle this
challenge, in this paper, a novel end-to-end architecture is proposed using
conditional Wasserstein Generative Adversarial Networks and a transformer model
for answer generation in Chatbots. While the generator of the proposed model
consists of a full transformer that generates the answer, the discriminator
comprises only the encoder part of a transformer followed by a classifier.
To the best of our knowledge, this is the first generative Chatbot to embed a
transformer in both the generator and the discriminator. Leveraging the parallel
computation of the transformer, the proposed model outperforms state-of-the-art
alternatives on the Cornell Movie-Dialog and Chit-Chat datasets across several
evaluation metrics.
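The generator/discriminator split described above can be sketched as follows. This is a minimal illustration, assuming PyTorch; all layer sizes, head counts, and the mean-pooling step are illustrative choices, not details taken from the paper.

```python
# Hedged sketch: full-transformer generator vs. encoder-only critic,
# mirroring the abstract's description. Dimensions are placeholders.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Full transformer (encoder + decoder) mapping a question to an answer."""
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        h = self.transformer(self.embed(src_ids), self.embed(tgt_ids))
        return self.out(h)  # logits over the answer vocabulary

class Discriminator(nn.Module):
    """Transformer encoder followed by a classifier head; for a Wasserstein
    GAN the head outputs an unbounded scalar critic score."""
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.score = nn.Linear(d_model, 1)

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))
        return self.score(h.mean(dim=1))  # pool sequence, emit scalar score

# Quick shape check with random token ids.
g, d = Generator(), Discriminator()
src = torch.randint(0, 1000, (2, 10))   # batch of 2 questions, length 10
tgt = torch.randint(0, 1000, (2, 12))   # batch of 2 answers, length 12
logits = g(src, tgt)
score = d(tgt)
```

In a conditional WGAN setup, the critic would score the answer jointly with its question; here it scores the answer sequence alone for brevity.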