Parameter Sharing Decoder Pair for Auto Composing
Auto composing has been an active and appealing research area in recent
years, and much effort has been put into inventing more robust models to
solve this problem. With the rapid evolution of deep learning techniques,
deep neural network-based language models have become dominant. Notably, the
transformer structure has been proven to be very efficient and promising in
modeling text. However, transformer-based language models usually contain
a huge number of parameters, and the resulting models are often too large to
deploy in production for storage-limited applications. In this paper, we propose
a parameter sharing decoder pair (PSDP), which reduces the number of parameters
dramatically and at the same time maintains the capability of generating
understandable and reasonable compositions. Works created by the proposed model
are presented to demonstrate the effectiveness of the model.

Comment: The author information in the old version of this paper was wrong and
has been removed. Please use this version if you need to cite.
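The abstract does not spell out how the parameter sharing decoder pair is constructed. As a rough, hypothetical illustration of the general idea of sharing weights between paired decoder layers, the sketch below counts parameters in a toy transformer decoder stack; the layer sizes, function names, and the simplified per-layer formula are illustrative assumptions, not details from the paper.

```python
def decoder_layer_params(d_model, d_ff):
    """Approximate parameter count of one transformer decoder layer:
    self-attention projections (4 * d_model^2) plus the two feed-forward
    matrices (2 * d_model * d_ff), ignoring biases and layer norms."""
    return 4 * d_model * d_model + 2 * d_model * d_ff

def stack_params(n_layers, d_model, d_ff, shared_pairs=False):
    """Total decoder-stack parameters. With shared_pairs=True, each pair
    of adjacent layers reuses a single set of weights, so a pair costs
    only one layer's worth of parameters (the assumed PSDP-like idea)."""
    per_layer = decoder_layer_params(d_model, d_ff)
    if shared_pairs:
        # each pair stores its weights once; an odd leftover layer
        # keeps its own weights
        return (n_layers // 2 + n_layers % 2) * per_layer
    return n_layers * per_layer

baseline = stack_params(6, 512, 2048)               # independent layers
shared = stack_params(6, 512, 2048, shared_pairs=True)
print(baseline, shared)  # pairing halves the stored parameters
```

For an even number of layers, sharing within pairs stores exactly half the weights while the forward pass still runs all six layers, which matches the abstract's claim of a dramatic parameter reduction without changing the depth of the model.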