FNet: Mixing Tokens with Fourier Transforms
We show that Transformer encoder architectures can be massively sped up, with
limited accuracy costs, by replacing the self-attention sublayers with simple
linear transformations that "mix" input tokens. These linear transformations,
along with standard nonlinearities in feed-forward layers, prove competent at
modeling semantic relationships in several text classification tasks. Most
surprisingly, we find that replacing the self-attention sublayer in a
Transformer encoder with a standard, unparameterized Fourier Transform achieves
92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains
nearly seven times faster on GPUs and twice as fast on TPUs. The resulting
model, FNet, also scales very efficiently to long inputs. Specifically, when
compared to the "efficient" Transformers on the Long Range Arena benchmark,
FNet matches the accuracy of the most accurate models, but is faster than the
fastest models across all sequence lengths on GPUs (and across relatively
shorter lengths on TPUs). Finally, FNet has a light memory footprint and is
particularly efficient at smaller model sizes: for a fixed speed and accuracy
budget, small FNet models outperform Transformer counterparts.
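
As a concrete illustration of the mixing described above, here is a minimal NumPy sketch of an FNet-style encoder block. Following the FNet paper, the self-attention sublayer is replaced with a two-dimensional DFT (applied along the hidden and sequence dimensions, keeping only the real part), while residual connections, layer normalization, and the feed-forward sublayer keep the standard Transformer layout. All names and shapes are illustrative, not the authors' reference implementation.

import numpy as np

def fourier_mixing(x):
    # Unparameterized token mixing: 2-D DFT over the hidden and
    # sequence dimensions, keeping only the real part.
    # x: (seq_len, hidden_dim)
    return np.real(np.fft.fft(np.fft.fft(x, axis=-1), axis=-2))

def layer_norm(x, eps=1e-6):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def feed_forward(x, w1, b1, w2, b2):
    # Position-wise feed-forward sublayer with a tanh-approximated GELU.
    h = x @ w1 + b1
    h = 0.5 * h * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (h + 0.044715 * h**3)))
    return h @ w2 + b2

def fnet_encoder_block(x, w1, b1, w2, b2):
    # Fourier mixing stands in for self-attention; everything else is
    # the usual residual + layer-norm Transformer encoder structure.
    x = layer_norm(x + fourier_mixing(x))
    return layer_norm(x + feed_forward(x, w1, b1, w2, b2))

# Hypothetical usage on a 128-token sequence with model width 64.
rng = np.random.default_rng(0)
x = rng.normal(size=(128, 64))
w1, b1 = rng.normal(size=(64, 256)) * 0.02, np.zeros(256)
w2, b2 = rng.normal(size=(256, 64)) * 0.02, np.zeros(64)
y = fnet_encoder_block(x, w1, b1, w2, b2)  # shape (128, 64)

Because the mixing sublayer has no learned parameters and an FFT costs O(n log n) rather than attention's O(n^2), a sketch like this is consistent with the speed and memory advantages the abstract reports.
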
ConvFormer: Revisiting Transformer for Sequential User Modeling
Sequential user modeling, a critical task in personalized recommender
systems, focuses on predicting the next item a user would prefer, requiring a
deep understanding of user behavior sequences. Despite the remarkable success
of Transformer-based models across various domains, their full potential in
comprehending user behavior remains untapped. In this paper, we re-examine
Transformer-like architectures, aiming to advance state-of-the-art performance.
We start by revisiting the core building blocks of Transformer-based methods,
analyzing the effectiveness of the item-to-item mechanism within the context of
sequential user modeling. After conducting a thorough experimental analysis, we
identify three essential criteria for devising efficient sequential user
models, which we hope will serve as practical guidelines to inspire and shape
future designs. Following this, we introduce ConvFormer, a simple but powerful
modification to the Transformer architecture that meets these criteria,
yielding state-of-the-art results. Additionally, we present an acceleration
technique to minimize the complexity associated with processing extremely long
sequences. Experiments on four public datasets showcase ConvFormer's
superiority and confirm the validity of our proposed criteria.
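
The abstract leaves the exact architectural modification unspecified, so the following NumPy sketch shows only the general idea it gestures at: swapping the quadratic item-to-item attention mechanism for a cheaper sequence mixer, here a causal depthwise 1-D convolution over the behavior sequence, with one learned filter per embedding channel. The mixer choice, filter shape, and causal padding are assumptions for illustration, not ConvFormer's published design.

import numpy as np

def depthwise_conv_mixer(x, filters):
    # x:       (seq_len, hidden_dim) sequence of item embeddings
    # filters: (kernel_size, hidden_dim), one 1-D filter per channel
    #          (assumed shape, chosen for this illustration)
    seq_len, hidden = x.shape
    k = filters.shape[0]
    # Causal zero-padding so position t only sees items at positions <= t.
    padded = np.vstack([np.zeros((k - 1, hidden)), x])
    out = np.empty_like(x)
    for t in range(seq_len):
        # Per-channel weighted sum over the last k items.
        out[t] = np.sum(padded[t : t + k] * filters, axis=0)
    return out

# Hypothetical usage: mix 50 item embeddings of width 32, kernel size 5.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 32))
filters = rng.normal(size=(5, 32)) * 0.1
mixed = depthwise_conv_mixer(x, filters)  # shape (50, 32)

A mixer of this kind costs O(n k) per layer instead of attention's O(n^2), which is the flavor of trade-off the abstract's acceleration technique for extremely long sequences targets.
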
Multi-carrier CDMA using convolutional coding and interference cancellation
SIGLE. Available from British Library Document Supply Centre (DSC:DXN016251), United Kingdom.
- …