4 research outputs found

    FNet: Mixing Tokens with Fourier Transforms

    Full text link
    We show that Transformer encoder architectures can be massively sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that "mix" input tokens. These linear transformations, along with standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains nearly seven times faster on GPUs and twice as fast on TPUs. The resulting model, FNet, also scales very efficiently to long inputs. Specifically, when compared to the "efficient" Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, but is faster than the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes: for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts.
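
    The mixing mechanism described in this abstract is simple enough to sketch directly. The block below is a minimal PyTorch rendering under assumptions (it is not the authors' released implementation, and names such as FNetEncoderBlock are illustrative): self-attention is replaced by a parameter-free 2D Fourier transform over the sequence and hidden dimensions, keeping only the real part, followed by the usual residual connections, layer norms, and feed-forward sublayer.

        import torch
        import torch.nn as nn

        class FNetEncoderBlock(nn.Module):
            # Sketch (assumed, not the official code): Fourier token mixing
            # in place of self-attention, then a position-wise feed-forward.
            def __init__(self, d_model, d_ff):
                super().__init__()
                self.mixing_norm = nn.LayerNorm(d_model)
                self.ff_norm = nn.LayerNorm(d_model)
                self.feed_forward = nn.Sequential(
                    nn.Linear(d_model, d_ff),
                    nn.GELU(),
                    nn.Linear(d_ff, d_model),
                )

            def forward(self, x):
                # x: (batch, seq_len, d_model)
                # Unparameterized mixing: 2D FFT over the sequence and hidden
                # dimensions, keeping only the real part.
                mixed = torch.fft.fft2(x, dim=(-2, -1)).real
                x = self.mixing_norm(x + mixed)
                return self.ff_norm(x + self.feed_forward(x))

        # Usage: 8 sequences of length 128 with 256-dimensional embeddings.
        block = FNetEncoderBlock(d_model=256, d_ff=1024)
        print(block(torch.randn(8, 128, 256)).shape)  # torch.Size([8, 128, 256])

    Because the mixing step has no learned parameters, all trainable weights sit in the feed-forward sublayers, which is consistent with the light memory footprint and speedups reported in the abstract.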

    Using Fourier-Neural Recurrent Networks to Fit Sequential Input/Output Data

    No full text
    This paper suggests the use of Fourier-type activation functions in fully recurrent neural networks. The main theoretical advantage is that, in principle, the problem of recovering internal coefficients from input/output data is solvable in closed form.

    1 Introduction
    Neural networks provide a useful approach to parallel computation. The subclass of recurrent architectures is characterized by the inclusion of feedback loops in the information flow among processing units. With feedback, one may exploit context-sensitivity and memory, characteristics essential in sequence processing as well as in the modeling and control of processes involving dynamical elements. Recent theoretical results about neural networks have established their universality as models for systems approximation as well as analog computing devices (see e.g. [14, 12]). The use of recurrent networks has been proposed in areas as varied as the design of control laws for robotic manipulators, in speech recognition, speak..
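
    To make the abstract's claim concrete, here is a minimal sketch (under assumed notation, not the paper's own formulation) of a fully recurrent network whose units use a Fourier-type (sinusoidal) activation: the state update is x[t+1] = sin(A x[t] + B u[t]) with a linear readout y[t] = C x[t], and recovering A, B, and C from input/output sequences is the fitting problem the abstract refers to.

        import numpy as np

        # Assumed model form for illustration: a fully recurrent network with a
        # sinusoidal (Fourier-type) activation and a linear readout.
        rng = np.random.default_rng(0)
        n_state, n_in, n_out = 4, 2, 1
        A = rng.normal(scale=0.5, size=(n_state, n_state))  # recurrent weights
        B = rng.normal(scale=0.5, size=(n_state, n_in))     # input weights
        C = rng.normal(scale=0.5, size=(n_out, n_state))    # readout weights

        def run(u_seq):
            # Drive the network with an input sequence and collect its outputs.
            x = np.zeros(n_state)
            ys = []
            for u in u_seq:
                x = np.sin(A @ x + B @ u)  # Fourier-type activation
                ys.append(C @ x)
            return np.array(ys)

        outputs = run(rng.normal(size=(10, n_in)))
        print(outputs.shape)  # (10, 1)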