FNet: Mixing Tokens with Fourier Transforms
We show that Transformer encoder architectures can be massively sped up, with
limited accuracy costs, by replacing the self-attention sublayers with simple
linear transformations that "mix" input tokens. These linear transformations,
along with standard nonlinearities in feed-forward layers, prove competent at
modeling semantic relationships in several text classification tasks. Most
surprisingly, we find that replacing the self-attention sublayer in a
Transformer encoder with a standard, unparameterized Fourier Transform achieves
92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains
nearly seven times faster on GPUs and twice as fast on TPUs. The resulting
model, FNet, also scales very efficiently to long inputs. Specifically, when
compared to the "efficient" Transformers on the Long Range Arena benchmark,
FNet matches the accuracy of the most accurate models, but is faster than the
fastest models across all sequence lengths on GPUs (and across relatively
shorter lengths on TPUs). Finally, FNet has a light memory footprint and is
particularly efficient at smaller model sizes: for a fixed speed and accuracy
budget, small FNet models outperform their Transformer counterparts.
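As a concrete illustration of the token mixing described above, the following is a minimal NumPy sketch of an unparameterized Fourier mixing sublayer: a 2D DFT over the sequence and hidden dimensions, keeping only the real part. The residual connection, layer norm, and feed-forward sublayer of a full encoder block are omitted, and the shapes are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

def fourier_mixing(x):
    """Unparameterized token mixing: a 2D DFT over the hidden and sequence
    dimensions, keeping only the real part of the result.
    x: array of shape (seq_len, d_model)."""
    return np.real(np.fft.fft(np.fft.fft(x, axis=-1), axis=-2))

# Toy usage: mix a sequence of 128 token embeddings with d_model = 64.
# In a full encoder block this would be followed by a residual connection,
# layer norm, and the usual feed-forward sublayer (omitted here).
x = np.random.randn(128, 64)
mixed = fourier_mixing(x)
print(mixed.shape)  # (128, 64): same shape, no learned parameters
```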
A Cross-Attention Augmented Model for Event-Triggered Context-Aware Story Generation
Despite recent advancements, existing story generation systems continue to
encounter difficulties in effectively incorporating contextual and event
features, which greatly influence the quality of generated narratives. To
tackle these challenges, we introduce a novel neural generation model, EtriCA,
that enhances the relevance and coherence of generated stories by employing a
cross-attention mechanism to map context features onto event sequences through
residual mapping. This feature capturing mechanism enables our model to exploit
logical relationships between events more effectively during the story
generation process. To further enhance the proposed model, we apply a
post-training framework for knowledge enhancement (KeEtriCA) on a large-scale
book corpus, which allows EtriCA to adapt to a wider range of data samples and
yields an improvement of approximately 5% on automatic metrics and of over 10%
in human evaluation. We conduct extensive experiments, including
comparisons with state-of-the-art (SOTA) baseline models, to evaluate the
performance of our framework on story generation. The experimental results,
encompassing both automated metrics and human assessments, demonstrate the
superiority of our model over existing state-of-the-art baselines. These
results underscore the effectiveness of our model in leveraging context and
event features to improve the quality of generated narratives.
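To make the cross-attention-with-residual idea more concrete, here is a hypothetical single-head sketch in NumPy: event representations act as queries over the context features, and the attended context is added back onto the event sequence through a residual connection. The projection matrices Wq, Wk, Wv, the shapes, and the single-head formulation are assumptions for illustration, not the authors' exact architecture.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend_with_residual(events, context, Wq, Wk, Wv):
    """Map context features onto an event sequence via cross-attention,
    then add them back through a residual connection (illustrative only).
    events: (n_events, d), context: (n_ctx, d)."""
    q = events @ Wq                                  # queries from events
    k = context @ Wk                                 # keys from context
    v = context @ Wv                                 # values from context
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (n_events, n_ctx)
    fused = attn @ v                                 # context mapped onto events
    return events + fused                            # residual mapping

d = 64
rng = np.random.default_rng(0)
events = rng.standard_normal((5, d))                 # 5 event representations
context = rng.standard_normal((12, d))               # 12 context feature vectors
Wq, Wk, Wv = (rng.standard_normal((d, d)) * d ** -0.5 for _ in range(3))
print(cross_attend_with_residual(events, context, Wq, Wk, Wv).shape)  # (5, 64)
```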
A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention
We address the problem of learning on sets of features, motivated by the need
to perform pooling operations over long biological sequences of varying sizes,
with long-range dependencies and possibly little labeled data. To address this
challenging task, we introduce a parametrized representation of fixed size,
which embeds and then aggregates elements from a given input set according to
the optimal transport plan between the set and a trainable reference. Our
approach scales to large datasets and allows end-to-end training of the
reference, while also providing a simple unsupervised learning mechanism with
small computational cost. Our aggregation technique admits two useful
interpretations: it may be seen as a mechanism related to attention layers in
neural networks, or it may be seen as a scalable surrogate of a classical
optimal transport-based kernel. We experimentally demonstrate the effectiveness
of our approach on biological sequences, achieving state-of-the-art results on
protein fold recognition and chromatin profile detection tasks, and, as a
proof of concept, we show promising results for processing natural language
sequences. We provide an open-source implementation of our embedding that can
be used alone or as a module in larger learning models at
https://github.com/claying/OTK.
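The aggregation described above can be sketched as follows: compute an entropic-regularized transport plan between the input set and a fixed-size reference with a few Sinkhorn iterations, then pool the inputs according to that plan. The cosine-similarity cost, uniform marginals, and hyperparameters below are illustrative assumptions, not the paper's exact formulation; the linked repository contains the official implementation.

```python
import numpy as np

def sinkhorn_plan(cost, eps=0.1, n_iters=50):
    """Entropic-regularized optimal transport plan between uniform marginals,
    computed with standard Sinkhorn iterations. cost: (n, p) matrix."""
    n, p = cost.shape
    K = np.exp(-cost / eps)
    a, b = np.ones(n) / n, np.ones(p) / p       # uniform marginals
    u, v = np.ones(n), np.ones(p)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]          # transport plan, shape (n, p)

def ot_pool(x, reference):
    """Aggregate a variable-size set x (n, d) into a fixed-size (p, d) output
    by pooling its elements according to the transport plan between x and a
    reference set (p, d); the reference would be trainable in the real model."""
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    rn = reference / np.linalg.norm(reference, axis=1, keepdims=True)
    cost = 1.0 - xn @ rn.T                       # cosine cost (an assumption)
    plan = sinkhorn_plan(cost)
    # Each reference element receives a weighted average of the inputs that
    # the plan assigns to it (columns normalized to sum to one).
    return (plan / plan.sum(axis=0, keepdims=True)).T @ x

rng = np.random.default_rng(0)
x = rng.standard_normal((37, 16))                # a set of 37 feature vectors
ref = rng.standard_normal((4, 16))               # fixed-size reference, p = 4
print(ot_pool(x, ref).shape)                     # (4, 16)
```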