The attention mechanism, a cornerstone of state-of-the-art neural models,
faces computational hurdles in processing long sequences due to its quadratic
complexity in the sequence length. Consequently, research efforts in recent years
have focused on
finding more efficient alternatives. Among them, Hyena (Poli et al., 2023)
stands out for achieving competitive results in both language modeling and
image classification, while offering sub-quadratic memory and computational
complexity. Building on these promising results, we propose ConfHyena, a
Conformer whose encoder self-attention layers are replaced with an adaptation of
Hyena for speech processing, where long input sequences cause high
computational costs. Through experiments in automatic speech recognition (for
English) and translation (from English into 8 target languages), we show that
our best ConfHyena model reduces training time by 27%, at the
cost of minimal quality degradation (~1%), which, in most cases, is not
statistically significant.
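
To make the architectural change concrete, the following is a minimal PyTorch sketch, not the authors' implementation, of a Conformer-style block in which the multi-head self-attention sub-layer is swapped for a Hyena-like gated long-convolution operator. The `SimplifiedHyenaOperator`, its explicit filter parameterization, and all module names and dimensions are illustrative assumptions: the actual Hyena operator parameterizes its long filters implicitly and uses a more elaborate gating scheme, and ConfHyena further adapts it to speech inputs.

```python
import torch
import torch.nn as nn


class SimplifiedHyenaOperator(nn.Module):
    """Gated long-convolution token mixer; a simplified stand-in for the Hyena operator."""

    def __init__(self, d_model: int, max_len: int = 4096):
        super().__init__()
        self.in_proj = nn.Linear(d_model, 2 * d_model)            # value and gate branches
        # Explicit per-channel long filter (assumption: real Hyena parameterizes
        # its filters implicitly with a small network over positional features).
        self.filter = nn.Parameter(0.02 * torch.randn(d_model, max_len))
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        _, L, _ = x.shape
        v, gate = self.in_proj(x).chunk(2, dim=-1)                # (B, L, D) each
        k = self.filter[:, :L]                                    # (D, L)
        # Causal long convolution in O(L log L) via zero-padded FFT.
        v_f = torch.fft.rfft(v.transpose(1, 2), n=2 * L)          # (B, D, L + 1)
        k_f = torch.fft.rfft(k, n=2 * L)                          # (D, L + 1)
        y = torch.fft.irfft(v_f * k_f, n=2 * L)[..., :L]          # (B, D, L)
        y = y.transpose(1, 2) * torch.sigmoid(gate)               # elementwise gating
        return self.out_proj(y)


class ConfHyenaStyleBlock(nn.Module):
    """Conformer-style block whose self-attention sub-layer is replaced by the operator above."""

    def __init__(self, d_model: int = 256, conv_kernel: int = 31):
        super().__init__()
        self.ff1 = nn.Sequential(nn.LayerNorm(d_model), nn.Linear(d_model, 4 * d_model),
                                 nn.SiLU(), nn.Linear(4 * d_model, d_model))
        self.mixer_norm = nn.LayerNorm(d_model)
        self.mixer = SimplifiedHyenaOperator(d_model)             # <- replaces nn.MultiheadAttention
        self.conv_norm = nn.LayerNorm(d_model)
        self.depthwise_conv = nn.Conv1d(d_model, d_model, conv_kernel,
                                        padding=conv_kernel // 2, groups=d_model)
        self.conv_act = nn.SiLU()
        self.ff2 = nn.Sequential(nn.LayerNorm(d_model), nn.Linear(d_model, 4 * d_model),
                                 nn.SiLU(), nn.Linear(4 * d_model, d_model))
        self.final_norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        x = x + 0.5 * self.ff1(x)                                 # first macaron feed-forward
        x = x + self.mixer(self.mixer_norm(x))                    # Hyena-style mixing instead of attention
        c = self.conv_norm(x).transpose(1, 2)                     # (B, D, L) for the depthwise Conv1d
        x = x + self.conv_act(self.depthwise_conv(c)).transpose(1, 2)
        x = x + 0.5 * self.ff2(x)                                 # second macaron feed-forward
        return self.final_norm(x)


if __name__ == "__main__":
    block = ConfHyenaStyleBlock(d_model=256)
    speech_features = torch.randn(4, 1500, 256)                  # (batch, frames, features): a long speech sequence
    print(block(speech_features).shape)                          # torch.Size([4, 1500, 256])
```

The design point this sketch illustrates is that sequence mixing costs O(L log L) time via FFT-based convolution instead of the O(L^2) attention matrix, which is the property that makes sub-quadratic processing of long speech sequences possible.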