The self-attention mechanism, which excels at modeling long-range dependencies, is one of the most widely used techniques in sequential recommendation. However, many recent studies show that current self-attention based models behave as low-pass filters and are inadequate for capturing high-frequency information. Furthermore, since items in user behavior sequences are intertwined with each other, these models struggle to distinguish the inherent periodicity obscured in the time domain.
In this work, we shift the perspective to the frequency domain and propose a
novel Frequency Enhanced Hybrid Attention Network for Sequential
Recommendation, namely FEARec. In this model, we first improve the original time-domain self-attention in the frequency domain with a ramp structure so that both low-frequency and high-frequency information can be explicitly learned in our approach. Moreover, we design a similar attention mechanism via auto-correlation in the frequency domain to capture periodic characteristics, and fuse the time-level and frequency-level attention in a unified model. Finally, both contrastive learning and frequency regularization are
utilized to ensure that multiple views are aligned in both the time domain and
frequency domain. Extensive experiments conducted on four widely used benchmark
datasets demonstrate that the proposed model performs significantly better than
the state-of-the-art approaches.

Comment: 11 pages, 7 figures, The 46th International ACM SIGIR Conference on Research and Development in Information Retrieval