Learning an effective speaker representation is crucial for achieving
reliable performance in speaker verification tasks. Speech signals are
high-dimensional, long, and variable-length sequences that exhibit a complex
hierarchical structure. Signals may contain diverse information at each
time-frequency (TF) location. For example, it may be more beneficial to focus
on high-energy parts for phoneme classes such as fricatives. A standard
convolutional layer, which operates only on neighboring local regions, cannot
capture this complex global TF context. In this study, a general global
time-frequency context modeling framework is proposed to leverage the context
information specifically for speaker representation modeling. First, a
data-driven attention-based context model is introduced to capture long-range,
non-local relationships across different time-frequency locations.
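As a rough sketch of how such an attention-based context model can be realized (the module name, shapes, and single-head design here are illustrative assumptions, not the authors' exact implementation), the following PyTorch-style block pools a feature map into a global context vector by attending over all TF locations:

```python
# Minimal sketch: attention-based global time-frequency context pooling.
# All names and shape conventions are assumptions for illustration.
import torch
import torch.nn as nn


class AttentiveTFContext(nn.Module):
    """Pools a (B, C, F, T) feature map into a global context vector
    (B, C, 1, 1) by attending over all time-frequency locations."""

    def __init__(self, channels: int):
        super().__init__()
        # A 1x1 conv scores every TF location with a single attention logit.
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, f, t = x.shape
        logits = self.score(x).view(b, f * t)       # one logit per TF location
        attn = torch.softmax(logits, dim=1)         # weights over all locations
        feats = x.view(b, c, f * t)                 # (B, C, F*T)
        # Weighted sum of local features -> global context vector.
        context = torch.einsum("bcn,bn->bc", feats, attn)
        return context.view(b, c, 1, 1)


if __name__ == "__main__":
    x = torch.randn(2, 64, 40, 100)     # e.g. 40 mel bins, 100 frames
    print(AttentiveTFContext(64)(x).shape)   # torch.Size([2, 64, 1, 1])
```

Unlike plain average pooling, the learned softmax weights allow informative TF locations (e.g., high-energy fricative regions) to dominate the context.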
Second, a data-independent 2D-DCT based context model is proposed to improve
model interpretability. A multi-DCT attention mechanism is presented to improve
modeling power with alternate DCT basis forms.
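A minimal sketch of this data-independent alternative, in the spirit of multi-spectral channel attention: the specific DCT frequency pairs, the channel grouping, the reduction ratio, and the fixed input size are all assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch: multi-DCT channel attention with fixed 2D-DCT bases.
# Frequency pairs, grouping, and fixed (F, T) size are illustrative assumptions.
import math
import torch
import torch.nn as nn


def dct_basis_2d(f_bins: int, t_bins: int, u: int, v: int) -> torch.Tensor:
    """2D-DCT-II basis of frequency indices (u, v) on an F x T grid."""
    i = torch.arange(f_bins).float()
    j = torch.arange(t_bins).float()
    bi = torch.cos(math.pi * (i + 0.5) * u / f_bins)    # (F,)
    bj = torch.cos(math.pi * (j + 0.5) * v / t_bins)    # (T,)
    return torch.outer(bi, bj)                          # (F, T)


class MultiDCTAttention(nn.Module):
    """Channel attention whose pooling uses several fixed 2D-DCT bases;
    each channel group is projected onto a different basis."""

    def __init__(self, channels, f_bins, t_bins,
                 freq_pairs=((0, 0), (0, 1), (1, 0), (2, 0)), reduction=8):
        super().__init__()
        assert channels % len(freq_pairs) == 0
        bases = torch.stack(
            [dct_basis_2d(f_bins, t_bins, u, v) for u, v in freq_pairs])
        self.register_buffer("bases", bases)            # (G, F, T), data-independent
        self.groups = len(freq_pairs)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        # Assumes x matches the (f_bins, t_bins) used to build the bases.
        b, c, f, t = x.shape
        xg = x.view(b, self.groups, c // self.groups, f, t)
        # Project each channel group onto its own DCT basis -> scalar per channel.
        desc = torch.einsum("bgcft,gft->bgc", xg, self.bases).reshape(b, c)
        gates = self.fc(desc)                           # channel-wise gates
        return x * gates.view(b, c, 1, 1)
```

Because the bases are fixed cosine functions rather than learned weights, each channel descriptor has a direct spectral interpretation, which is the interpretability argument.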
Finally, the global context information is used to recalibrate salient
time-frequency locations by computing the similarity between the global context
and the local features.
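One possible form of this recalibration step, assuming a dot-product similarity and a sigmoid gate (the 1x1 transform and names are illustrative), is:

```python
# Minimal sketch: time-frequency recalibration from context-feature similarity.
# The dot-product similarity and sigmoid gating are illustrative assumptions.
import torch
import torch.nn as nn


class TFRecalibration(nn.Module):
    """Recalibrates a (B, C, F, T) feature map with a TF mask derived from the
    similarity between the global context vector and each local feature."""

    def __init__(self, channels: int):
        super().__init__()
        self.transform = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # context: (B, C, 1, 1) from a context model (attention- or DCT-based).
        query = self.transform(context)                 # (B, C, 1, 1)
        sim = (x * query).sum(dim=1, keepdim=True)      # similarity map (B, 1, F, T)
        mask = torch.sigmoid(sim)                       # salience of each TF location
        return x * mask                                 # time-frequency recalibration
```

Such a block could, for instance, be appended to each residual branch of a ResNet stage, consistent with the lightweight plug-in usage described next.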
The proposed lightweight blocks can be easily incorporated into a speaker model
at little additional computational cost, and they improve speaker verification
performance by a large margin compared to the standard ResNet model and the
Squeeze-and-Excitation block. Detailed ablation studies are also
performed to analyze various factors that may impact the performance of the
individual proposed modules. Experimental results show that the proposed global
context modeling framework can efficiently improve the learned speaker
representations by achieving channel-wise and time-frequency feature
recalibration.