Representation learning of speech without textual resources is an area
of significant interest for many low-resource speech applications. In this
paper, we describe an approach to self-supervised representation learning from
raw audio using a hidden unit clustering (HUC) framework. The input to the
model consists of audio samples that are windowed and processed with 1-D
convolutional layers. The learned "time-frequency" representations from the
convolutional neural network (CNN) module are further processed with long
short-term memory (LSTM) layers, which generate a contextual vector representation for
every windowed segment. The HUC framework, which categorizes these
representations into a small number of phoneme-like units, is used to train the
model to learn semantically rich speech representations. The targets
consist of phoneme-like pseudo-labels for each audio segment,
generated with an iterative k-means algorithm. We explore techniques that
improve the speaker invariance of the learned representations and illustrate
the effectiveness of the proposed approach in two settings: (i) completely
unsupervised speech applications, on the sub-tasks described as part of the
ZeroSpeech 2021 challenge, and (ii) semi-supervised automatic speech recognition
(ASR) applications, on the TIMIT dataset and on the GramVaani challenge Hindi
dataset. In these experiments, we achieve state-of-the-art results on various
ZeroSpeech tasks. Further, in the ASR experiments, the HUC representations are
shown to improve significantly over other established benchmarks based on
wav2vec, HuBERT, and BEST-RQ.