In spoken language understanding (SLU), a natural approach is to concatenate a
pretrained speech model (e.g., HuBERT) with a pretrained language model (PLM,
e.g., T5). Most previous works use PLMs with subword-based tokenization.
However, the granularity of input units affects the alignment between speech
model outputs and language model inputs, and PLMs with character-based
tokenization remain underexplored. In this work, we conduct extensive studies
on how PLMs with different tokenization strategies affect spoken language
understanding tasks, including spoken question answering (SQA) and speech
translation (ST). We further extend this idea to create T5lephone (pronounced as
telephone), a variant of T5 that is pretrained using phonemicized text. We
initialize T5lephone with existing PLMs to pretrain it using relatively
lightweight computational resources. We achieve state-of-the-art performance on
NMSQA, and T5lephone outperforms T5 models with other types of units on
end-to-end SQA and ST.
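To make the notion of unit granularity concrete, the following minimal sketch (not from the paper) contrasts subword, byte/character-level, and phoneme-level representations of the same word. The specific tools and checkpoints (HuggingFace transformers with the t5-small and google/byt5-small tokenizers, and the g2p_en grapheme-to-phoneme package) are illustrative assumptions, not necessarily the exact setup used in this work.

    from transformers import AutoTokenizer
    from g2p_en import G2p

    text = "telephone"

    # Subword units (SentencePiece), as in standard T5: coarse-grained tokens.
    subword_tokenizer = AutoTokenizer.from_pretrained("t5-small")
    print(subword_tokenizer.tokenize(text))   # e.g., ['▁telephone']

    # Byte-level units, as in ByT5: roughly character granularity for ASCII text.
    byte_tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
    print(byte_tokenizer.tokenize(text))      # one token per byte

    # Phoneme-level units: phonemicize the text before feeding it to the PLM,
    # bringing text units closer to the units produced by speech models.
    g2p = G2p()
    print(g2p(text))                          # e.g., ['T', 'EH1', 'L', 'AH0', 'F', 'OW2', 'N']

Finer-grained units (bytes or phonemes) yield longer input sequences but align more naturally with the frame- or phone-level outputs of speech models, which is the trade-off the studies above examine.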