Towards the multimodal unit of meaning: a multimodal corpus-based approach
Abstract
While there has been a wealth of research using textually rendered spoken corpora (i.e. written transcripts of spoken language) and corpus methods to investigate utterance meaning in various contexts, multimodal corpus-based research that goes beyond the text is still rare. Monomodal corpus-based research often limits our description and understanding of the meaning of words and phrases, largely because meaning is constructed through multiple modes (e.g. speech, gesture and prosody).
Hence, focusing on speech and gesture, the thesis explores multimodal corpus-based approaches to investigating multimodal units of meaning, using recurrent phrases, e.g. “(do) you know/see what I mean”, and gesture as two different yet complementary points of entry. The primary goal is to identify patterned uses of gesture and speech that can assist in the description of multimodal units of meaning. The Nottingham Multimodal Corpus (250,000 running words) is used as the database for the research.
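As a rough illustration of the speech-side point of entry (not the thesis's own tooling), the sketch below shows a concordance-style regular-expression search for variants of the recurrent phrase “(do) you know/see what I mean” over a toy transcript string; the transcript text is invented for the example and does not reproduce the Nottingham Multimodal Corpus.

```python
import re

# Toy transcript standing in for corpus data; the Nottingham Multimodal
# Corpus itself is not reproduced here.
transcript = (
    "A: it never quite works the first time do you know what I mean "
    "B: yeah yeah "
    "A: you see what I mean though it takes a while"
)

# Pattern for the recurrent phrase "(do) you know/see what I mean",
# allowing the optional "do" and either verb.
pattern = re.compile(
    r"\b(?:do\s+)?you\s+(?:know|see)\s+what\s+I\s+mean\b",
    re.IGNORECASE,
)

# Print each match with a little left/right context, concordance-style.
for m in pattern.finditer(transcript):
    left = transcript[max(0, m.start() - 30):m.start()]
    right = transcript[m.end():m.end() + 30]
    print(f"...{left}[{m.group(0)}]{right}...")
```

In a multimodal setting, each match returned by such a search would then be aligned with the co-occurring gesture annotations to build up the kind of multimodal profile the thesis describes.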
The main original contributions of the thesis include a new coding scheme for segmenting gestures, two multimodal profiles for a target recurrent speech and gesture pattern, and a new framework for classifying and describing the role of gestures in discourse. Moreover, the thesis has important implications for our understanding of the temporal, cognitive and functional relationship between speech and gesture; it also discusses potential applications, particularly in English language teaching and Human-Computer Interaction. These findings are of value to the methodological and theoretical development of multimodal corpus-based research on units of meaning.