This software demonstration provides an overview of developments made during the three-year, NCeSS-funded project Understanding New Forms of the Digital Record for e-Social Science (DReSS), which was based at the University of Nottingham. The demo highlights the outcomes of a specific ‘driver project’ hosted by DReSS, which sought to combine the knowledge of linguists with the expertise of computer scientists in the construction of multi-modal (hereafter MM) corpus software: the Digital Replay System (DRS).
DRS presents ‘data’ in three different modes, as spoken (audio), video and textual records of real-life interactions, accurately aligned within a functional, searchable corpus setting (known as the Nottingham Multi-Modal Corpus: NMMC herein). The DRS environment therefore allows for the exploration of the lexical, prosodic and gestural features of conversation, and of how these interact in everyday speech.
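The demonstration does not detail DRS's internal data model, but the kind of time-aligned, cross-modal query this enables can be sketched in a few lines of Python. The sketch below is purely illustrative: the Annotation class and the concurrent() helper are assumptions made for exposition, not DRS's actual API.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """One coded span on a single modality track (times in milliseconds)."""
    track: str     # e.g. "transcript", "prosody", "gesture"
    start_ms: int
    end_ms: int
    label: str     # e.g. a word, a pitch feature, or a gesture code

def overlapping(a: Annotation, b: Annotation) -> bool:
    """True if the two spans overlap in time."""
    return a.start_ms < b.end_ms and b.start_ms < a.end_ms

def concurrent(query_label: str, track: str, corpus: list) -> list:
    """Find annotations on `track` that co-occur in time with any
    annotation carrying `query_label` on another track."""
    hits = [a for a in corpus if a.label == query_label]
    return [(h, a) for h in hits for a in corpus
            if a.track == track and a is not h and overlapping(h, a)]

# Example: which gesture codes co-occur with the spoken word "really"?
corpus = [
    Annotation("transcript", 1200, 1500, "really"),
    Annotation("gesture",    1100, 1600, "beat"),
    Annotation("prosody",    1150, 1550, "pitch-rise"),
]
print(concurrent("really", "gesture", corpus))
```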
Further to this, the demonstration introduces a computer-vision-based gesture recognition system which has been constructed to allow for the detection and preliminary codification of gesture sequences. This gesture tracking system can be imported into DRS to enable an automated approach to the analysis of MM datasets.
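The tracker's implementation is not reproduced in this demonstration. As a hedged illustration of the general computer-vision approach, the following Python sketch uses OpenCV frame differencing to flag frame ranges with sustained motion as candidate gesture segments; the function name, threshold values and the differencing method itself are assumptions for exposition, not the DRS tracker's actual algorithm.

```python
import cv2
import numpy as np

def candidate_gesture_spans(video_path, thresh=8.0, min_frames=10):
    """Return (start_frame, end_frame) ranges of sustained movement,
    detected by simple inter-frame differencing (illustrative only)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    assert ok, "could not read video"
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    spans, start, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        motion = float(np.mean(cv2.absdiff(gray, prev)))  # mean pixel change
        prev = gray
        if motion > thresh and start is None:
            start = idx                        # movement begins
        elif motion <= thresh and start is not None:
            if idx - start >= min_frames:      # ignore brief flickers
                spans.append((start, idx))
            start = None
    if start is not None and idx - start >= min_frames:
        spans.append((start, idx))             # span still open at video end
    cap.release()
    return spans
```

Spans detected in this way could then be converted to time-stamped codes and aligned against the transcript and prosody tracks of a corpus such as the NMMC.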