Evaluating child engagement in digital story stems using facial data

Abstract

Engagement is a key factor in understanding people’s psychology and behaviours, and it remains understudied in children. The focus of this thesis is child engagement in the story-stems used in child Attachment evaluations such as the Manchester Child Attachment Task (MCAST). Because Attachment assessments are costly and time-consuming to conduct, automated assessments are being developed. These present story-stems cost-effectively on a laptop screen, digitalising the interaction between the child and the story without disrupting the storytelling. However, delivering such tests via computer relies on the child being engaged in the digital story-stem: if they are not engaged, the tests will not succeed and the collected data will be of poor quality, preventing successful detection of Attachment status. The aim of this research is therefore to investigate a range of aspects of child engagement, to understand how to engage children in story-stems and how to measure their engagement levels. This thesis focuses on measuring children’s levels of engagement in digital story-stems and, specifically, on understanding the effect of multimedia digital story-stems on those levels, in order to create a better and more engaging digital story-stem. Data sources used in this thesis include observation of each child’s facial behaviours and a questionnaire with a Smiley-o-meter scale. Measurement tools are developed and validated through analyses of facial data from children watching digital story-stems with different presentation and voice types. Results showed that facial data analysis, using eye-tracking measures and facial action unit (AU) recognition, can be used to measure children’s engagement levels in the context of viewing digital story-stems.
Using eye-tracking measures, engaged children showed longer fixations in both the mean and the sum of fixation durations, reflecting that a child was deeply engaged in the story-stems. Facial AU recognition outperformed eye-tracking measures in a binary classification discriminating engaged from disengaged children. The most frequently occurring facial action units in the engaged classes indicated signs of fear, suggesting that children felt anxiety and distress while watching the story-stems. These feelings of anxiety and distress show that children had a strong emotional engagement and could locate themselves in the story-stems, indicating that they were strongly engaged. A further contribution of this thesis was to investigate the best way of creating an engaging story-stem. Results showed that an animated video narrated by a female expressive voice was the most engaging. Compared to the live-action MCAST video, the data showed that children were more engaged in the animated videos. Voice gender and voice expressiveness, two factors in the quality of the storytelling voice, were evaluated, and both affected children’s engagement levels. The distribution of child engagement across different voice types was compared to find the best storytelling voice type for story-stem design. A female expressive voice performed better than other voice types at conveying the ‘distress’ in the story-stem and engaged children more in the story-stems. Both the quality of the storytelling voice used to narrate the story-stems and the use of animated video significantly affected children’s levels of engagement. Such digital story-stems make children more engaged in the digital MCAST test.
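The mean- and sum-of-fixation-duration measures described above can be sketched as a small aggregation step. This is a minimal illustration under assumed inputs: the list of per-fixation durations for one viewing session is hypothetical, and the example values are not data from the thesis.

```python
# Minimal sketch: deriving the two eye-tracking engagement measures
# (mean fixation duration and sum of fixation durations) from a list of
# per-fixation durations for one child's viewing session.
# Field layout and example values are illustrative assumptions.

def fixation_stats(durations_ms):
    """Return (mean, total) fixation duration in milliseconds."""
    if not durations_ms:
        return 0.0, 0.0
    total = float(sum(durations_ms))
    return total / len(durations_ms), total

# Hypothetical sessions: engaged viewing tends to show longer fixations.
engaged_session = [420, 610, 550, 480]
disengaged_session = [180, 150, 210, 160]

mean_e, sum_e = fixation_stats(engaged_session)      # 515.0, 2060.0
mean_d, sum_d = fixation_stats(disengaged_session)   # 175.0, 700.0
```

In a full pipeline these per-session statistics would be the features fed to the engaged/disengaged classification; here they only show the aggregation itself.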
