
    Gaze aversion during social style interactions in autism spectrum disorder and Williams syndrome

    During face-to-face interactions, typically developing individuals use gaze aversion (GA), looking away from their questioner, when thinking. GA is also used when individuals with autism spectrum disorder (ASD) and Williams syndrome (WS) are thinking during question-answer interactions. We investigated GA strategies during face-to-face social-style interactions with familiar and unfamiliar interlocutors. Participants with WS and ASD used overall typical amounts and patterns of GA, with all participants looking away most while thinking and remembering (in contrast to listening and speaking). However, there were two specific disorder-related differences: participants with WS looked away less when thinking while interacting with unfamiliar interlocutors; and whereas in typical development and WS familiarity was associated with reduced gaze aversion, no such difference was evident in ASD. The results inform typical and atypical social and cognitive phenotypes. We conclude that gaze aversion serves some common functions in typical and atypical development in terms of managing the cognitive and social load of interactions. There are some specific idiosyncrasies associated with managing familiarity in ASD and WS: elevated sociability with unfamiliar others in WS, and a lack of differentiation to interlocutor familiarity in ASD. Regardless of the familiarity of the interlocutor, GA is associated with thinking for typically developing as well as atypically developing groups. Social skills training must take this into account.

    Don't look now... I'm trying to think

    What was the name of your first headteacher? Stop and think for a while... did you just look to the heavens for the answer? During difficult cognitive activity, for example remembering information, thinking of an answer to a question, planning what we are going to say, and speaking, we often close our eyes, look up at the sky, or look away from the person we are in conversation with. Adults are very good at switching off from environmental stimulation (both live faces and other sorts of visual display) in order to concentrate better. Until recently we knew very little about whether children use gaze aversion in a similar way. This is a potentially important omission, since the efficiency with which children process information influences many aspects of their development, including school progress. In this article I'll describe what our research team at Stirling has been doing to investigate children's gaze aversion, including past and current work. Children's patterns of gaze promise to yield important cues to their thinking, concentration and mental processing that will be useful to parents, teachers, psychologists and anyone engaged in assessing children's knowledge and development.

    A single case study of a family-centred intervention with a young girl with cerebral palsy who is a multimodal communicator

    Background - This paper describes the impact of a family-centred intervention that used video to enhance communication in a young girl with cerebral palsy. This single case study describes how the video-based intervention worked in the context of multimodal communication, which included use of a high-tech augmentative and alternative communication (AAC) device. The paper includes the family's perspective of the video intervention, and they describe its impact on their family. Methods - This single case study was based on the premise that the video interaction guidance intervention would increase attentiveness between participants during communication. It tests the hypothesis that eye gaze is a fundamental prerequisite for all of the child's communicative initiatives, regardless of modality. Multimodality is defined as the range of communicative behaviours used by the child, coded as AAC communication, vocalizations (intelligible and unintelligible), sign communication, nodding and pointing. Change was analysed over time with multiple testing both pre- and post-intervention. Data were analysed within INTERACT, computer software for analysing behaviourally observed data. Behaviours were analysed for frequency and duration, contingency and co-occurrence. Results - Results indicated increased duration of the mother's and the girl's eye gaze, increased frequency and duration of AAC communication by the girl, and significant change in the frequency [χ² (5, n = 1) = 13.25, P < 0.05] and duration [χ² (5, n = 1) = 12.57, P < 0.05] of the girl's multimodal communicative behaviours. Contingency and co-occurrence analysis indicated that the mother's eye gaze followed by AAC communication was the most prominent change between the pre- and post-intervention assessments. Conclusions - There was a trend for increased eye gaze in both the mother and the girl, and increased AAC communication by the girl, following the video intervention. The family's perspective concurs with the results.
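
    The χ²(5) statistics reported above are consistent with comparing observed frequencies across the six coded behaviour categories (AAC, intelligible and unintelligible vocalizations, sign, nodding and pointing). As a minimal sketch of how such a test can be run, the following Python snippet uses hypothetical counts; the numbers are assumptions for illustration, not the study's data:

        # Illustrative sketch only: hypothetical counts, not the study's data.
        # A chi-square test with df = 5 compares observed post-intervention
        # frequencies of six coded behaviour categories against the
        # pre-intervention distribution.
        from scipy.stats import chisquare

        behaviours = ["AAC", "vocal_intelligible", "vocal_unintelligible",
                      "sign", "nodding", "pointing"]

        pre_counts = [12, 30, 25, 8, 15, 10]   # hypothetical pre-intervention frequencies
        post_counts = [28, 34, 18, 9, 22, 14]  # hypothetical post-intervention frequencies

        # Scale the pre-intervention distribution so the expected counts sum
        # to the post-intervention total, then test the post counts against it.
        total_post = sum(post_counts)
        expected = [c / sum(pre_counts) * total_post for c in pre_counts]

        stat, p = chisquare(f_obs=post_counts, f_exp=expected)
        print(f"chi2({len(behaviours) - 1}) = {stat:.2f}, p = {p:.3f}")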

    Helping children think: Gaze aversion and teaching

    Looking away from an interlocutor's face during demanding cognitive activity can help adults answer challenging arithmetic and verbal-reasoning questions (Glenberg, Schroeder, & Robertson, 1998). However, such 'gaze aversion' (GA) is poorly applied by 5-year-old school children (Doherty-Sneddon, Bruce, Bonner, Longbotham, & Doyle, 2002). In Experiment 1 we trained ten 5-year-old children to use GA while thinking about answers to questions. This trained group performed significantly better on challenging questions compared with 10 controls given no GA training. In Experiment 2 we found significant and monotonic age-related increments in spontaneous use of GA across three cohorts of ten 5-year-old school children (mean ages: 5;02, 5;06 and 5;08). Teaching and encouraging GA during challenging cognitive activity promises to be invaluable in promoting learning, particularly during the early primary years.

    Explorations in engagement for humans and robots

    This paper explores the concept of engagement, the process by which individuals in an interaction start, maintain and end their perceived connection to one another. The paper reports on one aspect of engagement among human interactors: the effect of tracking faces during an interaction. It also describes the architecture of a robot that can participate in conversational, collaborative interactions with engagement gestures. Finally, the paper reports findings from experiments with human participants who interacted with a robot that either performed or did not perform engagement gestures. Results of the human-robot studies indicate that people become engaged with robots: they direct their attention to the robot more often in interactions where engagement gestures are present, and they find interactions more appropriate when engagement gestures are present than when they are not.
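
    One core engagement gesture the abstract mentions is directing gaze toward a tracked face. As a purely illustrative toy sketch (not the paper's architecture; the types and units here are assumptions), the behaviour reduces to choosing a gaze target from the face tracker's output:

        # Toy sketch, not the paper's architecture: orient the robot's gaze
        # toward a tracked face when one is present, else hold a neutral pose.
        from dataclasses import dataclass
        from typing import Optional, Tuple

        @dataclass
        class Face:
            x: float  # horizontal position in the camera frame (hypothetical units)
            y: float  # vertical position in the camera frame

        def choose_gaze_target(tracked_face: Optional[Face]) -> Tuple[float, float]:
            """Return a (pan, tilt) gaze target: the face if tracked, else neutral."""
            if tracked_face is not None:
                return (tracked_face.x, tracked_face.y)
            return (0.0, 0.0)  # neutral gaze when no face is detected

        # Example: a face appears, the robot orients toward it; then it is lost.
        print(choose_gaze_target(Face(x=0.4, y=-0.1)))  # -> (0.4, -0.1)
        print(choose_gaze_target(None))                 # -> (0.0, 0.0)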

    Speaking to twin children: evidence against the "impoverishment" thesis

    It is often claimed that parents’ talk to twins is less rich than talk to singletons and that this delays their language development. This case study suggests that talk to twins need not be impoverished. We identify highly sophisticated ways in which a mother responds to her 4-year-old twin children, both individually and jointly, as a way of ensuring an inclusive interactional environment. She uses gaze to demonstrate concurrent recipiency in response to simultaneous competition for attention from both children, and we see how the twins constantly monitor the ongoing interaction in order to appropriately position their own contributions to talk. In conclusion, we argue for the need to take twins’ interactional abilities into account when drawing linguistic comparisons between twins and singletons. Data are in Australian English.

    Atypical audiovisual speech integration in infants at risk for autism

    The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, one articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously and the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ − audio /ba/ and the congruent visual /ba/ − audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ − audio /ga/ display than in the congruent visual /ga/ − audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated-measures ANOVA, displays × fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated-measures ANOVA, displays × conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated-measures ANOVA, displays × conditions × low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.
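
    The F(1,16) interaction reported above is what a 2 × 2 repeated-measures ANOVA yields with two two-level within-subject factors and 17 participants. As a minimal sketch of that kind of analysis (not the study's code; the factor names, group size and randomly generated looking times are assumptions for illustration):

        # Minimal sketch of a 2x2 repeated-measures ANOVA on looking times,
        # with 'display' (congruent vs. incongruent) and 'condition'
        # (fusion vs. mismatch) as within-subject factors. Placeholder data.
        import numpy as np
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        rng = np.random.default_rng(0)
        rows = []
        for infant in range(17):  # 17 infants give an error df of 16
            for display in ("congruent", "incongruent"):
                for condition in ("fusion", "mismatch"):
                    rows.append({
                        "infant": infant,
                        "display": display,
                        "condition": condition,
                        "looking_time": rng.normal(loc=2.0, scale=0.5),
                    })
        df = pd.DataFrame(rows)

        # The display x condition interaction is the term the abstract reports.
        result = AnovaRM(df, depvar="looking_time", subject="infant",
                         within=["display", "condition"]).fit()
        print(result)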

    Cognitive demands of face monitoring: Evidence for visuospatial overload

    Young children perform difficult communication tasks better face to face than when they cannot see one another (e.g., Doherty-Sneddon & Kent, 1996). However, in recent studies it was found that children aged 6 and 10 years, describing abstract shapes, showed evidence of face-to-face interference rather than facilitation. For some communication tasks, access to visual signals (such as facial expression and eye gaze) may hinder rather than help children’s communication. In new research we have pursued this interference effect. Five studies are described with adults and 10- and 6-year-old participants. It was found that looking at a face interfered with children’s ability to listen to descriptions of abstract shapes. Children also performed worse on visuospatial memory tasks when they looked at someone’s face prior to responding than when they looked at a visuospatial pattern or at the floor. It was concluded that performance on certain tasks was hindered by monitoring another person’s face. It is suggested that processing of visual communication signals shares certain processing resources with the processing of other visuospatial information.

    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, be able to attend to its interaction partner while it is speaking, and modify its communicative behavior on the fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that have been released for public access.
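
    Scheduling and interrupting multimodal behavior, as described above, essentially means that an ongoing behavior must be cancellable mid-execution when the agent perceives something new. The following is a toy sketch of that idea only (not the eNTERFACE’10 project code; the utterance, timings and reaction are invented for illustration), using a cancellable task to stand in for a speaking behavior:

        # Toy sketch: a speaking behavior runs as a cancellable task, and a
        # detected listener response interrupts it so the agent can react.
        import asyncio

        async def speak(words):
            for word in words:
                print(f"agent says: {word}")
                await asyncio.sleep(0.3)  # stands in for speech-synthesis time

        async def main():
            utterance = "so the main point is that engagement matters".split()
            speaking = asyncio.create_task(speak(utterance))

            await asyncio.sleep(1.0)  # a listener response is detected here
            speaking.cancel()         # interrupt the ongoing behavior
            try:
                await speaking
            except asyncio.CancelledError:
                print("agent pauses and acknowledges the listener response")

        asyncio.run(main())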