
    Tablets for two: how dual tablets can facilitate other-awareness and communication in learning disabled children with autism

    Learning-disabled children with autism (LDA) are impaired in other-awareness, joint attention and imitation, with a poor prognosis for developing language competence. However, better joint attention and imitation skills are predictors of increased language ability. Our study demonstrates that a collaborative activity delivered on a novel dual-tablet configuration (two wifi-linked tablets) facilitates active other-awareness, incorporating imitation and communicative behaviour, in 8 LDA boys with limited or no language, aged 5-12 years. LDA children did a picture-sequencing activity using single and linked dual tablets, partnered by an adult or by an LDA peer. Overall, the dual-tablet configuration generated significantly more active other-awareness than children sharing a single tablet. Active other-awareness was observed in LDA peer partnerships using dual tablets, behaviour that was absent when peer partnerships shared a single tablet. Dual tablets also facilitated more communicative behaviour in adult-child partnerships than single tablets. Hence, supporting collaborative activities in LDA children can facilitate other-awareness and communicative behaviour, and adult and peer partnerships make different but essential contributions to social-cognitive development through the collaborative process.

    A comparative developmental approach to multimodal communication in chimpanzees (Pan troglodytes)

    Studying how the communication of our closest relatives, the great apes, develops can inform our understanding of the socio-ecological drivers shaping language evolution. However, despite a now recognized ability of great apes to produce multimodal signal combinations, a key feature of human language, we lack knowledge about when or how this ability manifests throughout ontogeny. In this thesis, I aimed to address this issue by examining the development of multimodal signal combinations (also referred to as multimodal combinations) in chimpanzees. To establish an ontogenetic trajectory of combinatorial signalling, my first empirical study examined age- and context-related variation in the production of multimodal combinations in relation to unimodal signals. Results showed that older individuals used multimodal combinations at significantly higher frequencies than younger individuals, although unimodal signalling remained dominant. In addition, I found a strong influence of playful and aggressive contexts on multimodal communication, supporting previous suggestions that combinations function to disambiguate messages in high-stakes interactions. Subsequently, I looked at influences in the social environment which may contribute to patterns of communication development. I turned first to the mother-infant relationship which characterises early infancy before moving on to interactive behaviour in the wider social environment and the role of multimodal combinations in communicative interactions. Results indicate that mothers support the development of communicative signalling in their infants, transitioning from more action-based to signalling behaviours with infant age. Furthermore, mothers responded more to communicative signals than physical actions overall, which may help young chimpanzees develop effective communication skills.
Within the wider community, I found that interacting with a greater number of individuals positively influenced multimodal combination production. Moreover, in contrast to the literature surrounding unimodal signals, these multimodal signals appeared highly contextually specific. Finally, I found that within communicative interactions, young chimpanzees showed increasing awareness of recipient visual orientation with age, producing multimodal combinations most often when the holistic signal could be received. Moreover, multimodal combinations were more effective in soliciting recipient responses and satisfactory interactional outcomes irrespective of age. Overall, these findings highlight the relevance of studying ape communication development from a multimodal perspective and provide new evidence of developmental patterns that echo those seen in humans, while simultaneously highlighting important species differences. Multimodal communication development appears to be influenced by varying socio-environmental factors, including the context and patterns of communicative interaction.

    Saliency-based identification and recognition of pointed-at objects

    Abstract — When persons interact, non-verbal cues are used to direct the attention of persons towards objects of interest. Achieving joint attention this way is an important aspect of natural communication. Most importantly, it allows coupling of verbal descriptions with the visual appearance of objects, if the referred-to object is non-verbally indicated. In this contribution, we present a system that utilizes bottom-up saliency and pointing gestures to efficiently identify pointed-at objects. Furthermore, the system focuses the visual attention by steering a pan-tilt-zoom camera towards the object of interest and thus provides a suitable model view for SIFT-based recognition and learning. We demonstrate the practical applicability of the proposed system through experimental evaluation in different environments with multiple pointers and objects.
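The abstract does not spell out how saliency and the pointing gesture are combined; a minimal sketch of one plausible scoring rule (the geometry, weights, and Gaussian fall-off below are illustrative assumptions, not the paper's method) weights each candidate object's bottom-up saliency by how well it aligns with the hand-to-fingertip ray:

```python
import math

def identify_pointed_at_object(hand, fingertip, objects, sigma_deg=15.0):
    """Pick the object best explained by a pointing gesture.

    `objects` maps a label to ((x, y) centroid, saliency in [0, 1]).
    Each candidate is scored as saliency times a Gaussian of the angle
    between the pointing ray (hand -> fingertip) and the hand-to-object
    direction; the highest-scoring label wins.
    NOTE: toy 2-D image-plane model, an assumption for illustration.
    """
    ray_angle = math.atan2(fingertip[1] - hand[1], fingertip[0] - hand[0])
    best_label, best_score = None, -1.0
    for label, (centroid, saliency) in objects.items():
        obj_angle = math.atan2(centroid[1] - hand[1], centroid[0] - hand[0])
        # wrap the angular difference into [-180, 180] degrees
        diff = math.degrees((obj_angle - ray_angle + math.pi) % (2 * math.pi) - math.pi)
        score = saliency * math.exp(-0.5 * (diff / sigma_deg) ** 2)
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# Hypothetical scene: the book is more salient, but the cup lies on the ray.
objects = {
    "cup":  ((300, 120), 0.8),
    "book": ((120, 300), 0.9),
}
label, score = identify_pointed_at_object(hand=(100, 100), fingertip=(150, 105), objects=objects)
print(label)  # prints "cup"
```

In a full system the winning object's region would then drive the pan-tilt-zoom camera to obtain a close-up view for SIFT-based recognition; the sketch only covers the disambiguation step.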

    Vision systems with the human in the loop

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, and they can adapt to their environment and thus exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically based usability experiments is stressed.

    Design and semantics of form and movement (DeSForM 2006)

    Design and Semantics of Form and Movement (DeSForM) grew from applied research exploring emerging design methods and practices to support new generation product and interface design. The products and interfaces are concerned with: the context of ubiquitous computing and ambient technologies and the need for greater empathy in the pre-programmed behaviour of the ‘machines’ that populate our lives. Such explorative research in the CfDR has been led by Young, supported by Kyffin, Visiting Professor from Philips Design and sponsored by Philips Design over a period of four years (research funding £87k). DeSForM1 was the first of a series of three conferences that enable the presentation and debate of international work within this field:
    • 1st European conference on Design and Semantics of Form and Movement (DeSForM1), Baltic, Gateshead, 2005, Feijs L., Kyffin S. & Young R.A. eds.
    • 2nd European conference on Design and Semantics of Form and Movement (DeSForM2), Evoluon, Eindhoven, 2006, Feijs L., Kyffin S. & Young R.A. eds.
    • 3rd European conference on Design and Semantics of Form and Movement (DeSForM3), New Design School Building, Newcastle, 2007, Feijs L., Kyffin S. & Young R.A. eds.
    Philips sponsorship of practice-based enquiry led to research by three teams of research students over three years and on-going sponsorship of research through the Northumbria University Design and Innovation Laboratory (nuDIL). Young has been invited on the steering panel of the UK Thinking Digital Conference concerning the latest developments in digital and media technologies. Informed by this research is the work of PhD student Yukie Nakano, who examines new technologies in relation to eco-design textiles.

    Social gaze and symbolic skills in typically developing infants and children with autism

    The aim of this thesis was to investigate, through two observational studies, the relation among social gaze, play and language in 27 typically developing infants (study 1) and 18 young children with autism (study 2). The child's spontaneous play behaviour and spontaneous social gaze behaviours were assessed in a five-minute free-play observation session. Measures of children's language were obtained using the MacArthur Communicative Development Inventory, and measures of overall mental ability were obtained using the Bayley Mental Scale of Infant Development-II. Two hypotheses were tested. The first concerned the relation between play and language. The hypothesis was that symbolic play and language reflect the emergence of a common underlying symbolic ability in 18- to 24-month-old infants. Results did not show a link between these twin symbolic abilities, supporting the view that later in development word-learning diverges from other forms of symbol development. The second concerned the relation between play and social gaze. The hypothesis was that social gaze is important for the emergence of symbolic development in typically developing infants and preschool children with autism and developmental delays. Results supported the view that social interaction is important for symbolic and pre-symbolic skills but suggested that the use of social gaze may have a general rather than a specific role in assisting symbolic activity. The implications of these findings for developmental accounts of typically developing infants and children with autism are discussed.

    The role of phonology in visual word recognition: evidence from Chinese

    Posters - Letter/Word Processing V: abstract no. 5024
    The hypothesis of bidirectional coupling of orthography and phonology predicts that phonology plays a role in visual word recognition, as observed in the effects of feedforward and feedback spelling-to-sound consistency on lexical decision. However, because orthography and phonology are closely related in alphabetic languages (homophones in alphabetic languages are usually orthographically similar), it is difficult to exclude an influence of orthography on phonological effects in visual word recognition. Chinese languages contain many written homophones that are orthographically dissimilar, allowing a test of the claim that phonological effects can be independent of orthographic similarity. We report a study of visual word recognition in Chinese based on a mega-analysis of lexical decision performance with 500 characters. The results from multiple regression analyses, after controlling for orthographic frequency, stroke number, and radical frequency, showed main effects of feedforward and feedback consistency, as well as interactions between these variables and phonological frequency and number of homophones. Implications of these results for resonance models of visual word recognition are discussed.
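The analysis described is multiple regression with control predictors. As an illustrative sketch only (the predictors, coefficients, and toy data below are invented, not the study's 500-character dataset), ordinary least squares with an intercept and two predictors can be fit from scratch via the normal equations:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination with partial pivoting.
    Rows of X are observations; an intercept column of 1s is prepended."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    a = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = a[r][col] / a[col][col]
            for c in range(col, k):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                           # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(a[r][c] * beta[c] for c in range(r + 1, k))) / a[r][r]
    return beta

# Toy data: response = 2 + 0.5*consistency - 0.3*strokes (exact, no noise),
# so the fit recovers the consistency effect while "controlling for" strokes.
X = [(1, 2), (2, 1), (3, 4), (4, 3), (5, 5)]
y = [2 + 0.5 * x1 - 0.3 * x2 for x1, x2 in X]
intercept, b_consistency, b_strokes = ols(X, y)
print(round(b_consistency, 3))  # prints 0.5
```

The interactions the abstract reports would enter such a model as additional product columns (e.g. consistency × phonological frequency) appended to each row of X.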

    Gesture and Speech in Interaction - 4th edition (GESPIN 4)

    The fourth edition of Gesture and Speech in Interaction (GESPIN) was held in Nantes, France. With more than 40 papers, these proceedings show just what a flourishing field of enquiry gesture studies continues to be. The keynote speeches of the conference addressed three different aspects of multimodal interaction: gesture and grammar, gesture acquisition, and gesture and social interaction. In a talk entitled Qualities of event construal in speech and gesture: Aspect and tense, Alan Cienki presented an ongoing research project on narratives in French, German and Russian, a project that focuses especially on the verbal and gestural expression of grammatical tense and aspect in narratives in the three languages. Jean-Marc Colletta's talk, entitled Gesture and Language Development: towards a unified theoretical framework, described the joint acquisition and development of speech and early conventional and representational gestures. In Grammar, deixis, and multimodality between code-manifestation and code-integration or why Kendon's Continuum should be transformed into a gestural circle, Ellen Fricke proposed a revisited grammar of noun phrases that integrates gestures as part of the semiotic and typological codes of individual languages. From a pragmatic and cognitive perspective, Judith Holler explored the use of gaze and hand gestures as means of organizing turns at talk as well as establishing common ground in a presentation entitled On the pragmatics of multi-modal face-to-face communication: Gesture, speech and gaze in the coordination of mental states and social interaction.
Among the talks and posters presented at the conference, the vast majority of topics related, quite naturally, to gesture and speech in interaction - understood both in terms of mapping of units in different semiotic modes and of the use of gesture and speech in social interaction. Several presentations explored the effects of impairments (such as diseases or the natural ageing process) on gesture and speech. The communicative relevance of gesture and speech and audience design in natural interactions, as well as in more controlled settings like television debates and reports, was another topic addressed during the conference. Some participants also presented research on first and second language learning, while others discussed the relationship between gesture and intonation. While most participants presented research on gesture and speech from an observer's perspective, be it in semiotics or pragmatics, some nevertheless focused on another important aspect: the cognitive processes involved in language production and perception. Last but not least, participants also presented talks and posters on the computational analysis of gestures, whether involving external devices (e.g. mocap, Kinect) or concerning the use of specially designed computer software for the post-treatment of gestural data. Importantly, new links were made between semiotics and mocap data.

    Multimodal communication development in semiwild chimpanzees

    Human language is characterized by the integration of multiple signal modalities, including speech, facial and gestural signals. While language likely has deep evolutionary roots that are shared with some of our closest living relatives, studies of great ape communication have largely focused on each modality separately, thus hindering insights into the origins of its multimodal nature. Studying when multimodal signals emerge during great ape ontogeny can inform about both the proximate and ultimate mechanisms underlying their communication systems, shedding light on potential evolutionary continuity between humans and other apes. To this end, the current study investigated developmental patterns of multimodal signal production by 28 semiwild chimpanzees, Pan troglodytes, ranging in age from infancy to early adolescence. We examined the production of facial expressions, gestures and vocalizations across a range of behavioural contexts, both when produced separately and as part of multimodal signal combinations (henceforth multimodal). Overall, we found that while unimodal signals were produced consistently more often than multimodal combinations across all ages and contexts, the frequency of multimodal combinations increased significantly in older individuals, most markedly within the aggression and play contexts, where the costs of signalling ambiguity may be higher. Furthermore, older individuals were more likely to produce a multimodal than a unimodal signal and, again, especially in aggressive contexts. Variation in the production of individual signal modalities across ages and contexts is also presented and discussed. Overall, evidence that multimodality increases with age in chimpanzees is consistent with patterns of developing communicative complexity in human infancy, revealing apparent evolutionary continuity.
Findings from this study contribute novel insights into the evolution and development of multimodality and highlight the importance of adopting a multimodal approach in the comparative study of primate communication.