
    Children’s Reading of Sublexical Units in Years Three to Five: A Combined Analysis of Eye-Movements and Voice Recording

    Purpose: During reading development, children progress from making grapheme–phoneme connections to making grapho-syllabic connections before whole-word connections (Ehri, 2005a). More is known about the development of grapheme–phoneme connections than about grapho-syllabic connections, so we explored the trajectory of syllable use during oral reading in developing readers of English. Method: Fifty-one English-speaking children (mean age: 8.9 years, 55% female, 88% monolingual) in year groups three, four, and five read aloud sentences containing an embedded six-letter target word of either one or two syllables while their eye movements and voices were recorded. Results: Children in year five had shorter gaze durations, shorter articulation durations, and a larger spatial eye-voice span (EVS) than children in year four; children in years three and four did not differ significantly on these measures. A syllable-number effect was found for gaze duration but not for articulation duration or spatial EVS. Interestingly, one-syllable words took longer to process than two-syllable words, suggesting that more syllables do not always signify greater processing difficulty. Conclusion: Overall, children are sensitive to sublexical reading units; however, given the limitations of the sample and stimuli, these findings should be interpreted with caution and investigated further.
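    (Illustrative note: the abstract does not describe how spatial EVS was computed. One common operationalisation is the distance, in words, between the word currently fixated and the word currently being articulated; a minimal, hypothetical Python sketch of that measure, using made-up toy data, is given below. It is not the study's actual analysis.)

    # Hypothetical sketch of a spatial eye-voice span (EVS) measure: the number
    # of words separating the currently fixated word from the word currently
    # being spoken, sampled at the onset of each spoken word.
    from bisect import bisect_right

    def spatial_evs(fixations, speech_onsets):
        """fixations: time-ordered (time_ms, word_index) fixation onsets.
        speech_onsets: time-ordered (time_ms, word_index) articulation onsets.
        Returns the mean eye-voice span in words across spoken-word onsets."""
        fix_times = [t for t, _ in fixations]
        spans = []
        for speech_time, spoken_word in speech_onsets:
            i = bisect_right(fix_times, speech_time) - 1
            if i < 0:
                continue  # no fixation has occurred yet at this time point
            fixated_word = fixations[i][1]
            spans.append(fixated_word - spoken_word)
        return sum(spans) / len(spans) if spans else float("nan")

    # Toy usage: the eyes run roughly one word ahead of the voice.
    fix = [(0, 0), (200, 1), (450, 2), (700, 3)]
    voice = [(250, 0), (600, 1), (950, 2)]
    print(spatial_evs(fix, voice))  # -> 1.0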

    A usage-based approach to language processing and intervention in aphasia

    Non-fluent aphasia (NFA) is characterized by grammatically impoverished language output. Yet there is evidence that a restricted set of multi-word utterances (e.g., “don’t know”) is retained. Analyses of connected speech often dismiss these as stereotypical; however, these high-frequency phrases are an interactional resource in both neurotypical and aphasic discourse. One approach that can account for these forms is usage-based grammar, in which linguistic knowledge is thought of as an inventory of constructions, i.e., form–meaning pairings such as familiar collocations (“wait a minute”) and semi-fixed phrases (“I want X”). This approach is used in language development and second language learning research, but its application to aphasiology is currently limited. This thesis applied a usage-based perspective to language processing and intervention in aphasia. Study 1 investigated the use of word combinations in the conversations of nine participants with Broca’s aphasia (PWA) and their conversation partners (CPs), combining analysis of form (a frequency-based approach) and function (an interactional linguistics approach). In Study 2, an online word-monitoring task was used to examine whether individuals with aphasia and neurotypical controls showed sensitivity to collocation strength (the degree of association between the units of a word combination). Finally, the impact of a novel intervention involving loosening of slots in semi-fixed phrases was piloted with five participants with NFA. Study 1 revealed that PWA used more strongly collocated word combinations than CPs, and that familiar collocations are a resource adapted to the constraints of aphasia. Findings from Study 2 indicated that words were recognised more rapidly when preceded by strongly collocated words in both neurotypical and aphasic listeners, although the effect was stronger for controls. Study 3 resulted in improved connected speech for some participants; future research is needed to refine outcome measures for connected-speech interventions. This thesis suggests that usage-based grammar has the potential to explain grammatical behaviour in aphasia and to inform interventions.
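    (Illustrative note: the abstract defines collocation strength only as the degree of association between the units of a word combination and does not name the measure used. A minimal, hypothetical Python sketch of one widely used association score, pointwise mutual information over bigram counts, computed on a toy corpus, is given below.)

    # Hypothetical sketch of a collocation-strength score (pointwise mutual
    # information over adjacent word pairs); the corpus and counts are toy
    # assumptions, not materials from the thesis.
    import math
    from collections import Counter

    corpus = "wait a minute I do not know wait a minute I want tea".split()
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    n_uni, n_bi = len(corpus), len(corpus) - 1

    def pmi(w1, w2):
        """PMI(w1, w2) = log2(P(w1, w2) / (P(w1) * P(w2)))."""
        p_joint = bigrams[(w1, w2)] / n_bi
        if p_joint == 0:
            return float("-inf")  # the pair never occurs together
        p1, p2 = unigrams[w1] / n_uni, unigrams[w2] / n_uni
        return math.log2(p_joint / (p1 * p2))

    # Association scores for a few adjacent pairs from the toy corpus.
    for pair in [("wait", "a"), ("a", "minute"), ("I", "want")]:
        print(pair, round(pmi(*pair), 2))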

    Language control in bilingual production: Insights from error rate and error type in sentence production

    First published online: 16 October 2020. Most research showing that cognates are named faster than non-cognates has focused on isolated word production, which might not realistically reflect the cognitive demands of sentence production. Here, we explored whether cognates elicit interference by examining error rates during sentence production, and how this interference is resolved by language control mechanisms. Twenty highly proficient Spanish–English bilinguals described visual scenes with sentence structures of the form ‘NP1-verb-NP2’ (NP = noun phrase). Half of the nouns and half of the verbs were cognates, and two manipulations created high control demands. Both situations that demanded greater inhibitory control pushed the cognate effect from facilitation towards interference. These findings suggest that cognates, like phonologically similar words within a language, can induce not only facilitation but also robust interference. We thank Michael Freund and Nicholas McCloskey for their help with data collection. This work was supported in part by the Therapeutic Cognitive Neuroscience Fund endowed to the Cognitive Neurology division of the Neurology Department at Johns Hopkins University. C.D. Martin was supported by the Spanish Ministry of Economy and Competitiveness (SEV-2015-490; PSI2017-82941-P; Europa-Excelencia ERC2018-092833), the Basque Government (PIBA18-29), and the European Research Council (ERC-2018-COG-819093). N. Nozari was also supported by an NSF grant (NSF BCS-1949631).

    The effect of word structure on the processing of Chinese two-character compound words and its acquisition in Hong Kong school-aged children

    A dissertation submitted in partial fulfilment of the requirements for the Bachelor of Science (Speech and Hearing Sciences), The University of Hong Kong, December 31, 2004. Thesis (B.Sc.), University of Hong Kong, 2004. Also available in print.

    Towards a Vygotskyan cognitive robotics: the role of language as a cognitive tool

    Cognitive Robotics can be defined as the study of cognitive phenomena through their modeling in physical artifacts such as robots. This is a lively and fascinating field which has already made fundamental contributions to our understanding of natural cognition. Nonetheless, robotics has to date addressed mainly basic, low-level cognitive phenomena such as sensory-motor coordination, perception, and navigation, and it is not clear how the current approach might scale up to explain high-level human cognition. In this paper we argue that a promising way to do so is to merge current ideas and methods of 'embodied cognition' with the Russian tradition of theoretical psychology, which views language not only as a communication system but also as a cognitive tool, that is, by developing a Vygotskyan Cognitive Robotics. We substantiate this idea by discussing several domains in which language can improve basic cognitive abilities and permit the development of high-level cognition: learning, categorization, abstraction, memory, voluntary control, and mental life.

    Connection between movements of mouth and hand: Perspectives on development and evolution of speech

    Mounting evidence shows interaction between manipulative hand movements and movements of the tongue, lips, and mouth in both vocal and non-vocal contexts. The current article reviews this evidence and discusses its contribution to perspectives on the development and evolution of speech. In particular, the article aims to present novel insight into how the processes controlling the two primary grasp components of manipulative hand movements, the precision grip and the power grip, might be systematically connected to motor processes involved in producing certain articulatory gestures. This view assumes that, owing to these motor overlaps between grasping and articulation, the development of these grip types in infancy can facilitate the development of specific articulatory gestures. In addition, the hand-mouth connections might even have boosted the evolution of some articulatory gestures. This account also proposes that some semantic sound-symbolic pairings between a speech sound and a referent concept might be partially based on these hand-mouth interactions.

    Effects of labelling on object perception and categorisation in infants

    How do labels impact object perception and enhance categorisation? This question has been the focus of substantial theoretical debate, particularly in the developmental literature, with conflicting results. Specifically, whether labels for objects act as additional perceptual features or instead as referential pointers to category concepts has been the subject of intense debate. In this thesis, we attempted to shed new light on this question, combining empirical results from both infants and adults with neurocomputational models. First, we developed a dual-memory neurocomputational model of long-term learning inspired by Westermann and Mareschal's (2014) model, to test the predictions of the two main theories of labelling and categorisation on existing infant data and to generate predictions for a follow-up study. Our modelling work suggested that, for the empirical designs considered and the age groups tested, labels were processed as object features rather than having a more referential role. We then focused on explicitly testing potential attentional effects of auditory labels during categorisation in an empirical study. More precisely, we studied the interaction between feature salience, feature diagnosticity, and auditory labels in a categorisation task. Surprisingly, we found that 15-month-old infants and adults could learn labelled categories in which the salient feature (the head of line-drawn novel animals) was non-diagnostic of category membership but the non-salient feature (the tail) was diagnostic, without adopting a different pattern of looking compared with participants in a control group. Although our data did not provide clear evidence for a true null effect, this finding was again more compatible with the theory that labels act as features rather than as referents. It also led us to reconsider the use of eye movements and looking times as a proxy for learning, as participants appeared to learn more without looking more. Given our empirical results on the salience and diagnosticity of features, and given the methodological differences in how the categorisation literature handles these factors, we developed a simple auto-encoder model to further study the impact of salience differences between features in a categorisation task, with or without a label. Our simulations suggested that larger disparities in salience between the features of an object can result in differences in learning speed and in the compactness of categories in internal representations, indicating that future empirical studies should consider feature salience in their design. Overall, then, this thesis provides some evidence in favour of the labels-as-features theory through the use of empirical eye-tracking data from infants and adults and neurocomputational modelling. It further raises new questions about the importance of feature salience in categorisation tasks and about the interpretation of eye-movement and looking-time data in general.
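    (Illustrative note: the abstract mentions a simple auto-encoder model used to study how salience disparities between features affect category learning, but gives no architectural details. A minimal, hypothetical NumPy sketch is given below; it assumes salience can be approximated as input magnitude and is not the thesis model.)

    # Hypothetical one-hidden-layer auto-encoder: the "salient" feature block
    # (e.g., the head of a line-drawn animal) has larger input values than the
    # "non-salient" block (e.g., the tail); we compare reconstruction error
    # across the two blocks after training.
    import numpy as np

    rng = np.random.default_rng(0)
    salient = 3.0 * rng.normal(size=(200, 4))   # high-salience features
    faint = 0.5 * rng.normal(size=(200, 4))     # low-salience features
    X = np.hstack([salient, faint])

    n_in, n_hidden, lr = X.shape[1], 3, 0.01
    W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))

    for epoch in range(200):
        H = np.tanh(X @ W1)       # encode
        X_hat = H @ W2            # decode (linear output)
        err = X_hat - X
        W2 -= lr * H.T @ err / len(X)                             # gradient step
        W1 -= lr * X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)   # backprop step

    recon = np.mean((X_hat - X) ** 2, axis=0)
    print("salient-feature reconstruction error:", recon[:4].mean())
    print("non-salient-feature reconstruction error:", recon[4:].mean())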

    Native and Non-native Idiom Processing: Same Difference

    This dissertation examines idiom processing in native (L1) and non-native (L2) speakers. The duality of meaning represented by idioms (e.g., the idiom piece of cake figuratively means very easy but literally describes dessert) poses issues for theories of language processing and composition. While L1 speakers can easily comprehend idioms, L2 speakers have more difficulty doing so. However, it is still unclear whether these difficulties are evidence of differential processing in L1 and L2 listeners. This work examines idiom processing in both speaker groups via a collection of experimental studies in order to answer the overarching question: how do L1 and L2 idiom processing compare? In doing so, a number of issues are considered, such as the timeline of meaning activation for figurative (idiomatic) meaning as well as literal constituent and phrasal meaning; the flexibility of this process during comprehension; the impact of idiomatic properties on processing; recognition memory for figurative and literal phrases after learning; and brain activation during comprehension. The work includes a database of American English idioms with L1 and L2 (German L1) norming values, as well as experiments with L1 and L2 speakers using cross-modal priming, eye-tracking, self-paced reading, training and recognition, and fMRI. The evidence presented suggests that L1 and L2 idiom processing differ in ways attributable to general L1 and L2 differences; however, a single idiom-processing mechanism that considers both figurative and literal meaning underlies comprehension in both speaker groups.