
    Orthographic Contamination of Broca’s Area

    Strong evidence has accumulated in recent years suggesting that orthography plays a role in spoken language processing. It is still unclear, however, whether the influence of orthography on spoken language results from a co-activation of posterior brain areas dedicated to low-level orthographic processing or whether it results from orthographic restructuring of phonological representations located in the anterior perisylvian speech network itself. To test these hypotheses, we ran an fMRI study that tapped orthographic processing in the visual and auditory modalities. As a marker for orthographic processing, we used the orthographic decision task in the visual modality and the orthographic consistency effect in the auditory modality. Results showed no specific orthographic activation for either the visual or the auditory modality in left posterior occipito-temporal brain areas that are thought to host the visual word form system. In contrast, specific orthographic activation was found for both the visual and auditory modalities at anterior sites belonging to the perisylvian region: the left dorsal-anterior insula and the left inferior frontal gyrus. These results favor the restructuring hypothesis, according to which learning to read acts like a "virus" that permanently contaminates the spoken language system.
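
    To make the auditory marker concrete: a spoken word counts as orthographically inconsistent when its phonology maps onto more than one plausible spelling. The sketch below is a minimal illustration of that classification, assuming a toy lexicon with invented entries; it is not the study's stimulus-selection procedure.

```python
# Minimal sketch (toy lexicon, not the study's stimuli): a word is
# "inconsistent" if its phonological rime maps onto more than one
# spelling attested in the lexicon.
from collections import defaultdict

# (phonological rime, observed spelling) -- illustrative English examples.
LEXICON = [
    ("/-ip/", "eap"),   # as in "heap"
    ("/-ip/", "eep"),   # as in "deep" -> two spellings: inconsistent
    ("/-ʌk/", "uck"),   # as in "duck" -> unique spelling: consistent
]

def spellings_per_rime(lexicon):
    """Map each phonological rime to the set of spellings attested for it."""
    rimes = defaultdict(set)
    for rime, spelling in lexicon:
        rimes[rime].add(spelling)
    return rimes

def is_inconsistent(rime, rimes):
    """A rime with multiple candidate spellings is orthographically inconsistent."""
    return len(rimes.get(rime, set())) > 1

rimes = spellings_per_rime(LEXICON)
print(is_inconsistent("/-ip/", rimes))   # True: /-ip/ -> {"eap", "eep"}
print(is_inconsistent("/-ʌk/", rimes))   # False: unique spelling
```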

    Complementary neural representations for faces and words: A computational exploration


    Simple and complex tool use improve language syntax comprehension

    Do language and motor skills share cognitive processes? It has recently been suggested that a form of hierarchical nesting processing may operate in both skills: motor syntax in manual actions and language syntax in sentence comprehension. Language syntax characterizes the rules governing sentence structures. A sentence is a linear sequence of elements (words) that are dependent on one another across several levels of nesting. Syntax thus makes it possible to form more or less complex and recursive language structures and to generate an infinite diversity of linguistic expressions from a finite number of elements. Goal-directed actions have more recently been described in comparable terms: in an action such as serving coffee, or a simpler action such as moving an object, each element of the operative chain is carried out in a sequential order that similarly includes dependencies between elements and hierarchical nesting. In a previous work relying on the principles of learning transfer, we showed that adult participants trained to insert pegs into a board using a tool significantly improved their performance in a subsequent language syntax comprehension task. In contrast, the control group trained on the same task with the bare hand showed no improvement in the language task. We concluded that using a tool significantly increased the motor syntactic complexity of the operative chain, which resulted in a transfer of syntactic learning from action to language. Here we aimed to investigate further the transfer of syntactic learning by manipulating the hierarchical complexity of the action and the presence/absence of a tool. As in the previous study, adult participants performed a language syntax comprehension task before and after a motor training session. Two groups (n=20 each) were trained with blocks of embedded motor structures (each trial requiring one to three movements), the first group using their hand alone (CH: complex hand), the second group using a tool (CT: complex tool). Two other groups (n=20 each) were trained with blocks of simple non-embedded actions (only one peg movement), either with their hand (SH: simple hand) or with a tool (ST: simple tool). Our results show that following tool-use training, participants improved their overall performance in the language syntax comprehension task, regardless of the complexity of the trained action. In addition, we observed (1) a better comprehension of complex syntactic constructions in the CT group; (2) a positive correlation between motor and language performance in the CT group, suggesting that language syntax abilities can predict motor proficiency for complex tool use. Importantly, we did not observe an effect for either the SH or the CH group, meaning that motor embedding complexity alone is not sufficient to produce learning transfer. To conclude, our results are in line with the idea that motor skills and language share common domain-general syntactic processes. Moreover, the learning transfer due to tool use can benefit from the addition of embedded actions.
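
    As a rough illustration of the four-group pre/post design, the sketch below tests each group's training gain on simulated scores. The group means and gains are invented to mirror the reported pattern (tool-use groups improve, hand-only groups do not); this is not the authors' data or analysis pipeline.

```python
# Minimal sketch (simulated data): pre/post gains on the syntax
# comprehension task for the four training groups (CT, ST, CH, SH; n = 20 each).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20

def simulate(pre_mean, gain):
    """Simulated accuracy (proportion correct) before and after training."""
    pre = rng.normal(pre_mean, 0.05, n)
    post = pre + rng.normal(gain, 0.05, n)
    return pre, post

# Hypothetical effect pattern matching the abstract's result:
# tool-use groups (CT, ST) improve; hand-only groups (CH, SH) do not.
groups = {
    "CT": simulate(0.70, 0.08),
    "ST": simulate(0.70, 0.06),
    "CH": simulate(0.70, 0.00),
    "SH": simulate(0.70, 0.00),
}

for name, (pre, post) in groups.items():
    t, p = stats.ttest_rel(post, pre)  # paired t-test on pre/post scores
    print(f"{name}: mean gain = {np.mean(post - pre):+.3f}, t = {t:.2f}, p = {p:.3f}")
```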

    Phonology Matters: The Phonological Frequency Effect in Written Chinese

    Does phonology play a role in silent reading? This issue was addressed in Chinese. Phonological effects are less expected in Chinese than in alphabetic languages like English because the basic units of written Chinese (the characters) map directly onto units of meaning (morphemes). This linguistic property gave rise to the view that phonology could be bypassed altogether in Chinese. The present study, however, shows that this is not the case. We report two experiments that demonstrate pure phonological frequency effects in the processing of written Chinese. Characters with a high phonological frequency were processed faster than characters with a low phonological frequency, despite the fact that the characters were matched on orthographic (printed) frequency. The present research points to a universal phonological principle according to which phonological information is routinely activated as part of word identification. The research further suggests that part of the classic word-frequency effect may be phonological.
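
    The key manipulation is that phonological frequency (here taken as the pooled printed frequency of all characters sharing a pronunciation) varies while printed frequency is held constant. A minimal sketch of that logic, using placeholder character labels and invented frequency counts rather than the study's materials:

```python
# Minimal sketch (toy numbers, not the study's stimuli): compute a
# character's phonological frequency by pooling the printed frequencies
# of all characters sharing its pronunciation, then check that two items
# are matched orthographically while differing phonologically.
from collections import defaultdict

# (character label, pronunciation, printed frequency per million) -- invented
CHARS = [
    ("A1", "shi4", 120), ("A2", "shi4", 300), ("A3", "shi4", 80),
    ("B1", "pou3", 115), ("B2", "pou3", 10),
]

ortho_freq = {c: f for c, _, f in CHARS}
by_pron = defaultdict(int)
for _, pron, f in CHARS:
    by_pron[pron] += f  # phonological frequency pools homophones

def phon_freq(char):
    pron = next(p for c, p, _ in CHARS if c == char)
    return by_pron[pron]

# "A1" and "B1" are matched on printed frequency (120 vs 115) but differ
# sharply in phonological frequency (500 vs 125): the study's contrast.
for c in ("A1", "B1"):
    print(c, "orthographic:", ortho_freq[c], "phonological:", phon_freq(c))
```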

    Embodied time: Effect of reading expertise on the spatial representation of past and future

    How do people grasp the abstract concept of time? It has been argued that abstract concepts, such as future and past, are grounded in sensorimotor experience. When responses to words that refer to the past or the future are either spatially compatible or incompatible with a left-to-right timeline, a space-time congruency effect is observed. In the present study, we investigated whether reading expertise determines the strength of the space-time congruency effect, which would suggest that learning to read and write drives the effect. Using a temporal categorization task, we compared two types of space-time congruency effects, one where spatial incongruency was generated by the location of the stimuli on the screen and one where it was generated by the location of the responses on the keyboard. While the first type of incongruency was visuo-spatial only, the second involved the motor system. Results showed stronger space-time congruency effects for the second type of incongruency (i.e., when the motor system was involved) than for the first type (visuo-spatial). Crucially, reading expertise, as measured by a standardized reading test, predicted the size of the space-time congruency effects. Altogether, these results reinforce the claim that the spatial representation of time is partially mediated by the motor system and partially grounded in spatially directed movement, such as reading or writing.
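
    A hedged sketch of the correlational logic follows: the congruency effect is the reaction-time cost of incongruent trials, and its size is related to a reading-expertise score. All data below are simulated, and the generative assumption (better readers show larger effects) is an illustration of the reported direction, not the study's model.

```python
# Minimal sketch (simulated data): relating the space-time congruency
# effect (mean RT incongruent - mean RT congruent, in ms) to a
# standardized reading-expertise score via a simple correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 40

reading_score = rng.normal(100, 15, n_subjects)  # standardized reading test
# Invented generative assumption: better readers show a larger effect.
congruency_effect = 20 + 0.5 * (reading_score - 100) + rng.normal(0, 10, n_subjects)

r, p = stats.pearsonr(reading_score, congruency_effect)
print(f"r = {r:.2f}, p = {p:.4f}")  # reading expertise predicting effect size
```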

    Orthographic effects in spoken language: on-line activation or phonological restructuring?

    Previous research has shown that literacy (i.e. learning to read and spell) affects spoken language processing. However, there is an ongoing debate about the nature of this influence. Some have argued that orthography is co-activated on-line whenever we hear a spoken word. Others have suggested that orthography is not activated on-line but has changed the nature of the phonological representations. Finally, both effects might occur simultaneously; that is, orthography might be activated on-line in addition to having changed the nature of the phonological representations. Previous studies have not been able to tease apart these hypotheses. The present study started by replicating the finding of an orthographic consistency effect in spoken word recognition using event-related brain potentials (ERPs): words with multiple spellings (i.e. inconsistent words) differed from words with unique spellings (i.e. consistent words) as early as 330 ms after the onset of the target. We then employed standardized low-resolution electromagnetic tomography (sLORETA) to determine the possible underlying cortical generators of this effect. The results showed that the orthographic consistency effect was clearly localized in a classic phonological area (left BA40). No evidence was found for activation in the posterior cortical areas coding orthographic information, such as the visual word form area in the left fusiform gyrus (BA37). This finding is consistent with the restructuring hypothesis, according to which phonological representations are "contaminated" by orthographic knowledge.
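
    A minimal sketch of how such a consistency-effect difference wave and its onset latency might be quantified: the signals below are fully simulated, and the sampling rate, noise levels, and onset criterion are assumptions for illustration, not the study's recording or sLORETA methods.

```python
# Minimal sketch (simulated ERPs, not the recorded data): compute the
# difference wave between inconsistent and consistent spoken words and
# estimate its onset, which the study reports at about 330 ms.
import numpy as np

fs = 500                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.8, 1 / fs)              # 0-800 ms epoch
rng = np.random.default_rng(2)

def erp(effect):
    """Simulated grand-average ERP; `effect` adds a deflection after 330 ms."""
    signal = 2.0 * np.sin(2 * np.pi * 4 * t)           # generic slow component
    signal += effect * (t > 0.33) * np.exp(-5 * (t - 0.33))
    return signal + rng.normal(0, 0.1, t.size)

consistent = erp(0.0)
inconsistent = erp(1.5)
diff = inconsistent - consistent           # consistency-effect difference wave

# Crude onset estimate: first sample where |diff| exceeds 4 SD of the
# baseline noise (baseline taken as the first 200 ms of the epoch).
threshold = 4 * diff[t < 0.2].std()
onset = t[np.argmax(np.abs(diff) > threshold)]
print(f"estimated effect onset: {onset * 1000:.0f} ms")  # ~330 ms here
```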