
    Word naming slows picture naming but does not affect cumulative semantic interference

    Two experiments are reported that investigate the effect of processing words prior to naming target pictures. In Experiment 1, participants named (read aloud) sequences of five printed prime words and five target pictures from the same semantic category, and also sequences in which the five prime words came from a different, unrelated semantic category to the five target pictures. Pictures and words were interleaved, with two unrelated filler stimuli between prime and target stimuli (i.e. a lag of 3 between primes and targets). Results showed that across the five target picture naming trials (i.e. across ordinal position of picture), picture naming times increased linearly, replicating the cumulative semantic interference (CSI) effect (e.g., Howard, Nickels, Coltheart, & Cole-Virtue, 2006). Related prime words slowed picture naming, replicating the effects found in paired word prime and picture target studies (e.g., Tree & Hirsh, 2003). However, naming the five related prime words did not modify the picture naming CSI effect, a null result that converges with findings from a different word and picture design (e.g., Navarrete, Mahon, & Caramazza, 2010). In Experiment 2, participants categorised the prime word stimuli as manmade versus natural, so that the words were more fully processed at a conceptual level. The interaction between word prime relatedness and ordinal position of the named target picture was significant. These results are consistent with adjustments at the conceptual level (Belke, 2013; Roelofs, 2018) that last for at least several trials. By contrast, we conclude that the distinct word-to-picture naming interference effect from Experiment 1 must originate outside of the conceptual level and outside of the mappings between semantics and lexical representations. We discuss the results with reference to recent theoretical accounts of the CSI picture naming effect and word naming models.
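
    As an illustration only (not the authors' design or analysis), the sketch below simulates the Experiment 1 pattern described above: naming latencies that rise linearly with ordinal position (the CSI effect) plus a constant slowdown from related primes that leaves the slope unchanged. All numerical values are assumptions chosen for illustration.

```python
# A minimal sketch, assuming illustrative values for baseline RT, CSI slope,
# prime cost and noise; it is not the authors' design or analysis.
import numpy as np

rng = np.random.default_rng(0)

n_participants = 24
positions = np.arange(1, 6)   # ordinal position of the picture within its category (1-5)
baseline = 700.0              # assumed baseline naming latency (ms)
csi_slope = 25.0              # assumed increase in latency per ordinal position (ms)
prime_cost = 30.0             # assumed constant slowdown from related word primes (ms)

def simulate(related: bool) -> np.ndarray:
    """Return a (participants x positions) matrix of simulated naming latencies."""
    noise = rng.normal(0, 40, size=(n_participants, positions.size))
    rts = baseline + csi_slope * (positions - 1) + noise
    if related:
        rts += prime_cost     # position-independent cost: the slope (CSI effect) is unchanged
    return rts

# Fit a line to the by-position means in each condition: the positive slope is the
# CSI effect, and the matching slopes mirror the null interaction reported above.
for label, rts in [("related primes", simulate(True)), ("unrelated primes", simulate(False))]:
    slope, intercept = np.polyfit(positions, rts.mean(axis=0), deg=1)
    print(f"{label}: slope = {slope:.1f} ms/position, intercept = {intercept:.0f} ms")
```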

    A Trade-Off between Somatosensory and Auditory Related Brain Activity during Object Naming But Not Reading.

    The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying "1-2-3." Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying "1-2-3", and not at all during reading. These results cannot be explained by task difficulty, but the contrasting difference between naming and reading illustrates how the demands on somatosensory activity change with task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across-subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex and is activated by auditory feedback during speech production. The trade-off between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error prone than reading, subjects can afford to rely more or less on somatosensory or auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level.
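
    As a rough illustration of the across-subject covariance logic described above (not the published fMRI pipeline), the sketch below correlates hypothetical per-subject SII/OP1 contrast estimates with hypothetical right STS estimates; the values are random placeholders standing in for extracted region-of-interest data.

```python
# A minimal sketch of an across-subject covariance test, assuming hypothetical
# per-subject contrast estimates; it is not the published fMRI analysis.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_subjects = 58                              # sample size reported above

# Hypothetical per-subject contrast estimates (object naming vs. rest), arbitrary units.
sii_op1 = rng.normal(-0.5, 0.3, n_subjects)  # negative values = suppression relative to rest
sts = 0.2 - 0.6 * sii_op1 + rng.normal(0, 0.2, n_subjects)  # constructed to covary negatively

r, p = pearsonr(sii_op1, sts)
print(f"across-subject correlation: r = {r:.2f}, p = {p:.3g}")
# A reliably negative r would reflect the trade-off pattern: greater SII/OP1
# suppression accompanied by greater auditory (STS) engagement during naming.
```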

    Semantic priming over unrelated trials: evidence for different effects in word and picture naming

    Two naming experiments are reported that replicated previous findings of semantic interference as a result of naming related word or picture primes three trials before picture targets. We also examined whether semantic interference occurred when the materials were reversed and picture or word primes were named before word targets. The interest in semantic interference during word naming followed a suggestion made by Humphreys, Lloyd-Jones, and Fias (1995) that word naming, like picture naming, may be reliant on a semantic route to name retrieval when the two stimulus types are mixed. In contrast to their findings, we found no evidence for semantic interference during target word naming; in fact, we found facilitation from related picture primes. No priming was found for the related word prime and word target condition. The data allow us to rule out the possibility that word naming is reliant on a semantic route when mixed with pictures in this priming paradigm and to conclude that there is no clear evidence of semantic activation during word naming. We also conclude, in line with other research, that word naming and picture naming involve different processes.

    Does normal processing provide evidence of specialised semantic subsystems?

    Category-specific disorders are frequently explained by suggesting that living and non-living things are processed in separate subsystems (e.g. Caramazza & Shelton, 1998). If subsystems exist, there should be benefits for normal processing, beyond the influence of structural similarity. However, no previous study has separated the relative influences of similarity and semantic category. We created novel examples of living and non-living things so category and similarity could be manipulated independently. Pre-tests ensured that our images evoked appropriate semantic information and were matched for familiarity. Participants were trained to associate names with the images and then performed a name-verification task under two levels of time pressure. We found no significant advantage for living things alongside strong effects of similarity. Our results suggest that similarity rather than category is the key determinant of speed and accuracy in normal semantic processing. We discuss the implications of this finding for neuropsychological studies. © 2005 Psychology Press Ltd

    The Natural Statistics of Audiovisual Speech

    Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time-varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain, where it can guide the selection of appropriate actions. To simplify this process, it has been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative characterization of the input space is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept, versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both the area of the mouth opening and the voice envelope are temporally modulated in the 2–7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
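
    As an illustration of the kinds of measurements described above (computed on synthetic signals, not the study's recordings), the sketch below correlates a mouth-opening-area signal with an acoustic envelope, reads off their relative timing from the cross-correlation peak, and checks how much envelope power falls in the 2–7 Hz band. Sampling rate, lag, and noise level are all assumed values.

```python
# A minimal sketch on synthetic signals, assuming a 100 Hz sampling rate and a
# 150 ms visual lead; it is not the study's data or pipeline.
import numpy as np
from scipy.signal import butter, correlate, correlation_lags, filtfilt, welch

fs = 100.0                                   # assumed common sampling rate (Hz)
n = int(60 * fs)                             # 60 s of synthetic signal
rng = np.random.default_rng(2)

# Mouth-opening area: band-limited (2-7 Hz) noise standing in for syllabic-rate movement.
b, a = butter(4, [2 / (fs / 2), 7 / (fs / 2)], btype="band")
mouth_area = filtfilt(b, a, rng.standard_normal(n))

# Acoustic envelope: the mouth signal delayed by 150 ms plus a little measurement
# noise, mimicking the reported 100-300 ms visual lead.
delay = int(0.15 * fs)
envelope = np.roll(mouth_area, delay) + 0.1 * rng.standard_normal(n)

# Cross-correlation: its peak gives both the strength of the coupling and the lag.
xcorr = correlate(envelope - envelope.mean(), mouth_area - mouth_area.mean(), mode="full")
lags = correlation_lags(n, n, mode="full")
peak = int(np.argmax(xcorr))
peak_r = xcorr[peak] / (n * mouth_area.std() * envelope.std())  # approx. Pearson r at best lag
print(f"peak r ~ {peak_r:.2f}, mouth leads the envelope by ~{1000 * lags[peak] / fs:.0f} ms")

# Spectral check: fraction of envelope power that falls in the 2-7 Hz modulation band.
f, pxx = welch(envelope, fs=fs, nperseg=1024)
band = (f >= 2) & (f <= 7)
print(f"fraction of envelope power in 2-7 Hz: {pxx[band].sum() / pxx.sum():.2f}")
```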