
    Towards a complete multiple-mechanism account of predictive language processing [Commentary on Pickering & Garrod]

    Although we agree with Pickering & Garrod (P&G) that prediction-by-simulation and prediction-by-association are important mechanisms of anticipatory language processing, this commentary suggests that they: (1) overlook other potential mechanisms that might underlie prediction in language processing, (2) overestimate the importance of prediction-by-association in early childhood, and (3) underestimate the complexity and significance of several factors that might mediate prediction during language processing.

    An integrated theory of language production and comprehension

    Currently, production and comprehension are regarded as quite distinct in accounts of language processing. In rejecting this dichotomy, we instead assert that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other. We start by noting that production and comprehension are forms of action and action perception. We then consider the evidence for interweaving in action, action perception, and joint action, and explain such evidence in terms of prediction. Specifically, we assume that actors construct forward models of their actions before they execute those actions, and that perceivers of others' actions covertly imitate those actions, then construct forward models of those actions. We use these accounts of action, action perception, and joint action to develop accounts of production, comprehension, and interactive language. Importantly, they incorporate well-defined levels of linguistic representation (such as semantics, syntax, and phonology). We show (a) how speakers and comprehenders use covert imitation and forward modeling to make predictions at these levels of representation, (b) how they interweave production and comprehension processes, and (c) how they use these predictions to monitor the upcoming utterances. We show how these accounts explain a range of behavioral and neuroscientific data on language processing and discuss some of the implications of our proposal.
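    The forward-modeling account can be read computationally: the producer derives a predicted percept of the upcoming utterance before articulating it, and the comprehender covertly imitates the speaker to derive the same kind of prediction, which is then checked against the actual input at the semantic, syntactic, and phonological levels. The sketch below is not taken from Pickering & Garrod's paper; it is a minimal, assumed illustration of that predict-and-compare loop, with a toy lexicon and placeholder representations.

    # Minimal sketch of a predict-and-monitor loop in the spirit of
    # prediction-by-simulation. The lexicon, levels, and representations are
    # placeholders for illustration, not Pickering & Garrod's formal model.

    LEVELS = ("semantics", "syntax", "phonology")

    # Toy lexicon: each word maps to a representation at each linguistic level.
    TOY_LEXICON = {
        "white": {"semantics": "COLOR", "syntax": "ADJ", "phonology": "waIt"},
        "blue":  {"semantics": "COLOR", "syntax": "ADJ", "phonology": "blu:"},
    }

    def forward_model(intended_word):
        """Stand-in for the efference-copy pathway: map an intended word to
        predicted representations at each level before it is produced or heard."""
        return TOY_LEXICON[intended_word]

    def monitor(predicted, perceived):
        """Compare predicted and perceived representations level by level;
        mismatches are what would trigger reformulation or reanalysis."""
        return {level: predicted[level] == perceived[level] for level in LEVELS}

    # Comprehension by simulation: covertly 'imitate' the speaker by running the
    # forward model on the contextually expected word, then check the prediction
    # against what is actually perceived.
    predicted = forward_model("white")    # context: "black is the opposite of ..."
    perceived = forward_model("blue")     # what the speaker actually says
    print(monitor(predicted, perceived))  # {'semantics': True, 'syntax': True, 'phonology': False}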

    Event-related potential evidence

    Numerous past studies have investigated neurophysiological correlates of music-syntactic processing. However, little is known about how prior knowledge of an upcoming syntactically irregular event modulates brain correlates of music-syntactic processing. Two versions of a short chord sequence were presented repeatedly to non-musicians (n = 20) and musicians (n = 20). One sequence version ended on a syntactically regular chord, and the other ended on a syntactically irregular chord. Participants were either informed (cued condition) or not informed (non-cued condition) about whether the sequence would end on the regular or the irregular chord. Results indicate that in the cued condition (compared to the non-cued condition) the peak latency of the early right anterior negativity (ERAN), elicited by irregular chords, was earlier in both non-musicians and musicians. However, the expectations arising from knowledge about the upcoming event (veridical expectations) did not influence the amplitude of the ERAN. These results suggest that veridical expectations modulate only the speed, but not the principal mechanisms, of music-syntactic processing.
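    In analysis terms, the latency result corresponds to locating the most negative point of the irregular-minus-regular difference wave at a right-anterior electrode within an early time window, separately for the cued and non-cued conditions. The snippet below is a generic NumPy illustration of that step; the 150-250 ms window, the sampling rate, and the synthetic data are assumptions for the example and are not taken from the study.

    import numpy as np

    def eran_peak_latency(erp_irregular, erp_regular, times, tmin=0.15, tmax=0.25):
        """Latency (s) of the most negative point of the irregular-minus-regular
        difference wave within [tmin, tmax] at a single (right-anterior) electrode.
        erp_irregular, erp_regular: 1-D condition-averaged voltages; times: time axis in s."""
        diff = erp_irregular - erp_regular
        window = (times >= tmin) & (times <= tmax)
        return times[window][np.argmin(diff[window])]

    # Synthetic example: a toy negativity peaking at 170 ms (cued) vs. 200 ms (non-cued).
    times = np.linspace(-0.1, 0.5, 601)                  # 1 kHz sampling
    cued     = -np.exp(-((times - 0.17) / 0.02) ** 2)
    non_cued = -np.exp(-((times - 0.20) / 0.02) ** 2)
    regular  = np.zeros_like(times)                      # flat baseline ERP
    print(eran_peak_latency(cued, regular, times))       # ~0.17
    print(eran_peak_latency(non_cued, regular, times))   # ~0.20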

    The Priming Function of In-car Audio Instruction

    Studies to date have focused on the priming power of visual road signs, but not the priming potential of audio road scene instruction. Here, the relative priming power of visual, audio, and multisensory road scene instructions was assessed. In a lab-based study, participants responded to target road scene turns following visual, audio, or multisensory road turn primes that were either congruent or incongruent with the target in direction, or following control primes. All types of instruction (visual, audio, multisensory) were successful in priming responses to a road scene. Responses to multisensory-primed targets (both audio and visual) were faster than responses to either audio or visual primes alone. Incongruent audio primes did not affect performance negatively in the manner of incongruent visual or multisensory primes. Results suggest that audio instructions have the potential to prime drivers to respond quickly and safely to their road environment. Peak performance will be observed if audio and visual road instruction primes can be timed to co-occur.
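    The behavioural comparison boils down to condition means: response time per prime modality and congruency, with the congruency cost expressed as the incongruent-minus-congruent difference. The snippet below illustrates that tabulation on invented trial records; the values are placeholders, not the study's data.

    from collections import defaultdict
    from statistics import mean

    # Toy trial records: (prime_modality, congruency, reaction_time_ms).
    # Values are invented purely to show the analysis, not taken from the study.
    trials = [
        ("visual", "congruent", 512), ("visual", "incongruent", 548),
        ("audio", "congruent", 520), ("audio", "incongruent", 523),
        ("multisensory", "congruent", 488), ("multisensory", "incongruent", 541),
    ]

    rts = defaultdict(list)
    for modality, congruency, rt in trials:
        rts[(modality, congruency)].append(rt)

    for modality in ("visual", "audio", "multisensory"):
        congruent = mean(rts[(modality, "congruent")])
        incongruent = mean(rts[(modality, "incongruent")])
        # Congruency cost: how much slower responses are after an incongruent prime.
        print(f"{modality:>12}: congruent {congruent:.0f} ms, "
              f"incongruent {incongruent:.0f} ms, cost {incongruent - congruent:+.0f} ms")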

    The electrophysiological reality of parafoveal processing: On the validity of language-related ERPs in natural reading

    A central question in psycholinguistics is how the human brain processes language in real time. To answer this question, the differences between auditory and visual processing have to be considered. The present dissertation examines the extent to which event-related potentials (ERPs) in the human electroencephalogram (EEG) interact with different modes of presentation during sentence comprehension. Besides the two classical modalities, auditory presentation and rapid serial visual presentation (RSVP), the monitoring of readers’ eye movements was chosen as a new mode of presentation. Here, the temporal paradox between neuronal ERP effects and behavioral effects in the eye movement record was of particular interest. Specifically, by concurrently measuring ERPs and eye movements in natural reading, the dissertation aimed to shed light on the counterintuitive fact that difficulties in sentence comprehension arise earlier in eye movement measures than in the corresponding neuronal ERP effects. In contrast to RSVP and the auditory modality, reading offers a parafoveal preview of upcoming words (Rayner 1998), which enables the brain to process information about words before they are fixated for the first time (in foveal vision). When the word Gegenteil in example (1) below is fixated and processed, the brain concurrently processes some information about the upcoming parafoveal words von and weiß. (1) Schwarz ist das Gegenteil von weiß ('Black is the opposite of white'). (2) Schwarz […] blau. (3) Schwarz […] nett. The parafoveal preview mostly provides orthographic (word form) information, while semantic information is not conveyed (Inhoff & Starr 2004; White 2008). Whereas word form and lexical meaning are processed simultaneously with RSVP and auditory presentation, the parafoveal preview in natural reading allows for a temporal decoupling such that word forms are processed before meaning. This is one reason for the faster information uptake in reading. The present dissertation is the first to systematically investigate the influence of the parafoveal preview on sentence processing. Participants read sentences such as (1)-(3), in which two adjectives were either antonyms (1), semantically related non-antonyms (2), or semantically unrelated non-antonyms (3). ERPs were computed for the last fixation before the target word (the sentence-final word in 1-3), which was assumed to capture parafoveal processing, and for the first fixation on the target, which should reflect foveal processing. The results were compared to two experiments using identical stimuli with auditory and RSVP presentation, and the parafoveal preview clearly led to different ERP results. While the RSVP and auditory presentations replicated the finding of a P300 to the second antonym in (1) (Kutas & Iragui 1998; Roehm et al. 2007), there was no P300 in response to antonyms at any fixation position in natural reading. However, the dissociation of parafoveal and foveal processing in reading also made it possible to disentangle different processes underlying the N400. There was a reduced parafoveal N400 for (1) and (2) compared with (3), which could be attributed to the preactivation of the word forms of the expected antonyms and of the semantically related non-antonyms. In foveal vision, all non-antonyms (2, 3) showed an enhanced N400 compared with (1) because they were unexpected and implausible in the sentence context. This dissociation between the preactivation of a word form and the contextual fit of a word’s meaning is impossible with the other two modes of presentation, because orthographic and semantic information become available almost at the same time and are thus processed simultaneously. Furthermore, the parafoveal N400 effect was not accompanied by changes in the duration of the corresponding fixation, whereas the foveal N400 was. Similarly, with the concurrent measurement of ERPs and eye movements, the temporal paradox described above remained, as effects in the eye movement record preceded the neuronal ERP effects. Further support for these central findings came from two additional experiments that investigated different stimuli with concurrent ERP and eye-tracking measures. Altogether, the experiments revealed that the previous findings on the language-related N400 can be replicated in natural reading, but they can also be differentiated qualitatively by virtue of the characteristics of natural reading. Although the behavioral and neuronal effects mirrored one another, not every neuronal effect necessarily translates into a behavioral output. Finally, even concurrent ERP and eye-tracking measures cannot resolve the temporal paradox.
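    The methodological core of such co-registration studies is epoching the continuous EEG on fixation onsets from the eye tracker rather than on stimulus onsets: once on the last fixation before the target (the parafoveal epoch) and once on the first fixation on the target (the foveal epoch). The fragment below sketches that epoching step with plain NumPy; the sampling rate, epoch window, array shapes, and fixation times are assumptions made for the example, not details of the dissertation.

    import numpy as np

    def fixation_locked_epochs(eeg, fixation_onsets, sfreq=500, tmin=-0.1, tmax=0.6):
        """Cut fixation-related epochs out of continuous EEG.
        eeg: array (n_channels, n_samples); fixation_onsets: onset times in seconds
        from the eye tracker. Returns (n_fixations, n_channels, n_epoch_samples)."""
        start, stop = int(tmin * sfreq), int(tmax * sfreq)
        epochs = []
        for onset in fixation_onsets:
            center = int(round(onset * sfreq))
            if center + start >= 0 and center + stop <= eeg.shape[1]:
                epochs.append(eeg[:, center + start:center + stop])
        return np.stack(epochs)

    # Parafoveal vs. foveal fixation-related potentials, with fabricated data and
    # fixation times that only demonstrate the call pattern.
    eeg = np.random.randn(32, 60_000)              # 32 channels, 120 s at 500 Hz
    pre_target_fixations = [12.48, 31.02, 47.95]   # last fixation before the target
    target_fixations     = [12.71, 31.26, 48.19]   # first fixation on the target
    parafoveal_frp = fixation_locked_epochs(eeg, pre_target_fixations).mean(axis=0)
    foveal_frp     = fixation_locked_epochs(eeg, target_fixations).mean(axis=0)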

    Schizophrenia research under the framework of predictive coding: body, language, and others

    Although there have been many studies of schizophrenia under the framework of predictive coding, work focusing on treatment is still preliminary. A model-oriented, operationalist, and comprehensive understanding of schizophrenia would promote a therapeutic turn in further research. We summarize predictive coding models of embodiment, the co-occurrence of over- and under-weighting of priors, subjective time processing, language production or comprehension, self-or-other inference, and social interaction. The corresponding impairments and clinical manifestations of schizophrenia are reviewed alongside these models. Finally, we discuss why and how to inaugurate a therapeutic turn in further research under the framework of predictive coding.

    A Cognitive Neuroscience Examination Of Rhythm And Reading And Their Translation To Neurological Conditions

    The goal of the current research was to provide a novel and comprehensive examination of the connection between rhythm and reading through the combination of multiple experimental stimuli, and to translate the reading aloud research to neurological patients. Both speech and music perception/production involve sequences of rhythmic events that unfold over time, and the presence of rhythm in both processes has motivated researchers to consider whether musical and speech rhythm engage shared neural regions (Patel, 2008), and whether musical rhythm can influence speech processing (Cason & Schön, 2012). The experimental paradigm involved examining whether reading aloud is affected by the presentation of a rhythmic prime that was either congruent or incongruent with the syllabic stress of the target letter string. The experiments in Chapter 2 used target words that placed the stress on either the first or second syllable (practice vs. police), as well as their corresponding pseudohomophones (praktis vs. poleese), which allowed us to compare lexical and sublexical reading, respectively. In Chapter 3, the experiments involved a paradigm in which target words have stress on the first syllable when used as nouns and on the second syllable when used as verbs. Thus, the design used identical noun-verb word pairs (CONflict vs. conFLICT), as well as their corresponding pseudohomophones (KONflikt vs. konFLIKT). The results from the behavioural experiments demonstrated that naming reaction times were faster for words and pseudohomophones when the rhythmic prime was congruent with the syllabic stress, and slower when the rhythmic prime was incongruent, which suggests that a rhythmic prime matched to the syllabic stress of a letter string aids reading processes. Functional magnetic resonance imaging (fMRI) was also used in Chapters 2 and 3 to test whether a network involving the putamen underlies the effect of rhythm on reading aloud. The fMRI results revealed that a network involving the putamen is associated with the effect of congruency between rhythmic stress and syllabic stress on reading aloud, which is consistent with previous literature showing that this region is involved in reading, rhythm processing, and predicting upcoming events. The goal of Chapter 4 was to provide a behavioural and neuroanatomical examination of reading processes in two patients. Case Study 1 examined the effect of rhythmic priming on reading aloud in a patient with Parkinson’s disease (PD), given that these patients exhibit abnormalities in the putamen, which has been associated with rhythm and reading processes. The patient demonstrated the same behavioural effect as healthy participants, benefiting when the rhythmic prime was congruent with the syllabic stress of the target letter string, and the fMRI results revealed that, despite disruptions in basal ganglia functioning following PD, there was still activation in the putamen for reading real words. Case Study 2 examined a patient with intractable left temporal lobe epilepsy (TLE) who was undergoing a temporal lobectomy that involved removing regions of the left temporal lobe often thought to be important in language processing. The fMRI results showed that all four reading tasks activated the right posterior occipitotemporal region in the ventral visual stream, confirming right-hemisphere dominance in this patient.
    Together, these findings have implications for developing neurobiological models of reading and for localizing function in neurological conditions such as PD and TLE, and they may also point to remedial applications for treating speech deficits in populations such as those with Parkinson’s disease, stuttering, aphasia, and dyslexia.

    Single-trial multisensory memories affect later auditory and visual object discrimination.

    Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. The focus of this study was whether such effects generalize, and whether they are equivalent when memory discrimination is performed in the visual vs. the auditory modality. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. the visual modality. Fourth, there was no evidence of a correlation between the effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short-term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) further weight is given to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Exploring modality switching effects in negated sentences: further evidence for grounded representations

    Theories of embodied cognition (e.g., Perceptual Symbol Systems Theory; Barsalou, 1999, 2009) suggest that modality-specific simulations underlie the representation of concepts. Supporting evidence comes from modality switch costs: participants are slower to verify a property in one modality (e.g., auditory, BLENDER-loud) after verifying a property in a different modality (e.g., gustatory, CRANBERRIES-tart) than after verifying one in the same modality (e.g., LEAVES-rustling; Pecher et al., 2003). Similarly, modality switching costs modulate the N400 effect in event-related potentials (ERPs; Collins et al., 2011; Hald et al., 2011). This effect of modality switching has also been shown to interact with the veracity of the sentence (Hald et al., 2011). The current ERP study further explores the role of modality match/mismatch in the processing of veracity as well as negation (sentences containing “not”). Our results indicate a modulation of the ERP by modality and veracity, plus an interaction. The evidence supports the idea that modality-specific simulations occur during language processing and furthermore suggests that these simulations alter the processing of negation.
