8 research outputs found

    Predicting "when" in discourse engages the human dorsal auditory stream: an fMRI study using naturalistic stories

    No full text
    The hierarchical organisation of human cortical circuits integrates information across different timescales via temporal receptive windows (TRWs), which increase in length from lower to higher levels of the cortical hierarchy (Hasson et al., 2015). A recent neurobiological model of higher-order language processing (Bornkessel-Schlesewsky et al., 2015) posits that TRWs in the dorsal auditory stream provide the basis for a hierarchically organised predictive coding architecture (Friston and Kiebel, 2009). In this stream, a nested set of internal models generates time-based ("when") predictions for upcoming input at different linguistic levels (sounds, words, sentences, discourse). Here, we used naturalistic stories to test the hypothesis that multi-sentence, discourse-level predictions are processed in the dorsal auditory stream, yielding attenuated blood-oxygen-level-dependent (BOLD) responses for highly predicted versus less strongly predicted language input. The results were as hypothesised: discourse-related cues such as passive voice, which effect a higher predictability of re-mention for a character at a later point within a story, led to attenuated BOLD responses for auditory input of high versus low predictability within the dorsal auditory stream, specifically in the inferior parietal lobule (IPL), middle frontal gyrus (MFG), and dorsal parts of the inferior frontal gyrus (IFG), among other areas. Additionally, we found effects of content-related ("what") predictions in ventral regions. These findings provide novel evidence that hierarchical predictive coding extends to discourse-level processing in natural language. Importantly, they ground language processing in a hierarchically organised predictive network, a common underlying neurobiological basis shared with other brain functions.

    Experimental gingivitis in type 1 diabetics: A controlled clinical and microbiological study

    No full text
    Objective: To monitor clinical and microbiological changes during experimental gingivitis in type 1 diabetics and non-diabetics. Materials and Methods: Nine type 1 diabetics with good/moderate metabolic control and nine age- and gender-matched non-diabetics were recruited. Probing pocket depths in all subjects did not exceed 4 mm, and none were affected by attachment loss. According to the original model, an experimental 3-week plaque accumulation resulting in experimental gingivitis development and a subsequent 2-week period of optimal plaque control were staged. Subgingival plaque samples were collected at days 0, 21 and 35 from one site per quadrant, pooled, and analysed using checkerboard DNA-DNA hybridization. Results: Diabetics (mean age 25.6 ± 5.8 standard deviation (SD), range 16-35 years) had a mean HbA1c level of 8.1 ± 0.7% (SD), while non-diabetics (mean age 24.8 ± 5.7 (SD), range 15-36 years) were metabolically controlled (HbA1c ≀6.5%). Between days 0, 21 and 35, no statistically significant differences in mean plaque and gingival index scores were observed between diabetics and non-diabetics. At days 7 and 21, however, diabetics showed statistically significantly higher percentages of sites with gingival index scores ≄2 compared with non-diabetics. Mean DNA probe counts of the red and orange complex species increased significantly (p < 0.05) between days 0 and 21 and decreased significantly (p < 0.05) between days 21 and 35 in both groups. Conclusion: Both diabetics and non-diabetics react to experimental plaque accumulation with gingival inflammation. Type 1 diabetics, however, develop an earlier and stronger inflammatory response to a comparable bacterial challenge. Copyright © Blackwell Munksgaard 2005.

    Journal of Cognitive Neuroscience

    No full text
    While listening to continuous speech, humans process beat information to correctly identify word boundaries. The beats of language are stress patterns that are created by combining lexical (word-specific) stress patterns and the rhythm of a specific language. Sometimes, the lexical stress pattern needs to be altered to obey the rhythm of the language. This study investigated the interplay of lexical stress patterns and rhythmical well-formedness in natural speech with fMRI. Previous electrophysiological studies on cases in which a regular lexical stress pattern may be altered to obtain rhythmical well-formedness showed that even subtle rhythmic deviations are detected by the brain if attention is directed toward prosody. Here, we present a new approach to this phenomenon by having participants listen to contextually rich stories in the absence of a task targeting the manipulation. For the interaction of lexical stress and rhythmical well-formedness, we found one suprathreshold cluster localized between the cerebellum and the brain stem. For the main effect of lexical stress, we found higher BOLD responses to the retained lexical stress pattern in the bilateral SMA, bilateral postcentral gyrus, bilateral middle frontal gyrus, bilateral inferior and right superior parietal lobule, and right precuneus. These results support the view that lexical stress is processed as part of a sensorimotor network of speech comprehension. Moreover, our results connect beat processing in language to domain-independent timing perception.

    No evidence for differences among language regions in their temporal receptive windows

    No full text