
    Rapid Statistical Learning Supporting Word Extraction From Continuous Speech.

    The identification of words in continuous speech, known as speech segmentation, is a critical early step in language acquisition. This process is partially supported by statistical learning, the ability to extract patterns from the environment. Given that speech segmentation represents a potential bottleneck for language acquisition, patterns in speech may be extracted very rapidly, without extensive exposure. This hypothesis was examined by exposing participants to continuous speech streams composed of novel repeating nonsense words. Learning was measured on-line using a reaction time task. After merely one exposure to an embedded novel word, learners demonstrated significant learning effects, as revealed by faster responses to predictable than to unpredictable syllables. These results demonstrate that learners gained sensitivity to the statistical structure of unfamiliar speech on a very rapid timescale. This ability may play an essential role in early stages of language acquisition, allowing learners to rapidly identify word candidates and break into an unfamiliar language.
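    The statistical structure these speech streams rely on can be made concrete with a small simulation. The sketch below uses hypothetical syllables and stream parameters (the abstract does not specify the actual stimuli): it builds a continuous stream of repeating trisyllabic nonsense words and estimates syllable-to-syllable transitional probabilities, showing why word-internal syllables are fully predictable while syllables that span a word boundary are not.

```python
import random
from collections import defaultdict

# Hypothetical trisyllabic nonsense words; the real stimuli are not given in the abstract.
words = [["pa", "bi", "ku"], ["ti", "bu", "do"], ["go", "la", "tu"], ["da", "ro", "pi"]]

def make_stream(n_words, seed=0):
    """Concatenate words in random order (no immediate repeats),
    yielding a continuous syllable stream with no pauses between words."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_words):
        w = rng.choice([w for w in words if w is not prev])
        stream.extend(w)
        prev = w
    return stream

def transitional_probabilities(stream):
    """P(next syllable | current syllable), estimated from bigram counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(stream, stream[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
            for a, nexts in counts.items()}

stream = make_stream(300)
tp = transitional_probabilities(stream)

# Within a word, the next syllable is fully predictable...
assert tp["pa"]["bi"] == 1.0
# ...while at a word boundary, probability is split across other words' onsets.
assert max(tp["ku"].values()) < 1.0
```

    In the on-line reaction-time task described above, this asymmetry is what makes responses to word-internal (predictable) syllables faster than responses to word-initial (unpredictable) ones.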

    The human brain processes syntax in the absence of conscious awareness.

    Syntax is the core computational component of language. A longstanding idea about syntactic processing is that it is generally not available to conscious access, operating autonomously and automatically. However, there is little direct neurocognitive evidence on this issue. By measuring event-related potentials while human observers performed a novel cross-modal distraction task, we demonstrated that syntactic violations that were not consciously detected nonetheless produced a characteristic early neural response pattern, and also significantly delayed reaction times to a concurrent task. This early neural response was distinct from later neural activity that was observed only to syntactic violations that were consciously detected. These findings provide direct evidence that the human brain reacts to violations of syntax even when these violations are not consciously detected, indicating that even highly complex computational processes such as syntactic processing can occur outside the narrow window of conscious awareness.

    Online neural monitoring of statistical learning.

    The extraction of patterns in the environment plays a critical role in many types of human learning, from motor skills to language acquisition. This process is known as statistical learning. Here we propose that statistical learning has two dissociable components: (1) perceptual binding of individual stimulus units into integrated composites and (2) storing those integrated representations for later use. Statistical learning is typically assessed using post-learning tasks, such that the two components are conflated. Our goal was to characterize the online perceptual component of statistical learning. Participants were exposed to a structured stream of repeating trisyllabic nonsense words and a random syllable stream. Online learning was indexed by an EEG-based measure that quantified neural entrainment at the frequency of the repeating words relative to that of individual syllables. Statistical learning was subsequently assessed using conventional measures in an explicit rating task and a reaction-time (RT) task. In the structured stream, neural entrainment to trisyllabic words was higher than in the random stream, increased as a function of exposure to track the progression of learning, and predicted performance on the RT task. These results demonstrate that monitoring this critical component of learning via rhythmic EEG entrainment reveals a gradual acquisition of knowledge whereby novel stimulus sequences are transformed into familiar composites. This online perceptual transformation is a critical component of learning.
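    The entrainment measure described above can be illustrated with a toy simulation. In this sketch, all rates, durations, and amplitudes are assumed for illustration and the paper's actual analysis pipeline is not reproduced: word learning is modeled as the emergence of a spectral component at the word rate (one third of the syllable rate, since words are trisyllabic), and the entrainment index is the ratio of spectral power at the two frequencies.

```python
import cmath
import math
import random

fs = 250.0            # sampling rate in Hz (assumed for illustration)
syll_hz = 3.3         # syllable presentation rate (assumed)
word_hz = syll_hz / 3 # trisyllabic words repeat at one third the syllable rate
n = int(fs * 60)      # one minute of simulated "EEG"

def power_at(x, f):
    """Power of the discrete Fourier component of x at frequency f (Hz)."""
    z = sum(v * cmath.exp(-2j * math.pi * f * i / fs) for i, v in enumerate(x))
    return abs(z) ** 2

def entrainment_index(x):
    """Word-rate power normalized by syllable-rate power."""
    return power_at(x, word_hz) / power_at(x, syll_hz)

rng = random.Random(0)
noise = [rng.gauss(0, 1) for _ in range(n)]
syll = [math.sin(2 * math.pi * syll_hz * i / fs) for i in range(n)]
word = [math.sin(2 * math.pi * word_hz * i / fs) for i in range(n)]

pre = [s + e for s, e in zip(syll, noise)]                       # tracking syllables only
post = [s + 0.8 * w + e for s, w, e in zip(syll, word, noise)]   # word-rate component emerges

# Entrainment at the word frequency rises once the word-rate component is present.
assert entrainment_index(post) > entrainment_index(pre)
```

    The increase in this index over exposure is what, in the study, tracked the progression of learning and predicted RT performance.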

    Sleep-based memory processing facilitates grammatical generalization: Evidence from targeted memory reactivation.

    Generalization, the ability to abstract regularities from specific examples and apply them to novel instances, is an essential component of language acquisition. Generalization not only depends on exposure to input during wake, but may also improve offline during sleep. Here we examined whether targeted memory reactivation during sleep can influence grammatical generalization. Participants gradually acquired the grammatical rules of an artificial language through an interactive learning procedure. Then, phrases from the language (experimental group) or stimuli from an unrelated task (control group) were covertly presented during an afternoon nap. Compared to control participants, participants re-exposed to the language during sleep showed larger gains in grammatical generalization. Sleep cues produced a bias, not necessarily a pure gain, suggesting that the capacity for memory replay during sleep is limited. We conclude that grammatical generalization was biased by auditory cueing during sleep, and by extension, that sleep likely influences grammatical generalization in general.

    Rhythmically modulating neural entrainment during exposure to regularities influences statistical learning

    The ability to discover regularities in the environment, such as syllable patterns in speech, is known as statistical learning. Previous studies have shown that statistical learning is accompanied by neural entrainment, in which neural activity temporally aligns with repeating patterns over time. However, it is unclear whether these rhythmic neural dynamics play a functional role in statistical learning, or whether they largely reflect the downstream consequences of learning, such as the enhanced perception of learned words in speech. To better understand this issue, we manipulated participants’ neural entrainment during statistical learning using continuous rhythmic visual stimulation. Participants were exposed to a speech stream of repeating nonsense words while viewing either (1) a visual stimulus with a “congruent” rhythm that aligned with the word structure, (2) a visual stimulus with an incongruent rhythm, or (3) a static visual stimulus. Statistical learning was subsequently measured using both an explicit and implicit test. Participants in the congruent condition showed a significant increase in neural entrainment over auditory regions at the relevant word frequency, over and above effects of passive volume conduction, indicating that visual stimulation successfully altered neural entrainment within relevant neural substrates. Critically, during the subsequent implicit test, participants in the congruent condition showed an enhanced ability to predict upcoming syllables and stronger neural phase synchronization to component words, suggesting that they had gained greater sensitivity to the statistical structure of the speech stream relative to the incongruent and static groups. This learning benefit could not be attributed to strategic processes, as participants were largely unaware of the contingencies between the visual stimulation and embedded words. These results indicate that manipulating neural entrainment during exposure to regularities influences statistical learning outcomes, suggesting that neural entrainment may functionally contribute to statistical learning. Our findings encourage future studies using non-invasive brain stimulation methods to further understand the role of entrainment in statistical learning.

    Functional differences between statistical learning with and without explicit training.

    Humans are capable of rapidly extracting regularities from environmental input, a process known as statistical learning. This type of learning typically occurs automatically, through passive exposure to environmental input. The presumed function of statistical learning is to optimize processing, allowing the brain to more accurately predict and prepare for incoming input. In this study, we ask whether the function of statistical learning may be enhanced through supplementary explicit training, in which underlying regularities are explicitly taught rather than simply abstracted through exposure. Learners were randomly assigned either to an explicit group or an implicit group. All learners were exposed to a continuous stream of repeating nonsense words. Prior to this implicit training, learners in the explicit group received supplementary explicit training on the nonsense words. Statistical learning was assessed through a speeded reaction-time (RT) task, which measured the extent to which learners used acquired statistical knowledge to optimize online processing. Both RTs and brain potentials revealed significant differences in online processing as a function of training condition. RTs showed a crossover interaction; responses in the explicit group were faster to predictable targets and marginally slower to less predictable targets relative to responses in the implicit group. P300 potentials to predictable targets were larger in the explicit group than in the implicit group, suggesting greater recruitment of controlled, effortful processes. Taken together, these results suggest that information abstracted through passive exposure during statistical learning may be processed more automatically and with less effort than information that is acquired explicitly.

    Of words and whistles: Statistical learning operates similarly for identical sounds perceived as speech and non-speech

    Statistical learning is an ability that allows individuals to effortlessly extract patterns from the environment, such as sound patterns in speech. Some prior evidence suggests that statistical learning operates more robustly for speech compared to non-speech stimuli, supporting the idea that humans are predisposed to learn language. However, any apparent statistical learning advantage for speech could be driven by signal acoustics, rather than the subjective perception of the sounds as speech per se. To resolve this issue, the current study assessed whether there is a statistical learning advantage for ambiguous sounds that are subjectively perceived as speech-like compared to the same sounds perceived as non-speech, thereby controlling for acoustic features. We first induced participants to perceive sine-wave speech (SWS)—a degraded form of speech not immediately perceptible as speech—as either speech or non-speech. After this induction phase, participants were exposed to a continuous stream of repeating trisyllabic nonsense words, composed of SWS syllables, and then completed an explicit familiarity rating task and an implicit target detection task to assess learning. Critically, participants showed robust and equivalent performance on both measures, regardless of their subjective speech perception. In contrast, participants who perceived the SWS syllables as more speech-like showed better detection of individual syllables embedded in speech streams. These results suggest that speech perception facilitates processing of individual sounds, but not the ability to extract patterns across sounds. Our findings suggest that statistical learning is not influenced by the perceived linguistic relevance of sounds, and that it may be conceptualized largely as an automatic, stimulus-driven mechanism.

    Musical instrument familiarity affects statistical learning of tone sequences.

    Most listeners have an implicit understanding of the rules that govern how music unfolds over time. This knowledge is acquired in part through statistical learning, a robust learning mechanism that allows individuals to extract regularities from the environment. However, it is presently unclear how this prior musical knowledge might facilitate or interfere with the learning of novel tone sequences that do not conform to familiar musical rules. In the present experiment, participants listened to novel, statistically structured tone sequences composed of pitch intervals not typically found in Western music. Between participants, the tone sequences had the timbre of either artificial, computerized instruments or familiar instruments (piano or violin). Knowledge of the statistical regularities was measured using a two-alternative forced-choice recognition task requiring discrimination between novel sequences that followed versus violated the statistical structure, assessed at three time points (immediately post-training, as well as one day and one week post-training). Compared to artificial instruments, training on familiar instruments resulted in reduced accuracy. Moreover, sequences from familiar instruments, but not from artificial instruments, were more likely to be judged as grammatical when they contained intervals that approximated those commonly used in Western music, even though this cue was non-informative. Overall, these results demonstrate that instrument familiarity can interfere with the learning of novel statistical regularities, presumably through biasing memory representations to be aligned with Western musical structures. These results demonstrate that real-world experience influences statistical learning in a non-linguistic domain, supporting the view that statistical learning involves the continuous updating of existing representations, rather than the establishment of entirely novel ones.

    Neural Measures Reveal Implicit Learning during Language Processing.

    Language input is highly variable; phonological, lexical, and syntactic features vary systematically across different speakers, geographic regions, and social contexts. Previous evidence shows that language users are sensitive to these contextual changes and that they can rapidly adapt to local regularities. For example, listeners quickly adjust to accented speech, facilitating comprehension. It has been proposed that this type of adaptation is a form of implicit learning. This study examined a similar type of adaptation, syntactic adaptation, to address two issues: (1) whether language comprehenders are sensitive to a subtle probabilistic contingency between an extraneous feature (font color) and syntactic structure and (2) whether this sensitivity should be attributed to implicit learning. Participants read a large set of sentences, 40% of which were garden-path sentences containing temporary syntactic ambiguities. Critically, but unbeknownst to participants, font color probabilistically predicted the presence of a garden-path structure, with 75% of garden-path sentences (and 25% of normative sentences) appearing in a given font color. ERPs were recorded during sentence processing. Almost all participants indicated no conscious awareness of the relationship between font color and sentence structure. Nonetheless, after sufficient time to learn this relationship, ERPs time-locked to the point of syntactic ambiguity resolution in garden-path sentences differed significantly as a function of font color. End-of-sentence grammaticality judgments were also influenced by font color, suggesting that a match between font color and sentence structure increased processing fluency. Overall, these findings indicate that participants can implicitly detect subtle co-occurrences between physical features of sentences and abstract, syntactic properties, supporting the notion that implicit learning mechanisms are generally operative during online language processing.
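    The probabilistic contingency described above can be sketched as a stimulus-generation routine. The proportions (40% garden-path sentences; 75% of garden-path and 25% of normative sentences in the cued color) come from the abstract; the color names, list length, and seed are arbitrary.

```python
import random

rng = random.Random(0)
n = 2000  # arbitrary list length, larger than the actual experiment for stable estimates
trials = []
for _ in range(n):
    garden_path = rng.random() < 0.40        # 40% of sentences are garden paths
    p_cue = 0.75 if garden_path else 0.25    # font color probabilistically predicts structure
    color = "green" if rng.random() < p_cue else "red"
    trials.append((garden_path, color))

green = [gp for gp, color in trials if color == "green"]
p_gp_given_green = sum(green) / len(green)

# Analytically: P(garden path | green) = .4*.75 / (.4*.75 + .6*.25) = 2/3,
# versus a base rate of .4, so the color is a valid (if noisy) predictive cue.
assert p_gp_given_green > 0.55
```

    This is the kind of subtle, above-chance contingency that participants tracked without conscious awareness.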

    Phase of Spontaneous Slow Oscillations during Sleep Influences Memory-Related Processing of Auditory Cues.

    Slow oscillations during slow-wave sleep (SWS) may facilitate memory consolidation by regulating interactions between hippocampal and cortical networks. Slow oscillations appear as high-amplitude, synchronized EEG activity, corresponding to upstates of neuronal depolarization and downstates of hyperpolarization. Memory reactivations occur spontaneously during SWS, and can also be induced by presenting learning-related cues associated with a prior learning episode during sleep. This technique, targeted memory reactivation (TMR), selectively enhances memory consolidation. Given that memory reactivation is thought to occur preferentially during the slow-oscillation upstate, we hypothesized that TMR stimulation effects would depend on the phase of the slow oscillation. Participants learned arbitrary spatial locations for objects that were each paired with a characteristic sound (e.g., cat-meow). Then, during SWS periods of an afternoon nap, one-half of the sounds were presented at low intensity. When object location memory was subsequently tested, recall accuracy was significantly better for those objects cued during sleep. We report here for the first time that this memory benefit was predicted by slow-wave phase at the time of stimulation. For cued objects, location memories were categorized according to amount of forgetting from pre- to post-nap. Conditions of high versus low forgetting corresponded to stimulation timing at different slow-oscillation phases, suggesting that learning-related stimuli were more likely to be processed and trigger memory reactivation when they occurred at the optimal phase of a slow oscillation. These findings provide insight into mechanisms of memory reactivation during sleep, supporting the idea that reactivation is most likely during cortical upstates. SIGNIFICANCE STATEMENT: Slow-wave sleep (SWS) is characterized by synchronized neural activity alternating between active upstates and quiet downstates. The slow-oscillation upstates are thought to provide a window of opportunity for memory consolidation, particularly conducive to cortical plasticity. Recent evidence shows that sensory cues associated with previous learning can be delivered subtly during SWS to selectively enhance memory consolidation. Our results demonstrate that this behavioral benefit is predicted by slow-oscillation phase at stimulus presentation time. Cues associated with high versus low forgetting based on analysis of subsequent recall performance were delivered at opposite slow-oscillation phases. These results provide evidence of an optimal slow-oscillation phase for memory consolidation during sleep, supporting the idea that memory processing occurs preferentially during cortical upstates.
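    The phase-dependence analysis can be illustrated with a minimal sketch. Here a slow oscillation is simulated as a noisy ~0.8 Hz sinusoid (the frequency, sampling rate, noise level, and cue timing are all assumed; the paper's EEG preprocessing is not reproduced), and the phase at the moment of a hypothetical TMR cue is estimated by projecting a short window around the cue onto cosine and sine components at the slow-oscillation frequency, a simple stand-in for Hilbert-transform phase estimation.

```python
import math
import random

fs = 100.0    # sampling rate in Hz (assumed for illustration)
so_hz = 0.8   # slow-oscillation frequency, within the ~0.5-1 Hz SWS range
n = int(fs * 30)
rng = random.Random(1)

# Simulated SWS trace: slow oscillation plus background noise.
eeg = [math.cos(2 * math.pi * so_hz * i / fs) + 0.3 * rng.gauss(0, 1) for i in range(n)]

def true_phase_at(i):
    """Ground-truth phase of the simulated oscillation at sample i."""
    return (2 * math.pi * so_hz * i / fs) % (2 * math.pi)

def phase_at(x, i0, cycles=2):
    """Estimate oscillatory phase at sample i0 by projecting a window
    centred on i0 onto cosine and sine at the slow-oscillation frequency."""
    half = int(cycles * fs / so_hz / 2)
    c = s = 0.0
    for i in range(max(0, i0 - half), min(len(x), i0 + half)):
        ang = 2 * math.pi * so_hz * (i - i0) / fs
        c += x[i] * math.cos(ang)
        s += x[i] * math.sin(ang)
    return math.atan2(-s, c) % (2 * math.pi)

cue = 1500  # sample index of a hypothetical TMR cue
est = phase_at(eeg, cue)
err = abs(est - true_phase_at(cue))
err = min(err, 2 * math.pi - err)  # circular distance
assert err < 0.3  # estimate lands close to the ground-truth phase
```

    Binning cue presentations by the phase estimated this way is, conceptually, how stimulation at upstates can be contrasted with stimulation at downstates.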