14 research outputs found

    Separate and concurrent symbolic predictions of sound features are processed differently

    The studies investigated the impact of predictive visual information about the pitch and location of a forthcoming sound on sound processing. In symbol-to-sound matching paradigms, symbols induced predictions of particular sounds. The brain's error signals (the IR and N2b components of the event-related potential) were measured in response to occasional violations of the prediction, i.e., when a sound was incongruent with the corresponding symbol. IR and N2b index the detection of prediction violations at different levels: IR at a sensory level and N2b at a cognitive level. Participants evaluated the congruency between prediction and actual sound by button press. When the prediction referred to only the pitch or only the location feature (Exp. 1), the violation of each feature elicited IR and N2b. The IRs to pitch and location violations differed in time course and topography, suggesting that they were generated in feature-specific sensory areas. When the prediction referred to both features concurrently (Exp. 2), that is, when the symbol predicted the sound's pitch and location, either one or both predictions were violated. Unexpectedly, no significant effects in the IR range were obtained; however, N2b was elicited in response to all violations. N2b in response to concurrent violations of pitch and location had a shorter latency. We conclude that associative predictions can be established by arbitrary rule-based symbols and for different sound features, and that concurrent violations are processed in parallel. In complex situations such as Exp. 2, capacity limitations appear to affect processing in a hierarchical manner. While predictions were presumably not reliably established at sensory levels (absence of IR), they were established at more cognitive levels, where sounds are represented categorically (presence of N2b).


    Grand-averaged, unfiltered ERPs of the complete trial (nose referenced).

    Cue and tone onset are marked for all four trial categories (green: frequent cue before frequent tone, STA; blue: rare cue before rare tone, A; red: rare cue before frequent tone, V; purple: frequent cue before rare tone, VA). Left: visual ERPs to the cues are best observable at occipital electrodes (here Oz). Right: note that the data were not filtered, in order to clearly show the CNVs, which are influenced by the cue probabilities (more pronounced for the rare cue, types A and V). Tones were presented with an onset 600 ms after the trial started, eliciting auditory ERPs in fronto-central regions. Negative is plotted upwards.

    The Human Brain Maintains Contradictory and Redundant Auditory Sensory Predictions

    Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, what has not yet been addressed is how the auditory system handles situations that give rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). The particular error signals were observed even when the overall probability and the visual symbol predicted different sounds; that is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates functionally independently represented redundant and contradictory predictions. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.

    Illustration of the paradigm and systematic matrix of the four trial categories.

    Cue-sound combinations STA (frequent cue and tone, shown here as high-pitched), A, V, and VA are shown with their different probabilities. At least two standards precede each deviant, as shown for type STA; the relevant tone is colored. The trial starts with the presentation of the high- or low-note symbol (cue), followed by one of the two tones (target) after an SOA of 600 ms. Types V and A represent situations with contradictory auditorily and visually induced predictions, whereas type VA violates redundant predictions from the two modalities.

    RTs (left) and accuracy data (right).

    Tested pairs with their significance levels are tagged (STA: frequent cue before frequent tone vs. A: rare cue before rare tone; V: rare cue before frequent tone vs. VA: frequent cue before rare tone).