
    What is a Rhythm for the Brain? The Impact of Contextual Temporal Variability on Auditory Perception

    Temporal predictions can be formed and impact perception when sensory timing is fully predictable: for instance, the discrimination of a target sound is enhanced if it is presented on the beat of an isochronous rhythm. However, natural sensory stimuli, like speech or music, are not entirely predictable, but they still possess statistical temporal regularities. We investigated whether temporal expectations can be formed in non-fully-predictable contexts and how the temporal variability of sensory contexts affects auditory perception. Specifically, we asked how “rhythmic” an auditory stimulation needs to be for temporal prediction effects on auditory discrimination performance to be observed. In this behavioral auditory oddball experiment, participants listened to sound sequences in which the temporal interval between successive sounds was drawn from Gaussian distributions with distinct standard deviations. Participants were asked to discriminate sounds with a deviant pitch in the sequences. Auditory discrimination performance, as measured by deviant-sound discrimination accuracy and response times, progressively declined as the temporal variability of the sound sequence increased. Moreover, both global and local temporal statistics impacted auditory perception, suggesting that temporal statistics are promptly integrated to optimize perception. Altogether, these results suggest that temporal predictions can be set up quickly based on the temporal statistics of past sensory events and are robust to a certain amount of temporal variability. Temporal predictions can therefore be built on sensory stimulations that are neither purely periodic nor temporally deterministic.
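
    As a rough sketch of how such sound sequences could be generated (the mean interval, standard deviations, deviant probability, and tone count below are illustrative assumptions, not values from the study):

        import numpy as np

        rng = np.random.default_rng(seed=1)

        def make_oddball_sequence(n_tones=100, mean_ioi=0.8, sd_ioi=0.1,
                                  deviant_prob=0.15, min_ioi=0.05):
            """Tone onsets with Gaussian inter-onset intervals (IOIs); a random
            subset of tones is flagged as pitch deviants. sd_ioi sets the temporal
            variability: sd_ioi = 0 gives a fully isochronous (periodic) stream."""
            iois = rng.normal(mean_ioi, sd_ioi, size=n_tones - 1)
            iois = np.clip(iois, min_ioi, None)  # no zero or negative intervals
            onsets = np.concatenate(([0.0], np.cumsum(iois)))
            is_deviant = rng.random(n_tones) < deviant_prob
            return onsets, is_deviant

        # One sequence per variability level (standard deviation of the IOIs)
        for sd in (0.0, 0.05, 0.1, 0.2):
            onsets, deviants = make_oddball_sequence(sd_ioi=sd)
            print(f"sd={sd:.2f}s -> {deviants.sum()} deviants, "
                  f"empirical IOI sd={np.diff(onsets).std():.3f}s")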

    Neural Dynamics: Speech is Special

    Human speech is undoubtedly a particularly relevant acoustic stimulus for our species, due to its role in transmitting information during communication. Previous research has aimed to map speech processing onto macro-anatomical brain structures specifically dedicated to this purpose, an endeavour that turned out to be challenging. A more recent line of research addresses the fact that speech is a dynamic signal and instead focuses on similarly dynamic neural activity. Here we focus on the question of whether and how speech is special for such neural dynamics. We review findings suggesting that this is indeed the case: first, properties of human speech are strikingly similar to those found in neural dynamics, indicating that the two co-evolved; second, the neural dynamics produced by human speech have distinct, characteristic properties and are not merely an amplified version of those observed during other, non-speech sounds. Although open questions remain, speech does seem special for neural dynamics in humans.

    Temporal Structure in Audiovisual Sensory Selection

    In natural environments, sensory information is embedded in temporally contiguous streams of events. This is typically the case when seeing and listening to a speaker or when engaged in scene analysis. In such contexts, two mechanisms are needed to single out and build a reliable representation of an event (or object): the temporal parsing of information and the selection of relevant information in the stream. It has previously been shown that rhythmic events naturally build temporal expectations that improve sensory processing at predictable points in time. Here, we asked to what extent temporal regularities can improve the detection and identification of events across sensory modalities. To do so, we used a dynamic visual conjunction search task accompanied by auditory cues that were either synchronized or desynchronized with the color change of the target (a horizontal or vertical bar). Sounds synchronized with the visual target improved search efficiency for temporal rates below 1.4 Hz but did not affect efficiency above that stimulation rate. Desynchronized auditory cues consistently impaired visual search below 3.3 Hz. We interpret our results in the context of the Dynamic Attending Theory: specifically, we suggest that a cognitive operation structures events in time irrespective of the sensory modality of input. Our results further support and specify recent neurophysiological findings by showing strong temporal selectivity for audiovisual integration.
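
    A minimal sketch of how synchronized (AVc) and desynchronized (AVi) cue timings could be constructed for a given temporal rate; the trial duration and jitter range are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(seed=2)

        def cue_times(rate_hz=1.4, duration=10.0, synchronized=True):
            """Visual target color changes occur at rate_hz; AVc sounds coincide
            with the changes, AVi sounds are shifted by a random fraction of the
            period so they carry no information about the target's timing."""
            period = 1.0 / rate_hz
            changes = np.arange(period, duration, period)
            if synchronized:
                return changes, changes           # AVc: sounds on the changes
            jitter = rng.uniform(0.2, 0.8, size=changes.size) * period
            return changes, changes + jitter      # AVi: desynchronized sounds

        visual, sounds = cue_times(rate_hz=1.4, synchronized=False)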

    Main effect of temporal rates collapsed across all set sizes on reaction times and identification rate.

    Mean response times (A) and detection rates (B) per condition (V: crosses, AVc: filled circles, AVi: open circles). Error bars denote two SEM. A sound synchronized with the visual target color change speeds up RTs at all temporal rates (Fig. 3a) but improves target detection only below 1.4 Hz (Fig. 3b). The level of significance between the AV (AVc and AVi) and V conditions is reported as follows: *p<0.05; **p<0.01; ***p<0.001; #p<0.055.

    Effect of number of distracters on RT and identification rate collapsed over all temporal rates.

    Mean response times (A) and detection rates (B) per condition and per subject as a function of set size. Error bars denote two SEM. A significant interaction was found between display condition and set size for RTs: the slope of the curve RTs = f(set size) was significantly lower in condition AVc than in condition AVi. The number of distracters thus affected visual search less when a sound was synchronized with the target color change than in the absence of sound (V) or in the desynchronized condition (AVi).
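
    For illustration, the search slope above is the linear coefficient of RT regressed on set size, estimated separately per display condition; the set sizes, slopes, and noise level below are simulated, not taken from the study:

        import numpy as np

        rng = np.random.default_rng(seed=3)

        # Simulated trials: RT grows with set size; the slope (search cost per
        # distracter) is what differs between display conditions.
        set_sizes = np.repeat([4, 8, 16, 24], 50)
        true_slope = {"V": 0.040, "AVc": 0.015, "AVi": 0.045}  # s/item (made up)

        def search_slope(condition):
            rt = (0.6 + true_slope[condition] * set_sizes
                  + rng.normal(0, 0.1, size=set_sizes.size))
            slope, _intercept = np.polyfit(set_sizes, rt, deg=1)
            return slope

        for cond in ("V", "AVc", "AVi"):
            print(f"{cond}: slope = {search_slope(cond) * 1000:.1f} ms/item")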

    Hysteresis in Audiovisual Synchrony Perception

    The effect of stimulation history on the perception of a current event can yield two opposite effects, namely adaptation or hysteresis: the perception of the current event goes in the opposite or in the same direction as prior stimulation, respectively. In audiovisual (AV) synchrony perception, adaptation effects have primarily been reported. Here, we tested whether perceptual hysteresis could also be observed, beyond adaptation, in AV timing perception by varying different experimental conditions. Participants were asked to judge the synchrony of the last (test) stimulus of an AV sequence with either constant or gradually changing AV intervals (constant and dynamic conditions, respectively). The onset timing of the test stimulus could be cued or not (prospective vs. retrospective conditions, respectively). We observed hysteretic effects for AV synchrony judgments in the retrospective condition that were independent of the constant or dynamic nature of the adapted stimuli; these effects disappeared in the prospective condition. The present findings suggest that knowing when to estimate a stimulus property has a crucial impact on perceptual simultaneity judgments. Our results extend beyond AV timing perception and have strong implications for the comparative study of hysteresis and adaptation phenomena.
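
    A minimal sketch of the two adapting-sequence types described above (constant vs. gradually changing AV intervals); the sequence length and SOA values are illustrative assumptions:

        import numpy as np

        def av_interval_sequence(n=10, start_soa=0.2, end_soa=0.0, dynamic=False):
            """Audiovisual asynchronies (SOAs, in s) for the adapting sequence.
            Constant condition: every AV pair has the same SOA. Dynamic condition:
            the SOA changes gradually from start_soa to end_soa. The last element
            is the test stimulus whose synchrony the participant judges."""
            if dynamic:
                return np.linspace(start_soa, end_soa, n)
            return np.full(n, start_soa)

        constant_seq = av_interval_sequence(dynamic=False)
        dynamic_seq = av_interval_sequence(dynamic=True)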

    Experimental Paradigm.

    Each trial started with a fixation point lasting between 1 and 4 seconds (randomized across trials). In the audiovisual conditions (AVc, AVi), two or three sounds were presented before the visual display in order to avoid a surprise effect at the onset of the first sound. This was followed by the visual display, with or without a sound (AVc and AVi, or V, respectively). Participants were asked to find a horizontal or a vertical bar in the visual display while maintaining their gaze on the fixation point at all times. They were asked to answer as quickly and as accurately as possible by pressing the space bar on the keyboard. One trial lasted a maximum of 10 seconds, during which the participant was expected to detect the target. After detection, participants were asked to identify the orientation of the detected target (vertical or horizontal). If a participant had not detected the target, they were nevertheless asked to make a guess. This design therefore allowed us to quantify two dependent variables: reaction times (RTs, with a 10 s limit on detection) and identification rate. In subsequent analyses, trials in which the target was not detected within 10 s were discarded from the RT analysis. The experiment was run in 3 pseudo-randomized blocks corresponding to the display conditions (V, AVc, and AVi).
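
    A rough sketch of one trial's timeline under the description above; the inter-sound interval for the lead-in sounds is an assumption (in the experiment it would follow the block's temporal rate):

        import numpy as np

        rng = np.random.default_rng(seed=4)

        def trial_timeline(audiovisual=True, lead_in_interval=0.7):
            """Event times (s) for one trial: 1-4 s fixation, then (in AV
            conditions) 2 or 3 lead-in sounds, then the search display with a
            10 s response deadline."""
            t = rng.uniform(1.0, 4.0)                 # randomized fixation
            events = [("fixation_off", round(t, 3))]
            if audiovisual:
                for _ in range(rng.integers(2, 4)):   # 2 or 3 lead-in sounds
                    events.append(("lead_in_sound", round(t, 3)))
                    t += lead_in_interval
            events.append(("display_on", round(t, 3)))
            events.append(("response_deadline", round(t + 10.0, 3)))
            return events

        print(trial_timeline(audiovisual=True))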

    Summary of linear mixed regression analyses.

    Regression model minimization used the Akaike Information Criterion (AIC) and likelihood ratios. Three factors were analyzed: display condition (3 levels), set size (continuous factor), and temporal rate (7 discrete levels), plus one random effect (participants, N = 24). Six models of increasing complexity were tested to explain the data: model (1): the effect of display condition; model (2): model 1 + set size; model (3): model 2 + temporal rate; model (4): model 3 + display condition × set size; model (5): model 4 + display condition × temporal rate; model (6): model 5 + set size × temporal rate. Bold marks the variables significantly contributing to the model estimate.
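
    As an illustration, this kind of nested model comparison can be run with a mixed-effects package; the sketch below uses Python's statsmodels with hypothetical column names (rt, condition, set_size, rate, participant) and an assumed data file:

        import pandas as pd
        import statsmodels.formula.api as smf
        from scipy import stats

        df = pd.read_csv("search_data.csv")  # hypothetical per-trial data

        # Models 1-6 in increasing order of complexity, as in the table
        formulas = [
            "rt ~ condition",                                        # (1)
            "rt ~ condition + set_size",                             # (2)
            "rt ~ condition + set_size + C(rate)",                   # (3)
            "rt ~ condition * set_size + C(rate)",                   # (4)
            "rt ~ condition * set_size + condition * C(rate)",       # (5)
            "rt ~ condition * set_size + condition * C(rate)"
            " + set_size * C(rate)",                                 # (6)
        ]

        # Fit by maximum likelihood (reml=False) so AICs are comparable
        fits = [smf.mixedlm(f, df, groups=df["participant"]).fit(reml=False)
                for f in formulas]
        for f, m in zip(formulas, fits):
            print(f"AIC = {m.aic:9.1f}   {f}")

        # Likelihood-ratio test between two nested models, e.g. (2) vs (1)
        lr = 2 * (fits[1].llf - fits[0].llf)
        dof = len(fits[1].params) - len(fits[0].params)
        print("LR test p =", stats.chi2.sf(lr, dof))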

    Bilateral Gamma/Delta Transcranial Alternating Current Stimulation Affects Interhemispheric Speech Sound Integration

    Perceiving speech requires the integration of different speech cues, that is, formants. When the speech signal is split so that different cues are presented to the right and left ears (dichotic listening), comprehension requires the integration of binaural information. Based on prior electrophysiological evidence, we hypothesized that the integration of dichotically presented speech cues is enabled by interhemispheric phase synchronization between primary and secondary auditory cortex in the gamma frequency band. We tested this hypothesis by applying transcranial alternating current stimulation (TACS) bilaterally above the superior temporal lobe to induce or disrupt interhemispheric gamma-phase coupling. In contrast to our initial predictions, we found that gamma TACS applied in-phase above the two hemispheres (interhemispheric lag 0°) perturbs the interhemispheric integration of speech cues, possibly because the applied stimulation disturbs an inherent phase lag between the left and right auditory cortex. We also observed this disruptive effect when applying antiphasic delta TACS (interhemispheric lag 180°). We conclude that interhemispheric phase coupling plays a functional role in interhemispheric speech integration. The direction of this effect may depend on the stimulation frequency.
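
    For intuition, the in-phase (0°) and antiphasic (180°) conditions differ only in the phase offset between the left and right electrode waveforms; the frequencies below are illustrative assumptions (gamma ≈ 40 Hz, delta ≈ 2 Hz):

        import numpy as np

        def tacs_waveforms(freq_hz, phase_lag_deg, duration=1.0, fs=10_000):
            """Left/right stimulation currents at the same frequency with a
            fixed interhemispheric phase lag (0 = in-phase, 180 = antiphase)."""
            t = np.arange(0, duration, 1 / fs)
            left = np.sin(2 * np.pi * freq_hz * t)
            right = np.sin(2 * np.pi * freq_hz * t + np.deg2rad(phase_lag_deg))
            return t, left, right

        t, l, r = tacs_waveforms(freq_hz=40, phase_lag_deg=0)    # in-phase gamma
        t, l, r = tacs_waveforms(freq_hz=2, phase_lag_deg=180)   # antiphasic delta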

    Neural tracking of speech envelope does not unequivocally reflect intelligibility

    During listening, brain activity tracks the rhythmic structure of speech signals. Here, we directly dissociated the contribution of neural envelope tracking to the processing of speech acoustic cues from that related to linguistic processing. We examined the neural changes associated with the comprehension of noise-vocoded (NV) speech using magnetoencephalography (MEG). Participants listened to NV sentences in a 3-phase training paradigm: (1) pre-training, where the NV stimuli were barely comprehended; (2) training, with exposure to the original, clear version of each speech stimulus; and (3) post-training, where the same stimuli had gained intelligibility from the training phase. Using this paradigm, we tested whether the neural response to a speech signal was modulated by its intelligibility without any change in its acoustic structure. To test the influence of spectral degradation on neural envelope tracking independently of training, participants listened to two types of NV sentences (4-band and 2-band NV speech) but were only trained to understand 4-band NV speech. Significant changes in neural tracking were observed in the delta range in relation to the acoustic degradation of speech. However, we failed to find a direct effect of intelligibility on the neural tracking of the speech envelope in either the theta or the delta range, in both auditory region-of-interest and whole-brain sensor-space analyses. This suggests that acoustics strongly influence the neural tracking response to the speech envelope, and that caution is needed when choosing control signals for speech-brain tracking analyses, since a slight change in acoustic parameters can have strong effects on the neural tracking response.
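
    As a simplified proxy for this kind of analysis (published studies typically use coherence or temporal response functions rather than a plain correlation), one can band-pass both the MEG signal and the speech amplitude envelope and correlate them; the sampling rate, band edges, and random signals below are illustrative:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def band_envelope_correlation(meg, audio, fs, band=(1, 4)):
            """Correlate the band-passed MEG signal with the band-passed
            amplitude envelope of the audio. band=(1, 4) ~ delta,
            band=(4, 8) ~ theta."""
            b, a = butter(3, np.array(band) / (fs / 2), btype="band")
            envelope = np.abs(hilbert(audio))     # broadband amplitude envelope
            return np.corrcoef(filtfilt(b, a, meg),
                               filtfilt(b, a, envelope))[0, 1]

        fs = 200  # assumed common sampling rate after resampling
        rng = np.random.default_rng(seed=5)
        meg = rng.standard_normal(fs * 10)
        audio = rng.standard_normal(fs * 10)
        print(band_envelope_correlation(meg, audio, fs, band=(1, 4)))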