
    Selective Attention and Audiovisual Integration: Is Attending to Both Modalities a Prerequisite for Early Integration?

    Interactions between multisensory integration and attention were studied using a combined audiovisual streaming design and a rapid serial visual presentation paradigm. Event-related potentials (ERPs) following audiovisual objects (AV) were compared with the sum of the ERPs following auditory (A) and visual objects (V). Integration processes were expressed as the difference between these AV and (A + V) responses and were studied while attention was directed to one or both modalities or directed elsewhere. Results show that multisensory integration effects depend on the multisensory objects being fully attended, that is, when both the visual and auditory senses were attended. In this condition, a superadditive audiovisual integration effect was observed on the P50 component. When unattended, this effect was reversed: the P50 components of multisensory ERPs were smaller than the unisensory sum. Additionally, we found an enhanced late frontal negativity when subjects attended the visual component of a multisensory object. This effect, bearing a strong resemblance to the auditory processing negativity, appeared to reflect late attention-related processing that had spread to encompass the auditory component of the multisensory object. In conclusion, our results shed new light on how the brain processes multisensory auditory and visual information, including how attention modulates multisensory integration processes.
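    As a rough illustration of the additive-model contrast described above, the sketch below computes the AV - (A + V) difference wave and a mean amplitude in a nominal P50 window. The sampling rate, channel count, epoch limits, and window bounds are assumptions for illustration, not parameters reported in the study.

```python
# Minimal sketch of the additive-model contrast:
# integration effect = ERP(AV) - [ERP(A) + ERP(V)].
import numpy as np

fs = 500                                  # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch from -100 to 500 ms (assumed)

# Placeholder condition averages (channels x samples); real data would be
# averaged EEG epochs per stimulus type.
rng = np.random.default_rng(0)
erp_av = rng.normal(size=(32, t.size))
erp_a = rng.normal(size=(32, t.size))
erp_v = rng.normal(size=(32, t.size))

# Difference wave: positive values indicate superadditive integration.
integration = erp_av - (erp_a + erp_v)

# Mean amplitude in a nominal P50 window (40-60 ms post-stimulus, assumed).
p50_mask = (t >= 0.040) & (t <= 0.060)
p50_amplitude = integration[:, p50_mask].mean(axis=1)
print(p50_amplitude.shape)                # one value per channel
```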

    The mutual influence of gaze and head orientation in the analysis of social attention direction

    Three experiments are reported that investigate the hypothesis that head orientation and gaze direction interact in the processing of another individual's direction of social attention. A Stroop-type interference paradigm was adopted, in which gaze and head cues were placed into conflict. In separate blocks of trials, participants were asked to make speeded keypress responses contingent on either the direction of gaze, or the orientation of the head displayed in a digitized photograph of a male face. In Experiments 1 and 2, head and gaze cues showed symmetrical interference effects. Compared with congruent arrangements, incongruent head cues slowed responses to gaze cues, and incongruent gaze cues slowed responses to head cues, suggesting that head and gaze are mutually influential in the analysis of social attention direction. This mutuality was also evident in a cross-modal version of the task (Experiment 3) where participants responded to spoken directional words whilst ignoring the head/gaze images. It is argued that these interference effects arise from the independent influences of gaze and head orientation on decisions concerning social attention direction.
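    A minimal sketch of the interference measure this design implies: mean RT on incongruent trials minus mean RT on congruent trials, computed separately for the respond-to-gaze and respond-to-head blocks. The DataFrame, column names, and RT values are illustrative assumptions, not reported data.

```python
# Interference effect per response block: incongruent RT - congruent RT.
import pandas as pd

trials = pd.DataFrame({
    "block": ["gaze", "gaze", "head", "head"],       # which cue is task-relevant
    "congruency": ["congruent", "incongruent"] * 2,  # head/gaze agree or conflict
    "rt_ms": [452.0, 487.0, 449.0, 483.0],           # made-up condition means
})

means = trials.pivot_table(index="block", columns="congruency", values="rt_ms")
interference = means["incongruent"] - means["congruent"]
print(interference)  # similar values in both blocks -> symmetrical interference
```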

    Audio-visual synchrony and spatial attention enhance processing of dynamic visual stimulation independently and in parallel: A frequency-tagging study

    The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. Gabor patches further “pulsed” (i.e. showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse-rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed us to track the neural processing of simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when respective stimulus positions were attended vs. unattended. Strikingly, unique patterns of effects on pulse- and flicker-driven SSRs indicated that spatial attention and audio-visual synchrony facilitated early visual processing in parallel and via different cortical processes. We found attention effects to resemble the classical top-down gain effect, facilitating both flicker- and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e. pulse-driven SSRs), possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration.
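    The frequency-tagging readout can be sketched as follows: take an amplitude spectrum of the trial-averaged signal and read out amplitude at each tagged flicker and pulse frequency. The sampling rate, epoch length, and synthetic signal below are assumptions for illustration; tagged frequencies need not fall on exact FFT bins, so the nearest bin is used.

```python
# Quantifying steady-state responses (SSRs) at tagged frequencies via FFT.
import numpy as np

fs = 512                      # sampling rate in Hz (assumed)
duration = 3.0                # epoch length in seconds (assumed)
t = np.arange(0, duration, 1 / fs)

# Placeholder trial-averaged signal containing the four tagged rhythms.
signal = (np.sin(2 * np.pi * 14.17 * t) + np.sin(2 * np.pi * 17.0 * t)
          + np.sin(2 * np.pi * 3.14 * t) + np.sin(2 * np.pi * 3.63 * t))

# Amplitude spectrum; scaling gives ~1.0 for a unit sinusoid on an exact bin.
spectrum = np.abs(np.fft.rfft(signal)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Read out the SSR amplitude at each tagged frequency (nearest FFT bin).
for f in (14.17, 17.0, 3.14, 3.63):
    bin_idx = np.argmin(np.abs(freqs - f))
    print(f"{f:6.2f} Hz -> amplitude {spectrum[bin_idx]:.2f}")
```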

    Consonant and dissonant music chords improve visual attention capture

    Recent research has suggested that music may enhance or reduce cognitive interference, depending on whether it is tonally consonant or dissonant. Tonal consonance is often described as being pleasant and agreeable, while tonal dissonance is often described as being unpleasant and harsh. However, the exact cognitive mechanisms underlying these effects remain unclear. We hypothesize that tonal dissonance may increase cognitive interference through its effects on attentional cueing. We predict that (a) consonant musical chords are attentionally demanding, but (b) dissonant musical chords are more attentionally demanding than consonant musical chords. Using a Posner cueing task, a standard measure of attention capture, we measured the differential effects of consonant chords, dissonant chords, and no music on attentional cueing. Musical chords were presented binaurally at the same time as a visual cue which correctly predicted the spatial location of a subsequent target in 80% of trials. As in previous studies, valid cues led to faster response times (RTs) compared to invalid cues; however, contrary to our predictions, both consonant and dissonant music chords produced faster RTs compared to the no-music condition. Although inconsistent with our hypotheses, these results support previous research on cross-modal cueing, which suggests that non-predictive auditory cues enhance the effectiveness of visual cues. Our study further demonstrates that this effect is not influenced by auditory qualities such as tonal consonance and dissonance, suggesting that previously reported cognitive interference effects for tonal dissonance may depend on high-level changes in mood and arousal.
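    A hedged sketch of the cueing analysis implied above: the validity effect (invalid-cue RT minus valid-cue RT) computed within each sound condition. All numbers, names, and the grouping structure are illustrative assumptions, not the study's data.

```python
# Posner-task validity effect per sound condition.
import pandas as pd

rt = pd.DataFrame({
    "sound": ["none", "none", "consonant", "consonant",
              "dissonant", "dissonant"],
    "cue": ["valid", "invalid"] * 3,
    "rt_ms": [410.0, 445.0, 395.0, 430.0, 396.0, 432.0],  # made-up means
})

means = rt.pivot_table(index="sound", columns="cue", values="rt_ms")
means["validity_effect"] = means["invalid"] - means["valid"]
print(means)  # faster overall RTs with chords, similar validity effects
```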

    Got the gist? The effects of visually evoked expectations and cross-modal stimulation on the rapid processing of real-world scenes

    Scene meaning is processed rapidly, with ‘gist’ extracted even when presentation duration spans a few dozen milliseconds. This has led some to suggest a primacy of bottom-up visual information. However, gist research has typically relied on showing successions of unrelated scene images, contrary to our everyday experience of a multisensory world unfolding around us in a predictable manner. To address this lack of ecological validity, Study 1 investigated whether top-down information – in the form of observers’ predictions of an upcoming scene – facilitates gist processing. Participants (N=336) experienced a series of images, organised to represent an approach to a destination (e.g., walking down a sidewalk), followed by a final target scene either congruous or incongruous with the expected destination (e.g., a store interior or a bedroom). A series of behavioural experiments revealed that (i) appropriate expectations facilitated gist processing, (ii) inappropriate expectations interfered with gist processing, (iii) the effect of congruency was driven by provision of contextual information rather than the thematic coherence of approach images, and (iv) expectation-based facilitation was most apparent when destination duration was most curtailed. We then investigated the neural correlates of predictability on scene processing using ERPs (N=26). Congruency-related differences were found in a putative scene-selective ERP component related to integrating visual properties (P2), and in later components related to contextual integration, including semantic and syntactic coherence (N400 and P600, respectively). Study 2 (N=206) then investigated the influence of simultaneous auditory information on gist processing, across two eye-tracking experiments. Search performance as a function of target sound congruency was measured using a flash-preview moving window paradigm. This revealed that a cross-modal congruency effect did exist. Taken together, these results suggest that in real-world situations, both prior expectations and simultaneous cross-modal information influence the earliest stages of scene processing, affecting the integration of visual properties and meaning. Keywords: scene processing, gist, top-down information, event-related potentials, audio-visual processing, eye tracking
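    As one way to picture the ERP congruency contrast from Study 1, the sketch below takes mean amplitudes in nominal P2, N400, and P600 windows for congruent versus incongruent target scenes. The window bounds, sampling rate, and placeholder grand averages are illustrative assumptions.

```python
# Congruency difference in mean amplitude per ERP component window.
import numpy as np

fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(-0.2, 1.0, 1 / fs)          # epoch from -200 to 1000 ms (assumed)
rng = np.random.default_rng(1)
erp_congruent = rng.normal(size=t.size)   # placeholder grand averages
erp_incongruent = rng.normal(size=t.size)

# Nominal component windows in seconds (assumed, not reported values).
windows = {"P2": (0.15, 0.25), "N400": (0.30, 0.50), "P600": (0.50, 0.80)}
for name, (lo, hi) in windows.items():
    mask = (t >= lo) & (t <= hi)
    diff = erp_incongruent[mask].mean() - erp_congruent[mask].mean()
    print(f"{name}: congruency difference = {diff:.3f}")
```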

    A transition from unimodal to multimodal activations in four sensory modalities in humans: an electrophysiological study

    Background: To investigate the long-latency activities common to all sensory modalities, electroencephalographic responses to auditory (1000 Hz pure tone), tactile (electrical stimulation to the index finger), visual (simple figure of a star), and noxious (intra-epidermal electrical stimulation to the dorsum of the hand) stimuli were recorded from 27 scalp electrodes in 14 healthy volunteers. Results: Source modeling showed multimodal activations in the anterior part of the cingulate cortex (ACC) and the hippocampal region (Hip). The activity in the ACC was biphasic. In all sensory modalities, the first component of ACC activity peaked 30–56 ms later than the peak of the major modality-specific activity, the second component of ACC activity peaked 117–145 ms later than the peak of the first component, and the activity in Hip peaked 43–77 ms later than the second component of ACC activity. Conclusion: The temporal sequence of activations through modality-specific and multimodal pathways was similar among all sensory modalities.

    The Advantage of Ambiguity? Enhanced Neural Responses to Multi-Stable Percepts Correlate with the Degree of Perceived Instability

    Artwork can often pique the interest of the viewer or listener as a result of the ambiguity or instability contained within it. Our engagement with uncertain sensory experiences might have its origins in early cortical responses, in that perceptually unstable stimuli might preclude neural habituation and maintain activity in early sensory areas. To assess this idea, participants engaged with an ambiguous visual stimulus in which two squares alternated with one another across simultaneously opposing vertical and horizontal locations relative to fixation (i.e., stroboscopic alternating motion; von Schiller, 1933). On each trial, participants were invited to interpret the movement of the squares in one of five ways: traditional vertical or horizontal motion, novel clockwise or counter-clockwise motion, and a free-view condition in which participants were encouraged to switch the direction of motion as often as possible. Behavioral reports of perceptual stability showed clockwise and counter-clockwise motion to possess an intermediate level of stability, between the relatively stable vertical and horizontal motion and the relatively unstable motion perceived during free-view conditions. Early visual evoked components recorded at parietal–occipital sites, such as the C1, P1, and N1, were modulated as a function of visual intention. Both at a group and individual level, increased perceptual instability was related to increased negativity in all three of these early visual neural responses. Engagement with increasingly ambiguous input may partly result from an exaggerated underlying neural response to it. The study underscores the utility of combining neuroelectric recording with the presentation of perceptually multi-stable yet physically identical stimuli in revealing brain activity associated with the purely internal process of interpreting and appreciating the sensory world that surrounds us.

    Psychologie und Gehirn 2007

    The conference "Psychologie und Gehirn" is a long-standing meeting in the field of basic psychophysiological research. In 2007 the event, the 33rd annual meeting of the Deutsche Gesellschaft für Psychophysiologie und ihre Anwendungen (DGPA), was held in Dortmund under the auspices of the Institut für Arbeitsphysiologie (IfADo). Alongside basic research, translation into application is a declared goal of the DGPA, and in keeping with this tradition, contributions from many areas of modern neuroscience (electrophysiology, neuroimaging methods, peripheral physiology, neuroendocrinology, behavioral genetics, and others) were presented and are collected here in abstract form.

    Syntax through the looking glass: A review on two-word linguistic processing across behavioral, neuroimaging and neurostimulation studies

    In recent years a growing number of studies on syntactic processing have employed basic two-word constructions (e.g., “the tree”) to characterize the fundamental aspects of linguistic composition. This large body of evidence makes it possible, for the first time, to examine closely which cognitive processes and neural substrates support the combination of two syntactic units into a more complex one, mirroring the nature of combinatory operations described in theoretical linguistics. The present review comprehensively examines behavioural, neuroimaging and neurostimulation studies investigating basic syntactic composition, covering more than forty years of psycho- and neuro-linguistic research. Across several paradigms, four key features of syntactic composition have emerged: (1) the rule-based and (2) automatic nature of the combinatorial process, (3) a central role of Broca’s area and the posterior temporal lobe in representing and combining syntactic features, and (4) a reliance on efficient bottom-up integration rather than top-down prediction.