70 research outputs found

    Audiovisual temporal correspondence modulates human multisensory superior temporal sulcus plus primary sensory cortices

    The brain should integrate related but not unrelated information from different senses. Temporal patterning of inputs to different modalities may provide critical information about whether those inputs are related or not. We studied effects of temporal correspondence between auditory and visual streams on human brain activity with functional magnetic resonance imaging (fMRI). Streams of visual flashes with irregularly jittered, arrhythmic timing could appear on the right or left, with or without a stream of auditory tones that coincided perfectly when present (highly unlikely by chance), were noncoincident with vision (a different erratic, arrhythmic pattern with the same temporal statistics), or an auditory stream appeared alone. fMRI revealed blood oxygenation level-dependent (BOLD) increases in multisensory superior temporal sulcus (mSTS), contralateral to a visual stream when coincident with an auditory stream, and BOLD decreases for noncoincidence relative to unisensory baselines. Contralateral primary visual cortex and auditory cortex were also affected by audiovisual temporal correspondence or noncorrespondence, as confirmed in individuals. Connectivity analyses indicated enhanced influence from mSTS on primary sensory areas, rather than vice versa, during audiovisual correspondence. Temporal correspondence between auditory and visual streams thus affects a network of both multisensory (mSTS) and sensory-specific areas in humans, including even primary visual and auditory cortex, with stronger responses for corresponding, and thus related, audiovisual inputs.

    High-field fMRI reveals brain activation patterns underlying saccade execution in the human superior colliculus

    Background The superior colliculus (SC) has been shown to play a crucial role in the initiation and coordination of eye and head movements. Knowledge about the function of this structure is based mainly on single-unit recordings in animals, with relatively few neuroimaging studies investigating eye-movement-related brain activity in humans. Methodology/Principal Findings The present study employed high-field (7 Tesla) functional magnetic resonance imaging (fMRI) to investigate SC responses during endogenously cued saccades in humans. In response to centrally presented instructional cues, subjects either performed saccades away from (centrifugal) or towards (centripetal) the center of straight gaze or maintained fixation at the center position. Compared to central fixation, the execution of saccades elicited hemodynamic activity within a network of cortical and subcortical areas that included the SC, lateral geniculate nucleus (LGN), occipital cortex, striatum, and the pulvinar. Conclusions/Significance Activity in the SC was enhanced contralateral to the direction of the saccade (i.e., greater activity in the right as compared to the left SC during leftward saccades, and vice versa) during both centrifugal and centripetal saccades, thereby demonstrating that the contralateral predominance for saccade execution that has been shown to exist in animals is also present in the human SC. In addition, centrifugal saccades elicited greater activity in the SC than did centripetal saccades, while also being accompanied by an enhanced deactivation within the prefrontal default-mode network. This pattern of brain activity might reflect the reduced processing effort required to move the eyes toward as compared to away from the center of straight gaze, a position that might serve as a spatial baseline in which the retinotopic and craniotopic reference frames are aligned.

    Introduction: Ideology, propaganda, and political discourse in the Xi Jinping era

    The ideology, propaganda, and political discourse of the Communist Party of China (CPC) have continued to function as key elements of the political system of the People's Republic of China (PRC) in the post-Maoist period since 1978. In the first term of the Xi Jinping leadership (2012–2017), the CPC, for instance, elaborated on its guiding ideological concepts, devised inventive ideational framings of phenomena usually perceived as tangible (such as the "New Normal"), engaged in complex intellectual debates on crucial topics (such as "eco-civilization"), intensified and diversified its argumentation patterns and discursive strategies, and consolidated ideational governance over some citizens' individual values, beliefs, and loyalties. Furthermore, it is often no longer possible to differentiate between the CPC's internal and external propaganda, as seemingly exclusively domestic ideational and discursive issues increasingly correlate with international phenomena. However, the trends in the Xi era do not present paradigmatic shifts, but rather an overall reassertion-cum-innovation of previous Maoist and post-Maoist uses of ideology, propaganda, and political discourse, primarily aiming at strengthening one-party rule.

    Investigating human audio-visual object perception with a combination of hypothesis-generating and hypothesis-testing fMRI analysis tools

    Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal, and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.

    Synchronized Audio-Visual Transients Drive Efficient Visual Search for Motion-in-Depth

    In natural audio-visual environments, a change in depth is usually correlated with a change in loudness. In the present study, we investigated whether correlating changes in disparity and loudness would provide a functional advantage in binding disparity and sound amplitude in a visual search paradigm. To test this hypothesis, we used a method similar to that used by van der Burg et al. to show that non-spatial transient (square-wave) modulations of loudness can drastically improve spatial visual search for a correlated luminance modulation. We used dynamic random-dot stereogram displays to produce pure disparity modulations. Target and distractors were small disparity-defined squares (either 6 or 10 in total). Each square moved back and forth in depth in front of the background plane at different phases. The target's depth modulation was synchronized with an amplitude-modulated auditory tone. Visual and auditory modulations were always congruent (both sine-wave or square-wave). In a speeded search task, five observers were asked to identify the target as quickly as possible. Results show a significant improvement in visual search times in the square-wave condition compared to the sine condition, suggesting that transient auditory information can efficiently drive visual search in the disparity domain. In a second experiment, participants performed the same task in the absence of sound and showed a clear set-size effect in both modulation conditions. In a third experiment, we correlated the sound with a distractor instead of the target. This produced longer search times, indicating that the correlation is not easily ignored.
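The congruent sine- versus square-wave modulations contrasted in this study can be sketched as follows. This is a minimal illustration, not the study's actual stimulus code; the 1 Hz modulation frequency, 2 s duration, and 1 kHz sampling rate are assumptions invented for the example.

```python
import numpy as np

def modulation(t, freq_hz, waveform="sine"):
    """Return a modulation signal in [-1, 1] for a time array t (seconds).

    'sine' yields a gradual modulation; 'square' yields the abrupt,
    transient modulation contrasted with it in the experiment.
    """
    phase = np.sin(2 * np.pi * freq_hz * t)
    if waveform == "square":
        return np.sign(phase)  # hard transitions at every half-cycle
    return phase

# Congruent audio-visual pair: the same waveform drives both the target's
# disparity (depth) modulation and the tone's amplitude modulation.
t = np.arange(0.0, 2.0, 0.001)           # 2 s sampled at 1 kHz (illustrative)
depth = modulation(t, 1.0, "square")     # disparity-defined depth of the target
loudness = modulation(t, 1.0, "square")  # synchronized tone amplitude
```

Because the visual and auditory channels share one waveform, their transients coincide exactly, which is the temporal structure the search benefit depends on.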

    Looking to Score: The Dissociation of Goal Influence on Eye Movement and Meta-Attentional Allocation in a Complex Dynamic Natural Scene

    Several studies have reported that task instructions influence eye-movement behavior during static image observation. In contrast, during dynamic scene observation we show that while the specificity of the goal of a task influences observers' beliefs about where they look, the goal does not in turn influence eye-movement patterns. In our study, observers watched short video clips of a single tennis match and were asked to make subjective judgments about the allocation of visual attention to the items presented in the clip (e.g., ball, players, court lines, and umpire). However, before attending to the clips, observers were either told to simply watch the clips (non-specific goal), or they were told to watch the clips with a view to judging which of the two tennis players was awarded the point (specific goal). The results of the subjective reports suggest that observers believed that they allocated their attention more to goal-related items (e.g., court lines) if they performed the goal-specific task. However, we did not find an effect of goal specificity on major eye-movement parameters (i.e., saccadic amplitudes, inter-saccadic intervals, and gaze coherence). We conclude that the specificity of a task goal can alter observers' beliefs about their attention allocation strategy, but such task-driven meta-attentional modulation does not necessarily correlate with eye-movement behavior.

    Timing and Sequence of Brain Activity in Top-Down Control of Visual-Spatial Attention

    Recent brain imaging studies using functional magnetic resonance imaging (fMRI) have implicated a frontal-parietal network in the top-down control of attention. However, little is known about the timing and sequence of activations within this network. To investigate these timing questions, we used event-related potentials (ERPs) and a specially designed visual-spatial attentional-cueing paradigm, applied as part of a multi-methodological approach that included a closely corresponding event-related fMRI study using an identical paradigm. In the first 400 ms post-cue, attention-directing and control cues elicited similar general cue-processing activity, corresponding to the more lateral subregions of the frontal-parietal network identified with the fMRI. Following this, the attention-directing cues elicited a sustained negative-polarity brain wave that was absent for control cues. This activity could be linked to the more medial frontal-parietal subregions similarly identified in the fMRI as specifically involved in attentional orienting. Critically, both the scalp ERPs and the fMRI-seeded source modeling for this orienting-related activity indicated an earlier onset of the frontal versus the parietal contribution (∼400 versus ∼700 ms). This was then followed (∼800–900 ms) by pretarget biasing activity in the region-specific visual-sensory occipital cortex. These results indicate an activation sequence of key components of the attentional-control brain network, providing insight into their functional roles. More specifically, they suggest that voluntary attentional orienting is initiated by medial portions of frontal cortex, which then recruit medial parietal areas. Together, these areas then implement biasing of region-specific visual-sensory cortex to facilitate the processing of upcoming visual stimuli.

    Efficient Visual Search from Synchronized Auditory Signals Requires Transient Audiovisual Events

    BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps) we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
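The "harmonic sine-wave synthesis" manipulation mentioned above can be illustrated by partial Fourier synthesis of a square wave: keeping only the fundamental gives a smooth sinusoid, while adding odd harmonics progressively restores the abrupt transitions. This sketch is an illustration of that general principle, not the study's stimulus code; the frequency and harmonic counts are assumptions chosen for the example.

```python
import numpy as np

def harmonic_square(t, freq_hz, n_harmonics):
    """Partial Fourier synthesis of a square wave from its odd harmonics.

    With a single harmonic the signal is a pure sinusoid (gradual change);
    as harmonics are added, the transitions sharpen toward the abrupt,
    transient profile of a true square wave.
    """
    signal = np.zeros_like(t, dtype=float)
    for k in range(n_harmonics):
        n = 2 * k + 1  # odd harmonics only: 1, 3, 5, ...
        signal += np.sin(2 * np.pi * n * freq_hz * t) / n
    return (4.0 / np.pi) * signal

t = np.arange(0.0, 1.0, 0.001)
smooth = harmonic_square(t, 2.0, 1)   # sinusoidal modulation: no transients
sharp = harmonic_square(t, 2.0, 50)   # near-square modulation: strong transients
```

Varying `n_harmonics` thus moves a stimulus continuously between the sinusoidal and square-wave extremes whose search benefits the study compares.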

    An Alternative Theoretical Approach to Escape Decision-Making: The Role of Visual Cues

    Escape enables prey to avoid an approaching predator. The escape decision-making process has traditionally been interpreted using theoretical models that consider ultimate explanations based on the cost/benefit paradigm. Ultimate approaches, however, suffer from inseparable extra assumptions due to an inability to accurately parameterize the model's variables and their interactive relationships. In this study, we propose a mathematical model that uses the intensity of predator-mediated visual stimuli as a basic cue for the escape response. We consider looming stimuli (i.e., the expanding retinal image of the moving predator) as a cue to the flight initiation distance (FID; the distance at which escape begins) of incubating Mallards (Anas platyrhynchos). We then examine the relationship between FID, vegetation cover, and directness of predator trajectory, and fit the resultant model to experimental data. As predicted by the model, vegetation concealment and directness of predator trajectory interact, with FID decreasing with increased concealment during a direct approach toward prey, but not during a tangential approach. Thus, we show that a simple proximate expectation, which involves only visual processing of a moving predator, may explain interactive effects of environmental and predator-induced variables on an escape response. We suggest that our proximate approach, which offers a plausible and parsimonious explanation for variation in FID, may serve as an evolutionary background for traditional, ultimate explanations and should be incorporated into the interpretation of escape behavior.
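A looming-based proximate rule of the kind described above can be sketched with standard geometry: the retinal image of a predator of physical size s at distance d subtends a visual angle of 2·arctan(s/2d), and flight can be triggered when the expansion rate of that angle crosses a threshold. This is a hedged illustration of the general idea, not the authors' fitted model; the predator size, approach speed, starting distance, and response threshold below are invented for the example.

```python
import math

def visual_angle(size_m, distance_m):
    """Angular size (radians) subtended by an object of physical size
    size_m viewed at distance_m."""
    return 2.0 * math.atan(size_m / (2.0 * distance_m))

def fid_from_looming(size_m, speed_ms, rate_threshold, start_m=200.0, dt=0.01):
    """Distance at which the retinal expansion rate of a directly
    approaching predator first exceeds rate_threshold (radians/second).

    Returns 0.0 if the threshold is never reached before contact.
    """
    d = start_m
    prev = visual_angle(size_m, d)
    while d - speed_ms * dt > 0.0:
        d -= speed_ms * dt                  # predator closes in one time step
        theta = visual_angle(size_m, d)
        rate = (theta - prev) / dt          # finite-difference expansion rate
        if rate >= rate_threshold:
            return d                        # flight initiation distance
        prev = theta
    return 0.0

# Faster approaches make the image expand sooner, yielding a larger FID.
fid_slow = fid_from_looming(0.5, 1.0, 1e-3)
fid_fast = fid_from_looming(0.5, 2.0, 1e-3)
```

Under such a rule, anything that attenuates the looming signal (e.g., vegetation concealment) or slows its growth (e.g., a tangential rather than direct trajectory) shortens or abolishes the threshold crossing, which is qualitatively the interaction the study reports.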