
    Relationship between saccadic eye movements and cortical activity as measured by fMRI: quantitative and qualitative aspects

    We investigated the quantitative relationship between saccadic activity (as reflected in the frequency of occurrence and amplitude of saccades) and blood oxygenation level dependent (BOLD) changes in the cerebral cortex using functional magnetic resonance imaging (fMRI). Furthermore, we investigated quantitative changes in cortical activity associated with qualitative changes in the saccade task for comparable levels of saccadic activity. All experiments required the simultaneous acquisition of eye movement and fMRI data. For this purpose we used a new high-resolution limbus-tracking technique for recording eye movements in the magnetic resonance tomograph. In the first two experimental series we varied both the frequency and the amplitude of saccade stimuli (target jumps). In the third series we varied task difficulty: subjects performed either pro-saccades or anti-saccades. The brain volume investigated comprised the frontal and supplementary eye fields, parietal as well as striate cortex, and the motion-sensitive area of the parieto-occipital cortex. All these regions showed saccade-related BOLD responses. The responses in these regions were highly correlated with saccade frequency, indicating that repeated processing of saccades is integrated over time in the BOLD response. In contrast, there was no comparable BOLD change with variation of saccade amplitude. This finding argues for a topological rather than activity-dependent coding of saccade amplitude in most cortical regions. In the experiments comparing pro- vs. anti-saccades we found higher BOLD activation in the "anti" task than in the "pro" task. A comparison of saccade parameters revealed that saccade frequency and cumulative amplitude were comparable between the two tasks, whereas reaction times were longer in the "anti" task than in the "pro" task. The latter finding is taken to indicate more demanding cortical processing in the "anti" task than in the "pro" task, which could explain the observed difference in BOLD activation. We hold that a quantitative analysis of saccade parameters (especially saccade frequency and latency) is important for the interpretation of BOLD changes observed with visual stimuli in fMRI.
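
    As an illustration of the kind of quantitative analysis described above, the sketch below correlates a per-volume saccade-frequency regressor, convolved with a canonical double-gamma haemodynamic response function, with a region's BOLD time series. All names, parameters, and data are hypothetical stand-ins, not the authors' actual pipeline.

```python
# Illustrative sketch (not the authors' pipeline): correlate per-volume
# saccade frequency with a region's BOLD time series after convolving
# the saccade regressor with a canonical double-gamma HRF.
import numpy as np
from scipy.stats import gamma, pearsonr

TR = 2.0       # repetition time in seconds (assumed)
n_vols = 200   # number of fMRI volumes (assumed)

def double_gamma_hrf(tr, duration=32.0):
    """Canonical double-gamma HRF sampled at the TR."""
    t = np.arange(0, duration, tr)
    peak = gamma.pdf(t, 6)         # positive response peaking around 6 s
    undershoot = gamma.pdf(t, 16)  # late undershoot
    hrf = peak - 0.35 * undershoot
    return hrf / hrf.sum()

rng = np.random.default_rng(0)
saccade_counts = rng.poisson(2.0, n_vols).astype(float)  # saccades per volume
regressor = np.convolve(saccade_counts, double_gamma_hrf(TR))[:n_vols]

# Hypothetical BOLD signal from one region of interest (e.g., FEF):
bold = 0.8 * regressor + rng.normal(0, 0.5, n_vols)

r, p = pearsonr(regressor, bold)
print(f"saccade-frequency regressor vs. BOLD: r={r:.2f}, p={p:.1e}")
```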

    Rumination-focused cognitive behaviour therapy vs. cognitive behaviour therapy for depression: study protocol for a randomised controlled superiority trial.

    BACKGROUND: Cognitive behavioural therapy is an effective treatment for depression. However, one third of patients do not respond satisfactorily, and relapse rates of around 30% within the first post-treatment year were reported in a recent meta-analysis. In total, 30-50% of remitted patients present with residual symptoms at the end of treatment. A common residual symptom is rumination, a process of recurrent negative thinking and dwelling on negative affect. Rumination has been demonstrated to be a major factor in vulnerability to depression, predicting the onset, severity, and duration of future depression. Rumination-focused cognitive behavioural therapy is a psychotherapeutic treatment targeting rumination. Because rumination plays a major role in the initiation and maintenance of depression, targeting rumination with rumination-focused cognitive behavioural therapy may be more effective in treating depression and reducing relapse than standard cognitive behavioural therapy. METHOD/DESIGN: This study is a two-arm pragmatic randomised controlled superiority trial comparing the effectiveness of group-based rumination-focused cognitive behavioural therapy with that of group-based cognitive behavioural therapy for the treatment of depression. One hundred twenty-eight patients with depression will be recruited from, and given treatment in, an outpatient service at a psychiatric hospital in Denmark. Our primary outcome will be severity of depressive symptoms (Hamilton Rating Scale for Depression) at completion of treatment. Secondary outcomes will be level of rumination, worry, anxiety, quality of life, behavioural activation, experimental measures of cognitive flexibility, and emotional attentional bias. A 6-month follow-up is planned and will include the primary outcome measure and assessment of relapse. DISCUSSION: The clinical outcome of this trial may guide clinicians in deciding on the merits of including rumination-focused cognitive behavioural therapy in the treatment of depression in outpatient services. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT02278224, registered 28 Oct. 2014. The study was funded by the University of Copenhagen, the Capital Region of Denmark, and TrygFonden.
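
    For orientation, the trial's target of 128 patients is in the range produced by a standard two-arm superiority sample-size calculation. The sketch below is purely illustrative; the effect size, alpha, and power are assumed values, not figures taken from the protocol.

```python
# Illustrative two-arm superiority sample-size calculation. The effect
# size, alpha, and power below are assumptions for the example, not
# values taken from the trial protocol.
from math import ceil
from scipy.stats import norm

alpha, power = 0.05, 0.80
d = 0.50  # assumed standardized mean difference (Cohen's d)

z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96
z_beta = norm.ppf(power)           # ~0.84

n_per_arm = ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)
print(n_per_arm, 2 * n_per_arm)    # 63 per arm, 126 in total
```

    With these assumed parameters the calculation gives 63 patients per arm (126 in total); rounding up and allowing for attrition brings the figure into the region of the 128 planned.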

    Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli

    The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal-processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared with one another and with the ERPs to the unisensory visual control stimuli, separately for when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when the auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies an occipital selection negativity for attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
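
    A minimal sketch of the extraction logic described above, under the simplifying assumption that the visual contribution to the audiovisual ERP can be estimated by subtracting the auditory-alone ERP within each SOA subrange. Array shapes and names are hypothetical stand-ins for real epoched EEG data.

```python
# Illustrative ERP extraction: estimate the visual contribution to the
# audiovisual ERP by subtracting the auditory-alone ERP, separately for
# each 50-ms SOA subrange. Shapes and names are hypothetical.
import numpy as np

n_trials, n_channels, n_samples, n_soa_bins = 120, 64, 500, 5

rng = np.random.default_rng(1)
av_epochs = rng.normal(size=(n_soa_bins, n_trials, n_channels, n_samples))
a_epochs = rng.normal(size=(n_trials, n_channels, n_samples))

a_erp = a_epochs.mean(axis=0)                 # auditory-alone ERP
av_erps = av_epochs.mean(axis=1)              # one AV ERP per SOA bin
visual_contrib = av_erps - a_erp[None, :, :]  # AV minus A, per SOA bin

print(visual_contrib.shape)  # (5, 64, 500): SOA bins x channels x samples
```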

    Gravitational Waves From Known Pulsars: Results From The Initial Detector Era

    We present the results of searches for gravitational waves from a large selection of pulsars using data from the most recent science runs (S6, VSR2 and VSR4) of the initial generation of interferometric gravitational wave detectors LIGO (Laser Interferometer Gravitational-Wave Observatory) and Virgo. We do not see evidence for gravitational wave emission from any of the targeted sources but produce upper limits on the emission amplitude. We highlight the results from seven young pulsars with large spin-down luminosities. We reach within a factor of five of the canonical spin-down limit for all seven of these, whilst for the Crab and Vela pulsars we surpass their spin-down limits. We present new or updated limits for 172 other pulsars (including both young and millisecond pulsars). Now that the detectors are undergoing major upgrades, for completeness we bring together all of the most up-to-date results from all pulsars searched for during the operations of the first-generation LIGO, Virgo and GEO600 detectors. This gives a total of 195 pulsars, including the most recent results described in this paper. Funding: United States National Science Foundation; Science and Technology Facilities Council of the United Kingdom; Max-Planck-Society; State of Niedersachsen/Germany; Australian Research Council; International Science Linkages program of the Commonwealth of Australia; Council of Scientific and Industrial Research of India; Istituto Nazionale di Fisica Nucleare of Italy; Spanish Ministerio de Economia y Competitividad; Conselleria d'Economia Hisenda i Innovacio of the Govern de les Illes Balears; Netherlands Organisation for Scientific Research; Polish Ministry of Science and Higher Education; FOCUS Programme of Foundation for Polish Science; Royal Society; Scottish Funding Council; Scottish Universities Physics Alliance; National Aeronautics and Space Administration; OTKA of Hungary; Lyon Institute of Origins (LIO); National Research Foundation of Korea; Industry Canada; Province of Ontario through the Ministry of Economic Development and Innovation; National Science and Engineering Research Council Canada; Carnegie Trust; Leverhulme Trust; David and Lucile Packard Foundation; Research Corporation; Alfred P. Sloan Foundation; Astronom
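
    The canonical spin-down limit mentioned above follows from assuming that all of a pulsar's observed spin-down power is radiated as gravitational waves, giving h0_sd = (1/d) * sqrt((5/2) (G I / c^3) |ν̇| / ν). A short worked sketch, using approximate textbook parameters for the Crab pulsar rather than values quoted in this paper:

```python
# Illustrative calculation of the canonical spin-down limit
#   h0_sd = (1/d) * sqrt( (5/2) * (G * I / c**3) * |nu_dot| / nu )
# The Crab pulsar numbers below are approximate textbook values,
# not figures quoted from this paper.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
KPC = 3.086e19     # one kiloparsec in metres

I = 1e38           # fiducial neutron-star moment of inertia, kg m^2
nu = 29.7          # Crab rotation frequency, Hz (approximate)
nu_dot = 3.7e-10   # spin-down rate magnitude, Hz/s (approximate)
d = 2.0 * KPC      # distance to the Crab, roughly 2 kpc

h0_sd = math.sqrt(2.5 * (G * I / c**3) * nu_dot / nu) / d
print(f"spin-down limit h0 ~ {h0_sd:.1e}")  # ~1.4e-24
```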

    Monkeys and Humans Share a Common Computation for Face/Voice Integration

    Speech production involves the movement of the mouth and other regions of the face, resulting in visual motion cues. These visual cues enhance the intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, this should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share vocal production biomechanics with humans and communicate face-to-face with vocalizations. It is unknown, however, whether they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models, such as the principle of inverse effectiveness and a “race” model, failed to account for the behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to the visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
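
    To make the contrast between the two candidate mechanisms concrete, the sketch below simulates detection times under a “race” model (the faster unisensory channel determines the response) and under a superposition model (visual and auditory evidence sum linearly in a single accumulator). All parameters are invented for the illustration; this is not the authors' fitted model.

```python
# Illustrative comparison of a "race" model with a "superposition"
# model. In the race model the faster of two independent accumulators
# wins; in the superposition model the two drift rates add before a
# single threshold. Parameters are invented for the sketch.
import numpy as np

rng = np.random.default_rng(2)
n_trials = 5_000
threshold = 1.0
drift_a, drift_v, noise = 0.004, 0.003, 0.02  # per-ms drift and noise (assumed)

def first_passage(drift, n, t_max=1500):
    """Time (ms) at which a noisy accumulator first crosses threshold.
    t_max is chosen large enough that every trial crosses."""
    steps = drift + noise * rng.standard_normal((n, t_max))
    evidence = np.cumsum(steps, axis=1)
    return np.argmax(evidence >= threshold, axis=1)

rt_a = first_passage(drift_a, n_trials)
rt_v = first_passage(drift_v, n_trials)
rt_race = np.minimum(rt_a, rt_v)                        # race: faster channel wins
rt_super = first_passage(drift_a + drift_v, n_trials)   # superposition: drifts add

print(f"race median RT:          {np.median(rt_race):.0f} ms")
print(f"superposition median RT: {np.median(rt_super):.0f} ms")
```

    Under these assumptions the superposition model predicts systematically faster multisensory detection than the race model, which is the qualitative signature distinguishing the two accounts.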

    Neural correlates of audiovisual motion capture

    Visual motion can affect the perceived direction of auditory motion (i.e., audiovisual motion capture). It is debated, though, whether this effect occurs at perceptual or decisional stages. Here, we examined the neural consequences of audiovisual motion capture using the mismatch negativity (MMN), an event-related brain potential reflecting pre-attentive auditory deviance detection. In an auditory-only condition, occasional changes in the direction of a moving sound (deviants) elicited an MMN starting around 150 ms. In an audiovisual condition, auditory standards and deviants were synchronized with a visual stimulus that moved in the same direction as the auditory standards. These audiovisual deviants did not evoke an MMN, indicating that visual motion reduced the perceptual difference between the sound motion of standards and deviants. The inhibition of the MMN by visual motion provides evidence that auditory and visual motion signals are integrated at early sensory processing stages.
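
    The MMN itself is computed as a deviant-minus-standard difference wave. The sketch below illustrates this with simulated epochs in which the deviants carry a negative deflection around 150 ms, mirroring the latency reported above; the data and parameters are stand-ins, not the study's recordings.

```python
# Illustrative MMN computation: the mismatch negativity is the
# deviant-minus-standard difference wave, typically assessed at
# fronto-central electrodes. Data arrays here are simulated stand-ins.
import numpy as np

fs = 500                              # sampling rate, Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / fs)  # epoch from -100 to 500 ms

rng = np.random.default_rng(3)
standard_epochs = rng.normal(size=(400, len(times)))
deviant_epochs = rng.normal(size=(80, len(times)))
# Inject a negative deflection around 150 ms into the deviants:
deviant_epochs -= 2.0 * np.exp(-((times - 0.15) ** 2) / (2 * 0.02 ** 2))

mmn = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
peak = times[np.argmin(mmn)]
print(f"MMN peak latency: {peak * 1000:.0f} ms")
```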

    The effects of stereo disparity on the behavioural and electrophysiological correlates of audio-visual motion in depth.

    Motion is represented by low-level signals, such as size expansion in vision or loudness changes in the auditory modality. Visual and auditory signals from the same object or event may be integrated, facilitating detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual motion in depth in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1, participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., the difference between incongruent and congruent conditions) was larger for visual 3D cues than for 2D cues. In Experiment 2, event-related potentials (ERPs) were recorded during presentation of the 2D and 3D, looming and receding, audio-visual stimuli while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135–160 ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140–200 ms, 220–280 ms, and 350–500 ms after stimulus onset.
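
    The congruency effect described above is a simple reaction-time difference. A minimal sketch with simulated placeholder RTs, computing the effect separately for the 2D and 3D displays:

```python
# Illustrative congruency-effect computation: mean RT difference between
# incongruent and congruent audio-visual motion, separately for 2D and
# 3D (disparity) displays. The RT values are simulated placeholders.
import numpy as np

rng = np.random.default_rng(4)
rt = {                               # hypothetical RTs in ms
    ("2D", "congruent"):   rng.normal(520, 60, 200),
    ("2D", "incongruent"): rng.normal(545, 60, 200),
    ("3D", "congruent"):   rng.normal(510, 60, 200),
    ("3D", "incongruent"): rng.normal(560, 60, 200),
}

for depth in ("2D", "3D"):
    effect = rt[(depth, "incongruent")].mean() - rt[(depth, "congruent")].mean()
    print(f"{depth} congruency effect: {effect:.0f} ms")
```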