
    The time delay of the quadruple quasar RX J0911.4+0551

    Full text link
    We present optical lightcurves of the gravitationally lensed components A (= A1 + A2 + A3) and B of the quadruple quasar RX J0911.4+0551 (z = 2.80). The observations were primarily obtained at the Nordic Optical Telescope between 1997 March and 2001 April and consist of 74 I-band data points for each component. The data allow the measurement of a time delay of 146 ± 8 days (2 sigma) between A and B, with B as the leading component. This value is significantly shorter than that predicted from simple models and indicates a very large external shear. Mass models including the main lens galaxy and the surrounding massive cluster of galaxies at z = 0.77, responsible for the external shear, yield H_0 = 71 ± 4 (random, 2 sigma) ± 8 (systematic) km/s/Mpc. The systematic model uncertainty is governed by the surface-mass density (convergence) at the location of the multiple images. Comment: 12 pages, 3 figures, ApJL, in press (June 20, 2002).
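
    As a rough illustration of why the measured delay constrains the Hubble constant (a simplified textbook relation, not the authors' full mass model), the delay between two images scales with the time-delay distance and the difference in Fermat potential between the image positions:

        \Delta t_{AB} = \frac{D_{\Delta t}}{c}\,\Delta\phi_{AB},
        \qquad
        D_{\Delta t} \equiv (1 + z_{\mathrm{l}})\,\frac{D_{\mathrm{l}}\,D_{\mathrm{s}}}{D_{\mathrm{ls}}} \propto \frac{1}{H_0},

    so that, for a fixed lens model (fixed \Delta\phi_{AB}), H_0 \propto \Delta\phi_{AB}/\Delta t_{AB}: the measured 146-day delay translates into the quoted H_0 once the external shear from the cluster at z = 0.77 is included in the model.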

    Decoding sounds depicting hand-object interactions in primary somatosensory cortex

    No full text
    Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influences from both within- and across-modality connections. Recent work has shown that primary sensory areas can respond to and, in some cases, discriminate stimuli that are not of their target modality: for example, primary somatosensory cortex (SI) discriminates visual images of graspable objects. In the present work, we investigated whether SI would discriminate sounds depicting hand–object interactions (e.g. bouncing a ball). In a rapid event-related functional magnetic resonance imaging experiment, participants listened attentively to sounds from three categories: hand–object interactions, and control categories of pure tones and animal vocalizations, while performing a one-back repetition detection task. Multivoxel pattern analysis revealed significant decoding of hand–object interaction sounds within SI, but not for either control category. Crucially, in the hand-sensitive voxels defined from an independent tactile localizer, decoding accuracies were significantly higher for hand–object interactions compared to pure tones in left SI. Our findings indicate that simply hearing sounds depicting familiar hand–object interactions elicits different patterns of activity in SI, despite the complete absence of tactile stimulation. These results highlight the rich contextual information that can be transmitted across sensory modalities even to primary sensory areas.
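
    The decoding approach described here is a standard MVPA pipeline: pattern classification with cross-validation across scanning runs. Below is a minimal, hypothetical sketch in Python; the data shapes, ROI restriction, and label scheme are illustrative assumptions, not the authors' analysis code.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Assumed inputs (placeholders): X holds one pattern per trial restricted to an
# SI region-of-interest mask; y holds the sound-category label of each trial;
# runs holds the scanning run of each trial (used as cross-validation groups).
rng = np.random.default_rng(0)
X = rng.standard_normal((96, 500))             # (n_trials, n_voxels), placeholder data
y = np.repeat([0, 1], 48)                      # e.g. two hand-object sound categories
runs = np.tile(np.repeat(np.arange(6), 8), 2)  # 6 runs, 8 trials per category per run

# Leave-one-run-out cross-validated linear classifier on the ROI patterns.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print("mean decoding accuracy: %.2f" % scores.mean())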

    Optimizing fMRI experimental design for MVPA-based BCI control: Combining the strengths of block and event-related designs

    No full text
    Functional Magnetic Resonance Imaging (fMRI) has been successfully used for Brain Computer Interfacing (BCI) to classify (imagined) movements of different limbs. However, reliable classification of more subtle signals originating from co-localized neural networks in the sensorimotor cortex, e.g. individual movements of fingers of the same hand, has proved to be more challenging, especially when taking into account the requirement for high single-trial reliability in the BCI context. In recent years, Multi-Voxel Pattern Analysis (MVPA) has gained momentum as a suitable method to disclose such weak, distributed activation patterns. Much attention has been devoted to developing and validating data analysis strategies, but relatively little guidance is available on the choice of experimental design, even less so in the context of BCI-MVPA. When applicable, block designs are considered the safest choice, but the expectations, strategies and adaptation induced by blocking of similar trials can make them a sub-optimal strategy. Fast event-related designs, in contrast, require a more complicated analysis and show stronger dependence on linearity assumptions, but allow for randomly alternating trials. However, they lack resting intervals that enable the BCI participant to process feedback. In this proof-of-concept paper, a hybrid blocked fast event-related design is introduced that is novel in the context of MVPA and BCI experiments and that might overcome these issues by combining the rest periods of the block design with the shorter and randomly alternating trial characteristics of a rapid event-related design. A well-established button-press experiment was used to perform a within-subject comparison of the proposed design with a block and a slow event-related design. The proposed hybrid blocked fast event-related design showed a decoding accuracy that was close to that of the block design, which showed the highest accuracy. It allowed for across-design decoding, i.e. reliable prediction of examples obtained with another design. Finally, it also showed the most stable incremental decoding results, obtaining good performance with relatively few blocks. Our findings suggest that the blocked fast event-related design could be a viable alternative to block designs in the context of BCI-MVPA, when expectations, strategies and adaptation make blocking of trials of the same type a sub-optimal strategy. Additionally, the blocked fast event-related design is also suitable for applications in which fast incremental decoding is desired, and it enables the use of a slow or block design during the test phase.
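
    As a sketch of what such a hybrid sequence could look like, the snippet below generates short blocks of rapidly presented, randomly ordered trials separated by rest periods in which feedback could be given. Condition names and timings are assumptions for illustration, not the parameters used in the study.

import random

CONDITIONS = ["index", "middle", "ring", "little"]  # e.g. finger presses (assumed)
TRIALS_PER_BLOCK = 8        # fast, randomly alternating trials within a block
TRIAL_DURATION_S = 3.0      # assumed stimulus-onset asynchrony
REST_DURATION_S = 12.0      # rest between blocks, usable for feedback
N_BLOCKS = 6

def make_hybrid_sequence(seed=0):
    """Return a list of (onset_in_seconds, condition) events."""
    rng = random.Random(seed)
    events, t = [], 0.0
    for _block in range(N_BLOCKS):
        # randomly ordered trials within the block (fast event-related part)
        for cond in (rng.choice(CONDITIONS) for _ in range(TRIALS_PER_BLOCK)):
            events.append((t, cond))
            t += TRIAL_DURATION_S
        # rest period closing the block (block-design part)
        events.append((t, "rest"))
        t += REST_DURATION_S
    return events

for onset, condition in make_hybrid_sequence():
    print(f"{onset:7.1f}s  {condition}")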

    Topographic Somatosensory Imagery for Real-Time fMRI Brain-Computer Interfacing

    No full text
    Real-time functional magnetic resonance imaging (fMRI) is a promising non-invasive method for brain-computer interfaces (BCIs). BCIs translate brain activity into signals that allow communication with the outside world. Visual and motor imagery are often used as information-encoding strategies, but can be challenging if not grounded in recent experience in these modalities, e.g., in patients with locked-in syndrome (LIS). In contrast, somatosensory imagery might constitute a more suitable information-encoding strategy, as somatosensory function is often very robust. Somatosensory imagery has been shown to activate the somatotopic cortex, but it has been unclear so far whether it can be reliably detected on a single-trial level and successfully classified according to specific somatosensory imagery content. Using ultra-high-field 7-T fMRI, we show reliable and high-accuracy single-trial decoding of left-foot (LF) vs. right-hand (RH) somatosensory imagery. Correspondingly, higher decoding accuracies were associated with greater spatial separation of hand and foot decoding-weight patterns in the primary somatosensory cortex (S1). Exploiting these novel neuroscientific insights, we developed, and provide a proof of concept for, basic BCI communication by showing that binary (yes/no) answers encoded by somatosensory imagery can be decoded with high accuracy in simulated real-time (in 7 subjects) as well as in real-time (1 subject). This study demonstrates that body part-specific somatosensory imagery differentially activates somatosensory cortex in a topographically specific manner, evidence that was surprisingly still lacking in the literature. It also offers proof of concept for a novel somatosensory imagery-based fMRI-BCI control strategy, with particularly high potential for visually and motor-impaired patients. The strategy could also be transferred to lower MRI field strengths and to mobile functional near-infrared spectroscopy. Finally, given that communication BCIs provide the BCI user with a form of feedback based on their brain signals and can thus be considered a specific form of neurofeedback, and that repeated use of a BCI has been shown to enhance underlying representations, we expect that the current BCI could also offer an interesting new approach to somatosensory rehabilitation training in the context of stroke and phantom limb pain.
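
    A minimal sketch of the binary answer-decoding idea is shown below: a classifier trained on left-foot vs. right-hand imagery patterns labels each new single-trial pattern and maps it to a yes/no answer. The data, region of interest, and answer mapping are hypothetical placeholders, not the study's real-time pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_voxels = 300

# Assumed training data: single-trial S1 patterns from a calibration session.
# Label 0 = right-hand imagery (mapped to "yes"), 1 = left-foot imagery ("no").
X_train = rng.standard_normal((40, n_voxels))
y_train = np.repeat([0, 1], 20)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def decode_answer(trial_pattern):
    """Classify one incoming trial pattern and map its label to a yes/no answer."""
    label = clf.predict(trial_pattern.reshape(1, -1))[0]
    return "yes" if label == 0 else "no"

# In a real-time loop, trial_pattern would be the most recent trial's ROI pattern.
print(decode_answer(rng.standard_normal(n_voxels)))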

    Fear of Visual Stimuli: An ALE Meta-Analysis of Fear Conditioning Studies

    No full text
    Background: Fear conditioning paradigms are frequently used to experimentally induce fear towards a previously neutral stimulus (CS+). The response to this stimulus is compared to a similar control stimulus (CS-) that has not been paired with an aversive unconditioned stimulus (US). Combined with fMRI or PET, this method can be used to elucidate the neural correlates that underlie a conditioned fear response. A previous meta-analysis (n=14) reported large variability in active regions during this fear response [1], identifying only the dACC as a common region of activation. In addition, a systematic review [2] has highlighted the role of stimulus modality and other design aspects in contributing to this variability in findings.
    Methods: We conducted an Activation Likelihood Estimate (ALE) meta-analysis of n=34 fear conditioning studies to investigate whether, with a larger sample, more common regions of activation to the contrast CS+ > CS- could be identified. To minimize the effect of procedural differences, we also conducted a more focused meta-analysis on a subset of these studies (n=18). We chose the most frequently used paradigm: delay conditioning, with partial reinforcement of a visual CS. Included studies were identified by a systematic search of PubMed, Web of Knowledge and BrainMap using combinations of the key words “Neuroimaging”, “Magnetic Resonance Imaging”, “Positron Emission Tomography”, “Fear”, “Aversive”, and “Conditioning”. A total of 781 articles were retrieved. Included studies used healthy participants and a classical uninstructed conditioning paradigm (with a discrete CS+ that was paired with an aversive US and a discrete CS- that was never paired with the aversive stimulus). Studies that used masked stimuli were excluded. Finally, studies must have used fMRI or PET imaging covering the whole brain and have analyzed the data using a whole-brain GLM approach, including a CS+ > CS- contrast.
    Results: The final sample of 34 studies includes 28 with a visual CS, 5 with an auditory CS, and 1 with an olfactory CS. There was more diversity in the modality of the aversive event: 17 tactile, 9 auditory, 3 interoceptive, 2 olfactory, 2 visual, and 1 visual-auditory. 11 studies used a full reinforcement schedule, whilst 23 used partial reinforcement of the CS+. In 31 of the studies the CS+ co-terminated with the aversive event (i.e. delay conditioning); 3 used a trace conditioning approach. The ALE analysis of these 34 studies (using the FDR correction (q<.01) as used in [1], and a minimum cluster size of 100 mm3) identified 27 clusters, including: bilateral thalamus, bilateral cingulate gyrus, bilateral insula and right lentiform nucleus. These areas were also amongst the 22 clusters identified in the ALE analysis (FDR q<.01, minimum cluster size 100 mm3) of the ‘subset’ group.
    Conclusions: These results indicate that a wide range of cortical and sub-cortical regions are active during the human conditioned fear response. Due to the high prevalence of visual stimuli as CSs in conditioning paradigms, it is not feasible to test the degree to which our findings are modality independent. By qualitative inspection there are no large differences between the results of the full sample and the subset of studies, but the imbalance in study designs may be a source of bias, preventing the generalization of findings to other fear conditioning paradigms. However, the subset sample was homogeneous due to its strict inclusion criteria and thus allows the conclusion that the conditioned fear response to a visual stimulus is related to activity in the cingulate gyrus, insula, thalamus and lentiform nucleus.
    [1] M.-L. Mechias, A. Etkin, and R. Kalisch, “A meta-analysis of instructed fear studies: implications for conscious appraisal of threat,” Neuroimage, vol. 49, no. 2, pp. 1760–8, Jan. 2010.
    [2] C. Sehlmeyer, S. Schöning, P. Zwitserlood, B. Pfleiderer, T. Kircher, V. Arolt, and C. Konrad, “Human fear conditioning and extinction in neuroimaging: a systematic review,” PLoS One, vol. 4, no. 6, p. e5865, Jan. 2009.
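
    In essence, ALE blurs each study's reported peak coordinates into a modeled-activation (MA) map with a Gaussian kernel and combines the maps across studies as ALE = 1 - prod(1 - MA). The snippet below is a deliberately simplified illustration of that idea, not the GingerALE implementation used for the analysis; the grid size, kernel width, and foci are assumptions.

import numpy as np

GRID = (40, 48, 40)          # coarse voxel grid (assumed)
SIGMA_VOX = 3.0              # Gaussian kernel width in voxels (assumed)

def modeled_activation(foci_vox):
    """MA map for one study: per-voxel maximum over Gaussian-blurred foci."""
    zz, yy, xx = np.indices(GRID)
    ma = np.zeros(GRID)
    for fz, fy, fx in foci_vox:
        d2 = (zz - fz) ** 2 + (yy - fy) ** 2 + (xx - fx) ** 2
        ma = np.maximum(ma, np.exp(-d2 / (2 * SIGMA_VOX ** 2)))
    return ma

def ale_map(studies):
    """Combine MA maps across studies: ALE = 1 - prod(1 - MA_i)."""
    one_minus = np.ones(GRID)
    for foci in studies:
        one_minus *= 1.0 - modeled_activation(foci)
    return 1.0 - one_minus

# Two hypothetical studies with foci given in voxel coordinates.
studies = [[(20, 24, 20), (10, 30, 25)], [(21, 25, 19)]]
print("peak ALE value: %.3f" % ale_map(studies).max())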

    The acquisition and extinction of fear of painful touch: a novel tactile fear conditioning paradigm

    No full text
    Fear of touch, due to allodynia and spontaneous pain, is not well understood. Experimental methods to advance this topic are lacking, and we therefore propose a novel tactile conditioning paradigm. Seventy-six pain-free participants underwent acquisition in both a predictable and an unpredictable pain context. In the predictable context, vibrotactile stimulation was paired with painful electrocutaneous stimulation (simulating allodynia). In the unpredictable context, vibrotactile stimulation was unpaired with pain (simulating spontaneous pain). During an extinction phase, a cue exposure and a context exposure group continued in the predictable and unpredictable context, respectively, without pain. A control group received continued acquisition in both contexts. Self-reported fear and skin conductance responses (SCRs), but not startle responses, showed that fear of touch was acquired in the predictable context. Context-related startle responses showed that contextual fear emerged in the unpredictable context, together with elevated self-reported fear and SCRs evoked by the unpaired vibrotactile stimulations. Cue exposure reduced fear of touch, whilst context exposure reduced contextual fear. Thus, painful touch leads to increased fear, as does touch in the same context as unpredictable pain, and extinction protocols can reduce this fear. We conclude that tactile conditioning is valuable for investigating fear of touch and can advance our understanding of chronic pain.
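
    To make the two acquisition contexts concrete, the sketch below schedules paired trials (CS co-terminating with the painful US) for the predictable context and unpaired CS and US deliveries for the unpredictable context. Trial counts and the scheduling logic are illustrative assumptions, not the authors' stimulus-presentation code.

import random

def acquisition_block(context, n_cs=8, n_us=8, seed=0):
    """Return a list of (CS, US) trial tuples for one acquisition context."""
    rng = random.Random(seed)
    if context == "predictable":
        # every vibrotactile CS co-terminates with the painful US (paired)
        trials = [("vibrotactile_CS", "pain_US")] * n_cs
    else:
        # unpredictable context: CSs and USs are delivered, but never together (unpaired)
        trials = [("vibrotactile_CS", None)] * n_cs + [(None, "pain_US")] * n_us
        rng.shuffle(trials)
    return trials

for cs, us in acquisition_block("unpredictable"):
    print(cs or "-", "|", us or "-")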
