75 research outputs found

    Interaction of perceptual grouping and crossmodal temporal capture in tactile apparent-motion

    Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can "capture" visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and of the event structure, which modulates uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were presented for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from -75 ms to 75 ms. The perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs, one short (75 ms) and one long (325 ms), were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to those in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered taps, but not even-numbered taps, were paired with auditory beeps. This abolished the temporal-capture effect; instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed, independently of the SOA variation. These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects.
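    The stimulus timing described above (taps alternating between hands every 400 ms, with odd-numbered taps paired to a beep at a variable audiotactile SOA) can be sketched as an event timeline. This is an illustrative reconstruction, not the authors' experiment code; the function and variable names are invented for the example.

    ```python
    # Illustrative sketch of the tactile apparent-motion stream from the
    # abstract: taps alternate hands every 400 ms; odd-numbered taps get a
    # task-irrelevant beep offset by an audiotactile SOA (negative = beep
    # leads the tap). Names here are invented for the example.

    TAP_SOA_MS = 400  # interval between successive taps (from the abstract)

    def build_stream(n_taps, audiotactile_soa_ms):
        """Return (tap, beep) event lists as (time_ms, label) tuples."""
        taps, beeps = [], []
        for i in range(n_taps):
            t = i * TAP_SOA_MS
            hand = "left" if i % 2 == 0 else "right"
            taps.append((t, hand))
            if (i + 1) % 2 == 1:  # odd-numbered taps (1st, 3rd, ...) get a beep
                beeps.append((t + audiotactile_soa_ms, "beep"))
        return taps, beeps

    # Experiment 1 style condition: beep leads each odd-numbered tap by 75 ms.
    taps, beeps = build_stream(4, -75)
    ```

    Varying `audiotactile_soa_ms` across the Experiment 1 range (-75 ms to 75 ms) reproduces the timing manipulation whose effect on perceived motion direction the study measured.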

    Deep learning-based pseudo-CT synthesis from zero echo time MR sequences of the pelvis

    Objectives To generate pseudo-CT (pCT) images of the pelvis from zero echo time (ZTE) MR sequences and compare them to conventional CT. Methods Ninety-one patients were prospectively scanned with CT and MRI, including ZTE sequences of the pelvis. Eleven ZTE image volumes were excluded due to implants and severe B1 field inhomogeneity. Of the 80 remaining data sets, 60 were used to train and update a deep learning (DL) model for pCT image synthesis from ZTE sequences, while the remaining 20 cases served as an evaluation cohort. CT and pCT images were assessed qualitatively and quantitatively by two readers. Results Mean pCT ratings of qualitative parameters were good to perfect (2–3 on a 4-point scale). Overall intermodality agreement between CT and pCT was good (ICC = 0.88 (95% CI: 0.85–0.90); p < 0.001), with excellent interreader agreement for pCT. Geometrical measurements did not differ significantly between CT and pCT (p > 0.05), with the exception of transverse pelvic diameter measurements and lateral center-edge angle measurements (p = 0.001 and p = 0.002, respectively). Image quality and tissue differentiation in CT and pCT were similar, without significant differences between CT and pCT CNRs (all p > 0.05). Conclusions Using a DL-based algorithm, it is possible to synthesize pCT images of the pelvis from ZTE sequences. The pCT images showed high bone depiction quality and accurate geometrical measurements compared to conventional CT. Critical relevance statement pCT images generated from MR sequences allow for high accuracy in evaluating bone without the need for radiation exposure. Radiological applications are broad and include assessment of inflammatory and degenerative bone disease or preoperative planning studies. Key Points
    - pCT, based on DL-reconstructed ZTE MR images, may be comparable with true CT images.
    - Overall, the intermodality agreement between CT and pCT was good, with excellent interreader agreements for pCT.
    - Geometrical measurements and tissue differentiation were similar in CT and pCT images.
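    The tissue-differentiation comparison above rests on contrast-to-noise ratios. As a rough sketch of one common CNR formulation (the study may define CNR differently, and the ROI intensity values here are hypothetical):

    ```python
    import statistics

    def cnr(roi_a, roi_b, noise_roi):
        """Contrast-to-noise ratio between two tissue regions of interest:
        absolute difference of the ROI mean intensities, divided by the
        standard deviation of a noise/background region."""
        noise_sd = statistics.stdev(noise_roi)
        return abs(statistics.mean(roi_a) - statistics.mean(roi_b)) / noise_sd

    # Hypothetical intensity samples (arbitrary units), e.g. bone vs. muscle
    # ROIs from a CT or pCT image, plus a background region for the noise term.
    bone = [100, 102, 98]
    muscle = [50, 52, 48]
    background = [0, 2, -2, 1, -1]

    print(round(cnr(bone, muscle, background), 2))  # 31.62
    ```

    Computing this per tissue pair on matched CT and pCT images, then testing the paired differences, is one way the reported "all p > 0.05" comparison could be framed.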

    Effects of Fusion between Tactile and Proprioceptive Inputs on Tactile Perception

    Tactile perception is typically considered the result of cortical interpretation of afferent signals from a network of mechanical sensors underneath the skin. Yet, tactile illusion studies suggest that tactile perception can be elicited without afferent signals from mechanoreceptors. Therefore, the extent to which tactile perception arises from isomorphic mapping of tactile afferents onto the somatosensory cortex remains controversial. We tested whether isomorphic mapping of tactile afferent fibers onto the cortex leads directly to tactile perception by examining whether it is independent of proprioceptive input, evaluating the impact of different hand postures on the perception of a tactile illusion across fingertips. Using the Cutaneous Rabbit Effect, a well-studied illusion evoking the perception that a stimulus occurs at a location where none has been delivered, we found that hand posture has a significant effect on the perception of the illusion across the fingertips. This finding emphasizes that tactile perception arises from the integration of perceived mechanical and proprioceptive input, not purely from tactile interaction with the external environment.

    Crossmodal duration perception involves perceptual grouping, temporal ventriloquism, and variable internal clock rates

    Here, we investigate how audiovisual context affects perceived event duration with experiments in which observers reported which of two stimuli they perceived as longer. Target events were visual and/or auditory and could be accompanied by nontargets in the other modality. Our results demonstrate that the temporal information conveyed by irrelevant sounds is automatically used when the brain estimates visual durations, but that irrelevant visual information does not affect perceived auditory duration (Experiment 1). We further show that auditory influences on subjective visual durations occur only when the temporal characteristics of the stimuli promote perceptual grouping (Experiments 1 and 2). Placed in the context of scalar expectancy theory of time perception, our third and fourth experiments have the implication that audiovisual context can lead both to changes in the rate of an internal clock and to temporal ventriloquism-like effects on perceived on- and offsets. Finally, intramodal grouping of auditory stimuli diminished any crossmodal effects, suggesting a strong preference for intramodal over crossmodal perceptual grouping (Experiment 5).
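    The two mechanisms the abstract distinguishes can be sketched in a pacemaker-accumulator framing of scalar expectancy theory: a faster internal clock lengthens subjective duration, while temporal ventriloquism shifts the perceived on- and offsets that bound the timed interval. This is an illustrative toy model with invented numbers, not the authors' model.

    ```python
    # Toy pacemaker-accumulator sketch (scalar expectancy theory framing):
    # subjective duration is proportional to clock rate times the (possibly
    # ventriloquized) stimulus interval. All parameter values are invented.

    def perceived_duration(onset_ms, offset_ms, clock_rate=1.0,
                           onset_shift_ms=0.0, offset_shift_ms=0.0):
        """Subjective duration of an interval whose perceived on-/offsets may
        be shifted by accompanying sounds, timed by a clock whose rate the
        audiovisual context may modulate."""
        interval = (offset_ms + offset_shift_ms) - (onset_ms + onset_shift_ms)
        return clock_rate * interval

    # Mechanism 1: a 10% faster clock lengthens a 500-ms interval subjectively.
    faster_clock = perceived_duration(0, 500, clock_rate=1.1)
    # Mechanism 2: sounds pulling the on-/offsets apart by 25 ms each side
    # produce a comparable lengthening at an unchanged clock rate.
    ventriloquized = perceived_duration(0, 500, onset_shift_ms=-25,
                                        offset_shift_ms=25)
    ```

    Both manipulations lengthen the subjective duration, which is why the third and fourth experiments were needed to tease the two accounts apart.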

    Appraisal of space words and allocation of emotion words in bodily space

    The body-specificity hypothesis (BSH) predicts that right-handers and left-handers allocate positive and negative concepts differently on the horizontal plane: while left-handers allocate negative concepts to the right-hand side of their bodily space, right-handers allocate such concepts to the left-hand side. Similar research shows that people in general tend to allocate positive and negative concepts to upper and lower areas, respectively, on the vertical plane. Further research shows a higher salience of the vertical plane over the horizontal plane in the performance of sensorimotor tasks. The aim of this paper is to examine whether the vertical plane dominates the horizontal plane not only at a sensorimotor level but also at a conceptual level. In Experiment 1, participants from diverse linguistic backgrounds were asked to rate the words “up”, “down”, “left”, and “right”. In Experiment 2, right-handed participants from two linguistic backgrounds were asked to allocate emotion words to a square grid divided into four boxes of equal area. Results suggest that the vertical plane is more salient than the horizontal plane in the allocation of emotion words: positively valenced words were placed in upper locations, whereas negatively valenced words were placed in lower locations. Together, the results lend support to the BSH while also suggesting a higher saliency of the vertical plane over the horizontal plane in the allocation of valenced words.
    Fernando Marmolejo-Ramos, María Rosa Elosúa, Yuki Yamada, Nicholas Francis Hamm and Kimihiro Noguch