11 research outputs found

    Turning body and self inside out: visualized heartbeats alter bodily self-consciousness and tactile perception

    Get PDF
    Prominent theories highlight the importance of bodily perception for self-consciousness, but it is currently not known whether bodily perception is based on interoceptive or exteroceptive signals or on integrated signals from these anatomically distinct systems. In the research reported here, we combined both types of signals by surreptitiously providing participants with visual exteroceptive information about their heartbeat: A real-time video image of a periodically illuminated silhouette outlined participants' (projected, "virtual") bodies and flashed in synchrony with their heartbeats. We investigated whether these "cardio-visual" signals could modulate bodily self-consciousness and tactile perception. We report two main findings. First, synchronous cardio-visual signals increased self-identification with and self-location toward the virtual body, and second, they altered the perception of tactile stimuli applied to participants' backs so that touch was mislocalized toward the virtual body. We argue that the integration of signals from the inside and the outside of the human body is a fundamental neurobiological process underlying self-consciousness.
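    The cardio-visual manipulation described above is essentially a closed loop from heartbeat detection to display. The sketch below illustrates one way such a loop could be driven; it is a minimal illustration only, and the sensor feed, flash duration, and display call are hypothetical placeholders rather than the authors' actual apparatus.

```python
import itertools
import random
import time

def read_rpeak_intervals():
    # Hypothetical stand-in for a real ECG / pulse-oximeter feed:
    # yields inter-beat intervals in seconds (~70 bpm with jitter).
    while True:
        yield random.gauss(0.85, 0.05)

def flash_silhouette(lit):
    # Hypothetical display call that illuminates or dims the outline
    # of the participant's projected "virtual" body.
    print("silhouette ON" if lit else "silhouette off")

FLASH_DURATION = 0.1  # seconds the outline stays lit per beat (assumed)

# Synchronous condition: one flash per detected beat. An asynchronous
# control would replay a delayed or shuffled interval stream instead,
# decoupling the flashes from the participant's actual heartbeat.
for interval in itertools.islice(read_rpeak_intervals(), 20):
    flash_silhouette(True)
    time.sleep(FLASH_DURATION)
    flash_silhouette(False)
    time.sleep(max(interval - FLASH_DURATION, 0.0))
```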

    Seeing the body modulates audiotactile integration

    Get PDF
    Audiotactile integration has been studied using various experimental setups but so far crossmodal congruency effects (CCEs) have not been found for tactile targets paired with auditory distractors. In the present study we investigated whether audiotactile CCEs exist and, if so, whether these CCEs have similar characteristics to those found by previous authors with visual distractors. We measured audiotactile CCEs by attaching four vibrators to the backs of participants and presented auditory stimuli from four loudspeakers placed, in separate blocks, at different distances in front of or behind the participant's body. Participants discriminated the elevation of tactile stimuli while ignoring the auditory distractors. CCEs were found only when participants were provided with noninformative vision of their own body, as seen from behind via a camera and head-mounted display; they were absent when participants did not view their body. Furthermore, in contrast to visuotactile CCEs, audiotactile CCEs did not depend on whether the distractors were presented on the same or different side as the tactile targets. The present study provides the first demonstration of an audiotactile CCE: incongruent auditory distractors impaired performance on a tactile elevation discrimination task relative to performance with congruent distractors. We show that audiotactile CCEs differ from visuotactile CCEs as they do not appear to be as sensitive to the spatial relations between the distractors and the tactile stimuli. We also show that these CCEs are modulated by vision of the body.
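    For readers unfamiliar with the measure, a crossmodal congruency effect is typically quantified as the mean reaction time on incongruent-distractor trials minus the mean on congruent trials, computed over correct responses. The sketch below shows that computation on made-up trial data; the trial format and numbers are illustrative, not the study's dataset.

```python
from statistics import mean

# Each trial: (distractor_congruent, reaction_time_ms, response_correct).
# Values are invented for illustration only.
trials = [
    (True, 512.0, True), (False, 561.0, True),
    (True, 498.0, True), (False, 575.0, True),
    (True, 530.0, True), (False, 549.0, False),  # error trial, excluded
]

def crossmodal_congruency_effect(trials):
    # CCE = mean RT on incongruent trials minus mean RT on congruent
    # trials, over correct responses only. A positive value means the
    # incongruent distractors slowed the tactile elevation judgments.
    congruent = [rt for cong, rt, ok in trials if ok and cong]
    incongruent = [rt for cong, rt, ok in trials if ok and not cong]
    return mean(incongruent) - mean(congruent)

print(f"CCE: {crossmodal_congruency_effect(trials):.1f} ms")
```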

    FAIR Principles for Research Software (FAIR4RS Principles)

    No full text
    Chue Hong NP, Katz DS, Barker M, et al. FAIR Principles for Research Software (FAIR4RS Principles). 2021.
    Research software is a fundamental and vital part of research worldwide, yet there remain significant challenges to software productivity, quality, reproducibility, and sustainability. Improving the practice of scholarship is a common goal of the open science, open-source software, and FAIR (Findable, Accessible, Interoperable and Reusable) communities, but improving the sharing of research software has not yet been a strong focus of the FAIR community. To improve the FAIRness of research software, the FAIR for Research Software (FAIR4RS) Working Group has sought to understand how to apply the FAIR Guiding Principles for scientific data management and stewardship to research software, bringing together existing and new community efforts. Many of the FAIR Guiding Principles can be directly applied to research software by treating software and data as similar digital research objects. However, specific characteristics of software (such as its executability, composite nature, and continuous evolution and versioning) make it necessary to revise and extend the principles. This document presents the first version of the FAIR Principles for Research Software (FAIR4RS Principles). It is an outcome of the FAIR for Research Software Working Group (FAIR4RS WG), which is jointly convened as an RDA Working Group, FORCE11 Working Group, and Research Software Alliance (ReSA) Task Force.
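    As a concrete, partial illustration of what applying such principles can look like in practice, the sketch below writes a codemeta.json file, a community metadata standard that supports findability and reuse by recording an identifier, license, and repository for a piece of software. The project name, DOI, and author are placeholders, and this is only one way to address a subset of the principles, not a step mandated by the FAIR4RS document.

```python
import json

# Minimal machine-readable metadata in the CodeMeta vocabulary: a
# globally unique identifier, a description, an explicit license, and
# a resolvable repository URL. All values below are placeholders.
metadata = {
    "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
    "@type": "SoftwareSourceCode",
    "name": "example-analysis-toolkit",
    "version": "1.2.0",
    "identifier": "https://doi.org/10.5281/zenodo.0000000",  # placeholder DOI
    "description": "Example toolkit illustrating FAIR4RS-style metadata.",
    "license": "https://spdx.org/licenses/MIT",
    "codeRepository": "https://github.com/example/example-analysis-toolkit",
    "author": [{"@type": "Person", "givenName": "Ada", "familyName": "Example"}],
}

# Dropping this file at the repository root lets indexers and archives
# harvest the metadata alongside the source code.
with open("codemeta.json", "w") as fh:
    json.dump(metadata, fh, indent=2)
```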