33 research outputs found

    Sound-contingent visual motion aftereffect

    Background: After prolonged exposure to a paired presentation of different types of signals (e.g., color and motion), one of the signals (color) becomes a driver for the other signal (motion). This phenomenon, known as a contingent motion aftereffect, indicates that the brain can establish new neural representations even in the adult brain. However, contingent motion aftereffects have so far been reported only within the visual or auditory domain. Here, we demonstrate that a visual motion aftereffect can be contingent on a specific sound. Results: Dynamic random dots moving alternately rightward or leftward were presented to the participants, with each direction of motion accompanied by an auditory tone of a unique, specific frequency. After a 3-minute exposure, the tones began to exert a marked influence on visual motion perception, and the percentage of dots required to trigger motion perception changed systematically depending on the tones. Furthermore, this effect lasted for at least two days. Conclusions: These results indicate that a new neural representation can be rapidly established between the auditory and visual modalities.

    Sound Frequency and Aural Selectivity in Sound-Contingent Visual Motion Aftereffect

    BACKGROUND: One possible strategy for evaluating whether signals in different modalities originate from a common external event or object is to form associations between inputs from different senses. This strategy would be quite effective because signals arising from a common external event are aligned spatially and temporally across modalities. Indeed, it has been demonstrated that after adaptation to visual apparent motion paired with alternating auditory tones, the tones begin to trigger illusory motion perception of a static visual stimulus, where the perceived direction of visual lateral motion depends on the order in which the tones are replayed. The mechanisms underlying this phenomenon remain unclear. One important approach to understanding them is to examine whether the effect shows selectivity in auditory processing; however, it has not yet been determined whether this aftereffect transfers across sound frequencies and between ears. METHODOLOGY/PRINCIPAL FINDINGS: Two circles placed side by side were presented in alternation, producing apparent motion perception, and each onset was accompanied by a tone burst of a specific and unique frequency. After exposure to this visual apparent motion with tones for a few minutes, the tones became drivers for illusory motion perception. However, the aftereffect was observed only when the adapter and test tones were presented at the same frequency and to the same ear. CONCLUSIONS/SIGNIFICANCE: These findings suggest that the auditory processing underlying the establishment of novel audiovisual associations is selective, potentially, though not necessarily, indicating that this processing occurs at an early stage.

    Supplemental Information 2: Example dataset description

    Access to consistent, high-quality metadata is critical to finding, understanding, and reusing scientific data. However, while there are many relevant vocabularies for the annotation of a dataset, none sufficiently captures all the necessary metadata. This prevents uniform indexing and querying of dataset repositories. Towards providing a practical guide for producing a high-quality description of biomedical datasets, the W3C Semantic Web for Health Care and the Life Sciences Interest Group (HCLSIG) identified Resource Description Framework (RDF) vocabularies that could be used to specify common metadata elements and their value sets. The resulting guideline covers elements of description, identification, attribution, versioning, provenance, and content summarization. This guideline reuses existing vocabularies and is intended to meet key functional requirements including indexing, discovery, exchange, query, and retrieval of datasets, thereby enabling the publication of FAIR data. The resulting metadata profile is generic and could be used by other domains with an interest in providing machine-readable descriptions of versioned datasets.

    The health care and life sciences community profile for dataset descriptions

    Access to consistent, high-quality metadata is critical to finding, understanding, and reusing scientific data. However, while there are many relevant vocabularies for the annotation of a dataset, none sufficiently captures all the necessary metadata. This prevents uniform indexing and querying of dataset repositories. Towards providing a practical guide for producing a high-quality description of biomedical datasets, the W3C Semantic Web for Health Care and the Life Sciences Interest Group (HCLSIG) identified Resource Description Framework (RDF) vocabularies that could be used to specify common metadata elements and their value sets. The resulting guideline covers elements of description, identification, attribution, versioning, provenance, and content summarization. This guideline reuses existing vocabularies and is intended to meet key functional requirements including indexing, discovery, exchange, query, and retrieval of datasets, thereby enabling the publication of FAIR data. The resulting metadata profile is generic and could be used by other domains with an interest in providing machine-readable descriptions of versioned datasets.

    Effect of flanking sounds on the auditory continuity illusion.

    BACKGROUND: The auditory continuity illusion, i.e., the perceptual restoration of a target sound briefly interrupted by an extraneous sound, has been shown to depend on masking. However, little is known about factors other than masking. METHODOLOGY/PRINCIPAL FINDINGS: We examined whether a sequence of flanking transient sounds affects the apparent continuity of a target tone alternated with a bandpass noise at regular intervals. The flanking sounds significantly raised the limit of perceiving apparent continuity, measured as the maximum target level at a fixed noise level, irrespective of the frequency separation between the target and flanking sounds: the flanking sounds enhanced the continuity illusion. This effect depended on the temporal relationship between the flanking sounds and the noise bursts. CONCLUSIONS/SIGNIFICANCE: The spectrotemporal characteristics of the enhancement effect suggest that a mechanism compensating for exogenous attentional distraction may contribute to the continuity illusion.

    Study on Perception of Speaker Age by Semantic Differential Method

    Humans can perceive the ages of speakers from uttered voices by their own judgements. The perceived ages are called perceptual ages (PAs). Many earlier studies focused on statistical correlations between aging voices and acoustic features without taking into account the fact that human perception is vague rather than precise [1]. This paper focuses on the psychological factors involved in human perception of aging voices. An experiment was carried out in which aging voices were evaluated against candidate semantic primitives, and the results of the listening test were analyzed by the Semantic Differential Method and regression analysis to investigate the impressions that humans use to estimate the PAs of speakers. Results show that, for both male and female voices, the Metal Factor (Deep - Flimsy, Full - Delicate, Rich - Thin, Heavy - Light), which shows a linear relation with both male and female PAs, is the most important factor helping listeners judge the PAs of uttered voices. In addition, the remaining factors show both linear and non-linear relationships with male aging voices, but only non-linear relations with female aging voices.

    Study on Relations between Emotion Perception and Acoustic Features using Speech Morphing Techniques

    In order to investigate which acoustic features are important to emotional impressions and how those features relate to emotion perception, we interpolated voices between pairs of typical emotions with a morphing method, collected emotion scores on the Arousal-Valence space through a listening test, and analyzed how the acoustic features relate to the evaluations. The results show that Arousal perception can be described stably using the fundamental frequency (F0) alone. In contrast, although this research found that F0 and formants can fit Valence scores, how the acoustic features correspond to Valence perception varies with different morphing references. Furthermore, the results show that separate modification rules for different formant components are necessary for a voice conversion system to achieve better Valence control.

    Stimuli used in Experiment 1.

    No full text
    The stimuli employed for measuring the continuity limit consisted of a 500-Hz sinusoidal inducee alternating with a 1000-Hz, one-third-octave noise-band inducer. The flanking sequence consisted of seven tone pips.