45 research outputs found

    Active versus passive listening to auditory streaming stimuli: a near-infrared spectroscopy study.

    Get PDF
    Kanazawa University, Institute of Human and Social Sciences, Faculty of Human Sciences
    We use near-infrared spectroscopy (NIRS) to assess listeners' cortical responses to a 10-s series of pure tones separated in frequency. Listeners are instructed either to judge the rhythm of these "streaming" stimuli (active-response listening) or to listen to the stimuli passively. Experiment 1 shows that active-response listening causes increases in oxygenated hemoglobin (oxy-Hb) in response to all stimuli, generally over the (pre)motor cortices. The oxy-Hb increases are significantly larger over the right hemisphere than over the left for the final 5 s of the stimulus. Hemodynamic levels do not vary with changes in the frequency separation between the tones and the corresponding changes in perceived rhythm ("gallop," "streaming," or "ambiguous"). Experiment 2 shows that hemodynamic levels are strongly influenced by listening mode. For the majority of time windows, active-response listening causes significantly larger oxy-Hb increases than passive listening, notably over the left hemisphere during the stimulus and over both hemispheres after the stimulus. This difference cannot be attributed to physical motor activity and preparation related to button pressing after stimulus end, because button pressing is required in both listening modes.

    A custom magnetoencephalography device reveals brain connectivity and high reading/decoding ability in children with autism

    Get PDF
    A subset of individuals with autism spectrum disorder (ASD) performs more proficiently on certain visual tasks than would be predicted by their general cognitive performance. However, in younger children with ASD (aged 5 to 7), preserved ability in these tasks and the neurophysiological correlates of that ability are not well documented. In the present study, we used a custom child-sized magnetoencephalography system and demonstrated that preserved ability in a visual reasoning task was associated with rightward lateralisation of the neurophysiological connectivity between the parietal and temporal regions in children with ASD. In addition, we demonstrated that higher reading/decoding ability was also associated with the same lateralisation in children with ASD. These neurophysiological correlates of visual tasks are considerably different from those observed in typically developing children. These findings indicate that children with ASD have inherently different neural pathways that contribute to their relatively preserved ability in visual tasks.
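Rightward lateralisation of connectivity, as described in this abstract, is commonly summarised with a laterality index, LI = (R − L) / (R + L), where positive values indicate rightward dominance. The sketch below shows that generic metric; the function name and example connectivity values are illustrative, not taken from the study's own pipeline.

```python
def laterality_index(right: float, left: float) -> float:
    """Laterality index for hemispheric measures.

    `right` and `left` are connectivity strengths (e.g. parieto-temporal
    coherence) for each hemisphere. Positive result -> rightward
    lateralisation; zero -> symmetric.
    """
    total = right + left
    if total == 0:
        return 0.0
    return (right - left) / total

# Illustrative values only: stronger right-hemisphere connectivity.
li = laterality_index(right=0.8, left=0.5)  # positive -> rightward
```

The index is bounded in [−1, 1], which makes it convenient for comparing lateralisation across participants with different overall connectivity levels.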

    Temporal Resolution Needed for Auditory Communication: Measurement With Mosaic Speech

    No full text
    The temporal resolution needed for Japanese speech communication was measured. A new experimental paradigm is introduced that can reflect the spectro-temporal resolution necessary for healthy listeners to perceive speech. As a first step, we report listeners' intelligibility scores for Japanese speech with systematically degraded temporal resolution, so-called "mosaic speech": speech mosaicized in the coordinates of time and frequency. The results of two experiments show that mosaic speech cut into short static segments was almost perfectly intelligible at a temporal resolution of 40 ms or finer. Intelligibility dropped at a temporal resolution of 80 ms, but was still around the 50%-correct level. The data are in line with previous results showing that speech signals separated into short temporal segments of <100 ms can be remarkably robust, in terms of linguistic-content perception, against drastic manipulations within each segment, such as partial signal omission or temporal reversal. The human perceptual system can thus extract meaning from unexpectedly rough temporal information in speech. The process resembles the visual system stringing together static movie frames of ~40 ms into vivid motion.
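The mosaicizing operation described above can be pictured as pixelating a spectrogram: within each time-frequency tile, all values are replaced by the tile mean, leaving the signal static within the tile. A minimal sketch, assuming a precomputed magnitude spectrogram (the function name and tile sizes are our own, not the authors' code):

```python
import numpy as np

def mosaicize(spectrogram: np.ndarray, t_res: int, f_res: int) -> np.ndarray:
    """Coarse-grain a magnitude spectrogram into time-frequency tiles.

    spectrogram: 2-D array of shape (freq_bins, time_frames).
    t_res: tile width in time frames (e.g. 4 frames ~ 40 ms at a 10-ms hop).
    f_res: tile height in frequency bins.
    Every value inside a tile is replaced by that tile's mean.
    """
    n_f, n_t = spectrogram.shape
    out = np.empty_like(spectrogram, dtype=float)
    for f0 in range(0, n_f, f_res):
        for t0 in range(0, n_t, t_res):
            tile = spectrogram[f0:f0 + f_res, t0:t0 + t_res]
            out[f0:f0 + f_res, t0:t0 + t_res] = tile.mean()
    return out

# Toy example: 8 frequency bins x 12 time frames.
spec = np.arange(96, dtype=float).reshape(8, 12)
mosaic = mosaicize(spec, t_res=4, f_res=2)
```

Resynthesizing audio from the mosaicized spectrogram (e.g. with noise-vocoder or phase-reconstruction methods) is a separate step not shown here.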

    The Impact of Sound Systems on the Perception of Cinematic Content in Immersive Audiovisual Productions

    No full text
    openaire: EC/H2020/659114/EU//MULTISSOUND
    With fast technological developments, traditional perceptual environments disappear and new ones emerge. These changes make the human senses adapt to new ways of perceptual understanding, for example, regarding the perceptual integration of sound and vision. Proceeding from the fact that hearing cooperates with visual attention processes, the aim of this study is to investigate the effect of different sound design conditions on the perception of cinematic content in immersive audiovisual reproductions. Here we present the results of a visual selective attention task (counting objects) performed by participants watching a 270-degree immersive audiovisual display, on which a movie (»Ego Cure«) was shown. Four sound conditions were used, employing an increasing number of loudspeakers: mono, stereo, 5.1, and 7.1.4. Eye tracking was used to record the participants' eye gaze during the task. The eye-tracking data showed that an increased number of speakers and a wider spatial audio distribution diffused the participants' attention from the task-related part of the display to non-task-related directions. The number of participants looking at the task-irrelevant display in the 7.1.4 condition was significantly higher than in the mono audio condition. This implies that additional spatial cues in the auditory modality automatically influence human visual attention (involuntary eye movements) and human analysis of visual information. Sound engineers should consider this when mixing educational or any other information-oriented productions.
    Peer reviewed

    Computational-Model-Based Analysis of Context Effects on Harmonic Expectancy

    No full text
    Expectancy for an upcoming musical chord, harmonic expectancy, is thought to be based on automatic activation of tonal knowledge. Since previous studies implicitly relied on interpretations based on Western music theory, the underlying computational processes involved in harmonic expectancy, and how they relate to tonality, need further clarification. In particular, short chord sequences that cannot be assigned a unique key are difficult to interpret in music theory. In this study, we examined effects of preceding chords on harmonic expectancy from a computational perspective, using stochastic modeling. We conducted a behavioral experiment in which participants listened to short chord sequences and evaluated the subjective relatedness of the last chord to the preceding ones. Based on these judgments, we built stochastic models of the computational process underlying harmonic expectancy, and then compared the models' explanatory power. Our results imply that, even when listening to short chord sequences, internally constructed and updated tonal assumptions determine the expectancy of the upcoming chord.
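One simple instance of a stochastic model of harmonic expectancy is a first-order Markov chain over chord symbols, where the expectancy of the next chord is its estimated transition probability given the preceding chord. This is only an illustrative baseline; the chord corpus and function names below are invented, and the paper's fitted models are not reproduced here.

```python
from collections import defaultdict

def fit_markov(sequences):
    """Estimate first-order chord transition probabilities from sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return {prev: {nxt: c / sum(d.values()) for nxt, c in d.items()}
            for prev, d in counts.items()}

def expectancy(model, context, chord):
    """Expectancy of `chord` given the last chord of `context` (0 if unseen)."""
    return model.get(context[-1], {}).get(chord, 0.0)

# Illustrative corpus of Roman-numeral chord sequences.
corpus = [["I", "IV", "V", "I"], ["I", "V", "I"], ["I", "IV", "I"]]
model = fit_markov(corpus)
p = expectancy(model, ["I", "IV"], "V")  # P(V | preceding chord IV)
```

A model that conditions on longer contexts, or that maintains an updated key (tonality) estimate as chords arrive, would capture the "internally constructed and updated tonal assumptions" the abstract refers to; the first-order chain is only the simplest point of comparison.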

    Summary of statistical tests in the individual analysis.

    No full text