
    Perceiving a Stranger's Voice as Being One's Own: A ‘Rubber Voice’ Illusion?

    We describe an illusion in which a stranger's voice, when presented as the auditory concomitant of a participant's own speech, is perceived as a modified version of their own voice. When the congruence between utterance and feedback breaks down, the illusion is also broken. Compared to a baseline condition in which participants heard their own voice as feedback, hearing a stranger's voice induced robust changes in the fundamental frequency (F0) of their production. Moreover, the shift in F0 appears to be feedback dependent, since shift patterns depended reliably on the relationship between the participant's own F0 and the stranger-voice F0. The shift in F0 was evident both when the illusion was present and after it was broken, suggesting that auditory feedback from production may be used separately for self-recognition and for vocal motor control. Our findings indicate that self-recognition of voices, like that of other body attributes, is malleable and context dependent.
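    The abstract does not say how F0 was measured, but a common approach to pitch estimation is autocorrelation peak-picking within a plausible voice range. A minimal sketch, assuming that method (the function name and parameters are illustrative, not from the study):

```python
import math

def estimate_f0(samples, sample_rate, f_min=75.0, f_max=400.0):
    """Estimate fundamental frequency (Hz) via autocorrelation peak-picking.

    Searches lags corresponding to the f_min..f_max range and returns the
    frequency of the lag with the highest autocorrelation.
    """
    n = len(samples)
    lag_min = int(sample_rate / f_max)
    lag_max = min(int(sample_rate / f_min), n - 1)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Synthetic 200 Hz tone sampled at 8 kHz
sr = 8000
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(2048)]
print(round(estimate_f0(tone, sr)))  # → 200
```

    Real pitch trackers add voicing detection and interpolation around the peak lag; this sketch only shows the core idea behind an F0 time course like the one plotted per trial.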

    Multivoxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared with during passive listening. One network of regions appears to encode an "error signal" regardless of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a frontotemporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems.
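    The abstract does not name its similarity metric, but "neural-pattern similarity" in MVPA is commonly computed as a Pearson correlation between voxel activation patterns across conditions. A toy sketch with made-up patterns (the data and variable names are hypothetical):

```python
def pearson(a, b):
    """Pearson correlation between two equal-length voxel patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Hypothetical 5-voxel patterns: two distorted-feedback types vs. unaltered
pattern_distort_a = [0.9, 0.1, 0.8, 0.2, 0.7]
pattern_distort_b = [0.8, 0.2, 0.9, 0.1, 0.6]
pattern_unaltered = [0.1, 0.9, 0.2, 0.8, 0.3]

print(pearson(pattern_distort_a, pattern_distort_b))  # high: consistent "error" pattern
print(pearson(pattern_distort_a, pattern_unaltered))  # low: distinct pattern
```

    On this logic, a region whose patterns correlate highly across acoustically different distortions (but not with unaltered feedback) would look like the acoustic-invariant "error signal" network the abstract describes.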

    Reducing Intersubject Anatomical Variation: Effect of Normalization Method on Sensitivity of Functional Magnetic Resonance Imaging Data Analysis in Auditory Cortex and the Superior Temporal Region

    Conventional group analysis of functional MRI (fMRI) data usually involves spatial alignment of anatomy across participants by registering every brain image to an anatomical reference image. Due to the high degree of inter-subject anatomical variability, a low-resolution average anatomical model is typically used as the target template, and/or smoothing kernels are applied to the fMRI data to increase the overlap among subjects' image data. However, such smoothing can make it difficult to resolve small regions, such as subregions of auditory cortex, when anatomical morphology varies among subjects. Here, we use data from an auditory fMRI study to show that using a high-dimensional registration technique (HAMMER) results in an enhanced functional signal-to-noise ratio (fSNR) for functional data analysis within auditory regions, with more localized activation patterns. The technique is validated against DARTEL, a high-dimensional diffeomorphic registration technique, as well as against commonly used low-dimensional normalization techniques such as those provided with the SPM2 (cosine basis functions) and SPM5 (unified segmentation) software packages. We also systematically examine how the spatial resolution of the template image and spatial smoothing of the functional data affect the results. Only the high-dimensional technique (HAMMER) appears able to capitalize on the excellent anatomical resolution of a single-subject reference template, and, as expected, smoothing increased fSNR, but at the cost of spatial resolution. In general, the results demonstrate significant improvement in fSNR using HAMMER compared to analysis after normalization using DARTEL, or conventional normalization such as cosine basis functions and unified segmentation in SPM, with more precisely localized activation foci, at least for activation in the region of auditory cortex.
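    The smoothing-versus-resolution trade-off the abstract notes has a simple statistical core: averaging over a spatial window suppresses independent noise roughly in proportion to the square root of the window size, while blurring fine structure. A minimal 1-D sketch (boxcar kernel rather than the Gaussian kernels typical in fMRI, purely for illustration):

```python
import random

def smooth(signal, width):
    """Boxcar smoothing: average over a sliding window of `width` samples."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(10000)]
smoothed = smooth(noise, 9)

print(std(noise))     # ~1.0
print(std(smoothed))  # ~0.33: noise std falls by ~sqrt(9)
```

    The same averaging that drops the noise floor also spreads any genuine focal signal across the window, which is why smoothing raises fSNR at the cost of the spatial resolution needed to separate auditory-cortex subregions.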

    Recovery from Interruptions: Knowledge Workers’ Strategies, Failures and Envisioned Solutions

    This document presents qualitative results from interviews with knowledge workers about their strategies for recovering from interruptions. Special focus is given to cases where these strategies fail because of the nature of the interruption and the limitations of existing computer support. Potential solutions offered by participants to overcome some of these problems are presented. These findings have implications for researchers and designers of task-centric applications, especially in the area of support for recovery from interruptions.

    TaskTracer: A Desktop Environment to Support Multi-Tasking Knowledge Workers

    This paper reports on TaskTracer, a software system being designed to help highly multitasking knowledge workers rapidly locate, discover, and reuse past processes they used to successfully complete tasks. The system monitors users' interaction with a computer, collects detailed records of users' activities and the resources they access, associates (automatically or with the user's assistance) each interaction event with a particular task, and enables users to access records of past activities and quickly restore task contexts. We present a novel Publisher-Subscriber architecture for collecting and processing users' activity data, describe several different user interfaces tried with TaskTracer, and discuss the possibility of applying machine learning techniques to recognize and predict users' tasks.
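    The paper names a Publisher-Subscriber architecture but the abstract gives no API details. The general pattern it refers to can be sketched as a topic-keyed event bus, with event collectors publishing and processors subscribing (all names and the sample event below are hypothetical, not from TaskTracer):

```python
from collections import defaultdict

class EventBus:
    """Minimal publish-subscribe hub: collectors publish interaction
    events to named topics; processors subscribe by topic."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every handler registered for this topic.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
log = []
# A hypothetical task-association processor listening for file events
bus.subscribe("file.opened", lambda e: log.append(("associate-task", e["path"])))
bus.publish("file.opened", {"path": "report.doc"})
print(log)  # → [('associate-task', 'report.doc')]
```

    Decoupling collectors from processors this way lets new consumers (user interfaces, machine-learning task predictors) be added without changing the instrumentation that records user activity, which matches the extensibility the paper emphasizes.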

    The F0 (Hz) time course for ‘day’ from one representative participant is shown.

    This participant was from the Last Mismatch group and was assigned V1 as the stimulus voice. The solid purple vertical line at trial 20 indicates the end of the Baseline stage. The two solid red vertical lines indicate the beginning and end of the Stimulus Voice Mismatch stage. The black dashed horizontal line indicates the F0 of the stimulus voice, V1.