
    Feel the beat: using cross-modal rhythm to integrate perception of objects, others, and self

    For a robot to be capable of development, it must be able to explore its environment and learn from its experiences. It must find (or create) opportunities to experience the unfamiliar in ways that reveal properties valid beyond the immediate context. In this paper, we develop a novel method for using the rhythm of everyday actions as a basis for identifying the characteristic appearance and sounds associated with objects, people, and the robot itself. Our approach is to identify and segment groups of signals in individual modalities (sight, hearing, and proprioception) based on their rhythmic variation, then to identify and bind causally-related groups of signals across different modalities. By including proprioception as a modality, this cross-modal binding method applies to the robot itself, and we report a series of experiments in which the robot learns about the characteristics of its own body.
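    The pipeline described above — estimate each signal's rhythm, group signals with matching rhythms, then keep only groups that span more than one modality — can be illustrated with a toy sketch. The function names, the FFT-peak period estimate, and the matching tolerance below are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dominant_period(signal, fs=100.0):
    """Return the dominant period (seconds) of a 1-D signal via its FFT peak."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return 1.0 / peak

def bind_across_modalities(modalities, fs=100.0, tol=0.05):
    """Bind signals from different modalities whose rhythmic periods agree.

    modalities: dict mapping modality name -> dict of channel name -> signal.
    Returns the groups (sets of (modality, channel) pairs) that span more
    than one modality, i.e. the cross-modal bindings.
    """
    periods = {(m, ch): dominant_period(sig, fs)
               for m, chans in modalities.items()
               for ch, sig in chans.items()}
    groups = []
    for key, p in periods.items():
        for g in groups:
            rep = periods[next(iter(g))]  # compare against the group's first member
            if abs(p - rep) / rep < tol:
                g.add(key)
                break
        else:
            groups.append({key})
    # only groups spanning several modalities count as cross-modal bindings
    return [g for g in groups if len({m for m, _ in g}) > 1]
```

    For example, a camera signal and a microphone signal sharing a 2 Hz rhythm (say, an object being shaken) would be bound together, while an unrelated proprioceptive signal at a different rhythm would be left out.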

    Speech Disruption During Delayed Auditory Feedback with Simultaneous Visual Feedback

    Delayed auditory feedback (DAF) regarding speech can cause dysfluency. The purpose of this study was to explore whether providing visual feedback in addition to DAF would ameliorate speech disruption. Speakers repeated sentences and heard their auditory feedback delayed with and without simultaneous visual feedback. DAF led to increased sentence durations and an increased number of speech disruptions. Although visual feedback did not reduce DAF effects on duration, a promising but nonsignificant trend was observed for fewer speech disruptions when visual feedback was provided. This trend was significant in speakers who were overall less affected by DAF. The results suggest the possibility that speakers strategically use alternative sources of feedback.

    Recognizing Speech in a Novel Accent: The Motor Theory of Speech Perception Reframed

    The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, when we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstruction of the speech gestures of the speaker rather than the listener. To better assess the motor theory and this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and of extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sound produced by the speaker to phonemes in the native language repertoire of the listener. This, on average, improves the recognition of later words. The model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture, revisits claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory.
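    The model's core tenet — using top-down word hypotheses to update the probabilities linking an accented sound to native phonemes — can be sketched as a toy count-based estimator. The class name, the one-to-one sound-to-phoneme alignment, and the Laplace smoothing below are illustrative assumptions, not the authors' computational model.

```python
from collections import defaultdict

class AccentAdapter:
    """Count-based estimate of P(native phoneme | heard sound).

    Each time a word hypothesis is accepted, the heard sounds are aligned
    with the word's native phonemes and the mapping counts are updated,
    which on average improves recognition of later words.
    """

    def __init__(self, smoothing=1.0):
        self.counts = defaultdict(lambda: defaultdict(float))
        self.smoothing = smoothing  # Laplace smoothing over the inventory

    def update(self, heard_sounds, hypothesized_phonemes):
        # toy assumption: sounds align one-to-one with phonemes
        for sound, phoneme in zip(heard_sounds, hypothesized_phonemes):
            self.counts[sound][phoneme] += 1.0

    def prob(self, sound, phoneme, phoneme_inventory):
        c = self.counts[sound]
        total = sum(c.values()) + self.smoothing * len(phoneme_inventory)
        return (c[phoneme] + self.smoothing) / total

    def best_phoneme(self, sound, phoneme_inventory):
        return max(phoneme_inventory,
                   key=lambda ph: self.prob(sound, ph, phoneme_inventory))
```

    For instance, if an accented speaker consistently produces a [z]-like sound where the native language has /s/, a few confirmed word hypotheses shift the mapping so that [z] is subsequently interpreted as /s/.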

    Sound Symbolism in Foreign Language Phonological Acquisition

    The paper aims at investigating the idea of a symbolic nature of sounds and its implications for the acquisition of foreign language phonology. Firstly, it will present an overview of universal trends in phonetic symbolism, i.e. non-arbitrary representations of a phoneme by specific semantic criteria. Secondly, the results of a preliminary study on different manifestations of sound symbolism, including emotionally-loaded representations of phonemes and other synaesthetic associations, shall be discussed. Finally, practical pedagogical implications of sound symbolism will be explored and a number of innovative classroom activities involving sound symbolic associations will be presented.

    Immediate and Distracted Imitation in Second-Language Speech: Unreleased Plosives in English

    The paper investigates immediate and distracted imitation in second-language speech using unreleased plosives. Unreleased plosives are fairly frequently found in English sequences of two stops. Polish, on the other hand, is characterised by a significant rate of releases in such sequences. This cross-linguistic difference served as material to look into how and to what extent non-native properties of sounds can be produced in immediate and distracted imitation. Thirteen native speakers of Polish first read and then imitated sequences of words with two stops straddling the word boundary. Stimuli for imitation had no release of the first stop. The results revealed that (1) a non-native feature such as the lack of the release burst can be imitated; (2) distraction impedes imitative performance; (3) the type of a sequence interacts with the magnitude of an imitative effect.

    Sex-specific automatic responses to infant cries: TMS reveals greater excitability in females than males in motor evoked potentials

    Neuroimaging reveals that infant cries activate parts of the premotor cortical system. To validate this effect in a more direct way, we used event-related transcranial magnetic stimulation (TMS). Here, we investigated the presence and the time course of modulation of motor cortex excitability in young adults who listened to infant cries. Specifically, we recorded motor evoked potentials (MEPs) from the biceps brachii (BB) and interosseus dorsalis primus (ID1) muscles as produced by TMS delivered from 0 to 250 ms after sound onset in six steps of 50 ms in 10 females and 10 males. We observed an excitatory modulation of MEPs at 100 ms from the onset of the infant cry, specific to females and to the ID1 muscle. We regard this modulation as a response to natural cry sounds because it was delayed and attenuated for stimuli increasingly different from natural cries, and was absent in a separate group of females who listened to non-cry stimuli physically matched to natural infant cries. Furthermore, the 100-ms latency of this modulation is not compatible with a voluntary reaction to the stimulus but suggests an automatic, bottom-up audiomotor association. The brains of adult females appear to be tuned to respond to infant cries with automatic motor excitation. This effect may reflect the greater and longstanding burden on females in infant caregiving.

    Mirroring to Build Trust in Digital Assistants

    We describe experiments towards building a conversational digital assistant that considers the preferred conversational style of the user. In particular, these experiments are designed to measure whether users prefer and trust an assistant whose conversational style matches their own. To this end we conducted a user study where subjects interacted with a digital assistant that responded in a way that either matched their conversational style, or did not. Using self-reported personality attributes and subjects' feedback on the interactions, we built models that can reliably predict a user's preferred conversational style.

    Ultrasonographic Investigation of Human Fetus Responses to Maternal Communicative and Non-communicative Stimuli

    During pregnancy fetuses are responsive to the external environment, specifically to maternal stimulation. During this period, brain circuits develop to prepare neonates to respond appropriately. The detailed behavioral analysis of fetuses' mouth movements in response to mothers' speech may reveal important aspects of their sensorimotor and affective skills; however, to date, no studies have investigated this response. Given that newborns at birth are capable of responding with matched behaviors to the social signals emitted by the caregiver, we hypothesize that such precocious responses could emerge in the prenatal period by exploiting infants' sensitivity to their mother's voice. By means of two-dimensional (2D) ultrasonography, we assessed whether fetuses at 25 weeks of gestation showed a congruent mouth motor response to maternal acoustic stimulation. Mothers were asked to provide different stimuli, each characterized by a different acoustic output (e.g. chewing, yawning, nursery rhymes, etc.), and we recorded the behavioral responses of 29 fetuses. We found that, when mothers sang the syllable LA in a nursery rhyme, fetuses significantly increased mouth openings. Other stimuli provided by the mother did not produce other significant changes in fetuses' behavior. This finding suggests that fetuses are sensitive only to specific maternal vocalizations (LA) and that fetal matched responses are rudimentary signs of early mirroring behaviors that become functional in the postnatal period. In conclusion, fetuses seem to be predisposed to respond selectively to specific maternal stimuli. We propose that such responses may play a role in the development of behavioral and emotional attunement with their mothers long before birth.