
    “Kama muta” or ‘being moved by love’: a bootstrapping approach to the ontology and epistemology of an emotion

    The emotion that people may label being moved, touched, having a heart-warming experience, rapture, or tender feelings evoked by cuteness has rarely been studied and is incompletely conceptualized. Yet it is pervasive across history, cultures, and contexts, shaping the most fundamental relationships that make up society. It is positive and can be a peak or ecstatic experience. Because no vernacular words consistently or accurately delineate this emotion, we call it kama muta. We posit that it is evoked when communal sharing relationships suddenly intensify. Using ethnological, historical, linguistic, interview, participant observation, survey, diary, and experimental methods, we have confirmed that when people report feeling this emotion they perceive that a relationship has become closer, and they tend to have a warm feeling in the chest, shed tears, and/or get goosebumps. We posit that the disposition to kama muta is an evolved universal, but that it is always culturally shaped and oriented; it must be culturally informed in order to adaptively motivate people to devote and commit themselves to new opportunities for locally propitious communal sharing relationships. Moreover, a great many cultural practices, institutions, roles, narratives, arts, and artifacts are specifically adapted to evoke kama muta: that is their function.

    Cardiac and Respiratory Patterns Synchronize between Persons during Choir Singing

    Dyadic and collective activities requiring temporally coordinated action are likely to be associated with cardiac and respiratory patterns that synchronize within and between people. However, the extent and functional significance of cardiac and respiratory between-person couplings have not been investigated thus far. Here, we report interpersonal oscillatory couplings among eleven singers and one conductor engaged in choir singing. We find that: (a) phase synchronization in both respiration and heart rate variability increases significantly during singing relative to a rest condition; (b) phase synchronization is higher when singing in unison than when singing pieces with multiple voice parts; (c) directed coupling measures are consistent with the presence of causal effects of the conductor on the singers at high modulation frequencies; (d) the different voices of the choir are reflected in network analyses of cardiac and respiratory activity based on graph theory. Our results suggest that oscillatory coupling of cardiac and respiratory patterns provides a physiological basis for interpersonal action coordination.
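    The phase-synchronization measure reported above can be illustrated with a standard phase-locking value (PLV) computation. The sketch below band-passes two signals, extracts instantaneous phases with the Hilbert transform, and averages the phase-difference vectors; the sampling rate, band edges, and signal names are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(0.1, 0.4)):
    """Generic phase-locking value between two signals.

    Band-pass both signals in a narrow band (here an illustrative
    respiratory band, in Hz), extract instantaneous phases via the
    Hilbert transform, and average the phase-difference vectors.
    PLV ranges from 0 (no phase coupling) to 1 (perfect locking).
    """
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Toy example: two noisy respiration-like signals sharing a 0.25 Hz rhythm.
fs = 10.0
t = np.arange(0, 300, 1 / fs)
x = np.sin(2 * np.pi * 0.25 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 0.25 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(f"PLV: {phase_locking_value(x, y, fs):.2f}")  # near 1 for coupled signals
```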

    The Role of Native Language Acquisition in Infant Preferences of Speech and Song

    Previous research by Persaud (2013) found that infant listeners listen longer to sung stimuli than to spoken stimuli, regardless of the age of the infant. The present study tests two age groups of infants to determine whether early language exposure affects infants' listening preferences for song and speech. Six- to seven-month-old infants and eight- to ten-month-old infants from English-speaking homes were presented with auditory stimuli of English-speaking women speaking or singing and tested in a head-turn preference task. Consistent with the findings of Persaud (2013), both age groups listened longer to the sung stimuli than to the spoken stimuli. This suggests that song is inherently more attractive to infants, possibly because song stimuli are generally less acoustically variable than speech stimuli, and therefore ultimately easier for infants to process cognitively. The results of this study support a processing-based account of infants' preferences for infant-directed (ID) stimuli.
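    The processing-based account rests on the claim that song is less acoustically variable than speech. As a rough illustration of how such variability could be quantified, the following sketch compares the frame-to-frame variability of the spectral centroid across recordings; the file names are placeholders, and this is just one possible measure, not the stimulus analysis used in the study.

```python
import librosa
import numpy as np

def spectral_centroid_variability(path):
    """Coefficient of variation of the spectral centroid across frames.

    A crude proxy for acoustic variability: higher values mean the
    spectral 'center of mass' moves around more over time.
    """
    y, sr = librosa.load(path, sr=None)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
    return np.std(centroid) / np.mean(centroid)

# Placeholder file names; any matched sung/spoken recordings would do.
for label, path in [("sung", "sung_stimulus.wav"), ("spoken", "spoken_stimulus.wav")]:
    print(label, round(spectral_centroid_variability(path), 3))
```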

    Human larynx motor cortices coordinate respiration for vocal-motor control.

    Vocal flexibility is a hallmark of the human species, most particularly the capacity to speak and sing. This ability is supported in part by the evolution of a direct neural pathway linking the motor cortex to the brainstem nucleus that controls the larynx, the primary sound source for communication. Early brain imaging studies demonstrated that the larynx motor cortex at the dorsal end of the orofacial division of motor cortex (dLMC) integrated laryngeal and respiratory control, thereby coordinating two major muscular systems that are necessary for vocalization. Neurosurgical studies have since demonstrated the existence of a second larynx motor area at the ventral extent of the orofacial motor division (vLMC) of motor cortex. The vLMC has been presumed to be less relevant to speech motor control, but its functional role remains unknown. We employed a novel ultra-high-field (7T) magnetic resonance imaging paradigm that combined singing and whistling of simple melodies to localise the larynx motor cortices and test their involvement in respiratory motor control. Surprisingly, whistling activated both 'larynx areas' more strongly than singing, despite the reduced involvement of the larynx during whistling. We provide further evidence for the existence of two larynx motor areas in the human brain, and the first evidence that laryngeal-respiratory integration is a shared property of both larynx motor areas. We outline explicit predictions about the descending motor pathways that give these cortical areas access to both the laryngeal and respiratory systems and discuss the implications for the evolution of speech.

    Deep Learning for Audio Signal Processing

    Given the recent surge in developments of deep learning, this article provides a review of state-of-the-art deep learning techniques for audio signal processing. Speech, music, and environmental sound processing are considered side by side, in order to point out similarities and differences between the domains, highlighting general methods, problems, key references, and potential for cross-fertilization between areas. The dominant feature representations (in particular, log-mel spectra and raw waveform) and deep learning models are reviewed, including convolutional neural networks, variants of the long short-term memory architecture, and more audio-specific neural network models. Subsequently, prominent deep learning application areas are covered, i.e., audio recognition (automatic speech recognition, music information retrieval, environmental sound detection, localization and tracking) and synthesis and transformation (source separation, audio enhancement, generative models for speech, sound, and music synthesis). Finally, key issues and future questions regarding deep learning applied to audio signal processing are identified.
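    As a concrete illustration of the review's dominant pipeline, log-mel features feeding a convolutional classifier, here is a minimal sketch; the sample rate, layer sizes, and class count are arbitrary choices for demonstration, not a reference model from the article.

```python
import torch
import torch.nn as nn
import torchaudio

# Log-mel front-end: waveform -> (mels, frames), then log-compressed.
melspec = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()

class SmallAudioCNN(nn.Module):
    """Minimal convolutional classifier over log-mel inputs (arbitrary sizes)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, wave):                       # wave: (batch, samples)
        feats = to_db(melspec(wave)).unsqueeze(1)  # (batch, 1, mels, frames)
        return self.net(feats)

logits = SmallAudioCNN()(torch.randn(2, 16000))  # one second of fake audio
print(logits.shape)  # torch.Size([2, 10])
```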

    PoLyScriber: Integrated Training of Extractor and Lyrics Transcriber for Polyphonic Music

    Lyrics transcription of polyphonic music is challenging because the background music reduces lyrics intelligibility. Typically, lyrics transcription is performed by a two-step pipeline, i.e., a singing-vocal extraction front-end followed by a lyrics-transcriber back-end, where the front-end and back-end are trained separately. Such a two-step pipeline suffers from both imperfect vocal extraction and a mismatch between front-end and back-end. In this work, we propose a novel end-to-end integrated training framework, which we call PoLyScriber, to globally optimize the vocal-extraction front-end and lyrics-transcriber back-end for lyrics transcription in polyphonic music. The experimental results show that our proposed integrated training model achieves substantial improvements over existing approaches on publicly available test datasets.
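    The core idea of integrated training, back-propagating the transcription loss through the vocal extractor so that both stages are optimized jointly, can be sketched generically. The toy modules and CTC setup below are illustrative stand-ins, not PoLyScriber's actual architecture or losses.

```python
import torch
import torch.nn as nn

class ToyExtractor(nn.Module):
    """Stand-in vocal-extraction front-end: polyphonic mix -> 'vocal' signal."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv1d(1, 1, kernel_size=9, padding=4)

    def forward(self, mix):          # (batch, 1, samples)
        return self.net(mix)

class ToyTranscriber(nn.Module):
    """Stand-in transcriber: frames -> per-frame token log-probs for CTC."""
    def __init__(self, n_tokens=30, hop=160):
        super().__init__()
        self.hop = hop
        self.proj = nn.Linear(hop, n_tokens)

    def forward(self, vocal):        # (batch, 1, samples)
        frames = vocal.squeeze(1).unfold(1, self.hop, self.hop)  # (batch, T, hop)
        return self.proj(frames).log_softmax(-1)                 # (batch, T, tokens)

extractor, transcriber = ToyExtractor(), ToyTranscriber()
optim = torch.optim.Adam(list(extractor.parameters()) + list(transcriber.parameters()))
ctc = nn.CTCLoss(blank=0)

mix = torch.randn(2, 1, 16000)           # fake polyphonic audio
targets = torch.randint(1, 30, (2, 12))  # fake lyric token ids
log_probs = transcriber(extractor(mix))  # gradients flow through both stages
T = log_probs.size(1)
loss = ctc(log_probs.transpose(0, 1),    # CTC expects (T, batch, tokens)
           targets,
           input_lengths=torch.full((2,), T, dtype=torch.long),
           target_lengths=torch.full((2,), 12, dtype=torch.long))
loss.backward()                          # one loss updates front- and back-end
optim.step()
```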

    Singing to infants matters: early singing interactions affect musical preferences and facilitate vocabulary building

    This research revealed that the frequency of reported infant-parent singing interactions predicted 6-month-old infants' performance in laboratory music experiments and mediated their language development in the second year. At 6 months, infants (n=36) were tested using a preferential listening procedure assessing their sustained attention to instrumental and sung versions of the same novel tunes, whilst the parents completed an ad-hoc questionnaire assessing home musical interactions with their infants. Language development was assessed in a follow-up when the infants were 14 months old (n=26). The main results showed that 6-month-olds preferred listening to sung rather than instrumental melodies, and that self-reported high levels of parental singing with their infants (i) were associated with a less pronounced preference for the sung over the instrumental version of the tunes at 6 months, and (ii) predicted significant advantages in language outcomes in the second year. The results are interpreted in relation to conceptions of developmental plasticity.
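    The mediation claim (home singing predicting language outcomes via infants' listening behaviour) has the structure of a simple mediation analysis. The sketch below shows that structure on simulated data; the variable names, effect sizes, and regression steps are illustrative, not the study's actual data or statistical model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with the same structure as the reported effect:
# singing frequency -> listening preference -> later vocabulary.
rng = np.random.default_rng(0)
n = 200
singing = rng.normal(size=n)                        # reported singing frequency
preference = 0.5 * singing + rng.normal(size=n)     # preference measure
vocabulary = 0.6 * preference + rng.normal(size=n)  # later language outcome
df = pd.DataFrame(dict(singing=singing, preference=preference, vocabulary=vocabulary))

# Classic mediation steps: total effect, effect on the mediator,
# and direct effect controlling for the mediator.
total = smf.ols("vocabulary ~ singing", df).fit()
a_path = smf.ols("preference ~ singing", df).fit()
b_path = smf.ols("vocabulary ~ singing + preference", df).fit()

indirect = a_path.params["singing"] * b_path.params["preference"]
print(f"total effect:    {total.params['singing']:.2f}")
print(f"direct effect:   {b_path.params['singing']:.2f}")
print(f"indirect effect: {indirect:.2f}")  # nonzero indirect effect = mediation
```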