
    Playful expressions of one-year-old chimpanzee infants in social and solitary play contexts

    Knowledge of the context and development of playful expressions in chimpanzees is limited because research has tended to focus on social play, on older subjects, and on the communicative signaling function of expressions. Here we explore the rate of playful facial and body expressions in solitary and social play, changes from 12 to 15 months of age, and the extent to which social partners match expressions, which may illuminate a route through which context influences expression. Naturalistic observations of seven chimpanzee infants (Pan troglodytes) were conducted at Chester Zoo, UK (n = 4), and Primate Research Institute, Japan (n = 3), at two ages, 12 months and 15 months. No group or age differences were found in the rate of infant playful expressions. However, modalities of playful expression varied with type of play: in social play, the rate of play faces was high, whereas in solitary play, the rate of body expressions was high. Among the most frequent types of play, mild contact social play had the highest rates of play faces and multi-modal expressions (often play faces with hitting). Social partners matched both infant play faces and infant body expressions, but play faces were matched at a significantly higher rate that increased with age. Matched expression rates were highest when playing with peers despite infant expressiveness being highest when playing with older chimpanzees. Given that playful expressions emerge early in life and continue to occur in solitary contexts through the second year of life, we suggest that the play face and certain body behaviors are emotional expressions of joy, and that such expressions develop additional social functions through interactions with peers and older social partners.

    Preferential decoding of emotion from human non-linguistic vocalizations versus speech prosody

    This study used event-related brain potentials (ERPs) to compare the time course of emotion processing from non-linguistic vocalizations versus speech prosody, to test whether vocalizations are treated preferentially by the neurocognitive system. Participants passively listened to vocalizations or pseudo-utterances conveying anger, sadness, or happiness as the EEG was recorded. Simultaneous effects of vocal expression type and emotion were analyzed for three ERP components (N100, P200, late positive component). Emotional vocalizations and speech were differentiated very early (N100), and vocalizations elicited stronger, earlier, and more differentiated P200 responses than speech. At later stages (450–700 ms), anger vocalizations evoked a stronger late positive component (LPC) than other vocal expressions, which was similar but delayed for angry speech. Individuals with high trait anxiety exhibited early, heightened sensitivity to vocal emotions (particularly vocalizations). These data provide new neurophysiological evidence that vocalizations, as evolutionarily primitive signals, are accorded precedence over speech-embedded emotions in the human voice.
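    The component analysis described above, comparing mean amplitudes within fixed latency windows such as the N100, P200, and LPC, can be illustrated with a minimal sketch. The array shapes, sampling rate, and window boundaries below are assumptions for illustration, not the study's actual parameters.

```python
import numpy as np

# Illustrative sketch only: `epochs` stands in for baseline-corrected EEG segments
# (n_trials, n_channels, n_times) sampled at an assumed 500 Hz, epoch start at -100 ms.
rng = np.random.default_rng(0)
sfreq = 500.0
tmin = -0.1
epochs = rng.normal(0.0, 1e-6, size=(120, 32, 450))   # 120 trials, 32 channels, 900 ms
times = tmin + np.arange(epochs.shape[-1]) / sfreq

def mean_amplitude(epochs, times, window):
    """Average voltage in a latency window, per trial and channel."""
    lo, hi = window
    mask = (times >= lo) & (times <= hi)
    return epochs[:, :, mask].mean(axis=-1)

# Component windows roughly matching those named in the abstract (assumed values).
windows = {"N100": (0.08, 0.12), "P200": (0.15, 0.25), "LPC": (0.45, 0.70)}
for name, win in windows.items():
    amp = mean_amplitude(epochs, times, win)   # shape: (trials, channels)
    print(name, amp.mean(axis=0)[:3])          # grand mean for the first 3 channels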

    Tailoring Interaction. Sensing Social Signals with Textiles.

    Nonverbal behaviour is an important part of conversation and can reveal much about the nature of an interaction. It includes phenomena ranging from large-scale posture shifts to small-scale nods. Capturing these often spontaneous phenomena requires unobtrusive sensing techniques that do not interfere with the interaction. We propose an underexploited sensing modality for sensing nonverbal behaviours: textiles. As a material in close contact with the body, they provide ubiquitous, large surfaces that make them a suitable soft interface. Although the literature on nonverbal communication focuses on upper body movements such as gestures, observations of multi-party, seated conversations suggest that sitting postures and leg and foot movements are also systematically related to patterns of social interaction. This thesis addresses the following questions: Can the textiles surrounding us measure social engagement? Can they tell who is speaking, and who, if anyone, is listening? Furthermore, how should wearable textile sensing systems be designed, and what behavioural signals could textiles reveal? To address these questions, we have designed and manufactured bespoke chairs and trousers with integrated textile pressure sensors, which are introduced here. The designs are evaluated in three user studies that produce multi-modal datasets for the exploration of fine-grained interactional signals. Two approaches to using these bespoke textile sensors are explored. First, hand-crafted sensor patches in chair covers serve to distinguish speakers and listeners. Second, a pressure-sensitive matrix in custom-made smart trousers is developed to detect static sitting postures, dynamic bodily movement, as well as basic conversational states. Statistical analyses, machine learning approaches, and ethnographic methods show that by monitoring patterns of pressure change alone it is possible not only to classify postures with high accuracy, but also to identify a wide range of behaviours reliably in individuals and groups. These findings establish textiles as a novel, wearable sensing system for applications in social sciences, and contribute towards a better understanding of nonverbal communication, especially the significance of posture shifts when seated. If chairs know who is speaking, and if our trousers can capture our social engagement, what role can smart textiles have in the future of human interaction? How can we build new ways to map social ecologies and tailor interactions?
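    As a rough illustration of the machine-learning side described above, the sketch below classifies sitting postures from a flattened pressure matrix using a generic classifier. The grid size, feature choices, and class count are hypothetical and are not taken from the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data layout: each sample is a flattened 8x8 textile pressure matrix
# from a trouser sensor, labelled with one of a few sitting postures.
rng = np.random.default_rng(1)
n_samples, grid = 400, (8, 8)
X = rng.uniform(0, 1, size=(n_samples, grid[0] * grid[1]))   # pressure readings
y = rng.integers(0, 4, size=n_samples)                        # 4 posture classes (placeholder labels)

# Simple per-frame features: raw cells plus overall load and a centre-of-pressure coordinate.
total = X.sum(axis=1, keepdims=True)
cols = np.tile(np.arange(grid[1]), grid[0])                   # column index of each flattened cell
cop_x = (X * cols).sum(axis=1, keepdims=True) / np.maximum(total, 1e-9)
features = np.hstack([X, total, cop_x])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, features, y, cv=5)
print("posture accuracy: %.2f ± %.2f" % (scores.mean(), scores.std()))
```

    With real sensor frames in place of the random arrays, the same pipeline could be extended with temporal features (pressure change over time) to capture the dynamic movements and conversational states mentioned above.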

    Humor's Effect on Short-Term Memory in Older Adults: An Innovative Wellness Paradigm

    Context: With ageing, the detrimental effects of stress can impair the ability to learn and sustain memory. Humor and the associated mirthful laughter can reduce stress by decreasing the hormone cortisol. Chronic release of cortisol can damage hippocampal neurons, leading to impairment of learning and memory. Objectives: To examine the effect of watching a humor video on short-term memory in older adults. Design: A randomized, controlled trial. Setting: Loma Linda University, Loma Linda, CA. Participants: 30 subjects: 20 normal healthy older adults, 11 males and 9 females; 10 Type 2 diabetic older adults, 6 males and 4 females. Intervention: Two humor groups, healthy elderly (69.9 ± 3.7 years) and diabetics (67.1 ± 3.8 years), self-selected 1 of 2 humorous videos (20 minutes) - a Red Skelton comedy or a montage of America’s Funniest Home Videos. The control group (68.7 ± 5.5 years) did not watch a humor video and sat in quiescence. Outcome Measures: The standardized neuropsychological memory assessment tool, the Rey Auditory Verbal Learning Test, was used to assess 1) learning ability, 2) recall ability, and 3) visual recognition ability. Salivary cortisol measurements were obtained at 5 time points. Results: In the healthy elderly, diabetic, and control groups: 1) learning ability improved by 38.5%, 33.4%, and 24.0% respectively (p=.025); 2) delayed recall improved by 43.6%, 48.1%, and 20.3% respectively (p=.064); and 3) visual recognition increased by 12.6%, 16.7%, and 8.3% respectively (p=.321). For salivary cortisol levels, there were 1) borderline and significant changes in the healthy elderly group (p=.047, .046, and .062 respectively); 2) significant changes in the diabetic group (p=.047, .025, and .035 respectively); and 3) no significant changes in the control group. Conclusion: Our research findings offer potential clinical and rehabilitative benefits that can be applied to whole-person elderly wellness programs. The cognitive components learning ability and delayed recall become more challenging as we age and are essential to older adults for an improved quality of life: mind, body, and spirit. Although older adults have age-related memory deficits, complementary, enjoyable, and beneficial humor therapies need to be implemented for these individuals.

    Authenticity recognition in laughter and crying : an ERP study

    Master's thesis in Neurociências, Universidade de Lisboa, Faculdade de Medicina, 2018. Our ability to detect authenticity in the human affective voice, whether an emotion was evoked spontaneously (reactive, genuine) or voluntarily (deliberate, controlled), is crucial in our everyday social interactions, as emotions may carry different meanings and elicit different social responses. Taking laughter as an example, while spontaneous laughter is stimulus-driven and signals positive affect, voluntary laughter deliberately signals polite agreement or affiliation without necessarily being associated with an emotional experience. Recent functional magnetic resonance imaging (fMRI) studies have shown brain differences between these voluntary and spontaneous laughter vocalizations. While both spontaneous and voluntary laughs engage the auditory cortex, voluntary laughter additionally recruits brain areas typically involved in mentalizing, possibly reflecting the decoding of the intentional state behind these vocal expressions. However, how authenticity affects the temporal course of voice processing is still unclear. Previous imaging studies have shed light on the areas putatively involved in the processing of authenticity in vocal emotions. Nevertheless, fMRI lacks temporal resolution and is unable to provide information about the exact time window in which differences in the processing of spontaneous and voluntary vocalizations may occur in the brain. In the current study we used the event-related brain potential (ERP) methodology to shed light on how authenticity modulates the temporal course of vocal information processing in the brain. In particular, we investigated differences between spontaneous and voluntary non-linguistic affective vocalizations (crying and laughter) in both the amplitude and the latency of ERP components associated with early (N100, P200) and late (late positive potential – LPP) stages of voice processing. We also aimed to replicate previous findings suggesting amplitude and latency differences as a function of emotionality in these three ERP components. In addition, we explored the extent to which sex differences may exist in both the authenticity and the emotionality modulation of these potentials. Twenty-three right-handed healthy participants (13 female) listened to spontaneous and voluntary non-linguistic affective vocalizations (happy, sad, and neutral) while they rated the authenticity conveyed by the speaker and the electroencephalogram (EEG) was recorded. No differences in amplitude or latency were found between spontaneous and voluntary vocalizations in the N100, P200, or LPP components. Emotionality effects were found at an early processing stage (N100), with happy and sad vocalizations eliciting more negative amplitudes than neutral vocalizations. Happy vocalizations elicited an enhanced P200 when compared with neutral vocalizations. At later processing stages (500–700 ms), happy and sad vocalizations elicited a stronger late positivity (LPP) than neutral vocalizations. No differences between emotional and neutral vocalizations were detected in the latency of these components. Lastly, no sex differences were found in the amplitude or latency of the N100, P200, or LPP for emotionality or authenticity effects. Although exploratory, with a small sample size, and deserving further replication, our results suggest that authenticity is unlikely to be decoded during the first 700 ms after vocalization onset.
The emotional salience of the voice, on the other hand, seems to be extracted as early as 100 ms after onset. While emotional content seems to be rapidly decoded from vocal cues, authenticity may involve further elaborated processing occurring at very late stages of processing.
The human voice communicates not only verbal information but also information about the speaker's identity and emotional state (e.g., fear, anger, disgust, sadness, happiness, surprise) through modulations of its acoustic properties (frequency, intensity, rhythm). The authenticity of an emotional expression is another property extracted when we listen to a voice. From the acoustic profile of the vocalization and its context, we are able to detect whether an emotion was evoked spontaneously (a reactive, genuine act) or voluntarily (a deliberate, controlled act). The ability to detect the authenticity of emotional expressions in the human voice is crucial in our everyday social interactions, since these two types of expression convey different meanings and elicit different social responses. Taking laughter as an example, whereas spontaneously evoked laughter results from an external event and signals positive affect, voluntarily evoked laughter is deliberate, indicating politeness or affiliation without necessarily being associated with an emotional experience. Recent functional magnetic resonance imaging studies have shown differences in the brain between spontaneously and voluntarily evoked laughter. While both spontaneous and voluntary laughter activate areas of the auditory cortex, voluntary laughter additionally activates areas typical of mentalizing, possibly involving the interpretation of the intention behind the vocal expression. However, how authenticity affects the time course of voice processing remains unexplored. A multi-stage model of vocal information processing was proposed by Schirmer & Kotz (2006) based on event-related potential and functional magnetic resonance imaging studies. This model suggests that vocal information is processed in three distinct stages: analysis of acoustic properties (indexed by the N100 component, occurring around 100 ms after vocalization onset), extraction of emotional salience (indexed by the P200 component, occurring around 200 ms after vocalization onset), and finally cognitive evaluation of the vocal expression (indexed by the late positive potential – LPP, occurring between 500 and 700 ms after vocalization onset). Differences in vocal information processing between emotional and neutral stimuli have been widely reported at these three processing stages. However, the studies mentioned above used stimuli that were developed by instructing actors to imitate emotions (voluntary emotional expressions) rather than spontaneous emotional expressions. It remains unclear to what extent these reported results can be explained by differences in the authenticity of the emotion. Previous neuroimaging studies have shown, with high spatial resolution, which brain areas are putatively involved in processing authenticity in affective vocal processing. Nevertheless, functional magnetic resonance imaging lacks temporal resolution and does not allow us to determine the exact time window in which these differences in the processing of spontaneous and voluntary vocalizations may occur in the brain.
In the present study we used an event-related potential approach to clarify how authenticity modulates the time course of affective vocal information processing in the brain. In particular, we aimed to investigate differences between spontaneous and voluntary non-linguistic vocalizations (laughter and crying) in the amplitude and latency of the electrophysiological components associated with early (N100, P200) and later (LPP) processing stages. We also sought to replicate previous findings suggesting amplitude and latency differences as a function of the emotionality of the vocalization (emotional vs. neutral) in the N100, P200, and LPP components. Additionally, as an exploratory hypothesis, we investigated the extent to which sex differences may exist in the modulation of these potentials by both authenticity and emotionality. Twenty-three healthy right-handed participants (13 women) listened to spontaneous and voluntary non-linguistic vocalizations (expressing happiness, sadness, or a neutral tone) while rating the authenticity expressed by the speaker and while the electroencephalogram (EEG) was recorded. Regarding the effects of authenticity on vocal information processing, no amplitude or latency differences were found between spontaneous and voluntary vocalizations in the N100, P200, or LPP components. Emotionality effects were found at early stages of vocal processing (N100), with happy and sad vocalizations showing less negative deflections than neutral vocalizations. Happy vocalizations elicited a larger P200 than neutral vocalizations, with no significant differences between happy and sad vocalizations or between sad and neutral vocalizations. At later processing stages (500–700 ms), happy and sad vocalizations elicited a more pronounced late positivity (LPP) than neutral vocalizations. The emotionality effects reported for the N100, P200, and LPP were observed equally for spontaneous and voluntary vocalizations. No latency differences were found between emotional and neutral vocalizations at any stage of vocal processing (N100, P200, LPP). Finally, regarding sex differences in the processing of authenticity and emotionality in vocal information, no differences between men and women were found in these three components. Although exploratory and in need of future replication, our results suggest that authenticity is probably not decoded during the first 700 ms after vocalization onset. Emotionality, on the other hand, appears to be extracted early in vocal processing, within the first 100 ms after vocalization onset (N100) and irrespective of valence (positive or negative), with both happy and sad vocalizations eliciting a smaller N100 amplitude than neutral vocalizations. The emotionality of the vocalization thus appears to be detected at early stages (N100) regardless of stimulus valence. At the next stage, however, only happy vocalizations elicited a larger P200 amplitude relative to neutral vocalizations.
This result may be due to the P200 component's high sensitivity to the physiological arousal inherent in the vocal stimulus; that is, stimuli characterized by higher arousal (e.g., laughter) are perceived as more emotionally salient. At later processing stages, a larger LPP positivity was observed for emotional vocalizations (happy and sad) compared with neutral vocalizations. Emotional vocalizations, regardless of their valence (positive or negative), thus appear to promote deeper cognitive elaboration. In sum, according to the results of this preliminary study, while emotional content appears to be rapidly processed from vocal cues, authenticity may involve more elaborative processing occurring at later processing stages.
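    One way the reported emotionality effects on component amplitudes might be examined is a paired comparison of per-participant mean amplitudes, sketched below with synthetic values; the numbers, effect size, and choice of test are assumptions and do not reproduce the thesis's actual analysis.

```python
import numpy as np
from scipy import stats

# Illustrative values only: per-participant N100 mean amplitudes (in microvolts) for
# emotional vs. neutral vocalizations; the size and direction of the difference are assumed.
rng = np.random.default_rng(2)
n_subjects = 23
neutral = rng.normal(-2.0, 1.0, n_subjects)
emotional = neutral - rng.normal(0.8, 0.5, n_subjects)   # assumed extra negativity

t, p = stats.ttest_rel(emotional, neutral)               # paired comparison across subjects
print(f"paired t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```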

    Exploration of the Neural Correlates of Ticklish Laughter by Functional Magnetic Resonance Imaging

    The burst of laughter that is evoked by tickling is a primitive form of vocalization. It evolves during an early phase of postnatal life and appears to be independent of higher cortical circuits. Clinicopathological observations have led to suspicions that the hypothalamus is directly involved in the production of laughter. In this functional magnetic resonance imaging investigation, healthy participants were 1) tickled on the sole of the right foot with permission to laugh, 2) tickled but asked to stifle laughter, and 3) requested to laugh voluntarily. Tickling that was accompanied by involuntary laughter activated regions in the lateral hypothalamus, parietal operculum, amygdala, and right cerebellum to a consistently greater degree than did the two other conditions. Activation of the periaqueductal gray matter was observed during voluntary and involuntary laughter but not when laughter was inhibited. The present findings indicate that hypothalamic activity plays a crucial role in evoking ticklish laughter in healthy individuals. The hypothalamus promotes innate behavioral reactions to stimuli and sends projections to the periaqueductal gray matter, which is itself an important integrative center for the control of vocalization. A comparison of our findings with published data relating to humorous laughter revealed the involvement of a common set of subcortical centers.
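    The comparison described above, contrasting the tickle-with-laughter condition against the two other conditions, can be illustrated schematically as a GLM contrast on a single voxel. The design, regressors, and values below are invented for illustration and do not reproduce the study's analysis.

```python
import numpy as np

# Schematic voxel-level GLM: three block regressors stand in for the conditions
# described in the abstract (tickle+laugh, tickle+suppress, voluntary laugh); data are synthetic.
rng = np.random.default_rng(3)
n_scans = 180
design = np.zeros((n_scans, 3))
for c in range(3):                       # crude alternating blocks per condition
    design[c::6, c] = 1.0
    design[c + 3::6, c] = 1.0
y = design @ np.array([2.0, 0.5, 0.8]) + rng.normal(0, 1, n_scans)   # one synthetic voxel

betas, *_ = np.linalg.lstsq(design, y, rcond=None)
contrast = np.array([1.0, -0.5, -0.5])   # tickle+laugh > mean of the other two conditions
print("contrast estimate:", contrast @ betas)
```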

    Morphological word structure in English and Swedish : the evidence from prosody

    Trubetzkoy's recognition of a delimitative function of phonology, serving to signal boundaries between morphological units, is expressed in terms of alignment constraints in Optimality Theory, where the relevant constraints require specific morphological boundaries to coincide with phonological structure (Trubetzkoy 1936, 1939; McCarthy & Prince 1993). The approach pursued in the present article is to investigate the distribution of phonological boundary signals to gain insight into the criteria underlying morphological analysis. The evidence from English and Swedish suggests that the necessary and sufficient conditions for word-internal morphological analysis concern the recognizability of head constituents, which include the rightmost members of compounds and head affixes. The claim is that the stability of word-internal boundary effects in historical perspective cannot in general be sufficiently explained in terms of memorization and imitation of phonological word form. Rather, these effects indicate a morphological parsing mechanism based on the recognition of word-internal head constituents. Head affixes can be shown to contrast systematically with modifying affixes with respect to syntactic function, semantic content, and prosodic properties. That is, head affixes, which cannot be omitted, often lack inherent meaning and have relatively unmarked boundaries, which can be obscured entirely under specific phonological conditions. By contrast, modifying affixes, which can be omitted, consistently have inherent meaning and have stronger boundaries, which resist prosodic fusion in all phonological contexts. While these correlations are hardly specific to English and Swedish, it remains to be investigated to what extent they hold cross-linguistically. The observation that some of the constituents identified on the basis of prosodic evidence lack inherent meaning raises the issue of compositionality. I will argue that certain systematic aspects of word meaning cannot be captured with reference to the syntagmatic level, but require reference to the paradigmatic level instead. The assumption is then that there are two dimensions of morphological analysis: syntagmatic analysis, which centers on the criteria for decomposing words in terms of labelled constituents, and paradigmatic analysis, which centers on the criteria for establishing relations among (whole) words in the mental lexicon. While meaning is intrinsically connected with paradigmatic analysis (e.g. base relations, oppositeness), it is not essential to syntagmatic analysis.

    The phonetics of speech breathing : pauses, physiology, acoustics, and perception

    Speech is made up of a continuous stream of speech sounds that is interrupted by pauses and breathing. As phoneticians are primarily interested in describing the segments of the speech stream, pauses and breathing are often neglected in phonetic studies, even though they are vital for speech. The present work adds to a more detailed view of both pausing and speech breathing, with a special focus on the latter and the resulting breath noises, investigating their acoustic, physiological, and perceptual aspects. We present an overview of how a selection of corpora annotate pauses and pause-internal particles, as well as a recording setup that can be used for further studies on speech breathing. For pauses, this work emphasized their optionality and variability under different tempos, as well as the temporal composition of silence and breath noise in breath pauses. For breath noises, we first focused on acoustic and physiological characteristics: we explored the alignment of the onsets and offsets of audible breath noises with the start and end of expansion of both the rib cage and the abdomen. Further, we found similarities between speech breath noises and the aspiration phases of /k/, and found that breath noises may be produced with a more open and slightly more fronted articulation than realizations of schwa. We found positive correlations between acoustic and physiological parameters, suggesting that when speakers inhale faster, the resulting breath noises are more intense and produced further forward in the mouth. Inspecting the entire spectrum of speech breath noises, we showed relatively flat spectra and several weak peaks. These peaks largely overlapped with resonances reported for inhalations produced with a central vocal tract configuration. We used 3D-printed vocal tract models representing four vowels and four fricatives to simulate in- and exhalations by reversing airflow direction. We found that airflow direction did not have a general effect across all models, but only for those with high-tongue configurations, as opposed to the more open ones. Then, we compared inhalations produced with the schwa model to human inhalations in an attempt to approach the vocal tract configuration in speech breathing. There were some similarities; however, several complexities of human speech breathing that were not captured in the models complicated the comparisons. In two perception studies, we investigated how much information listeners could auditorily extract from breath noises. First, we tested the categorization of breath noises into six types based on airflow direction and airway usage (e.g., oral inhalation). Around two thirds of all answers were correct. Second, we investigated how well breath noises could be used to discriminate between speakers and to extract coarse information on speaker characteristics, such as age (old/young) and sex (female/male). We found that listeners were able to distinguish between two breath noises coming from the same or different speakers in around two thirds of all cases. Hearing one breath noise, classification of sex was successful in around 64% of cases, while for age it was around 50%, suggesting that sex was more perceivable than age in breath noises.
    Deutsche Forschungsgemeinschaft (DFG) – Projektnummer 418659027: "Pause-internal phonetic particles in speech communication".
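    The spectral analysis mentioned above (relatively flat spectra with several weak peaks) can be sketched as follows; the signal is a synthetic stand-in for a recorded inhalation, and the filter settings and peak criteria are assumptions rather than the thesis's actual parameters.

```python
import numpy as np
from scipy import signal

# Illustrative only: band-limited noise stands in for a recorded breath noise;
# real data would come from the recording setup described above.
fs = 16000
rng = np.random.default_rng(4)
noise = rng.normal(0, 1, fs)                              # 1 s of white noise
b, a = signal.butter(4, [300, 3000], btype="band", fs=fs)
breath = signal.lfilter(b, a, noise)

# Long-term spectrum via Welch's method, then search for weak spectral peaks,
# analogous to the flat spectra with several weak peaks reported in the abstract.
freqs, psd = signal.welch(breath, fs=fs, nperseg=1024)
psd_db = 10 * np.log10(psd + 1e-12)
peaks, _ = signal.find_peaks(psd_db, prominence=3)
print("peak frequencies (Hz):", freqs[peaks][:5])
```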

    Investigating the Neural Correlates of Voice versus Speech-Sound Directed Information in Pre-School Children

    Studies in sleeping newborns and infants suggest that the superior temporal sulcus is involved in speech processing soon after birth. Speech processing also implicitly requires the analysis of the human voice, which conveys both linguistic and extra-linguistic information. However, due to technical and practical challenges when neuroimaging young children, evidence of neural correlates of speech and/or voice processing in toddlers and young children remains scarce. In the current study, we used functional magnetic resonance imaging (fMRI) in 20 typically developing preschool children (average age = 5.8 y; range 5.2–6.8 y) to investigate brain activation during judgments about vocal identity versus the initial speech sound of spoken object words. fMRI results reveal common brain regions responsible for voice-specific and speech-sound-specific processing of spoken object words, including bilateral primary and secondary language areas of the brain. Contrasting voice-specific with speech-sound-specific processing predominantly activates the anterior part of the right-hemispheric superior temporal sulcus. Furthermore, the right STS is functionally correlated with left-hemispheric temporal and right-hemispheric prefrontal regions. This finding underlines the importance of the right superior temporal sulcus as a temporal voice area and indicates that this brain region is specialized, and functions similarly to that of adults, by the age of five. We thus extend previous knowledge of voice-specific regions and their functional connections to the young brain, which may further our understanding of the neuronal mechanisms of speech-specific processing in children with developmental disorders, such as autism or specific language impairments.
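    The functional-correlation finding reported above (right STS correlated with left temporal and right prefrontal regions) might be computed as a simple seed-based correlation of region-of-interest time courses, sketched below with synthetic data; the region names and signal properties are assumptions for illustration only.

```python
import numpy as np

# Schematic functional-connectivity check: Pearson correlation between a seed
# time course (right STS) and other region time courses; all data are synthetic.
rng = np.random.default_rng(5)
n_vols = 200
seed_sts = rng.normal(size=n_vols)                        # hypothetical right STS signal
regions = {
    "left_temporal": seed_sts * 0.6 + rng.normal(size=n_vols) * 0.8,
    "right_prefrontal": seed_sts * 0.5 + rng.normal(size=n_vols) * 0.9,
    "control_region": rng.normal(size=n_vols),
}
for name, ts in regions.items():
    r = np.corrcoef(seed_sts, ts)[0, 1]
    print(f"{name}: r = {r:.2f}")
```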