
    Parental brain: cerebral areas activated by infant cries and faces. A comparison between different populations of parents and not.

    Literature on parenting has traditionally focused on caring behaviors and parental representations. More recently, an innovative line of research has developed that evaluates the neural areas and hormones implicated in nurturing and caregiving responses. The only way a newborn can survive and grow is for the adults around him to respond to his needs, and to succeed they must first understand what those needs are. Adults' capacity to take care of infants therefore cannot be separated from the biological mechanisms that make them more responsive to their progeny and to infants in general. Many studies have shown that specific neural systems are activated in response to evolutionarily salient infant stimuli, such as infant cries and infant emotional facial expressions. Human adults show a sort of innate predisposition to respond to infants' signals, satisfying their needs and allowing them to survive and become young adults capable of taking care of themselves. This article focuses on research conducted in the last decade on the neural circuits underlying parental behavioral responses. Moreover, the paper compares the results of studies that investigated neural responses to infant stimuli under different conditions: familiar versus unknown children, parents versus non-parents, and normative versus clinical samples (depression, addiction, adolescence, and PTSD).

    Facial expression of pain: an evolutionary account.

    This paper proposes that human expression of pain in the presence or absence of caregivers, and the detection of pain by observers, arises from evolved propensities. The function of pain is to demand attention and prioritise escape, recovery, and healing; where others can help achieve these goals, effective communication of pain is required. Evidence is reviewed of a distinct and specific facial expression of pain from infancy to old age, consistent across stimuli, and recognizable as pain by observers. Voluntary control over amplitude is incomplete, and observers can better detect pain that the individual attempts to suppress than pain that the individual attempts to amplify or simulate. In many clinical and experimental settings, the facial expression of pain is incorporated with verbal and nonverbal vocal activity, posture, and movement into an overall category of pain behaviour. This is assumed by clinicians to be under operant control of social contingencies such as sympathy, caregiving, and practical help; thus, strong facial expression is presumed to constitute an attempt to manipulate these contingencies by amplification of the normal expression. Operant formulations support skepticism about the presence or extent of pain, judgments of malingering, and sometimes the withholding of caregiving and help. To the extent that pain expression is influenced by environmental contingencies, however, "amplification" could equally plausibly constitute the release of suppression according to evolved contingent propensities that guide behaviour. Pain has been largely neglected in the evolutionary literature and the literature on expression of emotion, but an evolutionary account can generate improved assessment of pain and reactions to it.

    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans, and based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.

    Emotional authenticity perception in blind individuals: behavioral and ERP evidence

    Inter-university Master's Dissertation in Clinical and Experimental Neuropsychology, 2022, Universidade de Lisboa, Faculdade de Psicologia. In blind individuals, the loss of vision and the subsequent need to rely more heavily on the remaining senses result in important neuroplastic alterations. Current evidence suggests that these alterations lead to the enhancement of specific auditory abilities, but they can also leave auditory perception unchanged or even impaired. Regarding vocal emotional perception, the impact of blindness on the behavioral and neural mechanisms underpinning emotional authenticity perception is still unexplored. Therefore, in the current study, we used behavioral and event-related potential (ERP) measures to study whether and how blindness influences the perception of emotional authenticity in nonverbal vocalizations. Furthermore, we also aimed to understand whether and how the age of blindness onset (early vs. late) and task focus manipulation (attention directed to authenticity vs. emotional properties of the voice) affected these mechanisms. Fifty-one individuals with different visual conditions (17 early blind, 17 late blind, 17 sighted controls) completed two experimental tasks while electrophysiological data were continuously recorded. In these tasks, participants heard laughs and cries varying in authenticity (spontaneous vs. volitional) and emotional quality (sadness vs. amusement). The N1, P2, and late positive potential (LPP) ERP components were analyzed. Our results demonstrated authenticity effects in early sensory (N1) and late cognitive evaluative stages (LPP) of vocal emotion processing in early blind listeners. They additionally showed that both early and late blindness modulated a processing stage associated with the detection of emotional salience (P2). At the behavioral level, blindness did not affect the recognition and evaluation of vocal emotion. However, the late blind group was generally less accurate at detecting the authenticity of vocalizations than the sighted group. Overall, our findings suggest that blindness modulates the temporal course of emotional authenticity perception, particularly in early blind listeners. They additionally suggest that late-onset, but not early-onset, blindness deteriorates emotional authenticity perception.
The absence of vision and the consequent greater reliance on the other senses cause important neurophysiological alterations. For this reason, blindness has often been used as a model for investigating neuroplasticity processes. Potentially related to these phenomena, several studies have reported better performance by blind individuals, compared with sighted individuals, in tasks examining auditory, olfactory, and tactile processing abilities. However, part of the literature reports an absence of differences, or even worse performance by blind individuals, in tasks investigating other specific auditory, olfactory, and tactile perceptual abilities. In the field of vocal emotional perception, the impact of blindness on the perception of emotional authenticity, and the neurophysiological mechanisms underlying these processes, have not yet been properly explored.
The few studies that have investigated vocal emotional processing in blind individuals are not consensual, and it is still unclear whether blind people develop compensatory vocal emotional perception abilities or whether vision is necessary for the efficient development of vocal emotional processing faculties. It should also be noted that in most of these studies these processes were investigated in samples of early blind individuals. Studying the mechanisms of vocal emotional perception in late blind individuals may help clarify how the age of blindness onset affects the development of these abilities. Furthermore, the literature on vocal emotional processing in the blind is limited by the fact that these processes have only been studied with non-authentic emotional stimuli (i.e., emotions voluntarily produced by actors). To the best of our knowledge, no published study has explored the perception of emotional authenticity in the blind. In recent years, however, there has been growing interest in studying emotional authenticity perception in sighted individuals, and this literature has reported differences in the perception and neural processing of authentic versus non-authentic stimuli. Recently, two studies that investigated the perception of the emotional authenticity of laughs and cries using the event-related potentials (ERP) technique found authenticity effects in three ERP components: the N1, the P2, and the late positive potential (LPP). These components reflect different stages of vocal emotional information processing, related to early sensory processing (N1), salience detection (P2), and cognitive evaluation (LPP). The ERP technique offers detailed information about the timing of neural processes, making it an ideal method for exploring how the different stages of emotional authenticity processing are affected by blindness. In the present study, behavioral and ERP measures were used to study how blindness influences the perception of the emotional authenticity of laughs and cries. We additionally explored how the age of blindness onset and the manipulation of attentional focus affect these mechanisms. Given the innovative character of the present study, our hypotheses about the effects of blindness on emotional authenticity perception were exploratory. Data were collected from fifty-one participants, including 17 early blind, 17 late blind, and 17 sighted individuals. All participants performed two tasks while electroencephalographic data were recorded. In one task, participants were instructed to discriminate the emotional authenticity of the stimuli (volitional vs. authentic); in the other, to discriminate the emotion (sadness vs. amusement). In both tasks, participants were exposed to the same stimuli: 80 nonverbal vocalizations from four different conditions (20 authentic laughs, 20 volitional laughs, 20 authentic cries, and 20 volitional cries). After completing these two tasks, participants performed a behavioral task in which they were instructed to rate the same 80 vocalizations in terms of authenticity, valence, and arousal.
Regarding the results of the present study, in the early blind, authenticity effects were found at early sensory processing stages (N1) and at later processing stages associated with the cognitive evaluation of vocal emotional information (LPP). More specifically, in the early blind group we found a more negative N1 amplitude for authentic cries (vs. volitional cries) in the authenticity detection task, but not in the emotional discrimination task, and a more positive LPP amplitude for authentic (vs. volitional) expressions in the emotional discrimination task, but not in the authenticity detection task. These results suggest that early blindness leads to cortical reorganization at these two stages of emotional authenticity processing. At a processing stage associated with the detection of emotional salience (P2), a more positive P2 amplitude for volitional (vs. authentic) expressions was found in both the early blind and the late blind groups. This result suggests that blindness, regardless of the age at which it occurs, leads to cortical reorganization at this stage of emotional authenticity processing. Finally, the behavioral results reveal group differences in authenticity judgments, but not in the ability to discriminate the emotions of laughs and cries. More specifically, the late blind group was generally less accurate than the sighted group at detecting the authenticity of the vocalizations. However, no significant differences were found between the performance of the early blind and that of the sighted individuals, suggesting that late blindness, but not early blindness, deteriorates emotional authenticity perception.
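The component analysis described above follows a standard ERP workflow: epoch the EEG around stimulus onset, average by condition, and take mean amplitudes in component time windows. A minimal sketch of that workflow with MNE-Python is shown below; the file name, event codes, and time windows are illustrative assumptions, not the dissertation's actual parameters.

```python
# Minimal sketch of ERP component quantification with MNE-Python.
# File name, event codes, and time windows are illustrative assumptions.
import mne

raw = mne.io.read_raw_fif("sub-01_task-authenticity_raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=0.1, h_freq=30.0)  # typical band-pass for ERP analysis

events = mne.find_events(raw)
event_id = {"laugh/spontaneous": 1, "laugh/volitional": 2,
            "cry/spontaneous": 3, "cry/volitional": 4}  # assumed trigger codes

epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=1.0,
                    baseline=(None, 0), preload=True)

# Mean amplitude per condition in assumed component windows (seconds).
windows = {"N1": (0.08, 0.14), "P2": (0.15, 0.25), "LPP": (0.45, 0.80)}
for cond in event_id:
    evoked = epochs[cond].average()
    for comp, (t0, t1) in windows.items():
        amp = evoked.copy().crop(t0, t1).data.mean()  # average over channels and time
        print(f"{cond:20s} {comp}: {amp * 1e6:.2f} uV")
```

In a design like the one described, these per-condition amplitudes would then be compared across groups (early blind, late blind, sighted) and tasks.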

    Application of Texture Descriptors to Facial Emotion Recognition in Infants

    The recognition of facial emotions is an important issue in computer vision and artificial intelligence because of its academic and commercial potential. In the health sector, the ability to detect and monitor patients' emotions, and especially pain, is a fundamental objective of any medical service. Nowadays, the evaluation of pain depends mainly on continuous monitoring by the medical staff when the patient is unable to express his/her experience of pain verbally, as is the case for patients under sedation or babies. It is therefore necessary to provide alternative methods for its evaluation and detection. Facial expressions can be considered a valid indicator of a person's degree of pain. Consequently, this paper presents a monitoring system for babies that performs automatic pain detection by means of image analysis and could be accessed through wearable or mobile devices. To this end, the paper uses three different texture descriptors for pain detection: Local Binary Patterns, Local Ternary Patterns, and Radon Barcodes. These descriptors are combined with Support Vector Machines (SVM) for classification. The experimental results show that the proposed features give a very promising classification accuracy of around 95% on the Infant COPE database, which demonstrates the validity of the proposed method. This work has been partially supported by the Spanish Research Agency (AEI) and the European Regional Development Fund (FEDER) under project CloudDriver4Industry TIN2017-89266-R, and by the Conselleria de Educación, Investigación, Cultura y Deporte of the Community of Valencia, Spain, within the program of support for research, under project AICO/2017/134.
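As a concrete illustration of the descriptor-plus-SVM pipeline the paper describes, the sketch below computes uniform Local Binary Pattern histograms and cross-validates an SVM on them. The synthetic images and labels are placeholders; real experiments would use grayscale face crops from the Infant COPE database, and the LBP parameters shown are a common choice, not necessarily the paper's.

```python
# Sketch: Local Binary Pattern histograms + SVM, one of the
# descriptor/classifier pairings described in the paper.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

P, R = 8, 1  # 8 neighbors at radius 1: a common uniform-LBP configuration

def lbp_histogram(gray_image):
    """Summarize uniform LBP codes as a normalized histogram."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    n_bins = P + 2  # P+1 uniform patterns plus one non-uniform bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Placeholder data: random "images" and labels stand in for Infant COPE
# face crops (1 = pain, 0 = no pain).
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, size=(64, 64)).astype(np.uint8) for _ in range(40)]
y = rng.integers(0, 2, size=40)

X = np.array([lbp_histogram(img) for img in images])
clf = SVC(kernel="rbf", C=1.0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```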

    Vocal Flexibility in Early Human Communication

    The dissertation contains two papers on the theme of flexibility in infant communication using an infrastructural approach. An infrastructural approach considers infant communication in terms of properties of human language (i.e., spontaneous vocalization, functional flexibility, social interactivity, etc.). Infants' vocal flexibility is explored in two ways in the dissertation: 1) how infants use sounds with varying emotional valences, a primary determiner of their communicative functions, and when this infrastructural property emerges (the first paper, in Chapter 2); and 2) what role the voice plays, independently and jointly with the face, in the transmission of affect and vocal type (the second paper, in Chapter 3). The first paper demonstrates that infants explore vocalizations in protophones and associate them with a range of affect as early as the first month of life. That is, all the protophone types we examined showed strong functional flexibility, displaying significantly more neutral facial affect than cry and significantly less negative facial affect than cry. Further, infant protophones were functionally flexible across all three months, being differentiated from cry at all the ages. The second study revealed an important distinction in the use of face and voice in affect vs. protophone expression. Affect was transmitted with audio and video flexibly interwoven, suggesting that infant vocal capabilities establish a foundation for the flexible use of the voice, as is required in language. Both works contribute to our understanding of the path leading to infants' speech capacity.

    Capturing the Attention of Caregivers: Variability in Infant Vocalizations

    The effect of variability in infant vocalizations on potential caregivers' heart rate variability (HRV), facial expressions, and subjective ratings of emotional reactions and desire to approach the baby was examined in an evolutionary context. Recordings of non-canonical, canonical, fussing, and crying vocalizations were used to elicit physiological and self-reported reactions from sixty participants. Breastfeeding mothers, non-mothers at a high-estradiol point in the menstrual cycle, non-mothers at a low-estradiol point in the menstrual cycle, fathers, and non-fathers were included in the study. Participants wore Polar RS800 heart rate monitors, were video recorded for facial expression analysis, and filled out 11-point self-rating forms on emotional reactions to the infant vocal stimuli. It was expected that participants would show higher HRV for the canonical vocalizations compared with the non-canonical, fussing, and crying stimuli. Overall HRV, as measured by SDNN (the standard deviation of NN, or "normal-to-normal", interbeat intervals), was highest for the recorded babbling; however, these differences were not significant. Most raters considered crying and fussing to be strong indicators of a need for interaction. Participants showed the greatest percentage of happy facial expressions (evaluated via analysis of video recordings) for the babbling vocalizations and also self-rated them high on "happiness" and "most liked", as predicted. Although the predicted directions of the differences between mothers and non-mothers at the two assumed estradiol levels were not significant, breastfeeding mothers did show more facial expressions of happiness while listening to the babbling stimuli, gave higher scores of self-rated sadness when listening to crying, and rated their irritation lower and their desire to pick up the baby higher for the fussing stimuli. The root mean square of successive NN-interval differences (RMSSD) was significantly higher in fathers than in non-fathers while listening to the babbling stimuli. Fathers also had significantly higher self-reported happiness levels and higher scores toward the "most liked" end of the rating scale for the babbling stimuli. The results are discussed within an evolutionary framework, considering the potential influence of parental selection of vocal behaviors, an attraction to the complexity of sounds across species, and the possible influence of hormones on potential caregivers' responses to infant needs.
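For readers unfamiliar with the two HRV statistics reported above, the short sketch below computes SDNN and RMSSD from a made-up series of NN interbeat intervals; the interval values are illustrative only, not data from the study.

```python
# The two HRV metrics reported in the study, computed from NN
# (normal-to-normal) interbeat intervals in milliseconds.
# The interval values are illustrative placeholders.
import numpy as np

nn_ms = np.array([812, 798, 830, 845, 801, 790, 825, 840])  # example NN intervals

sdnn = nn_ms.std(ddof=1)                       # SDNN: sample std. dev. of NN intervals
rmssd = np.sqrt(np.mean(np.diff(nn_ms) ** 2))  # RMSSD: root mean square of
                                               # successive NN differences
print(f"SDNN  = {sdnn:.1f} ms")
print(f"RMSSD = {rmssd:.1f} ms")
```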

    Evaluating Temporal Patterns in Applied Infant Affect Recognition

    Agents must continuously monitor their partners' affective states in order to understand and engage in social interactions. However, methods for evaluating affect recognition do not account for changes in classification performance that may occur during occlusions or transitions between affective states. This paper addresses temporal patterns in affect classification performance in the context of an infant-robot interaction, where infants' affective states contribute to their ability to participate in a therapeutic leg movement activity. To support robustness to facial occlusions in video recordings, we trained infant affect recognition classifiers using both facial and body features. Next, we conducted an in-depth analysis of our best-performing models to evaluate how performance changed over time as the models encountered missing data and changing infant affect. During time windows when features were extracted with high confidence, a unimodal model trained on facial features achieved the same optimal performance as multimodal models trained on both facial and body features. However, multimodal models outperformed unimodal models when evaluated on the entire dataset. Additionally, model performance was weakest when predicting an affective-state transition and improved after multiple predictions of the same affective state. These findings emphasize the benefits of incorporating body features in continuous affect recognition for infants. Our work highlights the importance of evaluating variability in model performance both over time and in the presence of missing data when applying affect recognition to social interactions. Comment: 8 pages, 6 figures, 10th International Conference on Affective Computing and Intelligent Interaction (ACII 2022).
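The temporal breakdown described above, weaker performance at affective-state transitions than during sustained states, can be made concrete with a small sketch. The per-window labels and predictions below are placeholders, not data from the paper.

```python
# Sketch: split per-window classification accuracy into windows where the
# affective state has just changed vs. windows where it is sustained.
# Labels and predictions are illustrative placeholders.
import numpy as np

labels = np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 2, 2, 2])  # true state per window
preds  = np.array([0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 2, 2])  # model output per window

correct = preds == labels
# A window is a "transition" if its state differs from the previous window's
# (the first window is counted as a transition by convention here).
is_transition = np.concatenate(([True], labels[1:] != labels[:-1]))

print("accuracy at transitions:   ", correct[is_transition].mean())
print("accuracy, sustained states:", correct[~is_transition].mean())
```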