
    Motion Generation during Vocalized Emotional Expressions and Evaluation in Android Robots

    Vocalized emotional expressions such as laughter and surprise often occur in natural dialogue interactions and are important factors to consider in achieving smooth robot-mediated communication. A mismatch between audio and visual modalities may cause miscommunication, especially in android robots, which have a highly humanlike appearance. In this chapter, motion generation methods are introduced for laughter and vocalized surprise events, based on analyses of human behavior during dialogue interactions. The effectiveness of controlling different modalities of the face, head, and upper body (eyebrow raising, eyelid widening/narrowing, lip corner/cheek raising, eye blinking, head motion, and torso motion control) and different motion control levels is evaluated using an android robot. Subjective experiments indicate the importance of each modality in the perception of motion naturalness (humanlikeness) and the degree of emotional expression

    Becoming Human with Humanoid

    Nowadays, our expectations of robots have increased significantly. The robot, which initially performed only simple jobs, is now expected to be smarter and more dynamic. People want a robot that resembles a human (a humanoid) and has the emotional intelligence to perform action-reaction interactions. This book consists of two sections. The first section focuses on emotional intelligence, while the second discusses the control of robots. The contents of the book present the outcomes of research conducted by scholars in robotics to accommodate the needs of society and industry

    The influence of imagined interactions on verbal fluency

    Imagined interactions (IIs) are a type of social cognition and mental imagery whereby actors imagine an interaction with others for the purposes of planning. Within actual encounters, verbal fluency is a characteristic that contributes to the speaker's credibility. The planning that takes place through imagined dialogues can help a speaker overcome disfluency found in speech. This study shows that improvements in speaking style also depend on the degree of communication apprehension that an individual experiences. Visualization can decrease apprehension levels, thus producing higher verbal fluency. Results from this study indicate planning's influence on the reduction of silent pauses but not vocalized pauses. Finally, the complexity of one's imagined dialogue has been found to play a role in an increase of verbal fluency

    Expressing Robot Personality through Talking Body Language

    Social robots must master the nuances of human communication as a means to convey an effective message and generate trust. It is well known that non-verbal cues are very important in human interactions, and therefore a social robot should produce body language coherent with its discourse. In this work, we report on a system that endows a humanoid robot with the ability to adapt its body language according to the sentiment of its speech. A combination of talking beat gestures with emotional cues such as eye lighting, body posture, and voice intonation and volume permits a rich variety of behaviors. The developed approach is not purely reactive, and it makes it easy to assign a kind of personality to the robot. We present several videos with the robot in two different scenarios, showing discreet and histrionic personalities. This work has been partially supported by the Basque Government (IT900-16 and Elkartek 2018/00114) and the Spanish Ministry of Economy and Competitiveness (RTI 2018-093337-B-100, MINECO/FEDER, EU)

    Sex Differences in Mother-Infant Interaction

    Sex differences in human behavior have frequently been explored by researchers. Although there are numerous studies documenting sex differences between boys and girls from childhood into adulthood, few studies have adequately examined how genetics and environment interact in infancy to promote sex differences in infant behavior. Therefore, the present study sought to examine how sex differences in maternal behavior interact with differences in infant behavior. Maternal and infant behaviors were analyzed within the still-face paradigm, a paradigm which allows for examination of mother-infant interaction in normal, stressful, and recovery situations. It was hypothesized that infant boys would react with more negativity than girls to the stressful phases of the paradigm. It was also hypothesized that mothers would continuously treat their girl infants with more positivity, and maternal behavior would not be consistent across the phases of the still-face paradigm, ultimately becoming more negative by the end of the procedure. It was expected that these sex differences in maternal behavior, coupled with maternal increases in negativity, would translate to greater negativity in boys versus girls by the end of the procedure. Infant and maternal behavior was videotaped within the still-face paradigm and behaviors and facial expressions were later coded. All of the hypotheses were supported. Infant behavior differed by sex, with boys demonstrating more negative emotionality than girls in the recovery phase. Furthermore, mothers of girls treated their infants with more positivity than mothers of boys throughout the entire procedure. Maternal behavior also became more negative by the end of the procedure, which likely contributed to increased negativity seen in boys but not girls by the end of the procedure


    Pianos and microphones: does the type of musical training affect emotion recognition?

    Interuniversity Master's dissertation, Clinical and Experimental Neuropsychology, 2021, Universidade de Lisboa, Faculdade de Psicologia. Music, emotion, and language have been subjects of interest in neuroscience research due to their relationship as means of social communication. It has been widely acknowledged that the musician's brain may help explain this relationship, for it is an apt example of cross-domain neuroplasticity. Indeed, musical performance presupposes the activation of different sensory and motor systems associated with a facilitated response to emotional auditory information. Nonetheless, the literature is scarce when defining the concept of "musical expertise". A few studies have accounted for factors other than general musical training in auditory emotional processing; however, no study has tackled the implications of vocal musical training. Vocal musical training is considered to have different neural implications from instrumental musical training, since the instrument of a singer is contained within the body. Singers have been shown to have enhanced activation of the auditory feedback system in comparison to non-musicians and instrumentalists, enabling a facilitated response to the production and recognition of vocal emotional information. The present study sets out to explore the underlying differences in emotional auditory processing, taking into consideration the type of musical training (vocal vs. instrumental). Nine singers, thirteen instrumentalists, and nine non-musicians were recruited for an emotion recognition task. Participants listened to nonverbal vocalizations and prosodic speech and had to categorize those stimuli by their emotional quality (anger, disgust, fear, happiness, neutral, sadness). We found no significant differences in accuracy or response times between the three groups. A main effect of stimulus type (speech prosody vs. vocalizations) was found: emotional vocalizations were recognized faster and more accurately than speech prosody stimuli. Furthermore, an interaction effect between emotion and stimulus type was observed. We propose that the recognition task's results were affected by the reduced number of participants recruited; they may also reflect the need to assess other cross-domain influencing factors. Happiness and disgust were the most accurately recognized emotions in the nonverbal vocalization condition. In the prosody condition, participants exhibited the highest accuracy for fear, which was not the case for vocalizations. We propose that the acoustic ambiguity of fearful vocalizations might be reduced by the inherently longer duration of prosodic stimuli. Additionally, a correlation analysis of musical ability, engagement, and emotional recognition was performed, foregrounding the importance of individual differences in cross-domain effects of music.
    The relationship between music, language, and emotion has been a topic of debate in the scientific community, notably because of the similarities these concepts present as means of social communication. Indeed, many describe music as "the language of emotions", and neuroscience research has studied this proposal using the population most fluent in this "idiom": musicians. Owing to its inherent complexity, musical training has been associated with the activation of neural networks underlying motor, cognitive, and sensory functions. For example, studies using magnetic resonance imaging (fMRI) reveal that musicians show stronger activation in sensory processing regions, as well as greater gray-matter density in these structures. Likewise, electrophysiological studies report that musical practice is associated with benefits in visual acuity, motor control, auditory information processing, and acoustic processing of complex emotional stimuli (e.g., emotional prosody and vocalizations). Indeed, the relationship between musical training and the efficiency of subcortical responses to emotional stimuli has frequently been replicated; for this reason, the musical brain is considered one of the prime examples of cross-domain neuroplasticity. Recently, some researchers have highlighted the relevance of factors other than musical training that influence auditory emotion recognition. For example, there is evidence that individual acoustic perceptual abilities (e.g., pitch detection) and a person's everyday relationship with music influence the capacity for emotional discrimination: non-musicians with good perceptual abilities have performed comparably to musicians in auditory emotional discrimination tasks. Accordingly, we propose that factors such as the type of musical training should be addressed as potential sources of differentiation in auditory emotional processing, with particular focus on vocal musical training. Vocal training has different neural implications from instrumental training, given that the singer's "instrument" is contained within the body. Singers are thus distinguished from other musicians by the activation of specific motor systems during musical performance. Indeed, motor activation of the vocal apparatus engages somatosensory receptors that take part in the feedback mechanisms involved in sustaining notes, producing vocalizations, and reproducing emotional tones. Consequently, there is evidence that singers show a more controlled response of this mechanism when compared with controls and other types of musicians.
    The present study therefore aims to explore differences in emotional auditory processing considering the specificities of different types of musical training (vocal vs. instrumental). Nine singers, thirteen instrumentalists, and nine participants without musical training were recruited for two forced-choice tasks. In the first, participants were asked to listen attentively to vocalized expressions of emotion and to select the correct answer from six options (anger, fear, disgust, happiness, neutral, and sadness). The second task followed a similar structure; however, instead of vocalizations, participants were exposed to sentences with neutral semantic content conveyed with emotional prosodic properties (prosodic speech). Additionally, participants completed a questionnaire about their relationship with music (Gold-MSI) and performed psychoacoustic ability tests, namely tasks of sound-frequency discrimination and tempo and duration perception. Contrary to expectations, no significant group differences were obtained for accuracy or response-time measures. We propose that the small sample, and the consequently reduced statistical power, may have influenced our results. Likewise, we conclude that these results may reflect the need to explore individual differences in auditory emotional discrimination (e.g., cognitive abilities, personality traits, age, among others). A main effect of stimulus type (prosody vs. vocalizations) was observed, showing that participants were faster and more accurate at discriminating emotions in the vocalization condition than in the prosody condition. In line with previous studies, this effect was expected given the neural complexity of sentence-level semantic processing.
    Furthermore, an interaction effect between emotional category and stimulus type was observed, with emotions recognized differently depending on the mode of transmission. Four emotions stand out: happiness, sadness, fear, and disgust. Participants showed better accuracy for happiness, sadness, and disgust when these were expressed through vocalizations. As expected, disgust was the emotional category recognized least accurately in the prosody condition, replicating earlier studies. Notably, and contrary to expectations, fear was the least recognized emotion in the vocalization condition; the same did not hold in the prosodic speech condition, where it showed the highest discrimination accuracy. We propose that the acoustic ambiguity evident in vocalized fear may be reduced by the inherently longer duration of prosodic sentences: exposure to longer expressions of fear in the voice aids its discrimination from other emotions with a similar acoustic profile. Additionally, a correlational analysis was performed between the sample's individual descriptive measures (Gold-MSI subscales and psychoacoustic tasks) and emotion recognition accuracy. Associations were observed between the singing-ability and emotional-engagement subscales, as well as with the perceptual acoustic tasks (namely pitch detection and duration discrimination). In light of these results, we highlight the importance of exploring individual factors, beyond musical training, in the auditory processing of emotions. Despite the null group results, we propose that the singer's brain is an excellent example of music-induced neuroplasticity, showing neural specificities relative to other types of musical training. For this reason, we encourage future studies to explore these neural characteristics for their potential to further our understanding of auditory emotional processing

    Jointly structuring triadic spaces of meaning and action: book sharing from 3 months on

    This study explores the emergence of triadic interactions through the example of book sharing. As part of a naturalistic study, 10 infants were visited in their homes from 3-12 months. We report that (1) book sharing as a form of infant-caregiver-object interaction occurred from as early as 3 months. Using qualitative video analysis at a micro-level adapting methodologies from conversation and interaction analysis, we demonstrate that caregivers and infants practiced book sharing in a highly co-ordinated way, with caregivers carving out interaction units and shaping actions into action arcs and infants actively participating and co-ordinating their attention between mother and object from the beginning. We also (2) sketch a developmental trajectory of book sharing over the first year and show that the quality and dynamics of book sharing interactions underwent considerable change as the ecological situation was transformed in parallel with the infants' development of attention and motor skills. Social book sharing interactions reached an early peak at 6 months with the infants becoming more active in the coordination of attention between caregiver and book. From 7-9 months, the infants shifted their interest largely to solitary object exploration, in parallel with newly emerging postural and object manipulation skills, disrupting the social coordination and the cultural frame of book sharing. In the period from 9-12 months, social book interactions resurfaced, as infants began to effectively integrate object actions within the socially shared activity. In conclusion, to fully understand the development and qualities of triadic cultural activities such as book sharing, we need to look especially at the hitherto overlooked early period from 4-6 months, and investigate how shared spaces of meaning and action are structured together in and through interaction, creating the substrate for continuing cooperation and cultural learning

    INFANTS’ PERCEPTION OF EMOTION FROM DYNAMIC BODY MOVEMENTS

    In humans, the capacity to extract meaning from another person’s behavior is fundamental to social competency. Adults recognize emotions conveyed by body movements with accuracy comparable to when they are portrayed in facial expressions. While infancy research has examined the development of facial and vocal emotion processing extensively, no prior study has explored infants’ perception of emotion from body movements. The current studies examined the development of emotion processing from body gestures. In Experiment 1, I asked whether 6.5-month-old infants would prefer to view emotional versus neutral body movements. The results indicate that infants prefer to view a happy versus a neutral body action when the videos are presented upright, but fail to exhibit a preference when the videos are inverted. This suggests that the preference for the emotional body movement was not driven by low-level features (such as the amount or size of the movement displayed), but rather by the affective content displayed. Experiments 2A and 2B sought to extend the findings of Experiment 1 by asking whether infants are able to match affective body expressions to their corresponding vocal emotional expressions. In both experiments, infants were tested using an intermodal preference technique: Infants were exposed to a happy and an angry body expression presented side by side while hearing either a happy or angry vocalization. An inverted condition was included to investigate whether matching was based solely upon some feature redundantly specified across modalities (e.g., tempo). In Experiment 2A, 6.5-month-old infants looked longer at the emotionally congruent videos when they were presented upright, but did not display a preference when the same videos were inverted. In Experiment 2B, 3.5-month-olds tested in the same manner exhibited a preference for the incongruent video in the upright condition, but did not show a preference when the stimuli were inverted. 
These results demonstrate that even young infants are sensitive to emotions conveyed by bodies, indicating that sophisticated emotion processing capabilities are present early in life

    The Progression of the Field of Kinesics

    Kinesics, a term coined by anthropologist Ray Birdwhistell, is the study of nonverbal communication. Nonverbal communication is conducted primarily through gestures, facial expressions, and body language. These sometimes subtle cues are estimated to convey as much as seventy percent of the context of a conversation. In this thesis, I review the origin of the field of kinesics in anthropology, the development of subfields, its introduction into various other fields of study, and its significance today. Using citation analysis, I show the movement of kinesics through various disciplines. This significant field of research has progressed from a research topic centered in anthropology to a subject studied by psychologists, linguists, and professional speakers. An in-depth examination of the available literature shows the major contributions of kinesics scholarship in anthropology and in other fields