
    Seven- to 11-year-olds’ developing ability to recognize natural facial expressions of basic emotions

    Being able to recognize facial expressions of basic emotions is of great importance to social development. However, we still know surprisingly little about children’s developing ability to interpret emotions that are expressed dynamically, naturally and subtly, even though real-life expressions appear this way in the vast majority of cases. The current research employs a new technique for capturing dynamic, subtly expressed natural emotional displays (happy, sad, angry, shocked and disgusted). Children aged 7, 9 and 11 years (and adults) were systematically able to discriminate each emotional display from alternatives in a 5-way choice. Children were most accurate in identifying the expression of happiness and were also relatively accurate in identifying the expression of sadness; they were far less accurate than adults in identifying the shocked and disgusted expressions. Children who performed well academically also tended to be the most accurate in recognizing expressions, and this relationship held independently of chronological age. Overall, the findings testify to a well-developed ability to recognize very subtle, naturally occurring expressions of emotions.

    A symbolism study of expression in text-based communication

    Modern communication technologies such as mobile phones and the Internet have made long-distance text-based communication very convenient. As a result, more and more people choose to send text messages via SMS or instant-messaging software (e.g., WhatsApp) as a major way to communicate with each other. However, due to the limitations of written language, text-based communication usually cannot accurately express the emotions and feelings of the message sender. Symbolic facial expressions, such as emoji and emoticons, were invented to overcome this shortcoming of text-based communication. By inserting symbols into a text message, the sender can express the emotions and feelings represented by those facial expressions. In this thesis, I study the usage of symbolic facial expressions in text-based communication and its impact on people's communication behaviors. Specifically, I address the following research questions: 1. Is the use of emoji partly too culturally specific? 2. Can symbolic facial expressions be used by themselves? If so, in what situations? 3. During SMS conversation, how many facial expressions are excessive? Will excessive symbolic facial expressions impede SMS communication? 4. What factors influence emoji usage by an SMS sender? These questions explore different aspects of the usage of facial expressions/emoji in text communication, including cultural differences, usage of emoji compared to text, the definition of excessive emoji usage, and the interpretation of these symbols in general. To answer the above research questions, I conducted a survey among approximately 1000 students and faculty members at Iowa State University. I computed statistics from the survey answers and drew my conclusions from the analysis of these results.

    A prosocial online game for social cognition training in adolescents with high-functioning autism: an fMRI study

    To help patients with autism spectrum disorder (ASD) improve their social skills, effective interventions and new treatment modalities are necessary. We hypothesized that a prosocial online game would improve social cognition in adolescents with ASD, as assessed using measures of social communication, facial recognition, and emotional words. Ten adolescents with ASD underwent cognitive behavior therapy (CBT) using a prosocial online game (game-CBT), and ten adolescents with ASD participated in offline CBT. At baseline and 6 weeks later, social communication quality, correct identification of emotional words and facial emoticons, and brain activity were assessed in both groups. Social communication quality and correct response rates for emotional words and facial emoticons improved in both groups over the course of the intervention, with no significant differences between groups. In response to the emotional words, brain activity within the temporal and parietal cortices increased in the game-CBT group, while brain activity within the cingulate and parietal cortices increased in the offline-CBT group. In addition, adolescents with ASD in the game-CBT group showed increased brain activity within the right cingulate gyrus, left medial frontal gyrus, left cerebellum, left fusiform gyrus, left insular cortex, and a sublobar area in response to facial emoticons. A prosocial online game designed for CBT was as effective as offline CBT in adolescents with ASD. Participation in the game especially increased social arousal and aided the adolescents in recognizing emotion. The therapy also helped participants more accurately consider the associated environment when responding to facial emotional stimuli. However, the online CBT was less effective than the offline CBT at evoking emotions in response to emotional words.

    Accentuating illocutionary forces: emoticons as speech act realization strategies in a multicultural online communication environment

    The global acceptance of emoticons reflects the rise of digital symbols in communication settings where language alone can become a barrier to expressing certain intentions and feelings. This paper discusses how emoticons help indicate the illocutionary forces in texts and serve as part of various conversation strategies in the online communication environment. To achieve the research objective, naturally occurring conversations on Facebook were documented over a 12-month period, compiling the daily updates and conversations posted by young people in Malaysia. 120 online users were identified using a purposive sampling technique, and a corpus of 324,362 words was established and processed. This set of naturally occurring conversations was then analysed against Searle’s (1976) five categories of illocutionary acts using content analysis and WordSmith Tools 5.0. The findings show that some emoticons accentuated the illocutionary forces of speech acts in the online communication environment. The discussion also explores the purposes and functions of emoticons on Malaysian digital communication platforms and the way users in a multicultural society employ emotion symbols to achieve social cohesion and embrace cultural diversity.

    The Relationship Between Self-Reported Emotional Intelligence and Emoji Identification Accuracy in College Students

    The current study examined the use and interpretation of emojis by neurotypical college students through emotional recognition and social understanding, and the implications for their use in supportive communication in the classroom for individuals with autism spectrum disorder (ASD). Emotional awareness and reciprocity are essential for establishing friendships and developing social skills. At this point, there remains limited research on the implications of emoji use for individuals with ASD and on any boundaries emojis may pose for social understanding and emotional awareness. The specific research questions addressed in this study are as follows: (1) Is there a correlation between self-reported emotional intelligence and emoji identification accuracy? (2) Is there a correlation between degree of emoji use and emoji identification accuracy? (3) Are certain demographic characteristics (i.e., gender, age, years of smartphone use) related to emoji identification accuracy? A total of 101 undergraduate and graduate students completed a 53-item survey, which included demographic questions, emoji identification tasks, and self-reported measures of emotional intelligence. Results indicated no relationship between an individual’s ability to correctly identify emojis and their level of self-reported emotional intelligence (r = .161, p = .131). Participants’ identification accuracy was also not related to degree of emoji use, gender, or age (p > .05). However, the relationship between identification accuracy and years of smartphone use approached significance (p = .051). The results provide preliminary evidence for future researchers to investigate whether there is a relationship between emoji identification accuracy and emotional intelligence in individuals with ASD.
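    As a minimal sketch of the kind of correlation analysis reported above (e.g., the r = .161 between emotional intelligence and emoji identification accuracy), the Python snippet below computes a standard Pearson correlation; the score arrays are hypothetical placeholder values, not the study's data.

    # Minimal sketch (hypothetical data): Pearson correlation between self-reported
    # emotional intelligence and emoji identification accuracy.
    from scipy.stats import pearsonr

    ei_scores      = [3.9, 4.2, 3.1, 4.8, 3.5, 4.0, 2.9, 4.4]          # self-reported EI (placeholders)
    emoji_accuracy = [0.72, 0.61, 0.64, 0.88, 0.70, 0.75, 0.66, 0.63]   # proportion correct (placeholders)

    r, p = pearsonr(ei_scores, emoji_accuracy)
    print(f"r = {r:.3f}, p = {p:.3f}")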

    Cybercounseling: A comparison of the elements of counseling sessions face-to-face and over instant message

    The purpose of this study was to compare key elements of a counseling session across face-to-face sessions and sessions occurring in cyberspace via synchronous chat, such as instant message or chat room. The key elements compared in this study included the interview environment, physical and psychological attending, interpreting verbal communication, and interpreting nonverbal communication. The results of this study, though not statistically significant, suggest that credentialed distance counselors perceive counseling via instant message to offer less opportunity to create a facilitating counseling environment, less opportunity to interpret nonverbal communication, and less opportunity for psychological attending to clients when compared to face-to-face counseling. The opportunity to interpret verbal communication and to provide physical attending was perceived to improve during counseling sessions conducted via instant message as compared to face-to-face counseling.

    Exploring association between musical sophistication and emotion recognition in individuals differing in musical abilities

    Inter-university Master's Dissertation in Clinical and Experimental Neuropsychology, 2021, Universidade de Lisboa, Faculdade de Psicologia. Music training and musical sophistication have been shown to be associated with nonmusical domains of cognition. Many authors argue that these associations demonstrate experience-driven neuroplasticity. This study attempts to replicate and lend additional support to the idea that both trained and untrained individuals with higher self-reported musical abilities can achieve similar performance levels in three identical emotion recognition tasks. In the current study, participants (N = 31) with different scores on the Goldsmiths Musical Sophistication Index had to identify the emotion that best characterized each exemplar of faces, prosody or nonverbal vocalizations in separate tasks, using one of seven response options representing emotional labels. Contrary to the hypothesis, we did not find a significant association between musical sophistication and average accuracy scores in the vocal emotion recognition tasks. Also, as predicted, we found no association between musical sophistication and recognition accuracy in the facial emotion task. An exploratory analysis revealed an association between musical sophistication and recognition of fear. These mixed findings are partly in line with the musical sophistication/musical training literature. Possible methodological pitfalls are discussed. Future studies should continue to build on the current body of work with different techniques and research methods.
    Lifelong musical training and the learning associated with it provide good models for studying the plasticity of the human brain, insofar as practicing an instrument, for example, requires a set of cognitive faculties that are in turn related to particular brain structures and functions involved in processing cognitive-affective content. The specificity of this type of processing (with vs. without musical training) has been examined in brain imaging, electrophysiological and behavioral studies that reveal facilitation effects across a variety of cognitive processes, both general (e.g., executive functions and intelligence) and specific (e.g., speech processing, written language, visuospatial abilities). Researchers therefore argue that these correlates are cases of experience-driven neuroplasticity, operating through a mechanism they call far transfer, since it typically occurs between a musical domain (e.g., singing lessons/vocal training) and a non-musical one (e.g., cognition). By contrast, near transfer occurs within the same domain, where the learned skill applies directly to the experimental task (e.g., learning chords and playing them on the instrument). Some authors (Sala & Gobet, 2020), however, hold that musical training does not improve cognitive faculties or academic achievement, since most of the studies reviewed are cross-sectional and show no impact. In these cases, ecological validity can be questioned, as the studies used disparate and not always appropriate methodologies, varying in intervention design (e.g., individual or group), in participants' ages, and in how the variables representing musical training and the cognitive faculties of interest were operationalized.
    Moreover, evidence from longitudinal studies is scarce and carries its own limitations, including the lack of randomization of participants, the absence of control conditions equivalent to musical training, and the lack of adequate control measures (e.g., personality traits, socioeconomic status, and indicators of cognitive functioning). Nevertheless, other authors (Bigand & Tillmann, 2021) reanalyzed those data and found modest but statistically significant transfer effects when comparing the experimental and control conditions of far-transfer studies. Training-induced plasticity may therefore not explain all the differences between musicians and individuals without musical training. Indeed, musical experience or knowledge is often gauged by the ability to play a musical instrument and the level of proficiency displayed, while other abilities concerning an individual's relationship with music are not taken into account. In the literature, this relationship is captured by the concept of musical sophistication (Mullensiefen et al., 2014), which encompasses musical behaviors and skills across several dimensions, such as active engagement with music (e.g., how much time and money is invested in music), perceptual abilities (e.g., music listening), musical training (e.g., formal training received), singing abilities (e.g., vocal performance while singing), and emotional engagement with music (e.g., the ability to talk about emotions expressed by music). These behaviors and skills can therefore be identified in the general population, not exclusively in musicians. Furthermore, some authors (e.g., Martins et al., 2021) stress that pre-existing factors such as genetic predispositions may favor the emergence of naturally good musical skills in individuals without musical training. Indeed, recent studies have shown that good music perception skills in untrained individuals are related to better performance in non-musical domains (e.g., Mankel & Bidelman, 2018; Swaminathan & Schellenberg, 2017). Other factors, notably environmental ones such as cognitive faculties, personality and socioeconomic status, nevertheless appear to predict individual differences in musical training (Corrigall et al., 2013; Swaminathan & Schellenberg, 2018). Over the past two decades there has been growing interest in investigating aspects of socio-emotional processing, namely the ability to recognize emotions from voices and faces. Although music is closely tied to emotional and social processes, the study of transfer effects to these domains has seen only modest progress, especially with regard to musical sophistication. The present study aimed to use explicit measures of emotion recognition to explore associations between musical sophistication, assessed with the Portuguese adaptation of the Gold-MSI (Lima et al., 2020), and the recognition of voices and faces, based on the accuracy of participants' responses (as Hu scores; Wagner, 1993). To that end, three identical tasks were administered to all participants.
    In each task, participants either viewed or heard pre-selected exemplars of stimuli from three categories (nonverbal vocalizations, speech prosody and facial expressions) and had to identify the emotional quality of the stimulus using one of seven labels, representing six emotions (anger, disgust, fear, happiness, sadness and neutrality) plus a "none of the above" option for cases in which the voice/face expressed none of the aforementioned states. Participants also completed the Gold-MSI questionnaire and other questionnaires covering sociodemographic information and COVID-19 symptoms, which, if present, could lead to the cancellation of the experimental session. Regarding the hypotheses, we expected musical sophistication to be associated with vocal emotion recognition, whether for speech prosody or for nonverbal vocalizations. Specifically, in line with Correia and colleagues (2020), participants reporting higher perceptual abilities were expected to perform better at voice recognition. We also expected musical sophistication not to be associated with the recognition of facial expressions. We further explored the possibility that musical sophistication was associated with the recognition of specific emotions in each recognition task. The correlation analyses revealed no association between musical sophistication and the recognition of voices or faces. Moreover, the association found for faces was negative, contrary to our prediction, which was therefore only partially confirmed. Thus, although these results support the idea that there is no clear advantage in facial emotion recognition for individuals scoring high on the Gold-MSI, they did not show the previously identified relationship between musical sophistication and performance on the voice recognition tasks (see Correia et al., 2020). On the other hand, the exploratory analysis revealed a marginally significant correlation between Active Engagement (a Gold-MSI subscale) and the recognition of fear. A particularly strong correlation between fear recognition in the speech prosody task and active engagement with music appears to have contributed decisively to making that association salient. In summary, this investigation set out to explore correlations between scores on the Gold-MSI self-report scale and mean response accuracy (corrected for potential biases) in three emotion recognition tasks that varied only in the type of stimulus used. The results showed no performance advantage for individuals who tended to score higher on the Gold-MSI, either in the voice tasks or in the face task. In the future, it may be important to use complementary methods to understand the variables of interest and the relationships between them more fully. For example, EEG techniques may help describe the time course of emotional processing for each stimulus type, while fMRI may localize the regions active during emotion recognition and show how connectivity patterns vary with the type of emotion. That said, we argue that this study is relevant to the musical sophistication literature.
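    The bias-corrected accuracy mentioned above refers to Wagner's (1993) unbiased hit rate (Hu score). As an illustration, the sketch below computes Hu scores from a confusion matrix of stimulus categories by response labels; the matrix values are hypothetical and are not data from this dissertation.

    # Minimal sketch (hypothetical data): Wagner's (1993) unbiased hit rate (Hu score),
    # an accuracy measure corrected for response bias.
    import numpy as np

    def unbiased_hit_rates(confusion):
        """Hu score per stimulus category.

        confusion[i, j] = number of times a category-i stimulus was labeled j.
        Hu_i = hits_i**2 / (stimuli of category i * times label i was chosen).
        """
        hits = np.diag(confusion).astype(float)
        stim_totals = confusion.sum(axis=1)    # stimuli presented per category
        label_totals = confusion.sum(axis=0)   # times each label was chosen
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where((stim_totals > 0) & (label_totals > 0),
                            hits ** 2 / (stim_totals * label_totals),
                            0.0)

    # Example: 3 emotion categories, 10 stimuli each (hypothetical confusion matrix).
    conf_matrix = np.array([[8, 1, 1],
                            [2, 6, 2],
                            [1, 3, 6]])
    print(unbiased_hit_rates(conf_matrix))  # ≈ [0.582, 0.360, 0.400]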

    Facial expressions and ability to recognize emotions from eyes or mouth in children

    This research aims to contribute to the literature on the ability to recognize anger, happiness, fear, surprise, sadness, disgust and neutral expressions from facial information. By investigating children’s performance in detecting these emotions from a specific face region, we were interested to know whether children would show differences in recognizing these expressions from the upper or lower face, and whether any difference between specific facial regions depended on the emotion in question. For this purpose, a group of 6- to 7-year-old children was selected. Participants were asked to recognize emotions in a labeling task with three stimulus types (region of the eyes, region of the mouth, and full face). The findings seem to indicate that children correctly recognize basic facial expressions when pictures represent the whole face, except for the neutral expression, which was recognized from the mouth, and sadness, which was recognized from the eyes. Children are also able to identify anger from the eyes as well as from the whole face. With respect to gender differences, there was no female advantage in emotion recognition. The results indicate a significant gender × face region interaction only for anger and the neutral expression.