
    Automated Semantic Understanding of Human Emotions in Writing and Speech

    Affective Human Computer Interaction (A-HCI) will be critical to the success of new technologies that will be prevalent in the 21st century. If cell phones and the internet are any indication, there will be continued rapid development of automated assistive systems that help humans live better, more productive lives. These will not be just passive systems such as cell phones, but active assistive systems such as robot aides in hospitals, homes, entertainment rooms, offices, and other work environments. Such systems will need to properly deduce a person's emotional state before determining how best to interact with them. This dissertation explores and extends the body of knowledge related to Affective HCI. New semantic methodologies are developed and studied for reliable and accurate detection of human emotional states and magnitudes in written and spoken speech, and for mapping emotional states and magnitudes to 3-D facial expression outputs. The automatic detection of affect in language is based on natural language processing and machine learning approaches. Two affect corpora were developed to perform this analysis. Emotion classification is performed at the sentence level using a step-wise approach that incorporates sentiment flow and sentiment composition features. For emotion magnitude estimation, a regression model was developed to predict the evolving emotional magnitude of actors. Emotional magnitude at any point during a story or conversation is determined by 1) the previous emotional state magnitude; 2) new text and speech inputs that may act upon that state; and 3) information about the context the actors are in. Acoustic features are also used to capture additional information from the speech signal. The automatic understanding of affect is evaluated by testing the model on a held-out subset of the newly extended corpus.
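The prior-state feedback idea above can be sketched in code: each magnitude prediction is fed back as an input feature for the next step. This is a minimal illustrative sketch, not the dissertation's implementation; the synthetic data, feature names, and SVR hyperparameters are all assumptions.

```python
# Hypothetical sketch of a recurrent (prior-state feedback) magnitude
# regressor: the model sees [previous magnitude, new text feature,
# new acoustic feature] and its prediction becomes the next step's
# "previous magnitude". All data here is synthetic and illustrative.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic training rows: [previous_magnitude, text_feature, acoustic_feature]
n = 200
prev_mag = rng.uniform(0.0, 1.0, n)
text_feat = rng.uniform(-1.0, 1.0, n)
acoustic_feat = rng.uniform(-1.0, 1.0, n)
# Toy target: magnitude mostly carries over from the prior state,
# nudged by the new text and speech inputs (assumed dynamics).
target = 0.7 * prev_mag + 0.2 * text_feat + 0.1 * acoustic_feat

X = np.column_stack([prev_mag, text_feat, acoustic_feat])
model = SVR(kernel="rbf", C=10.0).fit(X, target)

def predict_sequence(model, text_feats, acoustic_feats, initial_mag=0.0):
    """Roll the regressor forward, feeding each prediction back in."""
    mag = initial_mag
    trajectory = []
    for t_f, a_f in zip(text_feats, acoustic_feats):
        mag = float(model.predict([[mag, t_f, a_f]])[0])
        trajectory.append(mag)
    return trajectory

# One predicted magnitude per utterance in a three-utterance exchange.
traj = predict_sequence(model, [0.5, 0.5, 0.5], [0.2, 0.2, 0.2])
```

In this form the context information (item 3 in the abstract) would simply be additional columns in the feature vector; the feedback loop is what makes the regression recurrent.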
To visualize actor emotions as perceived by the system, a methodology was also developed to map predicted emotion class magnitudes to 3-D facial parameters using vertex-level mesh morphing. The developed sentence-level emotion state detection approach achieved classification accuracies as high as 71% on the neutral vs. emotion classification task in a test corpus of children’s stories. After class re-sampling, the step-wise classification methodology achieved accuracies in the 56% to 84% range for each emotion class and polarity on a test subset of a medical drama corpus. For emotion magnitude prediction, the developed recurrent (prior-state feedback) regression model, using both text-based and acoustic-based features, achieved correlation coefficients in the range of 0.69 to 0.80. This prediction function was modeled using a non-linear approach based on Support Vector Regression (SVR), which outperformed approaches based on Linear Regression or Artificial Neural Networks.
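The vertex-level mesh morphing described above can be illustrated as a per-vertex linear blend between a neutral mesh and a full-intensity emotion pose, weighted by the predicted magnitude. This is a hedged sketch under that interpolation assumption; the tiny three-vertex meshes and the "joy" pose are invented for illustration.

```python
# Illustrative vertex-level morph: a predicted emotion magnitude in
# [0, 1] blends each vertex of a neutral face mesh toward the matching
# vertex of a per-emotion target mesh. Mesh data here is made up.
import numpy as np

neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])      # neutral vertex positions (x, y, z)
joy_target = np.array([[0.0, 0.2, 0.0],
                       [1.0, 0.1, 0.1],
                       [0.0, 1.2, 0.0]])   # hypothetical full-intensity "joy" pose

def morph(neutral, target, magnitude):
    """Interpolate every vertex between neutral and target by magnitude."""
    magnitude = np.clip(magnitude, 0.0, 1.0)  # keep the blend inside [0, 1]
    return neutral + magnitude * (target - neutral)

half_joy = morph(neutral, joy_target, 0.5)  # face at 50% joy intensity
```

A real pipeline would hold one target mesh per emotion class and drive the blend weight directly from the regression model's magnitude output at each time step.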

    Group cohesion in multifamily therapy with multilingual families

    This study explores how Multifamily therapists create a context for group cohesion between monolingual and multilingual family members, and what they might inadvertently do to hinder it. Group cohesion has been found to enable processes of change. I examine the intersection between group cohesion and language, which is underrepresented in psychotherapy and Multifamily therapy (MFT) process research. Qualitative research methods were used to address the following research questions: 1) What do Multifamily therapists do in dialogue to create a context for horizontal (between multilingual and monolingual families) and vertical (between family members and therapist) group cohesion? 2) What do Multifamily therapists do in dialogue that inadvertently hinders horizontal and vertical group cohesion between monolingual and multilingual families? 3) What is the intersection between Multifamily therapy, group cohesion, and language, including interpreters’ roles? Participants included families with children between 6 and 14 years old, from different cultural backgrounds, attending a Multifamily group in inner London for children at risk of being permanently excluded from school. For some participants English was their first language; for others it was their second language, and some needed an interpreter. Therapists, students, and interpreters also participated. Two types of analysis, Dialogical Investigations of Happenings of Change (Seikkula, Laitila and Rober, 2012) and Thematic Analysis, were carried out on three data sources: two MFT sessions, a focus group with group participants, and an interview with therapists. A significant finding was that MFT therapists used a collaborative voice in dialogue but also held an inherently powerful position, influenced by how group participants positioned them, by their own positioning (such as organising the room and activities, deciding topics and who talked, and their work context), by contextual/external factors (e.g. the reasons for families’ referral, being societally constructed as experts), and by their therapeutic task.
Dialogical language seemed to create a space for ‘withness’ interactions between group members and greater group cohesion. I identified some factors likely to have impacted negatively on group cohesion and to have placed participants in a powerless/non-agentic position. Interpreters’ roles and children’s positioning as their ‘mother’s voice’ were also considered. Implications of the study are discussed, as is its potential contribution to the practice, training and supervision of MFT and individual FT with multilingual and monolingual families. The importance of creating a space where everyone’s voice can be heard, particularly those of silenced/marginalised members, is highlighted. Therapists’ relational reflexivity and self-reflexivity play a crucial part in avoiding unintentionally putting group members in shameful or powerless/non-agentic positions. The significance of, and processes involved in, creating a ‘community of help’ are identified.