
    First impressions: A survey on vision-based apparent personality trait analysis

    Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. In the past few years it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures, and behaviors, and to use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact such methods could have on society, this paper presents an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, aspects of subjectivity in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field, are reviewed.
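    The general pipeline described by the surveyed works (extract visual features from a face or clip, then regress apparent Big Five trait scores) can be illustrated with a minimal sketch. The descriptors, labels, and regressor below are synthetic placeholders chosen for illustration, not any specific method from the survey.

```python
# Illustrative sketch only: regressing apparent Big Five trait scores from
# precomputed face descriptors. Features and labels are synthetic stand-ins;
# the surveyed works use real face/body features and human annotations.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_clips, feat_dim = 500, 128
X = rng.normal(size=(n_clips, feat_dim))                      # stand-in per-clip face descriptors
W = rng.normal(size=(feat_dim, 5))
y = X @ W * 0.05 + rng.normal(scale=0.1, size=(n_clips, 5))   # apparent O, C, E, A, N scores

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)                      # one linear head per trait
pred = model.predict(X_te)
# Mean absolute error per trait, a common metric in apparent-personality benchmarks
print(np.abs(pred - y_te).mean(axis=0))
```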

    SpeechMirror: A Multimodal Visual Analytics System for Personalized Reflection of Online Public Speaking Effectiveness

    As communication increasingly takes place virtually, the ability to present well online is becoming an indispensable skill. Online speakers face unique challenges in engaging with remote audiences. However, there has been a lack of evidence-based analytical systems that let people comprehensively evaluate online speeches and discover possibilities for improvement. This paper introduces SpeechMirror, a visual analytics system facilitating reflection on a speech based on insights from a collection of online speeches. The system estimates the impact of different speech techniques on effectiveness and applies these estimates to a given speech, making users aware of how their speech techniques perform. A similarity recommendation approach based on speech factors or script content supports guided exploration to expand knowledge of presentation evidence and accelerate the discovery of speech delivery possibilities. SpeechMirror provides intuitive visualizations and interactions for users to understand speech factors. Among them, SpeechTwin, a novel multimodal visual summary of a speech, supports rapid understanding of critical speech factors and comparison of different speech samples, and SpeechPlayer augments the speech video by integrating an interactive visualization of the speaker's body language for focused analysis. The system uses visualizations suited to the distinct nature of different speech factors to aid user comprehension. The proposed system and visualization techniques were evaluated with domain experts and amateurs, demonstrating usability for users with low visualization literacy and efficacy in helping users develop insights for potential improvement.
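    The similarity recommendation the abstract mentions can be sketched as nearest-neighbour retrieval over vectors of speech factors. The factor names, values, and the plain cosine-similarity ranking below are assumptions for illustration; SpeechMirror's actual factors and weighting may differ.

```python
# Minimal sketch of similarity-based recommendation over "speech factor"
# vectors (e.g., pace, pitch variety, gesture rate). Hypothetical data only.
import numpy as np

def recommend_similar(query: np.ndarray, corpus: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k corpus speeches closest to the query (cosine similarity)."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    sims = c @ q
    return np.argsort(-sims)[:k]

corpus_factors = np.random.rand(100, 6)   # 100 reference speeches, 6 factors each
my_speech = np.random.rand(6)             # factor vector for the user's speech
print(recommend_similar(my_speech, corpus_factors))
```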

    Investigating the Physiological Responses to Virtual Audience Behavioral Changes: A Stress-Aware Audience for Public Speaking Training

    Virtual audiences have been used in psychotherapy for the treatment of public speaking anxiety, and recent studies show promising results: patients undergoing cognitive-behavioral therapy with virtual reality exposure maintained a reduction in their anxiety disorder for a year after treatment. It has been shown that virtual audiences exhibiting positive or negative behavior trigger different stress responses; however, research on the effect of virtual audience behaviors has been scarce. In particular, it is unclear how variations in audience behavior make the user's stress levels vary while they are presenting. In this paper, we present a study intended to investigate the relationship between virtual audience behaviors and physiological measurements of stress. We use the Cicero virtual audience framework, which allows precise manipulation of the audience's perceived level of arousal and valence through incremental changes in individual audience members' behaviors. Additionally, we introduce the concept of a stress-aware virtual audience for public speaking training, which uses physiological assessments and virtual audience stimuli to maintain the user in a challenging but non-threatening state.
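    The stress-aware audience concept amounts to a closed loop: estimate the speaker's stress from physiological signals, then nudge the audience's behavior to keep that estimate in a challenging but non-threatening band. The thresholds, update rule, and stress signal below are illustrative assumptions, not the paper's implementation.

```python
# Conceptual sketch: adjust virtual-audience valence so a normalized stress
# estimate stays inside a target band. All constants are hypothetical.
def update_audience(stress: float, valence: float,
                    low: float = 0.3, high: float = 0.6, step: float = 0.1) -> float:
    """Return a new audience valence in [-1, 1] given a stress estimate in [0, 1]."""
    if stress > high:        # user over-stressed: make the audience friendlier
        valence += step
    elif stress < low:       # user under-challenged: make the audience more negative
        valence -= step
    return max(-1.0, min(1.0, valence))

valence = 0.0
for stress in [0.2, 0.25, 0.5, 0.7, 0.8]:   # e.g., heart-rate-derived estimates
    valence = update_audience(stress, valence)
    print(f"stress={stress:.2f} -> audience valence={valence:.2f}")
```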

    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of Chorus and establishing the existing landscape of multimedia search engines, we identified and analyzed gaps in the European research effort during our second year. In this period we focused on three directions, namely technological issues, user-centred issues and use cases, and socio-economic and legal aspects. These were assessed through two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of the technological challenges they imply. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as coordinators of national initiatives. Based on the feedback obtained, we identified two types of gaps: core technological gaps that involve research challenges, and "enablers", which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.

    A Survey on Emotion Recognition for Human Robot Interaction

    With recent developments in technology and advances in artificial intelligence and machine learning techniques, it has become possible for robots to acquire and display emotions as part of Human-Robot Interaction (HRI). An emotional robot can recognize the emotional states of humans, allowing it to interact more naturally with its human counterpart in different environments. In this article, a survey on emotion recognition for HRI systems is presented. The survey aims to achieve two objectives. Firstly, it discusses the main challenges researchers face when building emotional HRI systems. Secondly, it identifies the sensing channels that can be used to detect emotions and reviews recent research published for each channel, along with the methodologies used and the results achieved. Finally, some of the open issues in emotion recognition and recommendations for future work are outlined.
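    When several sensing channels are available, a common way to combine them is late fusion: each channel produces a probability distribution over emotion classes, and a weighted average picks the final label. The channels, classes, and weights below are assumptions for illustration only, not a method from the survey.

```python
# Illustrative late-fusion sketch for multimodal emotion recognition in HRI.
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse(channel_probs: dict[str, np.ndarray], weights: dict[str, float]) -> str:
    """Weighted average of per-channel class probabilities; returns the top emotion."""
    total = sum(weights[c] * p for c, p in channel_probs.items())
    total /= sum(weights[c] for c in channel_probs)
    return EMOTIONS[int(np.argmax(total))]

probs = {
    "face":       np.array([0.6, 0.1, 0.1, 0.2]),   # hypothetical per-channel outputs
    "voice":      np.array([0.3, 0.2, 0.2, 0.3]),
    "physiology": np.array([0.4, 0.1, 0.3, 0.2]),
}
print(fuse(probs, weights={"face": 0.5, "voice": 0.3, "physiology": 0.2}))
```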

    Virtual reality training for Hajj pilgrims as an innovative community translation dissemination medium

    During the Islamic pilgrimage known as Hajj, Muslim pilgrims from all over the world, with many different backgrounds, gather together and coexist in the city of Mecca in Saudi Arabia. Managing a large and diverse congregation for the safe and successful completion of Hajj requires effective communication channels between speakers of the mainstream languages and international or non-Arabic-speaking pilgrims. This study focuses on the use of innovative media as community translation (CT) dissemination methods and determines which CT dissemination media are the most effective for English-speaking Hajj pilgrims. The study compares three forms of media: the booklet guide and the video guide from the official Mnask Academy media produced by the Hajj authorities, and the prototype developed for this study, an immersive virtual reality-based Hajj training medium, "VR-Hajj". The methodology consisted of three stages. The first was the development of assessment tools for community translation usability (CTX) and medium usability (MX) across the different CT dissemination media, based on the literature on CT studies, user-centred translation (UCT) studies, and usability (UX) studies. The next stage was prototyping, which involved collaboration between the researcher and virtual reality experts (developers and designers). The final stage was testing the three CT dissemination media with English-speaking Muslim users. A total of 96 Muslim respondents were surveyed; three groups were formed, and each participant evaluated one CT dissemination medium. The self-administered questionnaire elicited perceptions and feedback about CTX and MX from the three groups. Quantitative data were processed using the Statistical Package for the Social Sciences (SPSS), while qualitative data were analysed using thematic analysis (TA). The results revealed significant differences between the levels of community translation perception and medium usability achieved by participants in each group. In addition, the results revealed the shortcomings of the conventional Mnask Academy training media currently in use, as well as the promising advantages of using immersive virtual reality technology for Hajj training. The study concludes that immersive virtual reality technology, which allows pilgrims to mentally travel to the Hajj area, is more effective for understanding community translation, Hajj rituals, and related cultural aspects than passively created community translation media.
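    The reported group comparison (the authors used SPSS) can be sketched as a one-way ANOVA over usability scores from the three media groups. The score values below are fabricated placeholders, not the study's data, and the analysis shown is only one plausible way to test for group differences.

```python
# Hedged sketch: one-way ANOVA over hypothetical CTX/MX scores per group.
from scipy import stats

booklet = [3.1, 3.4, 2.9, 3.2, 3.0]   # placeholder scores per participant
video   = [3.5, 3.6, 3.3, 3.8, 3.4]
vr_hajj = [4.2, 4.5, 4.1, 4.4, 4.3]

f_stat, p_value = stats.f_oneway(booklet, video, vr_hajj)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would indicate group differences
```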