
    Real Time Facial Expression Recognition Using Webcam and SDK Affectiva

    Facial expression is an essential part of communication. For this reason, evaluating human emotions with a computer is a very interesting topic that has gained increasing attention in recent years, mainly because facial expression recognition can be applied in many fields such as HCI, video games, virtual reality, and customer satisfaction analysis. Emotion recognition is typically performed in three basic phases: face detection, facial feature extraction, and, as the last stage, expression classification. The most frequently encountered scheme is Ekman's classification of six emotional expressions (or seven, with the neutral expression); other classifications include Russell's circumplex model, which contains up to 24 emotions, and Plutchik's Wheel of Emotions. The methods used in the three phases of the recognition process have not only improved over the last 60 years; new methods and algorithms, such as the Viola-Jones detector, have also emerged that detect faces with greater accuracy and lower computational demands. As a result, various solutions are now available in the form of a Software Development Kit (SDK). In this publication, we present the design and creation of our system for real-time emotion classification. Our intention was to create a system that covers all three phases of the recognition process and works quickly and stably in real time, which is why we decided to take advantage of the existing Affectiva SDK. Using an ordinary webcam, facial landmarks are detected in the image automatically by the Affectiva SDK. A geometric feature-based approach is used for feature extraction: the distances between landmarks serve as features, and an optimal feature set is selected by brute-force search. The proposed system uses a neural network for classification and recognizes six (or seven) facial expressions, namely anger, disgust, fear, happiness, sadness, surprise, and neutral. We do not want to report only the success rate of our solution; we also want to show how these measurements were obtained, the results we achieved, and how those results have significantly influenced our future research direction.
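    The pipeline lends itself to a short illustration. Below is a minimal sketch of the geometric-feature approach the abstract describes (pairwise landmark distances, a brute-force-selected feature subset, and a neural-network classifier), assuming landmarks have already been extracted; the landmark data, the selected indices, and the network shape are illustrative placeholders, not the authors' actual configuration.

```python
# Sketch of the geometric-feature pipeline described above: pairwise
# landmark distances -> selected feature subset -> neural-network classifier.
# Landmark extraction itself (done by the Affectiva SDK in the paper) is
# assumed to have happened already; all concrete values are illustrative.
from itertools import combinations

import numpy as np
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def distance_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (n_points, 2) array of (x, y) facial landmarks.
    Returns the Euclidean distance between every pair of landmarks."""
    pairs = combinations(range(len(landmarks)), 2)
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs])

# In the paper an exhaustive (brute-force) search picks the optimal feature
# subset; a fixed, hypothetical index list stands in for that result here.
selected = [0, 3, 7, 12, 21]

def train(faces: list[np.ndarray], labels: list[str]) -> MLPClassifier:
    """faces: one landmark array per labelled training image."""
    X = np.array([distance_features(f)[selected] for f in faces])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
    clf.fit(X, labels)
    return clf
```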

    A Framework for Students Profile Detection

    Some of the biggest problems confronting Higher Education Institutions are student drop-out and academic disengagement. Physical or psychological disabilities, socio-economic or academic marginalization, and emotional and affective problems are some of the factors that can lead to them. The problem is worsened by a shortage of educational resources that could bridge the communication gap between the faculty staff and the affective needs of these students. This dissertation focuses on the development of a framework capable of collecting analytic data on an array of emotions, affects, and behaviours, acquired either through human observation, such as a teacher in a classroom or a psychologist, or through electronic sensors and automatic analysis software, such as eye-tracking devices, emotion detection through facial expression recognition software, automatic gait and posture detection, and others. The framework provides guidance for compiling the gathered data in an ontology, enabling the extraction of patterns and outliers via machine learning, which assists the profiling of students in critical situations such as disengagement, attention deficit, drop-out, and other sociological issues. Consequently, real-time alerts can be raised when these profile conditions are detected, so that appropriate experts can verify the situation and apply effective procedures. The goal is that, by providing insightful real-time cognitive data and facilitating the profiling of students' problems, a faster personalized response to help the student is enabled, allowing academic performance improvements.
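    As an illustration of the alerting step described above, the following sketch flags anomalous student profiles with an off-the-shelf outlier detector; the feature layout, the model choice (IsolationForest), and the alert wording are assumptions for demonstration, not the dissertation's implementation.

```python
# Sketch of the pattern/outlier step: flag students whose aggregated
# affect/behaviour features look anomalous, then raise a real-time alert
# for an expert to verify. Feature names and the model are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per student, e.g. [attention_score, negative_affect, absences]
features = np.array([
    [0.82, 0.10, 1],
    [0.79, 0.15, 0],
    [0.31, 0.70, 9],   # hypothetical disengagement-like profile
])
student_ids = ["s001", "s002", "s003"]

model = IsolationForest(contamination=0.1, random_state=0).fit(features)
for sid, flag in zip(student_ids, model.predict(features)):
    if flag == -1:  # IsolationForest marks outliers with -1
        print(f"ALERT: review student {sid} (possible disengagement/drop-out risk)")
```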

    Are Instructed Emotional States Suitable for Classification? Demonstration of How They Can Significantly Influence the Classification Result in an Automated Recognition System

    At present, various freely available and commercial solutions are used to classify a subject's emotional state. Classifying the emotional state helps us understand how the subject feels and what he or she is experiencing in a particular situation. Such classification can thus be used in many areas of our lives, from neuromarketing, through the automotive industry (determining how emotions affect driving), to the learning process. The learning process, the (mutual) interaction between teacher and learner, is an interesting area in which individual emotional states can be explored, and several research studies have been carried out in this pedagogical-psychological area. Some of these studies demonstrated an important impact of the emotional state on students' results. However, for comparison and unambiguous classification of the emotional state, most of these studies used instructed (even constructed) stereotypical facial expressions from well-known test databases (JAFFE is a typical example). Such facial expressions are highly standardized, and software can recognize them with a fairly high success rate, but this does not necessarily reflect the actual success rate of classifying the subject's emotions, because their similarity to real emotional expressions remains unknown. Therefore, we examined facial expressions in real situations and subsequently compared them with instructed expressions of the same emotions (the JAFFE database). The overall average classification score for real facial expressions was 94.58%.

    Automatic emotion recognition in clinical scenario: a systematic review of methods

    Automatic emotion recognition has powerful opportunities in the clinical field, but several critical aspects are still open, such as the heterogeneity of methodologies and technologies tested mainly on healthy people. This systematic review aims to survey automatic emotion recognition systems applied in real clinical contexts, analysing in depth the clinical and technical aspects, how they were addressed, and the relationships among them. The literature search was conducted on IEEEXplore, ScienceDirect, Scopus, PubMed, and ACM. Inclusion criteria were the presence of an automatic emotion recognition algorithm and the enrollment of at least 2 patients in the experimental protocol. The review process followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, and the works were analysed according to a reference model covering both clinical and technical topics. 52 scientific papers passed the inclusion criteria. Most clinical scenarios involved neurodevelopmental, neurological, and psychiatric disorders, with the aims of diagnosing, monitoring, or treating emotional symptoms. The most adopted signals are video and audio, while supervised shallow learning is mostly used for emotion recognition. Poor study design, tiny samples, and the absence of a control group emerged as methodological weaknesses, and the heterogeneity of performance metrics, datasets, and algorithms challenges the comparability, robustness, reliability, and reproducibility of results. (Pepa, Lucia; Spalazzi, Luca; Capecci, Marianna; Ceravolo, Maria Gabriella)

    Relationship between temporary emotion of students and performance in learning through comparing facial expression analytics

    This paper presents a study on the temporary emotions of students and their performance in learning activities. It elucidates the different kinds of facial expressions elicited during two activities, a quiz and a movie trailer, with the help of existing facial expression analysis applications. The users' expressions are recorded on video while they watch the movie trailer and take the quiz, and the video is then processed by different applications, each of which gives a score for each emotion. The results are examined to find which emotion is most prevalent among the users in the different situations. The study shows that students experience seemingly different emotions during the activities; the emotions they portrayed were confusion, sadness, anger, and neutral. This study explores the use of affective computing for a deeper understanding of students' emotions in a learning environment.
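    A minimal sketch of the comparison step, assuming each analysis application returns one {emotion: score} dictionary per video frame; that output shape is a simplifying assumption, not the tools' actual API.

```python
# Sketch of the analysis step: given per-frame emotion scores from an
# expression-analysis tool, find the most prevalent emotion per activity.
from statistics import mean

def prevalent_emotion(frame_scores: list[dict[str, float]]) -> str:
    """frame_scores: one {emotion: score} dict per video frame."""
    emotions = frame_scores[0].keys()
    averages = {e: mean(f[e] for f in frame_scores) for e in emotions}
    return max(averages, key=averages.get)

# Invented example: two frames recorded during the quiz activity.
quiz_frames = [
    {"confusion": 0.6, "sadness": 0.1, "anger": 0.1, "neutral": 0.2},
    {"confusion": 0.5, "sadness": 0.2, "anger": 0.1, "neutral": 0.2},
]
print(prevalent_emotion(quiz_frames))  # -> "confusion"
```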

    User emotional interaction processor: a tool to support the development of GUIs through physiological user monitoring

    Ever since computers entered humans' daily lives, activity between the human and digital ecosystems has increased, encouraging the development of smarter and more user-friendly human-computer interfaces. However, the means of testing these interfaces have been limited, for the most part restricted to the conventional "manual" interface, where physical input is required, where the participants who test the interfaces use a keyboard, mouse, or touch screen, and where communication between participants and designers is required. There is another method, applied in this dissertation, that does not require physical input from the participants: Affective Computing. This dissertation presents the development of a tool to support the development of graphical interfaces, based on the monitoring of psychological and physiological aspects of the user (emotions and attention), aiming to improve the experience of the end user, with the ultimate goal of improving interface design. The development of this tool is described. The results, provided by designers from an IT company, suggest that the tool is useful but that the optimized interface it generates still has some flaws, mainly related to the lack of consideration of a general context in the interface generation process.
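    The core idea can be illustrated with a small sketch that associates monitored emotional valence with the GUI element in focus and flags elements that accumulate negative affect; the sample data, field names, and threshold are hypothetical, not the dissertation's implementation.

```python
# Sketch of the tool's core idea: link the user's monitored emotional
# reaction to the GUI element in focus, then flag elements with a
# consistently negative reaction as candidates for redesign.
from collections import defaultdict
from statistics import mean

# (widget_id, valence) samples; valence < 0 ~ negative emotional reaction
samples = [
    ("search_box", 0.4), ("checkout_btn", -0.6),
    ("checkout_btn", -0.4), ("search_box", 0.2),
]

by_widget: dict[str, list[float]] = defaultdict(list)
for widget, valence in samples:
    by_widget[widget].append(valence)

for widget, valences in by_widget.items():
    if mean(valences) < -0.3:  # hypothetical redesign threshold
        print(f"consider redesigning '{widget}' (mean valence {mean(valences):+.2f})")
```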

    A Systematic Approach to Integrating Emotional Context into Interactive Systems

    Get PDF
    In interactive systems, knowing the user's emotional state is not only important for understanding and improving the overall user experience, but also of the utmost relevance in scenarios where such information might foster our ability to help users manage and express their emotions (e.g., anxiety), with a strong impact on their daily life and on how they interact with others. Nevertheless, although there is clear potential for emotionally-aware applications, several challenges preclude their wider availability, sometimes resulting from the low translational nature of research in affective computing, and sometimes from a lack of straightforward methods for easy integration of emotion in applications. While several toolkits have been proposed for emotion extraction from a variety of data, such as text, audio, and video, their integration still requires a considerable development effort that must be repeated for every deployed application. In light of these challenges, we present a conceptual vision for considering emotion in the scope of multimodal interactive systems, proposing a decoupled approach that can also foster an articulation with research in affective computing. Following this vision, we developed a generic affective modality, aligned with the W3C recommendations for multimodal interaction architectures, which enables the development of emotionally-aware applications while keeping the developed interactive systems (and the developers) completely agnostic to the affective computing methods used. To support the work carried out, and to illustrate the potential of the proposed affective modality in providing emotional context for interactive scenarios, two demonstrator applications were built. The first enables multimodal interaction with Spotify, considering the user's emotional context to adapt how music is played. The second, Mood Diary, serves as a proof of concept for how an emotionally-aware application can support users in understanding and expressing emotion, a potentially relevant feature for those suffering from conditions such as autism spectrum disorders.
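    To illustrate the decoupling idea, the sketch below shows an application subscribing to generic emotion events without knowing which affective-computing backend produced them; the event envelope and class names are illustrative assumptions, loosely inspired by the W3C multimodal interaction recommendations rather than the thesis's actual message format.

```python
# Sketch of the decoupled affective modality: applications subscribe to
# emotion events and stay agnostic to the affective-computing method
# that produced them. All names here are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EmotionEvent:
    source: str        # e.g. "affective-modality"
    emotion: str       # e.g. "sadness"
    confidence: float  # 0.0 .. 1.0

class AffectiveModality:
    """Generic publisher; the actual recognizer behind it is swappable."""
    def __init__(self) -> None:
        self._subscribers: list[Callable[[EmotionEvent], None]] = []

    def subscribe(self, handler: Callable[[EmotionEvent], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, event: EmotionEvent) -> None:
        for handler in self._subscribers:
            handler(event)

modality = AffectiveModality()
# e.g. a music app adapting playback to the user's emotional context:
modality.subscribe(lambda e: print(f"adapt playlist for {e.emotion} ({e.confidence:.0%})"))
modality.publish(EmotionEvent("affective-modality", "sadness", 0.8))
```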

    Cohousing IoT: Technology Design for Life in Community

    This paper presents a research-through-design project to develop and interpret speculative smart home technologies for cohousing communities: Cohousing IoT. Fieldwork at multiple sites, coupled with a constructive design research process, led to three prototypes designed for cohousing communities: Cohousing Radio, Physical RSVP, and Participation Scales. These were brought back to the communities that inspired them, both as a form of evaluation and to generate new understandings of designing for cohousing. In discussing how the communities understand these prototypes, this paper offers an account of how research through design generates knowledge specific to the conditions and issues that matter to communities. This contributes to design research more broadly in two ways. First, it demonstrates how contemporary ideas of smart home technology are, or could be made, relevant to broader ways of living in the future. Second, it provides an example of how a design research process can serve to uncover community values, issues, and goals.

    Basic Emotion Recognition using Automatic Facial Expression Analysis Software

    Facial expressions have been shown to reflect a person's emotions, including the six basic human emotions, namely happiness, sadness, surprise, disgust, anger, and fear. Currently, recognition of the basic emotions is performed using automatic facial expression analysis software. In practice, however, not all emotions are performed with the same expressions. This study aims to analyze whether the six basic human emotions can be recognized by such software. Ten subjects were asked to spontaneously show expressions of the six basic emotions in sequence; they were not given instructions on what the standard expression of each basic emotion looks like. The results show that only happy expressions are consistently and clearly identified by the software, while sad expressions are almost unrecognizable, and surprise expressions tend to be recognized as a mixture of surprise and happiness. Two emotions, fear and anger, were difficult for the subjects to express: the subjects' interpretations of these two emotions varied widely and tended to be unrecognizable by the software. The conclusion of this study is that the way a person shows emotions varies; although there are some similarities in expression, it cannot be proven that all expressions of basic human emotions can be generalized. The further implications of this research need further discussion.
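    The kind of analysis behind these findings can be sketched as a tabulation of intended versus recognized emotions; the data below are invented examples, not the study's measurements.

```python
# Sketch of the evaluation step: count how often each intended emotion was
# recognized as each label by the analysis software (a confusion table).
from collections import Counter

intended   = ["happy", "sad",     "surprise", "fear",    "anger",   "disgust"]
recognized = ["happy", "neutral", "happy",    "neutral", "disgust", "disgust"]

confusion = Counter(zip(intended, recognized))
for (truth, pred), n in sorted(confusion.items()):
    print(f"intended {truth:9s} -> recognized {pred:9s} x{n}")
```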