Affective Computing
This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we express ourselves not only with the face but also with body movements, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section of the book (Chapters 17 to 22) presents applications related to affective computing.
Development of human-robot interaction based on multimodal emotion recognition
The electronic version of this thesis does not contain the publications.

Automatic multimodal emotion recognition is a fundamental subject of interest in affective computing. Its main applications are in human-computer interaction.
The systems developed for this purpose combine different modalities based on vocal and visual cues. This thesis takes both modalities into account in order to develop an automatic multimodal emotion recognition system. More specifically, it exploits information extracted from speech and face signals. From the speech signal, Mel-frequency cepstral coefficients, filter-bank energies and prosodic features are extracted. Two different strategies are used to analyze the facial data. First, geometric relations between facial landmarks, i.e. distances and angles, are computed. Second, each emotional video is summarized into a reduced set of key-frames, and a convolutional neural network is trained on these key-frames to visually discriminate between the emotions. Afterward, the output confidence values of all the classifiers from both modalities are used to define a new feature space, and these values are learned for the final emotion label prediction in a late-fusion stage. The experiments are conducted on the SAVEE, Polish, Serbian, eNTERFACE'05 and RML datasets. The results show significant performance improvements over the existing alternatives, defining the current state of the art on all the datasets. Additionally, we provide a review of emotional body gesture recognition systems proposed in the literature. The aim of this part is to help identify possible future research directions for enhancing the performance of the proposed system; in particular, incorporating data representing gestures, which constitute another major component of the visual modality, can result in a more efficient framework.
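The late-fusion stage described above can be sketched in a few lines. This is a minimal illustration, not the thesis code: the emotion classes, confidence values and fusion weights are invented for the example, and the final stage is reduced to a weighted sum rather than a learned classifier.

```python
# Minimal late-fusion sketch: each modality-specific classifier
# (1 acoustic, 2 visual) emits a confidence vector over the emotion
# classes; the vectors are concatenated into a new feature space on
# which a final decision stage operates.

EMOTIONS = ["anger", "happiness", "sadness"]  # toy label set

def fuse(acoustic, visual_geom, visual_cnn):
    """Concatenate per-classifier confidence vectors into one feature vector."""
    return acoustic + visual_geom + visual_cnn

def predict(fused, weights):
    """Toy final stage: weighted sum of the three confidence vectors,
    standing in for the learned classifier of the actual system."""
    n = len(EMOTIONS)
    scores = [
        sum(weights[k] * fused[k * n + i] for k in range(3))
        for i in range(n)
    ]
    return EMOTIONS[scores.index(max(scores))]

# One sample: the acoustic classifier favors anger, both visual ones happiness.
fused = fuse([0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.8, 0.1])
label = predict(fused, weights=[0.4, 0.3, 0.3])
print(label)  # happiness
```

In the real system the fused confidence vectors form training features for a final classifier; the fixed weights here only stand in for what that classifier would learn.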
When a few words are not enough: improving text classification through contextual information
Traditional text classification approaches may be ineffective when applied to texts with an insufficient or limited number of words, due to the brevity of the text and the sparsity of the feature space. The lack of contextual information can make texts ambiguous; hence, text classification approaches relying solely on words may not properly capture the critical features of a real-world problem. A popular way of overcoming this problem is to enrich texts with additional domain-specific features. This thesis shows how this can be done in two real-world problems in which text information alone is insufficient for classification: depression detection based on the automatic analysis of clinical interviews, and fake online news detection. Depression profoundly affects how people behave, perceive, and interact. Language reveals our ideas, moods, feelings, beliefs, behaviours and personalities. However, because of inherent variations in the speech system, no single cue is sufficiently discriminative as a sign of depression on its own. This means that language alone may not be adequate for understanding a person's mental characteristics and states, and adding contextual information can better represent the critical features of the texts. Speech includes both linguistic content (what people say) and acoustic aspects (how words are said), which provide important clues about the speaker's emotional, physiological and mental characteristics. Therefore, we study the possibility of effectively detecting depression using unobtrusive and inexpensive technologies based on the automatic analysis of language (what you say) and speech (how you say it). As for fake news, the spread of false claims has polluted the web; people who deceive seem to use their cognitive abilities to hide information, which induces behavioural change and thereby alters their writing style and word choices.
However, claims are relatively short and include limited content, so capturing only the text features of a claim does not provide sufficient information to detect deception. Evidence articles can help assess a factual claim by representing its central content more authentically. Therefore, we propose an automated credibility assessment approach based on linguistic analysis of the claim and its evidence articles.
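The idea of enriching a sparse claim representation with evidence-derived features can be sketched as follows. This is a hypothetical illustration only: the feature set (word counts, average word length, vocabulary overlap) and the example texts are invented for the sketch and are not the thesis's actual pipeline.

```python
# Sketch: a short claim alone yields a sparse representation, so we
# enrich it with simple linguistic features drawn from its evidence
# articles, plus a claim/evidence vocabulary-overlap feature.

def linguistic_features(text):
    """Toy linguistic features for a text."""
    words = text.lower().split()
    return {
        "n_words": len(words),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
    }

def claim_evidence_features(claim, evidence_articles):
    """Combine claim-side and evidence-side features into one vector."""
    feats = {f"claim_{k}": v for k, v in linguistic_features(claim).items()}
    # Overlap between claim vocabulary and evidence vocabulary:
    claim_vocab = set(claim.lower().split())
    ev_vocab = {w for a in evidence_articles for w in a.lower().split()}
    feats["overlap"] = len(claim_vocab & ev_vocab) / max(len(claim_vocab), 1)
    for k, v in linguistic_features(" ".join(evidence_articles)).items():
        feats[f"evidence_{k}"] = v
    return feats

claim = "Vaccine X causes condition Y"
evidence = ["Clinical trials of vaccine x found no link to condition y"]
print(claim_evidence_features(claim, evidence))
```

A downstream classifier would then be trained on such combined vectors instead of on the claim's words alone.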
Proceedings of the 1st Doctoral Consortium at the European Conference on Artificial Intelligence (DC-ECAI 2020)
1st Doctoral Consortium at the European Conference on
Artificial Intelligence (DC-ECAI 2020), 29-30 August, 2020
Santiago de Compostela, Spain

The DC-ECAI 2020 provides a unique opportunity for PhD students who are close to finishing their doctoral research to interact with experienced researchers in the field. Senior members of the community are assigned as mentors to each group of students based on the students' research topics or similarity of research interests. The DC-ECAI 2020, held virtually this year, allows students from all over the world to present and discuss their ongoing research and career plans with their mentors, to network with other participants, and to receive training and mentoring on career planning and career options.
A Comprehensive Survey on Deep Learning Techniques in Educational Data Mining
Educational Data Mining (EDM) has emerged as a vital field of research, which
harnesses the power of computational techniques to analyze educational data.
With the increasing complexity and diversity of educational data, Deep Learning
techniques have shown significant advantages in addressing the challenges
associated with analyzing and modeling this data. This survey aims to
systematically review the state-of-the-art in EDM with Deep Learning. We begin
by providing a brief introduction to EDM and Deep Learning, highlighting their
relevance in the context of modern education. Next, we present a detailed
review of Deep Learning techniques applied in four typical educational
scenarios, including knowledge tracing, undesirable student detecting,
performance prediction, and personalized recommendation. Furthermore, a
comprehensive overview of public datasets and processing tools for EDM is
provided. Finally, we point out emerging trends and future directions in this
research area.

Comment: 21 pages, 5 figures
- …