1,583 research outputs found

    An Actor-Centric Approach to Facial Animation Control by Neural Networks For Non-Player Characters in Video Games

    Game developers increasingly consider the degree to which character animation emulates facial expressions found in cinema. Employing animators and actors to produce cinematic facial animation by mixing motion capture and hand-crafted animation is labor-intensive and therefore expensive. Emotion corpora and neural network controllers have shown promise toward developing autonomous animation that does not rely on motion capture. Previous research and practice in the disciplines of Computer Science, Psychology, and the Performing Arts have provided frameworks on which to build a workflow toward creating an emotion AI system that can animate the facial mesh of a 3D non-player character by deploying a combination of related theories and methods. However, past investigations and their resulting production methods largely ignore the emotion-generation systems that have evolved in the performing arts for more than a century. We find very little research that embraces the intellectual process of trained actors as complex collaborators from which to understand and model the training of a neural network for character animation. This investigation demonstrates a workflow design that integrates knowledge from the performing arts and the affective branches of the social and biological sciences. Our workflow proceeds from developing and annotating a fictional scenario with actors, to producing a video emotion corpus, to designing, training, and validating a neural network, to analyzing the emotion-data annotation of the corpus and neural network, and finally to determining the degree to which its autonomous animation control of a 3D character facial mesh resembles the actor's behavior. The resulting workflow includes a method for developing a neural network architecture whose initial efficacy as a facial emotion expression simulator has been tested and validated as substantially resemblant to the character behavior developed by a human actor.
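The abstract describes a neural controller that learns to drive a character's facial mesh from an actor-annotated emotion corpus. As a purely illustrative sketch (the paper's actual architecture, corpus, and blendshape names are not given here; every name below is a hypothetical toy), even a single linear layer trained by SGD can map a categorical emotion label to blendshape weights:

```python
import random

# Hypothetical toy: map a one-hot emotion label to facial blendshape
# weights (names invented here) with one linear layer trained by SGD,
# standing in for the neural animation controller the abstract describes.
EMOTIONS = ["neutral", "joy", "anger"]
BLENDSHAPES = ["browRaise", "jawOpen", "lipCornerPull"]

# Toy "annotated corpus": emotion -> target blendshape weights in [0, 1].
CORPUS = {
    "neutral": [0.0, 0.1, 0.0],
    "joy":     [0.3, 0.4, 0.9],
    "anger":   [0.8, 0.2, 0.1],
}

def one_hot(emotion):
    return [1.0 if e == emotion else 0.0 for e in EMOTIONS]

# Weight matrix: one row of per-emotion weights per blendshape.
W = [[random.uniform(-0.1, 0.1) for _ in EMOTIONS] for _ in BLENDSHAPES]

def predict(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def train(epochs=200, lr=0.5):
    for _ in range(epochs):
        for emotion, target in CORPUS.items():
            x = one_hot(emotion)
            y = predict(x)
            for i in range(len(BLENDSHAPES)):
                err = y[i] - target[i]        # squared-error gradient
                for j in range(len(EMOTIONS)):
                    W[i][j] -= lr * err * x[j]

train()
print([round(w, 2) for w in predict(one_hot("joy"))])
```

In the real workflow the inputs would be features derived from the actor-performed video corpus and the outputs would drive the NPC's facial mesh each frame; the toy converges because one-hot inputs make each weight's update independent.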

    Automatic classification of human facial features based on their appearance

    Classification or typology systems used to categorize different human body parts have existed for many years. Nevertheless, there are very few taxonomies of facial features. Ergonomics, forensic anthropology, crime prevention, and new human-machine interaction systems and online activities, like e-commerce, e-learning, games, dating or social networks, are fields in which classifications of facial features are useful, for example, to create digital interlocutors that optimize the interactions between humans and machines. However, classifying isolated facial features is difficult for human observers. Previous works reported low inter-observer and intra-observer agreement in the evaluation of facial features. This work presents a computer-based procedure to automatically classify facial features based on their global appearance. This procedure deals with the difficulties associated with classifying features using judgements from human observers, and facilitates the development of taxonomies of facial features. Taxonomies obtained through this procedure are presented for eyes, mouths and noses.
    Fuentes-Hurtado, F.; Diego-Mas, J. A.; Naranjo Ornedo, V.; Alcañiz Raya, M. L. (2019). Automatic classification of human facial features based on their appearance. PLoS ONE, 14(1), 1-20. https://doi.org/10.1371/journal.pone.0211314
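The procedure described above groups facial features by their global appearance rather than by human judgements. A minimal, hypothetical sketch of that idea: represent each feature image as an appearance descriptor (faked here as 2-D points) and cluster the descriptors, each cluster becoming one "type" in the resulting taxonomy. The paper's actual pipeline is not reproduced here; a plain two-means clustering stands in for it:

```python
import math
import random

# Hypothetical sketch: cluster appearance descriptors of a facial feature
# (faked here as 2-D points drawn around two modes) so that each cluster
# becomes one "type" in a data-driven taxonomy.
random.seed(0)
group_a = [[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(20)]
group_b = [[random.gauss(3, 0.1), random.gauss(3, 0.1)] for _ in range(20)]
points = group_a + group_b

def kmeans2(points, iters=10):
    """Two-means with deterministic init: first and last point as seeds."""
    centers = [list(points[0]), list(points[-1])]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            i = 0 if math.dist(p, centers[0]) <= math.dist(p, centers[1]) else 1
            clusters[i].append(p)
        for i, c in enumerate(clusters):
            if c:  # recompute each center as its cluster's mean
                centers[i] = [sum(xs) / len(c) for xs in zip(*c)]
    return centers, clusters

centers, clusters = kmeans2(points)
print([len(c) for c in clusters])  # two "types" of 20 descriptors each
```

Real descriptors would come from an appearance model of the cropped feature (the cited literature uses eigen-decompositions and autoencoder embeddings for this), but the grouping step works the same way.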

    CGAMES'2009


    Logging Stress and Anxiety Using a Gamified Mobile-based EMA Application, and Emotion Recognition Using a Personalized Machine Learning Approach

    According to the American Psychological Association (APA), more than 9 in 10 adults (94 percent) believe that stress can contribute to the development of major health problems, such as heart disease, depression, and obesity. Due to the subjective nature of stress and anxiety, it has been challenging to measure these psychological issues accurately by relying on objective means alone. In recent years, researchers have increasingly utilized computer vision techniques and machine learning algorithms to develop scalable and accessible solutions for remote mental health monitoring via web and mobile applications. To further enhance accuracy in the field of digital health and precision diagnostics, there is a need for personalized machine-learning approaches that recognize mental states based on individual characteristics, rather than relying solely on general-purpose solutions. This thesis focuses on conducting experiments aimed at recognizing and assessing levels of stress and anxiety in participants. In the initial phase of the study, a broadly applicable mobile application (compatible with both Android and iOS platforms), which we call STAND, is introduced. This application serves the purpose of Ecological Momentary Assessment (EMA). Participants receive daily notifications through this smartphone-based app, which redirects them to a screen consisting of three components: a question that prompts participants to indicate their current levels of stress and anxiety, a rating scale ranging from 1 to 10 for quantifying their response, and the ability to capture a selfie. The responses to the stress and anxiety questions, along with the corresponding selfie photographs, are then analyzed on an individual basis. 
This analysis focuses on exploring the relationships between self-reported stress and anxiety levels and potential facial expressions indicative of stress and anxiety, eye features such as pupil-size variation and eye closure, and specific action units (AUs) observed in the frames over time. In addition to its primary functions, the mobile app also gathers sensor data, including accelerometer and gyroscope readings, on a daily basis. This data holds potential for further analysis related to stress and anxiety. Furthermore, apart from capturing selfie photographs, participants have the option to upload video recordings of themselves while engaging in two neuropsychological games. These recorded videos are then analyzed to extract pertinent features that can be utilized for binary classification of stress and anxiety (i.e., stress and anxiety recognition). The participants selected for this phase are students aged between 18 and 38 who have received recent clinical diagnoses indicating specific stress and anxiety levels. To enhance user engagement in the intervention, gamified elements, an emerging means of influencing user behavior and lifestyle, have been utilized. Incorporating gamified elements into non-game contexts (e.g., health-related ones) has gained overwhelming popularity during the last few years and has made interventions more delightful, engaging, and motivating. In the subsequent phase of this research, we conducted an AI experiment employing a personalized machine learning approach to perform emotion recognition on an established dataset called Emognition. This experiment served as a simulation of the future analysis that will be conducted as part of a more comprehensive study focusing on stress and anxiety recognition. 
The outcomes of the emotion recognition experiment in this study highlight the effectiveness of personalized machine learning techniques and bear significance for future diagnostic endeavors. For training purposes, we selected three models: KNN, Random Forest, and MLP. The preliminary accuracy results for these models were 93%, 95%, and 87%, respectively.
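The personalized approach described above fits a model per participant rather than one global classifier. A stdlib-only sketch with a k-NN classifier illustrates the idea; the feature vectors and labels below are toy data, not the thesis's real features (which come from selfies, AUs, and the Emognition dataset):

```python
import math
from collections import Counter

# Illustrative sketch of the personalized approach: one k-NN model per
# participant, fitted only on that participant's own labelled samples.
def knn_predict(train, query, k=3):
    """train: list of (features, label); returns majority label of the k nearest."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy per-participant data: participant id -> (features, label) pairs.
data = {
    "p1": [([0.1, 0.2], "calm"), ([0.2, 0.1], "calm"),
           ([0.9, 0.8], "stressed"), ([0.8, 0.9], "stressed")],
    "p2": [([0.4, 0.4], "calm"), ([0.5, 0.3], "calm"),
           ([0.6, 0.9], "stressed"), ([0.7, 0.8], "stressed")],
}

# Each query is answered using only that participant's own samples, so the
# decision boundary adapts to individual baselines.
for pid, samples in data.items():
    print(pid, knn_predict(samples, [0.85, 0.85]))
```

The thesis's Random Forest and MLP variants would slot into the same per-participant loop; only the model behind `knn_predict` changes.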

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, in the second section (Chapters 8 to 11) we present research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. In the last section of the book (Chapters 17 to 22) we present applications related to affective computing.

    Application of Common Sense Computing for the Development of a Novel Knowledge-Based Opinion Mining Engine

    The ways people express their opinions and sentiments have radically changed in the past few years thanks to the advent of social networks, web communities, blogs, wikis and other online collaborative media. The distillation of knowledge from this huge amount of unstructured information can be a key factor for marketers who want to create an image or identity in the minds of their customers for their product, brand, or organisation. These online social data, however, remain hardly accessible to computers, as they are specifically meant for human consumption. The automatic analysis of online opinions, in fact, involves a deep understanding of natural language text by machines, from which we are still very far. Hitherto, online information retrieval has been mainly based on algorithms relying on the textual representation of web pages. Such algorithms are very good at retrieving texts, splitting them into parts, checking the spelling and counting their words. But when it comes to interpreting sentences and extracting meaningful information, their capabilities are known to be very limited. Existing approaches to opinion mining and sentiment analysis, in particular, can be grouped into three main categories: keyword spotting, in which text is classified into categories based on the presence of fairly unambiguous affect words; lexical affinity, which assigns arbitrary words a probabilistic affinity for a particular emotion; and statistical methods, which calculate the valence of affective keywords and word co-occurrence frequencies on the basis of a large training corpus. Early works aimed to classify entire documents by overall positive or negative polarity, or by the rating scores of reviews. Such systems were mainly based on supervised approaches relying on manually labelled samples, such as movie or product reviews where the opinionist's overall positive or negative attitude was explicitly indicated. 
However, opinions and sentiments do not occur only at document level, nor are they limited to a single valence or target. Contrary or complementary attitudes toward the same topic or multiple topics can be present across the span of a document. In more recent works, text analysis granularity has been taken down to segment and sentence level, e.g., by using the presence of opinion-bearing lexical items (single words or n-grams) to detect subjective sentences, or by exploiting association rule mining for a feature-based analysis of product reviews. These approaches, however, are still far from being able to infer the cognitive and affective information associated with natural language, as they mainly rely on knowledge bases that are still too limited to efficiently process text at sentence level. In this thesis, common sense computing techniques are further developed and applied to bridge the semantic gap between word-level natural language data and the concept-level opinions conveyed by these. In particular, the ensemble application of graph mining and multi-dimensionality reduction techniques on two common sense knowledge bases was exploited to develop a novel intelligent engine for open-domain opinion mining and sentiment analysis. The proposed approach, termed sentic computing, performs a clause-level semantic analysis of text, which allows the inference of both the conceptual and emotional information associated with natural language opinions and, hence, a more efficient passage from (unstructured) textual information to (structured) machine-processable data. The engine was tested on three different resources, namely a Twitter hashtag repository, a LiveJournal database and a PatientOpinion dataset, and its performance was compared both with results obtained using standard sentiment analysis techniques and with different state-of-the-art knowledge bases such as Princeton's WordNet, MIT's ConceptNet and Microsoft's Probase. 
Unlike most currently available opinion mining services, the developed engine does not base its analysis on a limited set of affect words and their co-occurrence frequencies, but rather on common sense concepts and the cognitive and affective valence conveyed by these. This allows the engine to be domain-independent and, hence, to be embedded in any opinion mining system for the development of intelligent applications in multiple fields such as the Social Web, HCI, and e-health. Looking ahead, the combined novel use of different knowledge bases and of common sense reasoning techniques for opinion mining proposed in this work will eventually pave the way for the development of more bio-inspired approaches to the design of natural language processing systems capable of handling knowledge, retrieving it when necessary, making analogies, and learning from experience.
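The first of the three categories of existing approaches named above, keyword spotting, is simple enough to sketch in a few lines; the word list and scores below are illustrative, not taken from the thesis. Its failure on negation is precisely the kind of limitation that motivates the concept-level analysis of sentic computing:

```python
# Minimal keyword-spotting baseline of the kind the thesis contrasts with
# sentic computing: polarity comes only from a small hand-made affect
# lexicon (entries and scores here are invented for illustration).
AFFECT_LEXICON = {"good": 1, "great": 2, "love": 2,
                  "bad": -1, "awful": -2, "hate": -2}

def keyword_spotting_polarity(text):
    """Sum the lexicon scores of the words present; the sign gives the polarity."""
    words = (w.strip(".,!?") for w in text.lower().split())
    score = sum(AFFECT_LEXICON.get(w, 0) for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(keyword_spotting_polarity("I love this great phone"))  # positive
print(keyword_spotting_polarity("The battery is awful"))     # negative
# Fails on anything needing deeper understanding, e.g. negation:
print(keyword_spotting_polarity("not bad at all"))           # negative (wrong)
```

A concept-level engine would instead map the clause to common sense concepts before assigning valence, so "not bad" is not misread as negative.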

    Brain-Computer Interfaces for Non-clinical (Home, Sports, Art, Entertainment, Education, Well-being) Applications

    HCI researchers' interest in BCI is increasing because the technology industry is expanding into application areas where efficiency is not the main concern. Domestic or public use of information and communication technology raises awareness of the importance of affect, comfort, family, community, or playfulness, rather than efficiency. Therefore, in addition to non-clinical BCI applications that require efficiency and precision, this Research Topic also addresses the use of BCI for various types of domestic, entertainment, educational, sports, and well-being applications. These applications can relate to an individual user as well as to multiple cooperating or competing users. We also see a renewed interest among artists in using such devices to design interactive art installations that respond to the brain activity of an individual user or the collective brain activity of a group of users, for example, an audience. Hence, this Research Topic also addresses how BCI technology influences artistic creation and practice, and the use of BCI technology to manipulate and control sound, video, and virtual and augmented reality (VR/AR).