9 research outputs found

    Convergence of Gamification and Machine Learning: A Systematic Literature Review

    Get PDF
    Recent developments in human–computer interaction technologies have raised attention to gamification techniques, which can be defined as the use of game elements in a non-gaming context. Furthermore, advances in machine learning (ML) methods and their potential to enhance other technologies have given rise to a new era in which ML and gamification are combined. This new direction motivated us to conduct a systematic literature review to investigate the current literature in the field, to explore the convergence of these two technologies, and to highlight their influence on one another as well as the reported benefits and challenges. The results of the study reflect the varied uses of this confluence, mainly in learning and educational activities, personalizing gamification to users, behavioral-change efforts, adapting the gamification context, and optimizing gamification tasks. In addition, data collection for machine learning via gamification and teaching machine learning with the help of gamification were identified. Finally, we point out the benefits and challenges of this convergence to help streamline future research endeavors.

    Design and Implementation of an Android-Based Tower Defense Game: The Legend Guardians

    Get PDF
    Advances in information technology have made the gaming world develop rapidly. The genres, play styles, and other in-game features keep evolving and have become very diverse. One genre that attracts considerable interest is tower defense. Tower Defense (TD) is a subgenre of strategy games in which the player defends their territory or assets from enemies. There are several ways to play a tower defense game, including selecting units and placing them in the right locations so that the player's territory is not destroyed by the enemy. Another play style is to select units that defend against enemy attacks while simultaneously destroying the enemy's territory. In this final project proposal, the game to be built is a tower defense game played against a bot/computer. The bot is designed so that its difficulty increases as the level rises; a sketch of this idea follows below. The play style used is selecting units to defend against enemy attacks while also destroying the enemy's territory in order to win the game. This system makes the game more strategic and engaging. Through the design and implementation of this Android-based tower defense game, The Legend Guardians, it is hoped that many people will understand its rules, scenarios, and difficulty levels, and that it will foster strategic thinking in players while also providing entertainment.
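    The bot opponent described above is designed so that its difficulty rises with each level. A minimal sketch of one way such level-based scaling could be parameterized is shown below; the function name, stat fields, and the +15% per-level growth factor are illustrative assumptions, not details taken from the thesis.

```python
# Hypothetical sketch of level-based bot difficulty scaling for a tower
# defense game. Names and scaling factors are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class BotStats:
    unit_health: float
    unit_damage: float
    spawn_interval_s: float  # seconds between enemy waves


def scale_bot_difficulty(level: int, base: BotStats) -> BotStats:
    """Make the bot stronger each level: tougher, harder-hitting units
    that spawn more frequently."""
    growth = 1.0 + 0.15 * (level - 1)  # +15% per level (assumed)
    return BotStats(
        unit_health=base.unit_health * growth,
        unit_damage=base.unit_damage * growth,
        spawn_interval_s=max(1.0, base.spawn_interval_s / growth),
    )


if __name__ == "__main__":
    base = BotStats(unit_health=100, unit_damage=10, spawn_interval_s=5.0)
    for lvl in (1, 5, 10):
        print(lvl, scale_bot_difficulty(lvl, base))
```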

    Affective Game Computing: A Survey

    Full text link
    This paper surveys the current state of the art in affective computing principles, methods and tools as applied to games. We review this emerging field, namely affective game computing, through the lens of the four core phases of the affective loop: game affect elicitation, game affect sensing, game affect detection and game affect adaptation. In addition, we provide a taxonomy of terms, methods and approaches used across the four phases of the affective game loop and situate the field within this taxonomy. We continue with a comprehensive review of available affect data collection methods with regard to gaming interfaces, sensors, annotation protocols, and available corpora. The paper concludes with a discussion of the current limitations of affective game computing and our vision for the most promising future research directions in the field.
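    The four phases of the affective loop named in the abstract (elicitation, sensing, detection, adaptation) can be pictured as a simple processing cycle. The sketch below is a hypothetical skeleton of that cycle; the class and method names and the stand-in heart-rate "sensor" are assumptions for illustration, not an API from the survey.

```python
# Hypothetical skeleton of the four-phase affective game loop
# (elicitation -> sensing -> detection -> adaptation).
# All names and the random "sensor" are illustrative assumptions.
import random


class AffectiveGameLoop:
    def elicit(self, game_state: dict) -> dict:
        # Game content (events, difficulty, narrative) elicits player affect.
        game_state["event"] = "boss_encounter"
        return game_state

    def sense(self, game_state: dict) -> dict:
        # Stand-in for physiological / behavioural sensors (e.g. heart rate).
        return {"heart_rate": random.uniform(60, 120), "input_rate": random.random()}

    def detect(self, signals: dict) -> str:
        # Map raw signals to an affective state; a trained model in practice.
        return "stressed" if signals["heart_rate"] > 100 else "calm"

    def adapt(self, game_state: dict, affect: str) -> dict:
        # Close the loop: adjust the game in response to detected affect.
        game_state["difficulty"] = "easier" if affect == "stressed" else "normal"
        return game_state

    def step(self, game_state: dict) -> dict:
        state = self.elicit(game_state)
        affect = self.detect(self.sense(state))
        return self.adapt(state, affect)


if __name__ == "__main__":
    print(AffectiveGameLoop().step({"difficulty": "normal"}))
```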

    Emotion estimation in crowds: a machine learning approach

    Get PDF

    Emotion recognition through visual signal analysis

    Full text link
    Bachelor's thesis in Telecommunication Technologies and Services Engineering. This thesis focuses on automatic emotion recognition from facial expressions. The final goal is the development of an application that detects emotions in real time and allows different recognition techniques to be evaluated comparatively, serving as a basis for future work. The system distinguishes seven emotions: disgust, anger, happiness, fear, neutral, surprise, and sadness. This point was reached progressively: after studying the state of the art and evaluating several existing systems, improvements were made to obtain more conclusive results. Following those evaluations, a baseline system centered on deep learning was chosen, so feature extraction and classification are performed in a single block. That system was improved by modifying the databases used and merging them into a single one, which stands out for how balanced it is across the seven emotions. This decision arose because, in deep learning, having good training data is a truly decisive factor in the results, and it was a premise that the previous works analyzed did not fulfil.
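    Since the system above classifies a face into seven emotions with a single deep-learning block handling both feature extraction and classification, a compact CNN is a natural illustration. The sketch below is a hedged example of such a classifier; the layer sizes and the 48x48 grayscale input (common in public facial-expression datasets) are assumptions, not the architecture used in the thesis.

```python
# Minimal sketch of a CNN that classifies a face crop into the seven
# emotions mentioned in the abstract. Architecture and the 48x48
# grayscale input size are assumptions.
import torch
import torch.nn as nn

EMOTIONS = ["disgust", "anger", "happiness", "fear", "neutral", "surprise", "sadness"]


class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = EmotionCNN()
    face = torch.randn(1, 1, 48, 48)           # one 48x48 grayscale face crop
    probs = torch.softmax(model(face), dim=1)  # per-emotion probabilities
    print(EMOTIONS[int(probs.argmax())])
```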

    Multimodal Sensing and Data Processing for Speaker and Emotion Recognition using Deep Learning Models with Audio, Video and Biomedical Sensors

    Full text link
    The focus of the thesis is on Deep Learning methods and their applications to multimodal data, with a potential to explore the associations between modalities and replace missing and corrupt ones if necessary. We have chosen two important real-world applications that need to deal with multimodal data: 1) speaker recognition and identification; 2) facial expression recognition and emotion detection. The first part of our work assesses the effectiveness of speech-related sensory data modalities and their combinations in speaker recognition using deep learning models. First, the role of electromyography (EMG) is highlighted as a unique biometric sensor in improving audio-visual speaker recognition or as a substitute in noisy or poorly lit environments. Secondly, the effectiveness of deep learning is empirically confirmed through its higher robustness to all types of features in comparison to a number of commonly used baseline classifiers. Not only do deep models outperform the baseline methods, but their power also increases when they integrate multiple modalities, as different modalities contain information on different aspects of the data, especially between EMG and audio. Interestingly, our deep learning approach is word-independent. In addition, the EMG, audio, and visual parts of the samples from each speaker do not need to match. This increases the flexibility of our method in using multimodal data, particularly if one or more modalities are missing. With a dataset of 23 individuals speaking 22 words five times, we show that EMG can replace the audio/visual modalities, and when combined, significantly improve the accuracy of speaker recognition. The second part describes a study on automated emotion recognition using four different modalities: audio, video, electromyography (EMG), and electroencephalography (EEG). We collected a dataset by recording the four modalities as 12 human subjects expressed six different emotions or maintained a neutral expression. Three different aspects of emotion recognition were investigated: model selection, feature selection, and data selection. Both generative models (DBNs) and discriminative models (LSTMs) were applied to the four modalities, and from these analyses we conclude that LSTMs are better for audio and video together with their corresponding sophisticated feature extractors (MFCC and CNN), whereas DBNs are better for both EMG and EEG. By examining these signals at different stages (pre-speech, during-speech, and post-speech) of the current and following trials, we have found that the most effective stages for emotion recognition from EEG occur after the emotion has been expressed, suggesting that the neural signals conveying an emotion are long-lasting.
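    As an illustration of one branch of the setup above, the sketch below shows an LSTM classifying emotion from a sequence of MFCC frames extracted from audio, the pairing the abstract reports works best for that modality. The layer sizes, the 13-coefficient MFCC dimensionality, and the seven output classes (six emotions plus neutral) are assumptions rather than the thesis configuration.

```python
# Illustrative sketch: an LSTM classifying emotion from a sequence of
# MFCC frames. Layer sizes and dimensions are assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 7  # six emotions plus neutral, as in the collected dataset


class AudioEmotionLSTM(nn.Module):
    def __init__(self, n_mfcc: int = 13, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_mfcc, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_CLASSES)

    def forward(self, mfcc_seq: torch.Tensor) -> torch.Tensor:
        # mfcc_seq: (batch, time_frames, n_mfcc)
        _, (h_n, _) = self.lstm(mfcc_seq)
        return self.head(h_n[-1])  # classify from the final hidden state


if __name__ == "__main__":
    model = AudioEmotionLSTM()
    utterance = torch.randn(1, 200, 13)  # ~2 s of MFCC frames (assumed)
    print(model(utterance).shape)        # -> torch.Size([1, 7])
```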

    Towards an “In-the-Wild” Emotion Dataset Using a Game-Based Framework

    No full text

    Evolutionary deep learning

    Get PDF
    The primary objective of this thesis is to investigate whether evolutionary concepts can improve the performance, speed and convenience of algorithms in various active areas of machine learning research. Deep neural networks are exhibiting an explosion in the number of parameters that need to be trained, as well as in the number of permutations of possible network architectures and hyper-parameters. There is little guidance on how to choose these, and brute-force experimentation is prohibitively time-consuming. We show that evolutionary algorithms can help tame this explosion of freedom, by developing an algorithm that robustly evolves near-optimal deep neural network architectures and hyper-parameters across a wide range of image and sentiment classification problems. We further develop an algorithm that automatically determines whether a given data science problem is of classification or regression type, successfully choosing the correct problem type with more than 95% accuracy. Together these algorithms show that a great deal of the current "art" in the design of deep learning networks, and in the job of the data scientist, can be automated. Having discussed the general problem of optimising deep learning networks, the thesis moves on to a specific application: the automated extraction of human sentiment from text and images of human faces. Our results reveal that our approach is able to outperform several public and/or commercial text sentiment analysis algorithms using an evolutionary algorithm that learned to encode and extend sentiment lexicons. A second analysis looked at using evolutionary algorithms to estimate text sentiment while simultaneously compressing text data. An extensive analysis of twelve sentiment datasets reveals that accurate compression is possible with only a 3.3% loss in classification accuracy even at 75% compression of text size, which is useful in environments where data volumes are a problem. Finally, the thesis presents improvements to automated sentiment analysis of human faces to identify emotion, an area where there has been a tremendous amount of progress using convolutional neural networks. We provide a comprehensive critique of past work, highlight recommendations and list some open, unanswered questions in facial expression recognition using convolutional neural networks. One serious challenge when implementing such networks for facial expression recognition is the large number of trainable parameters, which results in long training times. We propose a novel method based on evolutionary algorithms to reduce the number of trainable parameters whilst simultaneously retaining classification performance, and in some cases achieving superior performance. We are robustly able to reduce the number of parameters on average by 95% with no loss in classification accuracy. Overall, our analyses show that evolutionary algorithms are a valuable addition to machine learning in the deep learning era: automating, compressing and/or improving results significantly, depending on the desired goal.
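    The core idea of evolving network architectures and hyper-parameters can be illustrated with a small mutation-and-selection loop over candidate configurations. The sketch below is a minimal, hypothetical version of such a search; the search space, mutation rule, and toy fitness function stand in for the actual train-and-validate step and are not the algorithm from the thesis.

```python
# Minimal sketch of an evolutionary search over deep-network hyper-parameters:
# a population of configurations is mutated and selected on a fitness score.
# The search space, mutation rule, and toy fitness function are assumptions.
import random

SEARCH_SPACE = {
    "layers": [1, 2, 3, 4],
    "units": [32, 64, 128, 256],
    "lr": [1e-4, 3e-4, 1e-3, 3e-3],
}


def random_config():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}


def mutate(cfg):
    child = dict(cfg)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child


def fitness(cfg):
    # Placeholder: in practice, train the network and return validation accuracy.
    return -abs(cfg["layers"] - 3) - abs(cfg["units"] - 128) / 128 - abs(cfg["lr"] - 1e-3)


def evolve(pop_size=10, generations=20, elite=3):
    population = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - elite)]
    return max(population, key=fitness)


if __name__ == "__main__":
    print(evolve())
```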