126 research outputs found

    Using Online Role-playing Games for Entrepreneurship Training

    This edited collection of chapters explores the application, potential and challenges of game-based learning and gamification across multiple disciplines and sectors, including psychology, education, business, history, languages and the …

    Conceptualising, operationalising and measuring the player experience in videogames

    The player experience is at the core of videogame play. Understanding the facets of player experience presents many research challenges, as the phenomenon sits at the intersection of psychology, design, human-computer interaction, sociology, and physiology. This workshop brings together an interdisciplinary group of researchers to systematically and rigorously analyse all aspects of the player experience. Methods and tools for conceptualising, operationalising and measuring the player experience form the core of this research. Our aim is to take a holistic approach to identifying, adapting and extending theories and models of the player experience, to understand how these theories and models interact, overlap and differ, and to construct a unified vision for future research.

    Stress and heart rate: significant parameters and their variations

    The aim of this paper is to identify the heart rate parameters with the highest significance when a group of people performs a task under stress. To accomplish this, a computer application with arithmetic and memory activities was designed to drive subjects through different stages of activity and stress. Tests consist of initial and final rest periods and three task phases of incrementally stressful level. The electrocardiogram is measured in each state and parameters are extracted from it. A statistical study using analysis of variance (ANOVA) determines which parameters are the most significant. It is concluded that the median of the RR intervals is the parameter that best determines the state of stress. Funded by the Regional Government of Andalusia (p08-TIC-3631).
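The pipeline this abstract describes, extracting a per-phase RR statistic and testing it with a one-way ANOVA across phases, can be sketched as follows. The signal values, sample sizes, and phase means below are invented for illustration, not the paper's data:

```python
import numpy as np
from scipy import stats

# Hypothetical RR intervals (ms) for a rest phase and two increasingly
# stressful task phases; means and spreads are illustrative assumptions.
rng = np.random.default_rng(0)
rest = rng.normal(850, 40, 60)
stress1 = rng.normal(780, 40, 60)
stress2 = rng.normal(720, 40, 60)

# Median RR per phase -- the parameter the paper found most discriminative
medians = [np.median(p) for p in (rest, stress1, stress2)]

# One-way ANOVA: do the phases differ significantly on this parameter?
f_stat, p_value = stats.f_oneway(rest, stress1, stress2)
```

A small p-value here indicates that at least one phase differs, mirroring the paper's use of ANOVA to rank parameters by significance.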

    EEG Based Emotion Monitoring Using Wavelet and Learning Vector Quantization

    Emotion identification is needed, for example, in Brain Computer Interface (BCI) applications and in emotional therapy and medical rehabilitation. Some emotional states, such as excited, relaxed and sad, can be characterised in the frequency content of the EEG signal, and the signal extracted at certain frequencies is useful for distinguishing these three emotional states. Classifying the EEG signal in real time requires extraction methods that increase class distinction and identification methods with fast computation. This paper proposes real-time human emotion monitoring using the Wavelet transform and Learning Vector Quantization (LVQ). Before machine learning, training data were collected from 10 subjects, 10 trials, 3 classes and 16 segments (480 data sets in total). Each data set, covering 10 seconds, was decomposed into Alpha, Beta, and Theta waves using the Wavelet transform; these became the input to the LVQ classifier for the three emotional states: excited, relaxed, and sad. The results show that using the Wavelet transform improved accuracy from 72% to 87%, and that increasing the amount of training data increased accuracy further. The system was integrated with a wireless EEG headset to monitor the emotional state in real time, updating every 10 seconds; classification takes 0.44 seconds, which is negligible relative to the 10-second window.
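The LVQ step described above can be sketched with a minimal LVQ1 implementation over band-power features. The cluster means, feature scaling, and training schedule below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def lvq1_train(X, y, n_epochs=30, lr=0.1):
    """Minimal LVQ1: one prototype per class, moved toward correctly
    classified samples and away from misclassified ones."""
    classes = np.unique(y)
    # Initialise each prototype at its class mean
    protos = np.array([X[y == c].mean(axis=0) for c in classes])
    for epoch in range(n_epochs):
        alpha = lr * (1 - epoch / n_epochs)  # linearly decaying rate
        for xi, yi in zip(X, y):
            w = np.argmin(((protos - xi) ** 2).sum(axis=1))  # winner
            sign = 1.0 if classes[w] == yi else -1.0
            protos[w] += sign * alpha * (xi - protos[w])
    return classes, protos

def lvq1_predict(X, classes, protos):
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]

# Synthetic per-segment band powers (Alpha, Beta, Theta) for the three
# emotional states; the cluster means are invented for illustration.
rng = np.random.default_rng(1)
means = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
X = np.vstack([rng.normal(m, 0.3, (40, 3)) for m in means])
y = np.repeat([0, 1, 2], 40)

classes, protos = lvq1_train(X, y)
acc = (lvq1_predict(X, classes, protos) == y).mean()
```

Because prediction is a single nearest-prototype lookup, LVQ fits the paper's requirement of identification methods with fast computation.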

    A taxonomy and state of the art revision on affective games

    Affective Games are a sub-field of Affective Computing that studies how to design videogames able to react to the emotions expressed by the player, as well as to provoke desired emotions in them. Achieving those goals requires research on how to measure and detect human emotions using a computer, and on how to adapt videogames to the perceived emotions in order to provoke the desired ones in players. This work presents a taxonomy for research on affective games centred on the aforementioned issues. We also provide a review of the most relevant published works known to the authors in this area. Finally, we analyse and discuss which important research problems are still open and might be tackled by future investigations in the area of Affective Games. This work has been co-funded by the following research projects: EphemeCH (TIN2014-56494-C4-{1,4}-P) and DeepBio (TIN2017-85727-C4-3-P) by the Spanish Ministry of Economy and Competitivity, under the European Regional Development Fund FEDER, and the Justice Programme of the European Union (2014–2020) 723180 – RiskTrack – JUST-2015-JCOO-AG/JUST-2015-JCOO-AG-

    Learning deep physiological models of affect

    Feature extraction and feature selection are crucial phases in the process of affective modeling. Both, however, incorporate substantial limitations that hinder the development of reliable and accurate models of affect. For the purpose of modeling affect manifested through physiology, this paper builds on recent advances in machine learning with deep learning (DL) approaches. The efficiency of DL algorithms that train artificial neural network models is tested and compared against standard feature extraction and selection approaches followed in the literature. Results on a game data corpus — containing players’ physiological signals (i.e. skin conductance and blood volume pulse) and subjective self-reports of affect — reveal that DL outperforms manual ad-hoc feature extraction as it yields significantly more accurate affective models. Moreover, DL meets and even outperforms affective models boosted by automatic feature selection in several of the scenarios examined. As the DL method is generic and applicable to any affective modeling task, the key findings of the paper suggest that ad-hoc feature extraction — and, to a lesser degree, feature selection — could be bypassed. The authors would like to thank Tobias Mahlmann for his work on the development and administration of the cluster used to run the experiments. Special thanks for proofreading go to Yana Knight. Thanks also go to the Theano development team, to all participants in our experiments, and to Ubisoft, NSERC and Canada Research Chairs for funding. This work is funded, in part, by the ILearnRW (project no. 318803) and the C2Learn (project no. 318480) FP7 ICT EU projects.
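To make the baseline concrete, the manual ad-hoc feature extraction that DL is compared against typically reduces each physiological signal to a handful of statistics. The feature set, sampling rate, and synthetic signals below are illustrative assumptions, not the paper's actual features:

```python
import numpy as np

def adhoc_features(sc, bvp, fs=32.0):
    """Hand-crafted statistics of skin conductance (SC) and blood volume
    pulse (BVP) -- the kind of ad-hoc features DL is compared against."""
    t = np.arange(len(sc)) / fs
    return {
        "sc_mean": sc.mean(),
        "sc_std": sc.std(),
        "sc_slope": np.polyfit(t, sc, 1)[0],  # overall SC trend
        "bvp_mean": bvp.mean(),
        "bvp_range": bvp.max() - bvp.min(),
    }

# Ten seconds of synthetic signals at an assumed 32 Hz sampling rate
fs = 32.0
t = np.arange(0, 10, 1 / fs)
sc = 2.0 + 0.1 * t                 # slowly rising skin conductance
bvp = np.sin(2 * np.pi * 1.2 * t)  # ~72 bpm pulse-like waveform
feats = adhoc_features(sc, bvp, fs=fs)
```

The paper's point is that a deep network learning directly from the raw signals can replace such hand-picked statistics.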

    Emotion and Stress Recognition Related Sensors and Machine Learning Technologies

    This book includes impactful chapters which present scientific concepts, frameworks, architectures and ideas on sensing technologies and machine learning techniques. These are relevant in tackling the following challenges: (i) the field readiness and use of intrusive sensor systems and devices for capturing biosignals, including EEG sensor systems, ECG sensor systems and electrodermal activity sensor systems; (ii) the quality assessment and management of sensor data; (iii) data preprocessing, noise filtering and calibration concepts for biosignals; (iv) the field readiness and use of nonintrusive sensor technologies, including visual sensors, acoustic sensors, vibration sensors and piezoelectric sensors; (v) emotion recognition using mobile phones and smartwatches; (vi) body area sensor networks for emotion and stress studies; (vii) the use of experimental datasets in emotion recognition, including dataset generation principles and concepts, quality assurance and emotion elicitation material and concepts; (viii) machine learning techniques for robust emotion recognition, including graphical models, neural network methods, deep learning methods, statistical learning and multivariate empirical mode decomposition; (ix) subject-independent emotion and stress recognition concepts and systems, including facial expression-based systems, speech-based systems, EEG-based systems, ECG-based systems, electrodermal activity-based systems, multimodal recognition systems and sensor fusion concepts and (x) emotion and stress estimation and forecasting from a nonlinear dynamical system perspective.

    Psychophysiological analysis of a pedagogical agent and robotic peer for individuals with autism spectrum disorders.

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by ongoing problems in social interaction and communication, and engagement in repetitive behaviors. According to the Centers for Disease Control and Prevention, an estimated 1 in 68 children in the United States has ASD. Mounting evidence shows that many of these individuals display an interest in social interaction with computers and robots and, in general, feel comfortable spending time in such environments. It is known that the subtlety and unpredictability of people’s social behavior are intimidating and confusing for many individuals with ASD. Computerized learning environments and robots, however, provide a predictable, dependable, and less complicated environment, where the interaction complexity can be adjusted so as to account for these individuals’ needs. The first phase of this dissertation presents an artificial-intelligence-based tutoring system which uses an interactive computer character as a pedagogical agent (PA) that simulates a human tutor teaching sight word reading to individuals with ASD. This phase examines the efficacy of an instructional package comprised of an autonomous pedagogical agent, automatic speech recognition, and an evidence-based instructional procedure referred to as constant time delay (CTD). A concurrent multiple-baseline across-participants design is used to evaluate the efficacy of intervention. Additionally, post-treatment probes are conducted to assess maintenance and generalization. The results suggest that all three participants acquired and maintained new sight words and demonstrated generalized responding. The second phase of this dissertation describes the augmentation of the tutoring system developed in the first phase with an autonomous humanoid robot which serves the instructional role of a peer for the student. In this tutoring paradigm, the robot adopts a peer metaphor, serving as a fellow learner alongside the student.
With the introduction of the robotic peer (RP), the traditional dyadic interaction in tutoring systems is extended to a novel triadic interaction in order to enhance the social richness of the tutoring system, and to facilitate learning through peer observation. This phase evaluates the feasibility and effects of using PA-delivered sight word instruction, based on a CTD procedure, within a small-group arrangement including a student with ASD and the robotic peer. A multiple-probe design across word sets, replicated across three participants, is used to evaluate the efficacy of intervention. The findings illustrate that all three participants acquired, maintained, and generalized all the words targeted for instruction. Furthermore, they learned a high percentage (94.44% on average) of the non-target words exclusively instructed to the RP. The data show that not only did the participants learn non-target words by observing the instruction to the RP but they also acquired their target words more efficiently and with fewer errors by the addition of an observational component to the direct instruction. The third and fourth phases of this dissertation focus on physiology-based modeling of the participants’ affective experiences during naturalistic interaction with the developed tutoring system. While computers and robots have begun to co-exist with humans and cooperatively share various tasks, they are still deficient in interpreting and responding to humans as emotional beings. Wearable biosensors that can be used for computerized emotion recognition offer great potential for addressing this issue. The third phase presents a Bluetooth-enabled eyewear – EmotiGO – for unobtrusive acquisition of a set of physiological signals, i.e., skin conductivity, photoplethysmography, and skin temperature, which can be used as autonomic readouts of emotions.
EmotiGO is unobtrusive and sufficiently lightweight to be worn comfortably without interfering with the users’ usual activities. This phase presents the architecture of the device and results from testing that verify its effectiveness against an FDA-approved system for physiological measurement. The fourth and final phase attempts to model the students’ engagement levels using their physiological signals collected with EmotiGO during naturalistic interaction with the tutoring system developed in the second phase. Several physiological indices are extracted from each of the signals. The students’ engagement levels during the interaction with the tutoring system are rated by two trained coders using the video recordings of the instructional sessions. Supervised pattern recognition algorithms are subsequently used to map the physiological indices to the engagement scores. The results indicate that the trained models are successful at classifying participants’ engagement levels with a mean classification accuracy of 86.50%. These models are an important step toward an intelligent tutoring system that can dynamically adapt its pedagogical strategies to the affective needs of learners with ASD.
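The final mapping from physiological indices to coder-rated engagement can be sketched with a simple supervised classifier evaluated by leave-one-out cross-validation, a common protocol for small physiological data sets. The abstract does not name the algorithm used, so a k-nearest-neighbour classifier stands in here, and the indices, labels, and cluster means are invented for illustration:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-nearest-neighbour majority vote."""
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(y_train[i]).argmax() for i in idx])

# Hypothetical per-session indices (e.g. mean skin conductance, heart
# rate) with coder-assigned labels: 0 = low engagement, 1 = high.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (30, 2)),
               rng.normal(3.0, 1.0, (30, 2))])
y = np.repeat([0, 1], 30)

# Leave-one-out cross-validation: hold out one session, train on the
# rest, and score the held-out session.
correct = 0
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    correct += int(knn_predict(X[mask], y[mask], X[i:i + 1])[0] == y[i])
loo_acc = correct / len(X)
```

The resulting cross-validated accuracy plays the same role as the 86.50% figure reported in the dissertation: an estimate of how well the physiological indices predict unseen engagement ratings.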