
    Predictive biometrics: A review and analysis of predicting personal characteristics from biometric data

    Interest in the exploitation of soft biometrics information has continued to develop over the last decade or so. In comparison with traditional biometrics, which focuses principally on person identification, the idea of soft biometrics processing is to study the utilisation of more general information regarding a system user, which is not necessarily unique. There are increasing indications that this type of data will have great value in providing complementary information for user authentication. However, the authors have also seen a growing interest in broadening the predictive capabilities of biometric data, encompassing both easily definable characteristics such as subject age and, most recently, 'higher level' characteristics such as emotional or mental states. This study presents a selective review of the predictive capabilities, in the widest sense, of biometric data processing, providing an analysis of the key issues still to be adequately addressed if this concept of predictive biometrics is to be fully exploited in the future.
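
    As a rough illustration of the kind of prediction the review surveys, the sketch below trains a standard classifier to predict a soft-biometric label (here, a hypothetical age bracket) from generic biometric feature vectors. The features, labels and data are synthetic placeholders, not drawn from the study.

```python
# Illustrative sketch only: predicting a soft-biometric attribute (an assumed
# age bracket) from generic biometric descriptors. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))      # placeholder gait/keystroke/face descriptors
y = rng.integers(0, 3, size=300)    # hypothetical age brackets: 0=18-30, 1=31-50, 2=51+

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")   # ~chance level on random data
```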

    What does touch tell us about emotions in touchscreen-based gameplay?

    Nowadays, more and more people play games on touch-screen mobile phones. This phenomenon raises a very interesting question: does touch behaviour reflect the player's emotional state? If so, this would be valuable not only as an evaluation indicator for game designers, but also for real-time personalization of the game experience. Psychology studies on acted touch behaviour show the existence of discriminative affective profiles. In this paper, finger-stroke features during gameplay on an iPod were extracted and their discriminative power analysed. Based on touch behaviour, machine learning algorithms were used to build systems for automatically discriminating between four emotional states (Excited, Relaxed, Frustrated, Bored), two levels of arousal and two levels of valence. The results were encouraging, reaching between 69% and 77% correct discrimination between the four emotional states. Higher results (~89%) were obtained for discriminating between two levels of arousal and two levels of valence.
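
    A minimal sketch of one plausible pipeline in the spirit of this work: compute simple per-stroke features (path length, duration, speed, mean pressure) and feed them to an off-the-shelf classifier for the four emotion labels. The feature set, classifier choice and data are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: per-stroke feature extraction plus a standard classifier.
# Strokes and labels below are synthetic, purely to show the shape of the pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def stroke_features(xy, t, pressure):
    """xy: (n,2) touch coordinates, t: (n,) timestamps, pressure: (n,) values."""
    path = np.linalg.norm(np.diff(xy, axis=0), axis=1).sum()   # stroke length
    duration = t[-1] - t[0]
    return [path, duration, path / max(duration, 1e-6), pressure.mean()]

rng = np.random.default_rng(1)
labels = ["Excited", "Relaxed", "Frustrated", "Bored"]
X, y = [], []
for i in range(400):                          # synthetic strokes for illustration
    n = rng.integers(5, 30)
    xy = np.cumsum(rng.normal(size=(n, 2)), axis=0)
    t = np.sort(rng.uniform(0, 1.5, size=n))
    p = rng.uniform(0.2, 1.0, size=n)
    X.append(stroke_features(xy, t, p))
    y.append(labels[i % 4])

model = make_pipeline(StandardScaler(), SVC())
print(cross_val_score(model, np.array(X), y, cv=5).mean())
```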

    Linking recorded data with emotive and adaptive computing in an eHealth environment

    Telecare, and particularly lifestyle monitoring, currently relies on the ability to detect and respond to changes in individual behaviour using data derived from sensors around the home. This means that a significant aspect of behaviour, that of an individual's emotional state, is not accounted for in reaching a conclusion as to the form of response required. The linked concepts of emotive and adaptive computing offer an opportunity to include information about emotional state, and the paper considers how current developments in this area have the potential to be integrated within telecare and other areas of eHealth. In doing so, it looks at the development and current state of the art of both emotive and adaptive computing, including their conceptual background, and places them into an overall eHealth context for application and development.
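
    Purely as a hypothetical illustration of the argument, the snippet below sketches a telecare response decision that weighs a behavioural anomaly score from home sensors together with an estimated emotional state, rather than behaviour alone. The state names, thresholds and actions are invented for the example and are not taken from the paper.

```python
# Hypothetical sketch: folding an emotion estimate into a telecare response rule.
# All labels, thresholds and actions are placeholders for illustration only.
def telecare_response(anomaly_score: float, emotional_state: str) -> str:
    negative_states = {"distressed", "anxious", "low_mood"}
    if anomaly_score > 0.8:
        return "alert_carer"                     # strong behavioural anomaly alone
    if anomaly_score > 0.5 and emotional_state in negative_states:
        return "alert_carer"                     # emotion tips a borderline case
    if emotional_state in negative_states:
        return "schedule_checkin"
    return "no_action"

print(telecare_response(0.6, "anxious"))         # -> alert_carer
```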

    Using Keystroke Dynamics in a Multi-Agent System for User Guiding in Online Social Networks

    Nowadays there is a strong integration of online social platforms and applications with our daily life. Such interactions can give rise to risks and compromise the information we share, thereby leading to privacy issues. In this work, a proposal has been designed and implemented that uses one software agent performing sentiment analysis and another performing stress analysis on keystroke dynamics data. These new agents have been integrated into a multi-agent system (MAS) for guiding users interacting in online social environments, which already has agents for sentiment and stress analysis on text, and we propose a combined analysis using the different agents. The MAS analyzes the states of the users while they are interacting and warns them if the messages they write are deemed negative, with the aim of preventing potential negative outcomes on social network sites (SNSs). We performed experiments in the laboratory with our private SNS Pesedia over a period of one month, gathering text messages and keystroke dynamics data, and used the resulting datasets to train the artificial neural networks (ANNs) of the agents. A set of experiments was performed to discover which analysis detects user states that propagate more in the SNS, and may therefore be more informative for the MAS. Our study will help develop future intelligent systems that use data from online social environments to guide users and help them in their social experience.
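
    A hedged sketch of how the combined analysis might look, not the authors' implementation: one agent scores stress from keystroke-dynamics statistics with a small neural network, another scores sentiment from the message text, and the system warns the user when both assessments are negative. The features, models, thresholds and helper names are placeholders.

```python
# Hedged sketch of a combined stress + sentiment check before posting a message.
# The stress model is trained on synthetic dwell/flight statistics; the sentiment
# agent is a trivial stand-in. None of this reproduces the paper's actual agents.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X_train = rng.normal(size=(200, 4))      # [mean dwell, std dwell, mean flight, std flight]
y_train = rng.integers(0, 2, size=200)   # 0 = relaxed, 1 = stressed (synthetic labels)
stress_agent = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X_train, y_train)

def sentiment_agent(text: str) -> float:
    """Stand-in for the text-sentiment agent: returns a score in [-1, 1]."""
    negative_words = {"hate", "angry", "awful", "stupid"}
    hits = sum(word in negative_words for word in text.lower().split())
    return -min(hits, 3) / 3.0

def should_warn(keystroke_feats, text) -> bool:
    """Warn only when the user seems stressed AND the message reads negative."""
    stressed = stress_agent.predict([keystroke_feats])[0] == 1
    return stressed and sentiment_agent(text) < -0.3

print(should_warn([0.5, 0.1, 0.9, 0.3], "I hate this stupid post"))
```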

    A Review of Emotion Recognition Methods from Keystroke, Mouse, and Touchscreen Dynamics

    Emotion can be defined as a subject's organismic response to an external or internal stimulus event. The response may be reflected in pattern changes in the subject's facial expression, gesture, gait, eye movement, physiological signals, speech and voice, keystroke, and mouse dynamics, etc. This suggests that, on the one hand, emotions can be measured/recognized from these responses and, on the other hand, they can be facilitated/regulated by external stimulus events, situation changes or internal motivation changes. It is well known that emotion has a close relationship with both physical and mental health and usually affects an individual's and a team's work performance; emotion recognition is therefore an important prerequisite for emotion regulation towards better emotional states and work performance. The primary problem in emotion recognition is how to recognize a subject's emotional states easily and accurately. Currently, there is a body of good research on emotion recognition from facial expression, gesture, gait, eye tracking, and other signals such as speech, voice and physiological measurements, but these approaches are all intrusive and obtrusive to some extent. In contrast, keystroke, mouse and touchscreen (KMT) dynamics data can be collected non-intrusively and unobtrusively as secondary data responding to primary physical actions. This paper therefore aims to review the state-of-the-art research on emotion recognition from KMT dynamics and to identify key research challenges, opportunities and a future research roadmap for reference. In addition, this paper answers the following six research questions (RQs): (1) what are the commonly used emotion elicitation methods and databases for emotion recognition? (2) which emotions can be recognized from KMT dynamics? (3) what key features are most appropriate for recognizing different specific emotions? (4) which classification methods are most effective for specific emotions? (5) what are the application trends of emotion recognition from KMT dynamics? (6) which application contexts are of greatest concern?
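
    As an illustration of two features the review's RQ3 is concerned with, the sketch below derives dwell time (key down to key up) and flight time (key up to next key down) from a stream of timestamped key events. The event format is an assumption for the example, not taken from any cited dataset.

```python
# Illustrative computation of keystroke-dynamics features: dwell and flight times.
# Event format (timestamp_ms, key, "down"|"up") is assumed for this sketch.
from typing import List, Tuple

events: List[Tuple[int, str, str]] = [
    (0, "h", "down"), (95, "h", "up"),
    (140, "i", "down"), (230, "i", "up"),
]

def dwell_and_flight(evts):
    downs, dwells, flights = {}, [], []
    last_up = None
    for t, key, kind in evts:
        if kind == "down":
            downs[key] = t
            if last_up is not None:
                flights.append(t - last_up)      # gap since previous key release
        else:
            dwells.append(t - downs.pop(key))    # how long the key was held
            last_up = t
    return dwells, flights

print(dwell_and_flight(events))                  # ([95, 90], [45])
```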

    Towards detecting programmers’ stress on the basis of keystroke dynamics


    Stress detection in computer users from keyboard and mouse dynamics

    Detecting stress in computer users, while technically challenging, is of the utmost importance in the workplace, especially now that remote working scenarios are becoming ubiquitous. In this context, cost-effective, subject-independent systems are needed that can be embedded in consumer devices and classify users' stress in a reliable and unobtrusive fashion. Leveraging keyboard and mouse dynamics is particularly appealing as it exploits readily available sensors. However, available studies have mostly been performed in laboratory conditions, and there is a lack of on-field investigations in closer-to-real-world settings. In this study, keyboard and mouse data from 62 volunteers were collected in the wild using a purpose-built Web application, designed to induce stress by asking each subject to perform 8 computer tasks under different stressful conditions. Applying Multiple Instance Learning (MIL) to Random Forest (RF) classification allowed the devised system to distinguish 3 stress-level classes from keyboard (76% accuracy) and mouse (63% accuracy) data. Classifiers were further evaluated via confusion matrices, precision, recall, and F1-score.
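
    A minimal sketch of one common way to pair Multiple Instance Learning with a Random Forest, not necessarily the scheme used in this study: each instance inherits its bag's stress label for training, and a bag is classified by averaging its instances' predicted probabilities. The data, features and aggregation rule are illustrative assumptions.

```python
# Hedged MIL-over-Random-Forest sketch: bag = one task session, instances =
# fixed-length windows of keyboard/mouse features. Data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
bags = [rng.normal(size=(rng.integers(5, 15), 6)) for _ in range(60)]
bag_labels = rng.integers(0, 3, size=60)          # 3 stress-level classes

# Instance-level training: every instance takes its bag's label.
X = np.vstack(bags)
y = np.concatenate([[lbl] * len(b) for b, lbl in zip(bags, bag_labels)])
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def predict_bag(bag):
    """Bag-level decision: average instance probabilities, take the argmax."""
    return rf.predict_proba(bag).mean(axis=0).argmax()

print([predict_bag(b) for b in bags[:5]], bag_labels[:5])
```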