Effect of Personality Traits on UX Evaluation Metrics: A Study on Usability Issues, Valence-Arousal and Skin Conductance
Personality affects the way a person feels and acts. This paper examines the
effect of personality traits, as operationalized by the Big-five questionnaire,
on the number, type, and severity of the identified usability issues,
physiological signals (skin conductance), and subjective emotional ratings
(valence-arousal). Twenty-four users interacted with a web service and then
participated in a retrospective thinking aloud session. Results revealed that
the number of usability issues is significantly affected by the Openness trait.
Emotional Stability significantly affects the type of reported usability
issues. Problem severity is not affected by any trait. Valence ratings are
significantly affected by Conscientiousness, whereas Agreeableness, Emotional
Stability and Openness significantly affect arousal ratings. Finally, Openness
has a significant effect on the number of detected peaks in the user's skin
conductance.
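The peak-counting measure used in the study above can be sketched as follows. This is a hypothetical illustration, not the paper's actual pipeline: the sampling rate, prominence threshold, and minimum peak separation are assumptions, and the trace is synthetic.

```python
# Sketch: counting skin-conductance response (SCR) peaks in a trace,
# the kind of measure the Openness effect above is reported on.
import numpy as np
from scipy.signal import find_peaks

def count_scr_peaks(signal, fs=4.0, min_amplitude=0.05, min_separation_s=1.0):
    """Count peaks in a skin-conductance trace (arbitrary units)."""
    peaks, _ = find_peaks(
        signal,
        prominence=min_amplitude,             # ignore tiny fluctuations
        distance=int(min_separation_s * fs),  # enforce a refractory period
    )
    return len(peaks)

# Synthetic 30 s trace at ~4 Hz: a flat baseline with three clear responses
t = np.linspace(0, 30, 121)
trace = (2.0 + 0.3 * np.exp(-((t - 5) ** 2))
             + 0.4 * np.exp(-((t - 15) ** 2))
             + 0.25 * np.exp(-((t - 25) ** 2)))
print(count_scr_peaks(trace))  # three responses detected
```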
Recognizing Human Affection: Smartphone Perspective
Touch-screen Smartphones have become an obligatory part of the lives of billions of people around the world. Understanding the affect, or emotional state, of the user enables efficient human-computer interaction. The Smartphone is one of the most frequently used electronic devices, and the number of applications developed for it is increasing day by day. Recognizing the user's emotions will lead to the development of emotion-aware applications. Service recommendations and intelligent user interfaces on Smartphones are other encouraging prospects for mobile application developers. In this paper we discuss state-of-the-art technologies for detecting human emotional states. We propose a methodology by which three different emotional states (positive, neutral, negative) of the user can be identified using the Smartphone's built-in sensors, such as the gyroscope and accelerometer, as well as additional sensors such as a pressure sensor. We analysed the interaction logs of Smartphone users and approximated different sensor values to recognize human emotions. Since the pressure values obtained from existing phones are not completely accurate, we introduced the use of a Force Sensitive Resistor (FSR) sensor to obtain more accurate pressure values.
The Grenoble System for the Social Touch Challenge at ICMI 2015
New technologies, and especially robotics, are moving towards more natural user interfaces. Work has been done on different modalities of interaction such as sight (visual computing) and audio (speech and audio recognition), but other modalities remain less researched. The touch modality is one of the least studied in HRI but could be valuable for naturalistic interaction. However, touch signals can vary in semantics. It is therefore necessary to be able to recognize touch gestures in order to make human-robot interaction even more natural. We propose a method to recognize touch gestures. This method was developed on the CoST corpus and then applied directly to the HAART dataset as a participation in the Social Touch Challenge at ICMI 2015. Our touch gesture recognition process is detailed in this article to make it reproducible by other research teams. Besides describing the feature set, we manually filtered the training corpus to produce two datasets. For the challenge, we submitted six different systems: a Support Vector Machine and a Random Forest classifier for the HAART dataset, and, for the CoST dataset, the same classifiers tested in two conditions, using either the full or the filtered training datasets. As reported by the organizers, our systems achieved the best correct-classification rates in this year's challenge (70.91% on HAART, 61.34% on CoST). Our performances are slightly better than those of the other participants but remain below previously reported state-of-the-art results.
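The challenge setup described above can be illustrated with a minimal sketch: training an SVM and a Random Forest on fixed-length feature vectors derived from touch-gesture frames. The data below are synthetic placeholders, not the CoST or HAART corpora, and the feature dimensionality is an assumption.

```python
# Sketch: SVM and Random Forest classifiers over gesture feature vectors,
# mirroring the two classifier families submitted to the challenge.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two toy gesture classes (e.g. "pat" vs "stroke"): 40 samples x 10 features each
X = np.vstack([rng.normal(0.0, 1.0, (40, 10)),
               rng.normal(3.0, 1.0, (40, 10))])
y = np.array([0] * 40 + [1] * 40)

svm = SVC(kernel="rbf").fit(X, y)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(svm.score(X, y), forest.score(X, y))
```

On real gesture data one would of course evaluate on a held-out test split rather than the training set; the training-set scores here only confirm the toy problem is learnable.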
What does touch tell us about emotions in touchscreen-based gameplay?
This is the post-print version of the article; the official published version is available from ACM (Copyright © 2012 ACM). It is posted here by permission of ACM for personal use, not for redistribution. Nowadays, more and more people play games on touch-screen mobile phones. This phenomenon raises a very interesting question: does touch behaviour reflect the player's emotional state? If so, this would be a valuable evaluation indicator not only for game designers, but also for real-time personalization of the game experience. Psychology studies on acted touch behaviour show the existence of discriminative affective profiles. In this paper, finger-stroke features during gameplay on an iPod were extracted and their discriminative power analysed. Based on touch behaviour, machine learning algorithms were used to build systems for automatically discriminating between four emotional states (Excited, Relaxed, Frustrated, Bored), two levels of arousal, and two levels of valence. The results were promising, reaching between 69% and 77% correct discrimination between the four emotional states. Higher rates (~89%) were obtained for discriminating between two levels of arousal and between two levels of valence.
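Finger-stroke feature extraction of the kind analysed above can be sketched simply. The specific features below (length, duration, mean speed, straightness) are illustrative assumptions; the paper's exact feature set is not reproduced here.

```python
# Sketch: simple geometric/temporal features from one finger stroke,
# given as a sequence of (x, y, t) touch samples.
import math

def stroke_features(points):
    """points: list of (x, y, t) touch samples for one stroke."""
    seg = [math.hypot(x2 - x1, y2 - y1)
           for (x1, y1, _), (x2, y2, _) in zip(points, points[1:])]
    length = sum(seg)                         # total path length
    duration = points[-1][2] - points[0][2]   # elapsed time
    direct = math.hypot(points[-1][0] - points[0][0],
                        points[-1][1] - points[0][1])
    return {
        "length": length,
        "duration": duration,
        "mean_speed": length / duration if duration else 0.0,
        "straightness": direct / length if length else 1.0,  # 1.0 = straight
    }

# A perfectly straight three-point stroke
feats = stroke_features([(0, 0, 0), (3, 4, 1), (6, 8, 2)])
print(feats)  # length 10.0, duration 2, mean_speed 5.0, straightness 1.0
```

Vectors of such per-stroke features are what a classifier like those in the paper would consume.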
Non-conventional keystroke dynamics for user authentication
This paper introduces an approach to user authentication using free-text keystroke dynamics that incorporates non-conventional keystroke features. Semi-timing features along with editing features are extracted from the user's typing stream. Decision trees were used to classify each user's data. In parallel, for comparison, support vector machines (SVMs) were also used for classification, in association with an ant colony optimization (ACO) feature-selection technique. The results obtained from this study are encouraging, as low false accept rates (FAR) and false reject rates (FRR) were achieved in the experimentation phase. This signifies that satisfactory overall system performance was achieved by using the typing attributes in the proposed approach. Thus, the use of non-conventional typing features improves the understanding of human typing behavior and therefore makes a significant contribution to the authentication system.
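The decision-tree side of the approach above can be sketched as follows. The feature names (dwell time, flight time) and the synthetic typing streams are assumptions for illustration, not the paper's semi-timing or editing features.

```python
# Sketch: classifying users by keystroke-timing statistics with a decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

def timing_features(key_down, key_up):
    """Summarize dwell times (hold duration) and flight times (inter-key gaps)."""
    down, up = np.asarray(key_down), np.asarray(key_up)
    dwell = up - down
    flight = down[1:] - up[:-1]
    return np.array([dwell.mean(), dwell.std(), flight.mean(), flight.std()])

def sample_user(mean_dwell, mean_flight, n_keys=30):
    """Generate one synthetic typing stream and return its feature vector."""
    down = np.cumsum(rng.normal(mean_dwell + mean_flight, 0.01, n_keys))
    up = down + rng.normal(mean_dwell, 0.005, n_keys)
    return timing_features(down, up)

# Two users with distinct typing rhythms, 25 typing samples each
X = np.array([sample_user(0.09, 0.12) for _ in range(25)]
             + [sample_user(0.14, 0.20) for _ in range(25)])
y = np.array([0] * 25 + [1] * 25)

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.score(X, y))
```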
Behavioral biometrics and ambient intelligence: New opportunities for context-aware applications
Ambient Intelligence has always been associated with the promise of exciting new applications, aware of users' needs and state, and proactive towards their goals. However, the acquisition of the information necessary to support such high-level learning and decision-making processes is not always straightforward. In this chapter we describe a multi-faceted smart environment for the acquisition of relevant contextual information about its users. This information, acquired transparently through the technological devices in the environment, supports the building of high-level knowledge about the users, including a quantification of aspects such as performance, attention, mental fatigue, and stress. The environment described is particularly suited to milieus such as workplaces and classrooms, in which this kind of information may be very important for the effective management of human resources, with advantages for organizations and individuals alike. (Funding: UID/CEC/00319/2013)
Improving the performance of free-text keystroke dynamics authentication by fusion
Free-text keystroke dynamics is invariably hampered by the huge amount of data needed to train the system. This problem is addressed in this paper by proposing a system that combines two methods, both of which reduce the training requirement for user authentication using free-text keystrokes. The two methods were fused to achieve error rates lower than those produced by each method separately. Two fusion schemes were applied: decision-level fusion and feature-level fusion. Feature-level fusion was performed by concatenating two sets of features before the learning stage: a timing feature set and a non-conventional feature set. Decision-level fusion was used to merge the outputs of the two methods by majority voting; one method is Support Vector Machines (SVMs) with Ant Colony Optimization (ACO) feature selection, and the other is decision trees (DTs). Even though the classifier using the features merged at feature level produced low error rates, its results were outperformed by those of the decision-level fusion scheme, which achieved the best performance of 0.00% False Accept Rate (FAR) and 0.00% False Reject Rate (FRR).
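The two fusion schemes described above can be sketched with toy data: feature-level fusion concatenates the timing and non-conventional feature sets before training, while decision-level fusion majority-votes the outputs of an SVM and a decision tree. All data, dimensions, and the use of sklearn's `VotingClassifier` are illustrative assumptions, not the paper's implementation.

```python
# Sketch: feature-level fusion (column-wise concatenation) followed by
# decision-level fusion (hard voting over an SVM and a decision tree).
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 60
# Two placeholder feature sets per sample: "timing" and "non-conventional"
timing = np.vstack([rng.normal(0, 1, (n // 2, 5)), rng.normal(2, 1, (n // 2, 5))])
nonconv = np.vstack([rng.normal(0, 1, (n // 2, 3)), rng.normal(2, 1, (n // 2, 3))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Feature-level fusion: concatenate the two feature sets
X_fused = np.hstack([timing, nonconv])

# Decision-level fusion: majority (hard) vote over the two classifiers
fusion = VotingClassifier(
    estimators=[("svm", SVC()), ("tree", DecisionTreeClassifier(random_state=0))],
    voting="hard",
)
fusion.fit(X_fused, y)
print(fusion.score(X_fused, y))
```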
Under pressure: sensing stress of computer users
Recognizing when computer users are stressed can help reduce their frustration and prevent a large variety of negative health conditions associated with chronic stress. However, measuring stress non-invasively and continuously at work remains an open challenge. This work explores the possibility of using a pressure-sensitive keyboard and a capacitive mouse to discriminate between stressful and relaxed conditions in a laboratory study. During a 30-minute session, 24 participants performed several computerized tasks consisting of expressive writing, text transcription, and mouse clicking. During the stressful conditions, the large majority of the participants showed significantly increased typing pressure (>79% of the participants) and more contact with the surface of the mouse (75% of the participants). We discuss the potential implications of this work and provide recommendations for future work.
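The within-subject comparison described above can be sketched as a paired test on per-participant mean typing pressure under the two conditions. The data below are synthetic placeholders, not study measurements, and the effect size is an assumption.

```python
# Sketch: paired comparison of mean typing pressure, relaxed vs. stressful,
# across 24 simulated participants.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n_participants = 24
relaxed = rng.normal(1.0, 0.1, n_participants)   # mean key pressure (arb. units)
stressed = relaxed + rng.normal(0.15, 0.05, n_participants)  # higher under stress

stat, p = ttest_rel(stressed, relaxed)
print(f"t = {stat:.2f}, p = {p:.4g}")
print("fraction with increased pressure:", (stressed > relaxed).mean())
```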