
    What does touch tell us about emotions in touchscreen-based gameplay?

    Nowadays, more and more people play games on touch-screen mobile phones. This phenomenon raises an interesting question: does touch behaviour reflect the player's emotional state? If so, this would be a valuable evaluation indicator for game designers, and could also support real-time personalization of the game experience. Psychology studies on acted touch behaviour show the existence of discriminative affective profiles. In this paper, finger-stroke features during gameplay on an iPod were extracted and their discriminative power analysed. Based on touch behaviour, machine learning algorithms were used to build systems for automatically discriminating between four emotional states (Excited, Relaxed, Frustrated, Bored), two levels of arousal, and two levels of valence. The results were very interesting, reaching between 69% and 77% correct discrimination between the four emotional states. Higher rates (~89%) were obtained for discriminating between two levels of arousal and between two levels of valence.
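
    A minimal sketch of the kind of pipeline the abstract describes: per-stroke touch features fed to an off-the-shelf classifier that discriminates the four emotional states. The feature names, the random data, and the SVM choice below are illustrative assumptions, not the paper's actual feature set or models.

        # Illustrative sketch only: features and data are invented placeholders.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Hypothetical per-stroke features: pressure, stroke length, speed, duration.
        X = rng.normal(size=(400, 4))
        # Four emotional states: 0=Excited, 1=Relaxed, 2=Frustrated, 3=Bored.
        y = rng.integers(0, 4, size=400)

        clf = SVC(kernel="rbf")
        scores = cross_val_score(clf, X, y, cv=5)
        print("Mean cross-validated accuracy:", scores.mean())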

    GUI system for Elders/Patients in Intensive Care

    In old age, some people need special care if they suffer from specific conditions, as they can have a stroke during their normal daily routine. Patients of any age who are unable to walk also need personal care, but for this they either have to be in hospital or someone such as a nurse must stay with them. This is costly in terms of money and manpower, since a person is needed for 24x7 care. To help in this respect, we propose a vision-based system that takes input from the patient and sends information to a specified person, who may not currently be in the patient's room. This reduces the need for manpower, and continuous monitoring is no longer required. The system uses MS Kinect for gesture detection for better accuracy and can easily be installed at home or in a hospital. It provides a GUI for simple usage and gives visual and audio feedback to the user. The system works on natural hand interaction, needs no training before use, and does not require the user to wear any glove or color strip. Comment: In proceedings of the 4th IEEE International Conference on International Technology Management Conference, Chicago, IL USA, 12-15 June, 201
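
    A minimal sketch of the gesture-to-notification flow the abstract outlines. Gesture detection itself (done with MS Kinect in the paper) is stubbed out; the gesture labels, messages, and notify() helper are hypothetical placeholders, not part of the described system.

        # Sketch only: detection is assumed to have already produced a gesture label.
        GESTURE_MESSAGES = {
            "raise_hand": "Patient is requesting assistance.",
            "wave": "Patient wants to talk to someone.",
            "point_left": "Patient is asking for water.",
        }

        def notify(caregiver: str, message: str) -> None:
            # Placeholder: a real system might push an app notification or page a nurse.
            print(f"[ALERT to {caregiver}] {message}")

        def handle_gesture(label: str, caregiver: str = "nurse_station") -> None:
            message = GESTURE_MESSAGES.get(label)
            if message is None:
                return  # Unknown gesture: ignore rather than raise a false alert.
            notify(caregiver, message)

        handle_gesture("raise_hand")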

    Automation of motor dexterity assessment

    Motor dexterity assessment is regularly performed in rehabilitation wards to establish patient status, and automation of this routine task is sought. A system for automating the assessment of motor dexterity, based on the Fugl-Meyer scale and with loose restrictions on sensing technologies, is presented. The system consists of two main elements: 1) a data representation that abstracts the low-level information obtained from a variety of sensors into a highly separable, low-dimensionality encoding employing t-distributed Stochastic Neighbourhood Embedding, and 2) central to this communication, a multi-label classifier that boosts classification rates by exploiting the fact that the classes corresponding to the individual exercises are naturally organized as a network. Depending on the targeted therapeutic movement, class labels (i.e. exercise scores) are highly correlated: patients who perform well in one exercise tend to perform well in related exercises. Critically, however, no node can be used as a proxy for the others; an exercise does not encode the information of other exercises. On data from a cohort of 20 patients, the novel classifier outperforms classical Naive Bayes, random forest and variants of support vector machines (ANOVA: p < 0.001). The novel multi-label classification strategy completes an automatic system for motor dexterity assessment, with implications for lessening therapists' workloads, reducing healthcare costs and providing support for home-based virtual rehabilitation and telerehabilitation alternatives.
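
    A minimal sketch of the first element described above: embedding high-dimensional sensor features with t-distributed Stochastic Neighbourhood Embedding and classifying in the embedded space. The data are random placeholders, and a plain random forest stands in for the paper's network-structured multi-label classifier, which is not reproduced here.

        import numpy as np
        from sklearn.manifold import TSNE
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X_raw = rng.normal(size=(100, 64))   # stand-in for pooled multi-sensor features
        y = rng.integers(0, 3, size=100)     # stand-in for per-exercise scores

        # Note: t-SNE has no out-of-sample transform, so embedding before
        # cross-validation is for illustration only.
        X_emb = TSNE(n_components=2, perplexity=20, random_state=0).fit_transform(X_raw)
        scores = cross_val_score(RandomForestClassifier(random_state=0), X_emb, y, cv=5)
        print("Mean accuracy on the embedding:", scores.mean())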

    Semi-automation of gesture annotation by machine learning and human collaboration

    Gesture and multimodal communication researchers typically annotate video data manually, even though this can be a very time-consuming task. In the present work, a method to detect gestures is proposed as a fundamental step towards a semi-automatic gesture annotation tool. The proposed method can be applied to RGB videos and requires annotations of part of a video as input. The technique deploys a pose estimation method and active learning. In the experiment, it is shown that if about 27% of the video is annotated, the remaining parts of the video can be annotated automatically with an F-score of at least 0.85. Users can run this tool with a small number of annotations first. If the predicted annotations for the remainder of the video are not satisfactory, users can add further annotations and run the tool again. The code has been released so that other researchers and practitioners can use the results of this research. This tool has been confirmed to work in conjunction with ELAN.
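
    A minimal sketch of the annotate-train-predict loop described above. The per-frame pose features and gesture labels are random placeholders here; the actual tool extracts pose features from RGB video, uses active learning to pick frames for further annotation, and integrates with ELAN.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import f1_score

        rng = np.random.default_rng(0)
        n_frames = 1000
        X = rng.normal(size=(n_frames, 30))    # stand-in for per-frame pose features
        y = rng.integers(0, 2, size=n_frames)  # 1 = gesture, 0 = no gesture

        # Suppose roughly 27% of the frames have been annotated by hand.
        labeled = rng.random(n_frames) < 0.27

        clf = RandomForestClassifier(random_state=0).fit(X[labeled], y[labeled])
        pred = clf.predict(X[~labeled])
        print("F-score on unannotated frames:", f1_score(y[~labeled], pred))

        # Active-learning step: surface the frames the model is least certain about
        # so the user can annotate those next and re-run the tool.
        proba = clf.predict_proba(X[~labeled])[:, 1]
        most_uncertain = np.argsort(np.abs(proba - 0.5))[:20]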

    Characterization of stroke-related upper limb motor impairments across various upper limb activities by use of kinematic core set measures

    BACKGROUND: Upper limb kinematic assessments provide quantifiable information on qualitative movement behavior and limitations after stroke. A comprehensive characterization of the spatiotemporal kinematics of stroke subjects during upper limb daily living activities is lacking. Herein, kinematic expressions were investigated with respect to different movement types and impairment levels for the entire task as well as for motion subphases.
    METHOD: Chronic stroke subjects with upper limb movement impairments and healthy subjects performed a set of daily living activities including gesture and grasp movements. Kinematic measures of trunk displacement, shoulder flexion/extension, shoulder abduction/adduction, elbow flexion/extension, forearm pronation/supination, wrist flexion/extension, movement time, hand peak velocity, number of velocity peaks (NVP), and spectral arc length (SPARC) were extracted for the whole movement as well as for the subphases of reaching distally and proximally. The effects of gesture versus grasp movements and of impairment level on the kinematics of the whole task were tested. Similarities in metric expressions and relations were investigated for the subphases of reaching proximally and distally between tasks and subgroups.
    RESULTS: Data of 26 stroke and 5 healthy subjects were included. Gesture and grasp movements were expressed differently across subjects. Gestures were performed with larger shoulder motions and higher peak velocity, while grasp movements were expressed by larger trunk, forearm, and wrist motions. Trunk displacement, movement time, and NVP increased and shoulder flexion/extension decreased significantly with increased impairment level. Across tasks, phases of reaching distally were comparable in terms of trunk displacement, shoulder motions and peak velocity, while reaching proximally showed comparable expressions in trunk motions. Consistent metric relations during reaching distally were found between shoulder flexion/extension, elbow flexion/extension, and peak velocity, and between movement time, NVP, and SPARC. Reaching proximally revealed reproducible correlations between forearm pronation/supination and wrist flexion/extension, and between movement time and NVP.
    CONCLUSION: Spatiotemporal differences between gesture and grasp movements and between different impairment levels were confirmed. The consistency of metric expressions during movement subphases across tasks can be useful for linking kinematic assessment standards and daily living measures in future research and for performing task and study comparisons.
    TRIAL REGISTRATION: ClinicalTrials.gov Identifier NCT03135093. Registered 26 April 2017, https://clinicaltrials.gov/ct2/show/NCT03135093
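
    A simplified sketch of two of the smoothness measures named above, the number of velocity peaks (NVP) and the spectral arc length (SPARC), computed from a synthetic hand-speed profile. The SPARC below omits the adaptive frequency cutoff of the published metric, so it is an approximation for illustration only.

        import numpy as np

        def number_of_velocity_peaks(speed, min_height=0.05):
            # Count local maxima of the speed profile above min_height.
            s = np.asarray(speed, dtype=float)
            peaks = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]) & (s[1:-1] > min_height)
            return int(peaks.sum())

        def sparc(speed, fs, f_cut=10.0):
            # Simplified spectral arc length of a speed profile sampled at fs Hz.
            s = np.asarray(speed, dtype=float)
            n = 4 * len(s)                           # zero-pad for a smoother spectrum
            mag = np.abs(np.fft.rfft(s, n=n))
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            keep = freqs <= f_cut
            mag = mag[keep] / mag[0]                 # normalise by the DC component
            f_norm = freqs[keep] / f_cut             # normalise the frequency axis
            return -np.sum(np.sqrt(np.diff(f_norm) ** 2 + np.diff(mag) ** 2))

        fs = 100.0
        t = np.arange(0, 2, 1 / fs)
        speed = np.exp(-((t - 1.0) ** 2) / 0.05)     # synthetic bell-shaped reach
        print(number_of_velocity_peaks(speed), sparc(speed, fs))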

    Face and Body gesture recognition for a vision-based multimodal analyser

    To interact naturally with users, computers should be able to recognize emotions by analyzing the human's affective state, physiology and behavior. In this paper, we present a survey of research conducted on face and body gesture recognition. In order to make human-computer interfaces truly natural, we need to develop technology that tracks human movement, body behavior and facial expression, and interprets these movements in an affective way. Accordingly, in this paper we present a framework for a vision-based multimodal analyzer that combines face and body gesture, and further discuss relevant issues.
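
    A minimal sketch of feature-level fusion for a face-plus-body affect analyser of the kind surveyed above. Both feature blocks and the labels are random stand-ins for real facial-expression and body-gesture descriptors, and the SVM is an assumed choice rather than the framework's actual classifier.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        face_features = rng.normal(size=(300, 20))   # stand-in for facial expression cues
        body_features = rng.normal(size=(300, 12))   # stand-in for posture/gesture cues
        labels = rng.integers(0, 4, size=300)        # a handful of affective states

        # Early (feature-level) fusion: concatenate the modalities before classification.
        X = np.hstack([face_features, body_features])
        scores = cross_val_score(SVC(), X, labels, cv=5)
        print("Fused-modality accuracy:", scores.mean())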