176 research outputs found

    INNOVATING CONTROL AND EMOTIONAL EXPRESSIVE MODALITIES OF USER INTERFACES FOR PEOPLE WITH LOCKED-IN SYNDROME

    Patients with locked-in syndrome (LIS) have lost the ability to control any body part other than their eyes. Current solutions mainly use eye-tracking cameras to track patients' gaze as system input. However, although interface design greatly impacts user experience, only a few guidelines have been proposed so far to ensure an easy, quick, fluid and non-tiresome computer system for these patients. The emergence of dedicated computer software has greatly increased patients' capabilities, but there is still a great need for improvement, as existing systems present low usability and limited capabilities. Most interfaces designed for LIS patients aim at providing internet browsing or communication abilities. State-of-the-art augmentative and alternative communication systems mainly focus on sentence communication, without considering the need for emotional expression that is inextricable from human communication. This thesis aims at exploring new system control and expressive modalities for people with LIS. Firstly, existing gaze-based web-browsing interfaces were investigated. Page analysis and high mental workload appeared as recurring issues with common systems. To address these issues, a novel user interface was designed and evaluated against a commercial system. The results suggested that it is easier to learn and to use, quicker, more satisfying, less frustrating, less tiring and less prone to error, and mental workload was greatly diminished with this system. Other types of system control for LIS patients were then investigated: galvanic skin response was found to be usable as system input, and stress-related biofeedback helped lower mental workload during stressful tasks.
    Improving communication, and in particular emotional communication, was one of the main goals of this research. A system combining gaze-controlled emotional voice synthesis with a personal emotional avatar was developed for this purpose. Assessment of the proposed system highlighted an enhanced capability to hold dialogues closer to normal ones and to express and identify emotions; enabling emotion communication in parallel with sentences was found to help the conversation. Automatic emotion detection appeared to be the next step toward improving emotional communication. Several studies have established that physiological signals relate to emotions, and the ability to use non-invasive physiological sensors with LIS patients made these signals an ideal candidate for this study. One of the main difficulties of emotion detection is the collection of high-intensity affect-related data: studies in this field are currently mostly limited to laboratory investigations using laboratory-induced emotions and are rarely adapted for real-life applications. A virtual reality emotion elicitation technique based on appraisal theories was therefore proposed to study the physiological signals of high-intensity emotions in a real-life-like environment. While this solution successfully elicited positive and negative emotions, it did not elicit the desired emotions for all subjects and was therefore not appropriate for the goals of this research. Collecting emotions in the wild appeared to be the best methodology toward emotion detection for real-life applications, so the state of the art in the field was reviewed and assessed using a specifically designed method for evaluating datasets collected for emotion recognition in real-life applications.
    The proposed evaluation method provides guidelines for future researchers in the field. Based on the research findings, a mobile application was developed for physiological and emotional data collection in the wild. Grounded in appraisal theory, this application guides users toward valuable emotion labelling and helps them differentiate moods from emotions. A sample dataset collected using this application was compared with one collected in a paper-based preliminary study; the dataset collected with the mobile application proved more valuable, with data consistent with the literature. The mobile application was then used to create an open-source database of affect-related physiological signals. While the path toward emotion detection usable in real-life applications is still long, we hope that the tools provided to the research community will represent a step toward achieving this goal. Automatic emotion detection could be used not only by LIS patients to communicate but also for total-LIS patients who have lost the ability to move their eyes: giving family and caregivers the ability to visualize, and therefore understand, the patient's emotional state could greatly improve their quality of life. This research provided LIS patients and the scientific community with tools to improve augmentative and alternative communication technologies, with better interfaces, emotion expression capabilities and real-life emotion detection. Emotion recognition methods for real-life applications could enhance not only health care but also robotics, domotics and many other fields of study. A complete, fully gaze-controlled system incorporating all the solutions developed for LIS patients was made available open source; it is expected to enhance their daily lives by improving their communication and by facilitating the development of novel assistive system capabilities.
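    As a concrete illustration of the alternative control modality mentioned above, the sketch below shows one plausible way to turn a rise in galvanic skin response into a binary switch input. The sampling rate, baseline window and threshold are assumptions for illustration only, not the thesis's implementation.
```python
# Hypothetical sketch: treating a rise in galvanic skin response (GSR) as a
# binary "switch" input. Thresholds, sampling rate and names are illustrative.
from collections import deque

SAMPLE_RATE_HZ = 10          # assumed GSR sampling rate
BASELINE_WINDOW_S = 5        # seconds used to estimate the resting level
TRIGGER_RATIO = 1.15         # fire when conductance rises 15% above baseline

def gsr_switch(samples_microsiemens):
    """Yield True whenever the GSR rises notably above its recent baseline."""
    baseline = deque(maxlen=SAMPLE_RATE_HZ * BASELINE_WINDOW_S)
    for value in samples_microsiemens:
        if len(baseline) == baseline.maxlen:
            resting = sum(baseline) / len(baseline)
            yield value > resting * TRIGGER_RATIO
        else:
            yield False              # not enough history yet
        baseline.append(value)
```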

    ELVIS: Entertainment-led video summaries

    © ACM, 2010. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Multimedia Computing, Communications, and Applications, 6(3): Article no. 17 (2010), http://doi.acm.org/10.1145/1823746.1823751.
    Video summaries present the user with a condensed and succinct representation of the content of a video stream. Usually this is achieved by attaching degrees of importance to low-level image, audio and text features. However, video content elicits strong and measurable physiological responses in the user, which are potentially rich indicators of what video content is memorable to or emotionally engaging for an individual user. This article proposes a technique that exploits such physiological responses to a given video stream by a given user to produce Entertainment-Led VIdeo Summaries (ELVIS). ELVIS is made up of five analysis phases which correspond to the analyses of five physiological response measures: electro-dermal response (EDR), heart rate (HR), blood volume pulse (BVP), respiration rate (RR), and respiration amplitude (RA). Through these analyses, the temporal locations of the most entertaining video subsegments, as they occur within the video stream as a whole, are automatically identified. The effectiveness of the ELVIS technique is verified through a statistical analysis of data collected during a set of user trials. Our results show that ELVIS is more consistent than RANDOM, EDR, HR, BVP, RR and RA selections in identifying the most entertaining video subsegments for content in the comedy, horror/comedy, and horror genres. Subjective user reports also reveal that ELVIS video summaries are comparatively easy to understand, enjoyable, and informative.
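    The following sketch illustrates the general idea of combining several physiological measures into a per-segment score and selecting the highest-scoring subsegments. It is an assumption-based illustration, not the published ELVIS algorithm; the segment length and combination rule are placeholders.
```python
# Illustrative sketch (not the published ELVIS method): z-score each
# physiological measure over the whole stream, sum them, and pick the
# fixed-length subsegments with the highest mean combined response.
import numpy as np

def top_segments(signals, segment_len, n_top=3):
    """signals: dict of equally sampled 1-D arrays (e.g. EDR, HR, BVP, RR, RA)."""
    length = min(len(v) for v in signals.values())
    combined = np.zeros(length)
    for sig in signals.values():
        sig = np.asarray(sig[:length], dtype=float)
        combined += (sig - sig.mean()) / (sig.std() + 1e-9)   # z-score, then sum
    n_segs = length // segment_len
    scores = combined[:n_segs * segment_len].reshape(n_segs, segment_len).mean(axis=1)
    return np.argsort(scores)[::-1][:n_top]   # indices of the best segments
```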

    Modelling stress levels based on physiological responses to web contents

    Capturing data on user experience of web applications and browsing is important in many ways. For instance, web designers and developers may find such data quite useful in enhancing navigational features of web pages, while rehabilitation therapists, mental-health specialists and other biomedical personnel regularly use computer simulations to monitor and control the behaviour of patients. Marketing and law enforcement agencies are probably two of the most common beneficiaries of such data: the success of online marketing increasingly requires a good understanding of customers' online behaviour, while law enforcement agents have long used lie detection methods, typically relying on human physiological functions, to determine the likelihood of falsehood in interrogations. Quite often, online user experience is studied via tangible measures such as task completion time, surveys and comprehensive tests from which data attributes are generated. Prediction of users' stress levels and behaviour in these cases depends mostly on task completion time and number of clicks per given time interval. However, such approaches are generally subjective and rely heavily on distributional assumptions, making the results prone to recording errors. We propose a novel method, PHYCOB I, that addresses the foregoing issues. Primary data were obtained from laboratory experiments during which forty-four volunteers had synchronized physiological readings (skin conductance response and skin temperature), eye-tracker data and user activity attributes recorded by a specially designed sensing device. PHYCOB I then derives secondary data attributes from these synchronized readings and uses them for two purposes: firstly, naturally arising structures in the data are detected by identifying optimal responses and high-level tonic phases; secondly, users are classified into three different stress levels. The method's novelty derives from its ability to integrate physiological readings and eye movement records to identify hidden correlates by computing the delay for each increase in amplitude in reaction to webpage contents, which addresses the latency problem faced in most physiological readings. Performance comparisons are made with conventional predictive methods such as neural networks and logistic regression, while multiple runs of the forward search algorithm and principal component analysis are used to cross-validate the performance. Results show that PHYCOB I outperforms the conventional models in terms of both accuracy and reliability; that is, the average recoverable natural structures for the three models with respect to accuracy and reliability are more consistent within the PHYCOB I environment than with the other two. The proposed method has two main advantages: its resistance to over-fitting and its ability to automatically assess human stress levels while dealing with specific web contents. The latter is particularly important in that it can be used to predict which contents of webpages cause stress-induced emotions in users during online activities. There are numerous potential extensions of the model, including, but not limited to, applications in law enforcement (detecting abnormal online behaviour), online shopping and marketing (predicting what captures customers' attention) and biomedical applications such as detecting levels of stress in patients during physiotherapy sessions.
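    The latency computation described above can be illustrated with a minimal sketch: for each stimulus onset (for example, a webpage content change), find the delay until the next clear rise in skin conductance. The rise threshold, signal names and data layout are assumptions, not the PHYCOB I code.
```python
# Hypothetical sketch of a stimulus-to-response delay computation, in the
# spirit of the latency idea described above (not the PHYCOB I implementation).
import numpy as np

def response_delays(scr, timestamps, stimulus_times, min_rise=0.05):
    """Return the delay (s) from each stimulus to the next SCR amplitude rise."""
    scr = np.asarray(scr, dtype=float)
    rises = np.where(np.diff(scr) > min_rise)[0] + 1     # samples where SCR jumps
    rise_times = np.asarray(timestamps, dtype=float)[rises]
    delays = []
    for t0 in stimulus_times:
        later = rise_times[rise_times > t0]
        delays.append(float(later[0] - t0) if later.size else None)
    return delays
```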

    Developing an objective indicator of fatigue: An alternative mobile version of the Psychomotor Vigilance Task (m-PVT)

    Approximately 20% of the working population report symptoms of feeling fatigued at work. The aim of this study was to investigate whether an alternative mobile version of the ‘gold standard’ Psychomotor Vigilance Task (PVT) could provide an objective indicator of fatigue in staff working in safety-critical settings such as train driving, hospitals, emergency services and law enforcement, using different mobile devices. Twenty-six participants (mean age 20 years) completed a 25-min reaction time study using an alternative mobile version of the Psychomotor Vigilance Task (m-PVT) implemented on either an Apple iPhone 6s Plus or a Samsung Galaxy Tab 4. Participants attended two sessions, one in the morning and one in the afternoon, held on two consecutive days and counterbalanced. The iPhone 6s Plus generated both mean response speeds (1/RTs) and mean reaction times (RTs) comparable to those reported in the literature, while the Galaxy Tab 4 generated significantly lower 1/RTs and slower RTs than the iPhone 6s Plus. Furthermore, the iPhone 6s Plus was sensitive enough to detect lower mean response speeds (1/RTs) and significantly slower mean reaction times (RTs) after 10 min on the m-PVT. In contrast, the Galaxy Tab 4 produced a mean number of lapses that became significant after 5 min on the m-PVT. These findings indicate that the m-PVT could be used to provide an objective indicator of fatigue in staff working in safety-critical settings such as train driving, hospitals, emergency services and law enforcement.
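    A minimal sketch of the summary metrics a PVT-style task typically reports is shown below: mean reaction time, mean response speed (1/RT) and number of lapses. The 500 ms lapse threshold is the conventional PVT cut-off; the paper's exact scoring rules may differ.
```python
# Summary metrics for a PVT-style task: mean RT, mean response speed (1/RT)
# and lapse count. The 500 ms lapse cut-off is conventional, not necessarily
# the scoring rule used in the study above.
def pvt_summary(reaction_times_ms, lapse_threshold_ms=500):
    valid = [rt for rt in reaction_times_ms if rt > 0]
    mean_rt = sum(valid) / len(valid)
    mean_speed = sum(1000.0 / rt for rt in valid) / len(valid)   # responses per second
    lapses = sum(1 for rt in valid if rt >= lapse_threshold_ms)
    return {"mean_rt_ms": mean_rt, "mean_speed_1_per_s": mean_speed, "lapses": lapses}
```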

    Exploring Peripheral Physiology as a Predictor of Perceived Relevance in Information Retrieval

    Peripheral physiological signals, obtained using electrodermal activity and facial electromyography over the corrugator supercilii muscle, are explored as indicators of perceived relevance in information retrieval tasks. An experiment with 40 participants is reported, in which these physiological signals are recorded while participants perform information retrieval tasks. Appropriate feature engineering is defined, and the feature space is explored. The results indicate that features in the window of 4 to 6 seconds after the relevance judgment for electrodermal activity, and from 1 second before to 2 seconds after the relevance judgment for corrugator supercilii activity, are associated with the users’ perceived relevance of information items. A classifier verified the predictive power of these features, showing up to a 14% improvement in predicting relevance. Our research can inform the design of intelligent user interfaces for information retrieval that detect the user’s perceived relevance from physiological signals and complement or replace conventional relevance feedback.
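    The windowing described above can be sketched as follows, assuming evenly sampled EDA and EMG signals and a known relevance-judgment timestamp; the sampling rates and function names are illustrative, not the study's code.
```python
# Sketch of window-based feature extraction: mean EDA 4-6 s after a relevance
# judgment and mean corrugator EMG from 1 s before to 2 s after it.
import numpy as np

def window_mean(signal, fs, event_time_s, start_offset_s, end_offset_s):
    """Mean of `signal` between event_time+start and event_time+end (seconds)."""
    lo = int((event_time_s + start_offset_s) * fs)
    hi = int((event_time_s + end_offset_s) * fs)
    return float(np.mean(signal[max(lo, 0):hi]))

def relevance_features(eda, eda_fs, emg, emg_fs, judgment_time_s):
    return {
        "eda_4_to_6s": window_mean(eda, eda_fs, judgment_time_s, 4.0, 6.0),
        "emg_minus1_to_2s": window_mean(emg, emg_fs, judgment_time_s, -1.0, 2.0),
    }
```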

    Evaluation of User Experience in Human–Robot Interaction : A Systematic Literature Review

    Industry 4.0 has ushered in a new era of process automation, redefining the role of people and transforming existing workplaces into unfamiliar formats. The number of robots in the manufacturing industry has been steadily increasing for several decades, and in recent years the number and variety of industries using robots have also increased. For robots to become allies in the day-to-day lives of operators, they need to provide positive and fit-for-purpose experiences through smooth and satisfying interactions. In this sense, user experience (UX) serves as the greatest link between people and robots. Essential to the study of UX is its evaluation. The aim of this study is therefore to identify methodologies that evaluate human–robot interaction (HRI) from a human-centred approach. A systematic literature review was carried out, in which 24 articles were identified, among them 15 experimental studies in addition to theoretical frameworks and tools. The review provides insight into how evaluations are conducted in HRI. The results show which factors are evaluated most often and how they are measured, considering different types of measurement: qualitative and quantitative, objective and subjective. Research gaps and future directions are also identified.

    Avoiding Ad Avoidance: Factors Affecting The Perception Of Online Banner Ads

    This dissertation examined the effect of search type, ad saliency and ad repetition on the perception of online banner advertisements. In the first study, 48 student participants completed simulated search tasks in a mixed factorial design in which search type (known-item vs. exploratory) was manipulated within subjects and banner saliency level (low: black and white; medium: color; high: color animation) was manipulated between subjects. The results showed a significant effect of search type, such that during exploratory search tasks participants had a higher average number of eye fixations on the banner ads than during known-item search. In addition, there was a significant difference between the high and low ad saliency levels, such that participants exposed to low-salience ads had a higher average number of eye fixations on the banner ads than those exposed to high-salience ads. There was no significant effect of ad repetition on ad perception. A second study replicated the original experimental design with four novice Internet users. The results of the second study provide preliminary support for the asymptotic habituation model, which predicts that the orienting response to banner ads declines as an inverse function of repetition. The dissertation concludes with applicable design recommendations for banner ad deployment to ensure visibility while maintaining a positive user experience.
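    As an illustration of the dependent measure discussed above, the sketch below counts eye-tracking fixations that land inside a banner ad's area of interest (AOI); the AOI coordinates and fixation format are assumed, not taken from the dissertation.
```python
# Illustrative sketch: count fixations whose centres fall inside a banner AOI.
def fixations_on_banner(fixations, aoi):
    """fixations: iterable of (x, y) gaze fixation centres in pixels.
    aoi: (left, top, right, bottom) bounding box of the banner ad."""
    left, top, right, bottom = aoi
    return sum(1 for x, y in fixations if left <= x <= right and top <= y <= bottom)

# Example with a hypothetical 728x90 leaderboard banner near the top of the page.
count = fixations_on_banner([(300, 50), (400, 500), (600, 80)], (148, 10, 876, 100))
```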

    Work, aging, mental fatigue, and eye movement dynamics


    A defeasible reasoning framework for human mental workload representation and assessment

    Human mental workload (MWL) has gained prominence in the last few decades as an important design concept. It is a multifaceted, complex construct, mainly applied in the cognitive sciences, that has been defined in many different ways. Although measuring MWL has potential advantages in interaction and interface design, its formalisation as an operational and computational construct has not been sufficiently addressed. This research contributes to the body of knowledge by providing an extensible framework, built upon defeasible reasoning and implemented with argumentation theory (AT), in which MWL can be better defined, measured, analysed, explained and applied in different human–computer interactive contexts. User studies have demonstrated how a particular instance of this framework outperformed state-of-the-art subjective MWL assessment techniques in terms of sensitivity, diagnosticity and validity. This in turn encourages further application of defeasible AT for enhancing the representation of MWL and improving the quality of its assessment.
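    A toy sketch of the defeasible idea is given below: arguments for a workload level can be defeated by counter-arguments, and only undefeated arguments contribute to the final assessment. The argument names and the one-step attack rule are a deliberate simplification, not the framework's actual argumentation semantics.
```python
# Toy illustration of defeasible reasoning over workload arguments: an argument
# is discarded if one of its attackers is present and itself unattacked.
# This simplification is for illustration only.
arguments = {
    "high_effort": {"claim": "MWL is high", "defeated_by": {"task_well_practised"}},
    "low_frustration": {"claim": "MWL is low", "defeated_by": set()},
    "task_well_practised": {"claim": "effort evidence is unreliable", "defeated_by": set()},
}

def undefeated(args):
    """Keep arguments with no present, unattacked attacker."""
    return {name: a["claim"] for name, a in args.items()
            if not any(att in args and not args[att]["defeated_by"]
                       for att in a["defeated_by"])}

print(undefeated(arguments))   # only the arguments that survive attack remain
```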