An Empirical Study Comparing Unobtrusive Physiological Sensors for Stress Detection in Computer Work.
Several unobtrusive sensors have been tested in studies to capture physiological reactions to stress in workplace settings. Lab studies tend to focus on assessing sensors during a specific computer task, while in situ studies tend to offer a generalized view of sensors' efficacy for workplace stress monitoring, without discriminating between different tasks. Given the variation in workplace computer activities, this study investigates the efficacy of unobtrusive sensors for stress measurement across a variety of tasks. We present a comparison of five physiological measurements obtained in a lab experiment, where participants completed six different computer tasks while we measured their stress levels using a chest-band (ECG, respiration), a wristband (PPG and EDA), and an emerging thermal imaging method (perinasal perspiration). We found that thermal imaging can detect increased stress for most participants across all tasks, while wrist and chest sensors were less generalizable across tasks and participants. We summarize the costs and benefits of each sensor stream, and show how some computer use scenarios present usability and reliability challenges for stress monitoring with certain physiological sensors. We provide recommendations for researchers and system builders for measuring stress with physiological sensors during workplace computer use.
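The baseline-versus-task comparison this kind of study relies on can be sketched in a few lines. This is a toy illustration, not the authors' pipeline: the signal values, window contents, and the 10% relative threshold are all hypothetical.

```python
# Hypothetical sketch: flag a stress response by comparing a physiological
# signal during a task period against a resting baseline. The samples and
# the relative threshold below are illustrative only.

def detects_stress(baseline, task, min_increase=0.10):
    """Return True if the task-period mean exceeds the baseline mean
    by more than `min_increase` (a relative threshold)."""
    base_mean = sum(baseline) / len(baseline)
    task_mean = sum(task) / len(task)
    return (task_mean - base_mean) / base_mean > min_increase

# Example: made-up perinasal perspiration samples (arbitrary units)
baseline = [0.20, 0.22, 0.21, 0.19]
email_task = [0.30, 0.28, 0.31, 0.29]
print(detects_stress(baseline, email_task))  # True: well above a 10% rise
```

A real pipeline would add per-participant normalization and artifact filtering, which is where the paper finds the sensor streams differ in robustness.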
Multi-Modal Emotion Recognition for Enhanced Requirements Engineering: A Novel Approach
Requirements engineering (RE) plays a crucial role in developing software
systems by bridging the gap between stakeholders' needs and system
specifications. However, effective communication and elicitation of stakeholder
requirements can be challenging, as traditional RE methods often overlook
emotional cues. This paper introduces a multi-modal emotion recognition
platform (MEmoRE) to enhance the requirements engineering process by capturing
and analyzing the emotional cues of stakeholders in real-time. MEmoRE leverages
state-of-the-art emotion recognition techniques, integrating facial expression,
vocal intonation, and textual sentiment analysis to comprehensively understand
stakeholder emotions. This multi-modal approach ensures the accurate and timely
detection of emotional cues, enabling requirements engineers to tailor their
elicitation strategies and improve overall communication with stakeholders. We
further intend to employ our platform for later RE stages, such as requirements
reviews and usability testing. By integrating multi-modal emotion recognition
into requirements engineering, we aim to pave the way for more empathetic,
effective, and successful software development processes. We performed a
preliminary evaluation of our platform. This paper reports on the platform
design, preliminary evaluation, and future development plan as an ongoing
project.
Emotion Detection Research: A Systematic Review Focuses on Data Type, Classifier Algorithm, and Experimental Methods
There is a lot of research being done on detecting human emotions. Emotion detection models are developed based on physiological data. With the development of low-cost wearable devices that measure human physiological data such as brain activity, heart rate, and skin conductivity, this research can be conducted in developing regions such as Southeast Asia. However, to the best of the authors' knowledge, no literature review has yet examined how research on emotion detection has been carried out in Southeast Asia. Therefore, this study aimed to conduct a systematic review of emotion detection research in Southeast Asia, focusing on the selection of physiological data, classification methods, and how the experiments were conducted in terms of the number of participants and duration. Using PRISMA guidelines, 22 SCOPUS-indexed journal articles and proceedings were reviewed. The review found that physiological data were dominated by brain activity data collected with the Muse Headband, followed by heart rate and skin conductivity collected with various wristbands, from around 5-31 participants, for 8 minutes to 7 weeks. Classification analysis applies machine learning, deep learning, and traditional statistics. The experiments were conducted primarily in sitting and standing positions, in conditioned environments (for developing research) and unconditioned environments (applied research). This review concluded that future research opportunities exist regarding other data types, data labeling methods, and broader applications. These reviews will contribute to the enrichment of ideas and the development of emotion recognition research in Southeast Asian countries in the future.
VIRTUAL INTERACTIONS: CAN EEG HELP MAKE THE DIFFERENCE WITH REAL INTERACTION?
Science and technology progress quickly, yet the mouse and keyboard are still the dominant controls for multimedia devices. One of the factors limiting the adoption of gesture-based HCIs is the detection of the user's intention to interact. This study takes a step in that direction using a consumer EEG sensor headset. The headset records data in real time that can help identify the user's intention based on their emotional state. For each subject, EEG responses to different stimuli were recorded. These data allow us to assess the potential of EEG-based intention detection. The findings are promising and, with proper implementation, should enable building a new type of HCI device.
Identification of Affective States in MOOCs: A Systematic Literature Review
Massive Open Online Courses (MOOCs) are a type of online course where students have little interaction, no instructor, and in some cases, no deadlines to finish assignments. For this reason, a better understanding of student affect in MOOCs is important and could open new perspectives for this type of course. The recent popularization of tools, code libraries, and algorithms for intensive data analysis has made it possible to collect data from text and from interaction with the platforms, which can be used to infer correlations between affect and learning. In this context, a bibliographical review was carried out, covering the period between 2012 and 2018, with the goal of identifying which methods are being used to identify affective states. Three databases were used: ACM Digital Library, IEEE Xplore, and Scopus, and 46 papers were found. The articles revealed that the most common methods are related to data-intensive techniques (i.e., machine learning, sentiment analysis and, more broadly, learning analytics). Methods such as physiological signal recognition and self-report were less frequent.
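The text-based sentiment analysis named above can be illustrated with a minimal lexicon-based scorer for forum posts. This is a toy sketch, not a method from any of the reviewed papers: the word lists and scoring rule are illustrative placeholders for a real sentiment lexicon or trained classifier.

```python
# Toy lexicon-based affect scorer for MOOC forum posts.
# The word sets below are illustrative, not a real sentiment lexicon.

POSITIVE = {"great", "love", "clear", "helpful", "enjoyed"}
NEGATIVE = {"confused", "frustrating", "hard", "boring", "stuck"}

def affect_score(post: str) -> int:
    """Positive minus negative word counts; >0 suggests positive affect."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(affect_score("I enjoyed this module, very clear and helpful"))  # 3
print(affect_score("I am stuck and confused, this is frustrating"))   # -3
```

Learning-analytics pipelines typically replace the lexicon with a trained model and correlate such scores with engagement or dropout signals.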
Designed with older adults to support better error correction in smartphone text entry: the MaxieKeyboard
Through our participatory design work with older adults, a need for improved error support for texting on smartphones emerged. Here we present the MaxieKeyboard, based on the outcomes from this process. The keyboard highlights errors, auto-corrections, and suggestion bar usage in the composition area, and gives feedback on the keyboard about typing correctness. Our older adult groups have shown strong support for the keyboard.
Psychophysiology in games
Psychophysiology is the study of the relationship between psychology
and its physiological manifestations. That relationship is of particular importance
for both game design and ultimately gameplaying. Players’ psychophysiology offers
a gateway towards a better understanding of playing behavior and experience.
That knowledge can, in turn, be beneficial for the player as it allows designers to
make better games for them; either explicitly by altering the game during play or
implicitly during the game design process. This chapter argues for the importance
of physiology for the investigation of player affect in games, reviews the current
state of the art in sensor technology and outlines the key phases for the application
of psychophysiology in games. The work is supported, in part, by the EU-funded FP7 ICT iLearnRW project (project no: 318803).
CommuniSense: Crowdsourcing Road Hazards in Nairobi
Nairobi is one of the fastest growing metropolitan cities and a major
business and technology powerhouse in Africa. However, Nairobi currently lacks
monitoring technologies to obtain reliable data on traffic and road
infrastructure conditions. In this paper, we investigate the use of mobile
crowdsourcing as a means to gather and document Nairobi's road quality
information. We first present the key findings of a city-wide road quality
survey about the perception of existing road quality conditions in Nairobi.
Based on the survey's findings, we then developed a mobile crowdsourcing
application, called CommuniSense, to collect road quality data. The application
serves as a tool for users to locate, describe, and photograph road hazards. We
tested our application through a two-week field study amongst 30 participants
to document various forms of road hazards from different areas in Nairobi. To
verify the authenticity of user-contributed reports from our field study, we
proposed to use online crowdsourcing using Amazon's Mechanical Turk (MTurk) to
verify whether submitted reports indeed depict road hazards. We found 92% of
user-submitted reports to match the MTurkers' judgements. While our prototype
was designed and tested on a specific city, our methodology is applicable to
other developing cities. Comment: In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2015).