Multi-Moji: Combining Thermal, Vibrotactile and Visual Stimuli to Expand the Affective Range of Feedback
This paper explores the combination of multiple concurrent modalities for conveying emotional information in HCI: temperature, vibration and abstract visual displays. Each modality has been studied individually, but can only convey a limited range of emotions within two-dimensional valence-arousal space. This paper is the first to systematically combine multiple modalities to expand the available affective range. Three studies were conducted: Study 1 measured the emotionality of vibrotactile feedback by itself; Study 2 measured the perceived emotional content of three bimodal combinations: vibrotactile + thermal, vibrotactile + visual and visual + thermal. Study 3 then combined all three modalities. Results show that combining modalities increases the available range of emotional states, particularly in the problematic top-right and bottom-left quadrants of the dimensional model. We also provide a novel lookup resource for designers to identify stimuli to convey a range of emotions.
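A minimal sketch of the kind of lookup resource the abstract describes: a table mapping multimodal stimulus combinations to approximate valence-arousal coordinates, queried by nearest target emotion. All stimulus names and coordinate values below are hypothetical illustrations, not the paper's measured data.

```python
# Hypothetical lookup from multimodal stimuli to valence-arousal points.
from dataclasses import dataclass


@dataclass(frozen=True)
class Stimulus:
    thermal: str       # e.g. "warm", "cool", "none"
    vibrotactile: str  # e.g. "slow-pulse", "sharp-burst", "none"
    visual: str        # e.g. "red-expanding", "blue-fading", "none"


# Illustrative entries: (valence, arousal) on a -1..1 scale.
LOOKUP = {
    Stimulus("warm", "slow-pulse", "none"): (0.6, -0.3),             # calm-positive
    Stimulus("cool", "sharp-burst", "red-expanding"): (-0.5, 0.7),   # tense-negative
    Stimulus("warm", "sharp-burst", "red-expanding"): (0.5, 0.8),    # excited-positive
}


def nearest_stimulus(valence: float, arousal: float) -> Stimulus:
    """Return the combination whose rated emotion is closest to the target."""
    return min(
        LOOKUP,
        key=lambda s: (LOOKUP[s][0] - valence) ** 2 + (LOOKUP[s][1] - arousal) ** 2,
    )


print(nearest_stimulus(0.4, 0.9))  # -> the excited-positive combination
```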
Assessing the impact of affective feedback on end-user security awareness
A lack of awareness regarding online security behaviour can leave users and their devices vulnerable to compromise. This paper highlights potential areas where users may fall victim to online attacks, and reviews existing tools developed to raise users’ awareness of security behaviour. An ongoing research project is described, which provides a combined monitoring solution and affective feedback system, designed to deliver affective feedback on automatic detection of risky security behaviour within a web browser. Results from the research indicate that an affective feedback mechanism in a browser-based environment can promote general awareness of online security.
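A toy sketch (not the project's actual implementation) of the pattern the abstract describes: monitored browser events are checked against risk rules, and matches trigger feedback framed affectively rather than as bare technical warnings. The event names, rules and messages are all hypothetical.

```python
# Hypothetical risky-behaviour rules keyed by monitored browser event.
RISKY_RULES = {
    "http_form_submit": "you are about to send data over an unencrypted connection.",
    "reused_password": "this password has been used on another site.",
    "suspicious_download": "this file type is commonly used to spread malware.",
}


def affective_feedback(event: str) -> str | None:
    """Map a detected event to an empathetic warning, or None if not risky."""
    message = RISKY_RULES.get(event)
    if message is None:
        return None
    # Affective framing: concern and guidance rather than a raw error string.
    return f"We're a little worried: {message}"


for event in ["page_load", "http_form_submit"]:
    feedback = affective_feedback(event)
    if feedback:
        print(feedback)
```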
Affective Videogames and Modes of Affective Gaming: Assist Me, Challenge Me, Emote Me
In this paper we describe the fundamentals of affective gaming from a physiological point of view, covering some of the origins of the genre, how affective videogames operate, and current conceptual and technological capabilities. We ground this overview of the ongoing research by taking an in-depth look at one of our own early biofeedback-based affective games. Based on our analysis of existing videogames and our own experience with affective videogames, we propose a new approach to game design based on several high-level design heuristics: assist me, challenge me and emote me (ACE), a series of gameplay "tweaks" made possible through affective videogames.
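A toy sketch of the ACE heuristics expressed as control logic: a physiological arousal reading and a recent performance score select a gameplay tweak. The thresholds, the normalised signals and the tweak descriptions are hypothetical; the paper proposes the heuristics at the design level, not this particular code.

```python
def ace_tweak(arousal: float, performance: float) -> str:
    """Pick a gameplay tweak from a normalised arousal reading (0..1)
    and a recent performance score (0..1)."""
    if arousal > 0.8 and performance < 0.3:
        # Player is stressed and struggling: assist.
        return "assist me: ease difficulty, highlight objectives"
    if arousal < 0.3 and performance > 0.7:
        # Player is bored and coasting: challenge.
        return "challenge me: spawn tougher enemies, tighten timers"
    # Otherwise shape the emotional tone of the experience.
    return "emote me: adapt music, lighting and pacing to the player's state"


print(ace_tweak(arousal=0.9, performance=0.2))
```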
Challenges in Developing Applications for Aging Populations
Elderly individuals can greatly benefit from the use of computer applications, which can assist in monitoring health conditions, staying in contact with friends and family, and even learning new things. However, developing accessible applications for elderly users can be a daunting task for developers. Since the advent of the personal computer, the benefits and challenges of developing applications for older adults have been a hot topic of discussion. In this chapter, the authors discuss the various challenges faced by developers who wish to create applications for elderly computer users, including age-related impairments, generational differences in computer use, and the hardware constraints mobile devices pose for application developers. Although these challenges are concerning, each can be overcome after being properly identified.
Creating Bio-adaptive Visual Cues for a Social Virtual Reality Meditation Environment
This thesis examines designing and implementing adaptive visual cues for a social virtual reality meditation environment. The system described here adapts to users’ bio- and neurofeedback and uses that data in visual cues to convey information about physiological and affective states during meditation exercises, supporting two simultaneous users.
The thesis shows the development process of different kinds of visual cues and attempts to pinpoint best practices, design principles and pitfalls regarding visual cue development in this context. Also examined are the criteria for selecting appropriate visual cues and how to convey information about biophysical synchronization between the users.
The visual cues examined here are created especially for a virtual reality environment, which differs as a platform from traditional two-dimensional content such as user interfaces on a computer display. Points of interest include how to embed the visual cues into the virtual reality environment so that the user experience remains immersive and the visual cues convey information correctly and intuitively.
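A minimal sketch, under assumed inputs, of the bio-adaptive mapping the thesis describes: physiological signals from two users drive visual cue parameters, including a cue for synchronization between them. The signal names, scales and mappings below are hypothetical.

```python
def breathing_cue(breath_phase: float) -> float:
    """Scale a halo's radius with one user's breathing phase (0..1)."""
    return 1.0 + 0.25 * breath_phase


def synchrony_cue(hr_a: float, hr_b: float) -> float:
    """Return a 0..1 glow intensity that grows as the two users'
    heart rates (bpm) converge, within an assumed 20 bpm window."""
    return max(0.0, 1.0 - abs(hr_a - hr_b) / 20.0)


print(breathing_cue(0.5))         # halo size mid-inhale
print(synchrony_cue(62.0, 65.0))  # strong shared-state glow
```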
One's own soundtrack: Affective music synthesis
Computer music usually sounds mechanical; hence, if the musicality and musical expression of virtual actors could be enhanced according to the user's mood, the quality of experience would be amplified. We present a solution based on improvisation using cognitive models, case-based reasoning (CBR) and fuzzy values acting on close-to-affect-target musical notes retrieved from CBR per context. It modifies music pieces according to the interpretation of the user's emotive state as computed by the emotive input acquisition component of the CALLAS framework. The CALLAS framework incorporates the Pleasure-Arousal-Dominance (PAD) model, which reflects the emotive state of the user and provides the criteria for the music affectivisation process. Using combinations of positive and negative states for affective dynamics, the octants of temperament space specified by this model are stored as base reference emotive states in the case repository, each case including a configurable mapping of affectivisation parameters. Suitable previous cases are selected and retrieved by the CBR subsystem to compute solutions for new cases, and the affect values from these control the music synthesis process, allowing a level of interactivity that creates an engaging environment in which to experiment with and learn about expression in music.
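A minimal sketch of the retrieval step the abstract describes: the eight octants of PAD (Pleasure-Arousal-Dominance) space serve as base reference cases, and the octant containing the user's current emotive state selects a stored configuration of affectivisation parameters. The parameter names and values below are hypothetical; the CALLAS framework's real case structure is not shown here.

```python
from itertools import product

# Case repository keyed by octant sign pattern, e.g. (+P, -A, +D).
# Hypothetical parameter mappings per octant.
CASES = {
    signs: {
        "tempo_scale": 1.0 + 0.2 * signs[1],            # arousal speeds the tempo
        "mode": "major" if signs[0] > 0 else "minor",   # pleasure sets the mode
        "dynamics": 0.5 + 0.3 * signs[2],               # dominance sets loudness
    }
    for signs in product((-1, 1), repeat=3)
}


def retrieve_case(p: float, a: float, d: float) -> dict:
    """Map a PAD reading (-1..1 on each axis) to its octant's parameters."""
    signs = tuple(1 if x >= 0 else -1 for x in (p, a, d))
    return CASES[signs]


print(retrieve_case(0.7, 0.4, -0.2))  # pleasant, aroused, submissive state
```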
A Foundation for Emotional Expressivity
Expressing emotions to others in mobile text messaging, in our view, requires designs that can both capture some of the ambiguity and subtlety that characterize emotional interaction and keep the medium-specific qualities. Through the use of a body movement analysis and a dimensional model of emotion experiences, we arrived at a design for a mobile messaging service, eMoto. The service makes use of sub-symbolic expressions (colors, shapes and animations) to express emotions in an open-ended way. Here we present the design process and a user study of those expressions, where the results show that these sub-symbolic expressions can work as a foundation for a creative tool while still allowing the communication to be situated. The inspiration taken from body movements proved to be very useful as a design input, and was also reflected in the way our subjects described the expressions.
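A toy sketch of the kind of sub-symbolic mapping eMoto builds on: a point in a dimensional emotion model drives colour, shape and animation parameters. The concrete mapping below is hypothetical, chosen only to illustrate the open-ended, non-symbolic style of expression the paper describes.

```python
def expression(valence: float, arousal: float) -> dict:
    """Map a valence/arousal reading (-1..1 each) to rendering parameters."""
    return {
        # Red-ish hue for negative feelings, green-ish for positive.
        "hue": 0.0 if valence < 0 else 0.33,
        "saturation": abs(valence),            # stronger feeling, stronger colour
        "shape_spikiness": max(0.0, arousal),  # high arousal -> sharper shapes
        "animation_speed": 0.5 + 0.5 * arousal,
    }


print(expression(valence=-0.8, arousal=0.9))  # agitated, negative: red, spiky, fast
```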
Communication-Wear: User Feedback as Part of a Co-Design Process
Communication-Wear is a clothing concept that augments the mobile phone by enabling expressive messages to be exchanged remotely, conveying a sense of touch and presence. It proposes to synthesise the conventions and cultures of fashion with those of mobile communications, where there are shared attributes in terms of communication and expression. Using garment prototypes as research probes in an ongoing iterative co-design process, we endeavoured to mobilise participants’ tacit knowledge in order to gauge user perceptions of touch communication in a lab-based trial. The aim of this study was to determine whether the established sensory associations people have with the tactile qualities of textiles could be used as signs and metaphors for experiences, moods, social interactions and gestures related to interpersonal touch. The findings are used to inspire new design ideas for textile actuators for use in touch communication in successive iterations.