Affective games: a multimodal classification system
Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in a player's psychology are reflected in their behaviour and physiology, so recognition of such variation is a core element of affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties met by traditional trained classifiers. In addition, game-specific challenges in data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that offer limited freedom of movement, resulting in less realistic experiences. Recent advances now offer technology that allows players to communicate more freely and naturally with the game and, furthermore, to control it without the use of input devices. However, the affective game industry is still in its infancy and needs to catch up with the life-like level of adaptation currently provided by graphics and animation.
Animated virtual agents to cue user attention: comparison of static and dynamic deictic cues on gaze and touch responses
This paper describes an experiment developed to study the performance of animated virtual agent cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely used if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human-computer interfaces. Both experiments measured the efficiency of agent cues, analysing participant responses by gaze and by touch respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of the touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the shortest response times, though with slightly smaller differences between conditions. Responses to the fully animated agent were 17% and 20% faster than to the 2-image and 1-image cues respectively.
These results inform techniques aimed at engaging users' attention in complex scenes such as computer games and digital transactions within public or social interaction contexts, by demonstrating the benefits of dynamic gaze and head cueing directly on users' eye movements and touch responses.
Modelling 3D product visualization on the online retailer
-Purpose: An emerging body of research has investigated telepresence and presence notions in online retailers' websites during the past two decades. Since then, considerable research has been published in different fields to explain the meanings and applications of these notions. This study aims to investigate the antecedents and consequences of 3D product simulation telepresence and the effects of those consequences on consumers' behavioural intentions on an online retailer's website.
-Design/methodology/approach: This study developed a retailer website in which a variety of laptops are presented using 3D product visualizations. The research used a within-subjects design and employed two laboratory experiments. In the first experiment, a two-way repeated-measures ANOVA was conducted to determine the effects of the manipulated conditions on the dependent variable (i.e., 3D telepresence). Finally, we used Amos 16 to test the overall goodness of fit of the proposed conceptual model.
-Originality/value: To the best of the authors' knowledge, this research is the first to use a UK sample to investigate the effects of 3D product visualization in the electrical goods sector (i.e., laptops) on consumers' experiences. Secondly, this paper merges constructs from the human-computer interaction (HCI) field (i.e., control, vividness and telepresence) into the proposed model; the way it defines interactivity and telepresence also adds value to the study. Thirdly, we developed new scales to measure the telepresence and control constructs to suit consumers' experience in the online retail context. Finally, the design of this study is original in using a website that contains 3D product visualization with both utilitarian and hedonic values.
-Findings: The manipulation checks showed that high control and animation provide the most effective representation of telepresence. The overall goodness of fit of the conceptual model met the standards, and all the hypothesized paths were valid.
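A two-way repeated-measures ANOVA like the one described in the methodology can be sketched as follows. The factor names (control, animation), the number of participants and the telepresence scores here are illustrative assumptions, not the study's data:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(42)

# Hypothetical within-subjects data: each participant experiences every
# combination of two factors (control: low/high, animation: static/animated)
# and reports a 3D telepresence score.
rows = []
for subj in range(1, 13):
    for control in ("low", "high"):
        for animation in ("static", "animated"):
            score = (3.0
                     + 1.2 * (control == "high")       # assumed main effect of control
                     + 0.8 * (animation == "animated") # assumed main effect of animation
                     + rng.normal(0, 0.3))             # individual noise
            rows.append({"subject": subj, "control": control,
                         "animation": animation, "telepresence": score})
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA on the dependent variable (telepresence),
# with both factors manipulated within subjects.
res = AnovaRM(df, depvar="telepresence", subject="subject",
              within=["control", "animation"]).fit()
print(res.anova_table)
```

The table reports F values for each main effect and for the control-by-animation interaction, which is the shape of analysis the manipulation checks above rely on.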
Bringing tabletop technologies to kindergarten children
Taking computer technology away from the desktop and into a more physical, manipulative space is known to provide many benefits and is generally considered to result in a system that is easier to learn and more natural to use. This paper describes a design solution that allows kindergarten children to enjoy the new pedagogical possibilities that tangible interaction and tabletop technologies offer for manipulative learning. After analysing children's cognitive and psychomotor skills, we designed and tuned a prototype game suitable for children aged 3 to 4 years old. Our prototype uniquely combines low-cost tangible interaction and tabletop technology with tutored learning. The design was based on observation of children using the technology, letting them play freely with the application during three play sessions. These observational sessions informed the design decisions for the game whilst also confirming the children's enjoyment of the prototype.
An Integrated Formal Task Specification Method for Smart Environments
This thesis is concerned with the development of interactive systems for smart environments. In such scenarios, different interaction paradigms need to be supported, and corresponding methods and development strategies need to be applied, covering not only explicit interaction (e.g., pressing a button to adjust the light) but also implicit interaction (e.g., walking to the speaker's desk to give a talk), in order to assist the user appropriately. A task-based modelling approach is introduced that allows the implementation of different interaction paradigms to be based on the same artifact.
Exploring the Affective Loop
Research in psychology and neurology shows that both body and mind are
involved when experiencing emotions (Damasio 1994, Davidson et al.
2003). People are also very physical when they try to communicate their
emotions. Somewhere in between being consciously and unconsciously
aware of it ourselves, we produce both verbal and physical signs to make
other people understand how we feel. Simultaneously, this production of
signs involves us in a stronger personal experience of the emotions we
express.
Emotions are also communicated in the digital world, but there is little
focus on users' personal as well as physical experience of emotions in
the available digital media. In order to explore whether and how we can
expand existing media, we have designed, implemented and evaluated
/eMoto/, a mobile service for sending affective messages to others. With
eMoto, we explicitly aim to address both cognitive and physical
experiences of human emotions. Through combining affective gestures for
input with affective expressions that make use of colors, shapes and
animations for the background of messages, the interaction "pulls" the
user into an /affective loop/. In this thesis we define what we mean by
affective loop and present a user-centered design approach expressed
through four design principles inspired by previous work within Human
Computer Interaction (HCI) but adjusted to our purposes; /embodiment/
(Dourish 2001) as a means to address how people communicate emotions in
real life, /flow/ (Csikszentmihalyi 1990) to reach a state of
involvement that goes further than the current context, /ambiguity/ of
the designed expressions (Gaver et al. 2003) to allow for open-ended
interpretation by the end-users instead of simplistic, one-emotion
one-expression pairs and /natural but designed expressions/ to address
people's natural couplings between cognitively and physically
experienced emotions. We also present results from an end-user study of
eMoto that indicates that subjects got both physically and emotionally
involved in the interaction and that the designed "openness" and
ambiguity of the expressions were appreciated and understood by our
subjects. Through the user study, we identified four potential design
problems that have to be tackled in order to achieve an affective loop
effect; the extent to which users' /feel in control/ of the interaction,
/harmony and coherence/ between cognitive and physical expressions,
/timing/ of expressions and feedback in a communicational setting, and
effects of users' /personality/ on their emotional expressions and
experiences of the interaction.
- …