Sour promotes risk-taking: an investigation into the effect of taste on risk-taking behaviour in humans
Taking risks is part of everyday life. Some people actively pursue risky activities (e.g., jumping out of a plane), while others avoid any risk (e.g., people with anxiety disorders). Paradoxically, risk-taking is a primitive behaviour that may lead to a happier life by offering a sense of excitement through self-actualization. Here, we demonstrate for the first time that sour, amongst the five basic tastes (sweet, bitter, sour, salty, and umami), promotes risk-taking. Based on a series of three experiments, we show that sour can modulate risk-taking behaviour across two countries (the UK and Vietnam), and across individual differences in risk-taking personality and styles of thinking (analytic versus intuitive). Modulating risk-taking can improve everyday life for a wide range of people.
TasteBud: bring taste back into the game
When we are babies we put anything and everything in our mouths, from Lego to crayons. As we grow older, we increasingly rely on our other senses to explore our surroundings and the objects in the world. When interacting with technology, we mainly rely on our senses of vision, touch, and hearing, and the sense of taste is reduced to the context of eating and food experiences. In this paper, we build on initial efforts to enhance gaming experiences through gustatory stimuli. We introduce TasteBud, a gustatory gaming interface that we integrated with the classic Minesweeper game. We first describe the hardware and software design for the taste stimulation and then present initial findings from a user study. We discuss how taste has the potential to transform gaming experiences by systematically exploiting the experiences that individual gustatory stimuli (e.g., sweet, bitter, sour) can elicit.
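A minimal sketch, in Python, of the general idea of coupling game events to taste delivery; the pump channel numbers, event names, and stimulus durations are hypothetical illustrations, not TasteBud's actual hardware or software design:

    # Hypothetical mapping from tastes to micro-pump channels.
    TASTE_CHANNEL = {"sweet": 1, "bitter": 2, "sour": 3}

    def deliver_taste(taste: str, duration_ms: int) -> None:
        """Stand-in for the firmware call that runs one micro-pump."""
        print(f"run pump {TASTE_CHANNEL[taste]} ({taste}) for {duration_ms} ms")

    def on_game_event(event: str) -> None:
        """Map game outcomes to gustatory stimuli (illustrative choices)."""
        if event == "mine_hit":
            deliver_taste("bitter", 500)   # negative outcome -> bitter
        elif event == "board_cleared":
            deliver_taste("sweet", 500)    # positive outcome -> sweet

    on_game_event("mine_hit")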
What did I sniff? Mapping scents onto driving-related messages
The sense of smell is well known to provide vivid experiences and to mediate strong activation of crossmodal semantic representations. Despite a growing number of olfactory HCI prototypes, there have been only a few attempts to study the sense of smell as an interaction modality. Here, we focus on exploring olfaction for in-car interaction design by establishing a mapping between three driving-related messages ("Slow down", "Fill gas", "Passing by a point of interest") and four scents (lemon, lavender, peppermint, rose). The results of our first study demonstrate strong associations between, for instance, the "Slow down" message and the scent of lemon, the "Fill gas" message and the scent of peppermint, and the "Passing by a point of interest" message and the scent of rose. These findings were confirmed in our second study, where participants expressed their mapping preferences while performing a simulated driving task.
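A minimal sketch of how the reported message-to-scent mapping could drive an in-car olfactory display; the ScentReleaser class, its valve identifiers, and the release duration are hypothetical, not an interface from the paper:

    # Mapping taken from the study's reported associations.
    MESSAGE_TO_SCENT = {
        "Slow down": "lemon",
        "Fill gas": "peppermint",
        "Passing by a point of interest": "rose",
    }

    class ScentReleaser:
        def __init__(self, scent_valves: dict[str, int]):
            self.scent_valves = scent_valves  # scent name -> valve id (assumed)

        def notify(self, message: str, duration_s: float = 2.0) -> None:
            """Release the scent associated with a driving message."""
            scent = MESSAGE_TO_SCENT[message]
            print(f"open valve {self.scent_valves[scent]} ({scent}) for {duration_s}s")

    releaser = ScentReleaser({"lemon": 1, "peppermint": 2, "rose": 3})
    releaser.notify("Slow down")  # releases lemon, per the study's mapping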
Error related negativity in observing interactive tasks
Error-related negativity (ERN) is triggered when a user either makes a mistake or the application behaves differently from their expectation. It can also appear while observing another user making a mistake. This paper investigates ERN in collaborative settings, where observing another user (the executer) perform a task is typical, and then explores its applicability to HCI. We first show that ERN can be detected in signals captured by commodity EEG headsets, such as an Emotiv headset, when observing another person perform a typical multiple-choice reaction-time task. We then investigate anticipation effects by detecting ERN in the time interval when an executer is reaching towards an answer. We show that we can detect this signal with both a clinical EEG device and an Emotiv headset. Our results show that online single-trial detection is possible using both headsets during tasks that are typical of collaborative interactive applications. However, there is a trade-off between detection speed and the quality/price of the headsets. Based on these results, we discuss and present several HCI scenarios for the use of ERN in observation tasks and collaborative settings.
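A minimal sketch, assuming EEG epochs have already been cut around the observed response, of what single-trial ERN detection can look like; the 128 Hz sampling rate, the 0-100 ms post-response window, and the amplitude threshold are illustrative assumptions, not the paper's pipeline:

    import numpy as np

    FS = 128  # assumed sampling rate in Hz (Emotiv EPOC-class headset)

    def baseline_correct(epoch: np.ndarray, fs: int = FS) -> np.ndarray:
        """Subtract the mean of the 200 ms pre-response baseline."""
        n_base = int(0.2 * fs)
        return epoch - epoch[:n_base].mean()

    def ern_score(epoch: np.ndarray, fs: int = FS) -> float:
        """Mean amplitude 0-100 ms post-response; ERN appears as a negativity."""
        n_base = int(0.2 * fs)
        return float(epoch[n_base:n_base + int(0.1 * fs)].mean())

    def detect_error(epoch: np.ndarray, threshold_uv: float = -3.0) -> bool:
        """Flag a trial as an observed error if the window dips below threshold."""
        return ern_score(baseline_correct(epoch)) < threshold_uv

In practice, a trained classifier over such window features would replace the fixed threshold, which is where the speed-versus-headset-quality trade-off mentioned above shows up.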
Gustatory interface: the challenges of ‘how’ to stimulate the sense of taste
Gustatory interfaces have gained popularity in the field of human-computer interaction, especially in the context of augmenting gaming and virtual reality experiences, but also in the context of food interaction design, enabling the creation of new eating experiences. In this paper, we first review prior work on gustatory interfaces and discuss it according to whether a chemical, electrical, and/or thermal stimulation approach is used. We then present two concepts for gustatory interfaces that represent a more traditional delivery approach (using a mouthpiece) versus a novel approach based on the principles of acoustic levitation (contactless delivery). We discuss the design opportunities around these two concepts, in particular for overcoming the challenges of "how" to stimulate the sense of taste.
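A minimal sketch contrasting the two delivery concepts as interchangeable strategies; the class names and parameters are hypothetical abstractions for illustration, not an API from the paper:

    from abc import ABC, abstractmethod

    class TasteDelivery(ABC):
        @abstractmethod
        def present(self, tastant: str, volume_ul: float) -> None: ...

    class MouthpieceDelivery(TasteDelivery):
        """Traditional approach: tubing and pumps routed to a mouthpiece."""
        def present(self, tastant: str, volume_ul: float) -> None:
            print(f"pump {volume_ul} uL of {tastant} through the mouthpiece")

    class LevitationDelivery(TasteDelivery):
        """Contactless approach: acoustically levitate a droplet to the mouth."""
        def present(self, tastant: str, volume_ul: float) -> None:
            print(f"levitate a {volume_ul} uL droplet of {tastant} to the user")

Separating the "what" (the tastant) from the "how" (the delivery strategy) is one way to compare such concepts on equal footing.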
LeviSense: a platform for the multisensory integration in levitating food and insights into its effect on flavour perception
Eating is one of the most multisensory experiences in everyday life. All five of our senses (i.e., taste, smell, vision, hearing, and touch) are involved, even if we are not aware of it. However, while multisensory integration has been well studied in psychology, there is no single platform for systematically testing the effects of different stimuli. This gap leaves unresolved challenges for the design of taste-based immersive experiences. Here, we present LeviSense: the first system designed for multisensory integration in gustatory experiences based on levitated food. Our system enables the systematic exploration of different sensory effects on eating experiences. It also opens up new opportunities for other professionals (e.g., molecular gastronomy chefs) looking for innovative taste-delivery platforms. We describe the design process behind LeviSense and conduct two experiments to test a subset of the crossmodal combinations (i.e., taste and vision, taste and smell). Our results show how different lighting and smell conditions affect the perceived taste intensity, pleasantness, and satisfaction. We discuss how LeviSense creates new technical, creative, and expressive possibilities in a series of emerging design spaces within Human-Food Interaction.
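A minimal sketch of how such crossmodal conditions can be crossed and randomised per participant; the concrete taste, lighting, and smell levels below are made-up stand-ins, not the levels used in the paper's two experiments:

    import itertools
    import random

    tastes = ["sweet", "bitter"]          # assumed levels, for illustration
    lightings = ["red", "green", "white"]
    smells = ["vanilla", "lemon", "none"]

    # Experiment 1: taste x vision; Experiment 2: taste x smell.
    exp1 = list(itertools.product(tastes, lightings))
    exp2 = list(itertools.product(tastes, smells))

    random.shuffle(exp1)  # randomise presentation order per participant
    for taste, lighting in exp1:
        print(f"levitate {taste} droplet under {lighting} lighting; "
              "collect intensity, pleasantness, satisfaction ratings")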
Multisensory experiences in HCI
The use of vision and audition for interaction has dominated the field of human-computer interaction (HCI) for decades, despite the fact that nature has provided us with many more senses for perceiving and interacting with the world around us. Recently, HCI researchers have started trying to capitalize on touch, taste, and smell when designing interactive tasks, especially in gaming, multimedia, and art environments. Here we provide a snapshot of our research into touch, taste, and smell, which we're carrying out at the Sussex Computer Human Interaction (SCHI, pronounced "sky") Lab at the University of Sussex in Brighton, UK.
Not just seeing, but also feeling art: mid-air haptic experiences integrated in a multisensory art exhibition
The use of the senses of vision and audition as interactive means has dominated the field of Human-Computer Interaction (HCI) for decades, even though nature has provided us with many more senses for perceiving and interacting with the world around us. That said, it has become attractive for HCI researchers and designers to harness touch, taste, and smell in interactive tasks and experience design. In this paper, we present research and design insights gained throughout an interdisciplinary collaboration on a six-week multisensory display, Tate Sensorium, exhibited at the Tate Britain art gallery in London, UK. This is a unique, first-of-its-kind case study on how to design art experiences while considering all the senses (i.e., vision, sound, touch, smell, and taste), in particular touch, which we exploited by capitalizing on a novel haptic technology, namely, mid-air haptics. We first describe the overall setup of Tate Sensorium and then describe in detail the design process of the mid-air haptic feedback and its integration with sound for the painting Full Stop by John Latham (1961). This was the first time that mid-air haptic technology was used in a museum context over a prolonged period of time and integrated with sound to enhance the experience of visual art. As part of an interdisciplinary team of curators, sensory designers, and sound artists, we selected a total of three variations of the mid-air haptic experience (i.e., haptic patterns), which were alternated at dedicated times throughout the six-week exhibition. We collected questionnaire-based feedback from 2500 visitors and conducted 50 interviews to gain quantitative and qualitative insights into visitors' experiences and emotional reactions. While the questionnaire results are generally very positive, with only a small variation in visitors' arousal ratings across the three tactile experiences designed for the Full Stop painting, the interview data shed light on the differences in visitors' subjective experiences. Our findings suggest that multisensory designers and art curators should strike a balance between surprising experiences and the possibility of free exploration for visitors. In addition, participants expressed that experiencing art with the combination of mid-air haptics and sound was immersive and provided an uplifting experience of touching without touch. We are convinced that the insights gained from this large-scale, real-world field exploration of multisensory experience design exploiting a new and emerging technology provide a solid starting point for the HCI community, creative industries, and art curators to think beyond conventional art experiences. Specifically, our work demonstrates how novel mid-air technology can make art more emotionally engaging and stimulating, especially abstract art, which is often open to interpretation.
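A minimal sketch of alternating three haptic patterns at dedicated times; the pattern names and the hourly rotation are illustrative assumptions, not the exhibition's actual configuration:

    from datetime import datetime

    HAPTIC_PATTERNS = ["pattern_A", "pattern_B", "pattern_C"]  # assumed names

    def pattern_for(now: datetime) -> str:
        """Rotate through the three patterns, e.g. by hour of the day."""
        return HAPTIC_PATTERNS[now.hour % len(HAPTIC_PATTERNS)]

    print(pattern_for(datetime(2015, 9, 1, 14)))  # -> pattern_C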
Agency in mid-air interfaces
Touchless interfaces allow users to view, control, and manipulate digital content without physically touching an interface. They are being explored in a wide range of application scenarios, from medical surgery to car dashboard controls. One aspect of touchless interaction that has not been explored to date is the Sense of Agency (SoA): the subjective experience of voluntary control over actions in the external world. In this paper, we investigated the SoA in touchless systems using the intentional binding paradigm. We first compared touchless systems with physical interactions and then added different types of haptic feedback to explore how different outcome modalities influence users' SoA. Our experiments demonstrated that an intentional binding effect is observed in both physical and touchless interactions, with no statistical difference between them. Additionally, we found that haptic and auditory feedback increase the SoA compared with visual feedback in touchless interfaces. We discuss these findings and identify design opportunities that take agency into consideration.
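The intentional binding paradigm quantifies the SoA through compressed time estimates: voluntary actions make the interval between action and outcome feel shorter than it is. A minimal sketch of that measure, with made-up numbers rather than the paper's data:

    import statistics

    def binding_effect(estimates_ms: list[float], actual_ms: float) -> float:
        """Mean judgement error; more negative = stronger binding (higher SoA)."""
        return statistics.mean(e - actual_ms for e in estimates_ms)

    # Hypothetical interval estimates for a 400 ms action-outcome delay.
    touchless = [310.0, 295.0, 330.0]   # touchless gesture -> outcome
    baseline  = [395.0, 410.0, 388.0]   # passive/involuntary control

    print(binding_effect(touchless, 400.0))  # about -88 ms: binding present
    print(binding_effect(baseline, 400.0))   # near 0 ms: no binding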