
    Optimizing The Design Of Multimodal User Interfaces

    Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and task domain perspective is thus critical to the achievement of successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies associated with redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The foundation of the proposed framework is based on extending, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) in support of real-time operations. In an effort to validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated with a simulated weapons control system multitasking environment. The results of this study demonstrated significant performance improvements in user response time and accuracy when multimodal display cues were used (i.e., auditory and tactile, individually and in combination) to augment the visual display of information, thereby distributing human information processing resources across multiple sensory and WM resources.
These results provide initial empirical support for validation of the overall AMMO model and a subset of the principle-driven multimodal design guidelines derived from it. The empirically validated multimodal design guidelines may be applicable to a wide range of information-intensive, computer-based multitasking environments.

    Emotional Responses to Multisensory Environmental Stimuli: A Conceptual Framework and Literature Review.

    How we perceive our environment affects the way we feel and behave. The impressions of our ambient environment are influenced by its entire spectrum of physical characteristics (e.g., luminosity, sound, scents, temperature) in a dynamic and interactive way. The ability to manipulate the sensory aspects of an environment such that people feel comfortable or exhibit a desired behavior is gaining interest and social relevance. Although much is known about the sensory effects of individual environmental characteristics, their combined effects are not a priori evident due to a wide range of non-linear interactions in the processing of sensory cues. As a result, it is currently not known how different environmental characteristics should be combined to effectively induce desired emotional and behavioral effects. To gain more insight into this matter, we performed a literature review on the emotional effects of multisensory stimulation. Although we found some interesting mechanisms, the outcome also reveals that empirical evidence is still scarce and haphazard. To stimulate further discussion and research, we propose a conceptual framework that describes how environmental interventions are likely to affect human emotional responses. This framework leads to some critical research questions that suggest opportunities for further investigation.

    Enhancing performance with multisensory cues in a realistic target discrimination task

    Making decisions is an important aspect of people's lives. Decisions can be highly critical in nature, with mistakes possibly resulting in extremely adverse consequences. Yet, such decisions often have to be made within a very short period of time and with limited information, which can reduce accuracy and efficiency. In this paper, we explore the possibility of increasing the speed and accuracy of users engaged in the discrimination of realistic targets presented for a very short time, in the presence of unimodal or bimodal cues. More specifically, we present results from an experiment in which users were asked to discriminate between targets rapidly appearing in an indoor environment. Unimodal (auditory) or bimodal (audio-visual) cues could shortly precede the target stimulus, warning users about its location. Our findings show that, when used to facilitate perceptual decisions under time pressure and in conditions of limited information in real-world scenarios, spoken cues can be effective in boosting performance (accuracy, reaction times, or both), and even more so when presented in bimodal form. However, we also found that cue timing plays a critical role: if the cue-stimulus interval is too short, cues may offer no advantage. In a post-hoc analysis of our data, we also show that congruency between the response location and both the target location and the cues can interfere with speed and accuracy in the task. These effects should be taken into consideration, particularly when investigating performance in realistic tasks.

    Multimodality in VR: A survey

    Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR and its role and benefits in user experience, together with the different applications that leverage multimodality in many disciplines. These works encompass several fields of research and demonstrate that multimodality plays a fundamental role in VR, enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.

    Smells, well-being and the built environment

    In this Research Topic, we aimed to collect a range of contributions to understand the emotional and wellbeing responses resulting from smells in different public spaces (museums, high streets, heritage buildings, food districts, neighborhoods, squares, etc.) to inform future spatial design and management. The articles in this Research Topic are presented according to three types of contributions: reviews and conceptual analyses, empirical research in fieldwork, and laboratory studies and technological applications.

Reviews and Conceptual Analyses

Xiao et al. reviewed smellscape research conducted in the past 10 years to identify the challenges and related areas of future research, namely smell archives and databases, social justice within odor control and management, and research into advanced building materials. Spence reviewed the changing role of smells in the built environment, from negative associations with sanitation to meaningful personal and cultural associations with memories and experiences, which led to an evaluation of different approaches to examining the impact of smells on people's mood or wellbeing and the challenges of researching smells in the multisensory environment. Moving from sick building syndrome to sick transport syndrome, Spence further reviewed smells in transport environments as both aesthetic and functional, and suggested challenges for future transportation to produce a more tangible vision that integrates smells into the design process and achieves the right balance of olfactory stimulation. Looking back to the scented past, Bembibre and Strlič make the case for knowledge exchange and interdisciplinary interpretation of findings in the field of olfactory heritage, providing an overview of methodological and museal studies as well as challenges associated with historical scent reconstruction.

Empirical Research - Fieldwork

Pálsdóttir et al.
carried out a field study with participants suffering from stress-related mental disorders and explored how they would describe their smellscape perception of a garden in the context of a nature-based rehabilitation intervention. In a different field study, de Groot investigated whether ambient scents could affect customers' subjective experience and spending behavior in an experiment with customers of a second-hand clothing store, concluding that for that to happen, the smellscape should have a meaningful link to the physical context.

Empirical Research - Laboratory Studies and Technological Applications

Masaoka et al. present the results of a study conducted to examine whether continuous odor stimuli associated with autobiographical memories could activate olfactory areas in the brains of older adults, and assess whether this odor stimulation could have a protective effect against age-related cognitive decline. Jiang et al. used blood pressure, pulse rate, EEG, POMS, and SD data to examine the odor-visual effects of the fragrant Primula forbesii Franch, compared with the non-fragrant Primula malacoides Franch, on the physiological and psychological state of Chinese female college students in an indoor environment. Courrèges et al. examined the cross-cultural correlations between odor and texture in users' perceptions of cosmetic creams under laboratory conditions, using questionnaires and minimizing the influence of branded messages from packaging and retail spaces. Amores et al. discussed the design and technical implementation of Essence, a smartphone-controlled wearable device for home-based sleep environments that monitors users' EEG and uses a real-time sleep-staging algorithm to release scents that interact with users.
The articles included in this Research Topic represent a good balance between theoretical reviews, empirical fieldwork, and laboratory research, showing the vibrancy and dynamism of this research field as well as new technological developments such as extended reality, emotional sensors (e.g., EEG, GSR), and odor-monitoring devices. New insights are drawn into theoretical frameworks for understanding the relationships between smells, wellbeing and emotions, behaviors, and physiological aspects; methodological approaches to measuring smell-triggered emotions, experiences, and quality of life; and practical explorations of the process and challenges of using smells to influence user experiences in the built environment.

    Enough is enough! Understanding environmentally driven multisensory experiences

    The importance of sensory perception and sensory stimulation in creating pleasant consumption experiences has received increasing attention within recent years. Yet, while numerous studies investigate antecedents and consequences of sensory perception specific to a certain sense (vision, touch, audition, smell, and taste), limited research addresses sensation from a broader perspective by examining what constitutes sensing in sensations. Multiple studies are employed to investigate the totality of sensation, rather than any sense-specific sensation, by framing sensational experiences within the long tradition of atmospherics research. Here, the construct of need for sensation is conceptualized to reflect the notion of totality of sensation. Following a comprehensive review of common overlaps among three main research areas – atmospherics, servicescape, and sensory marketing – exploratory research guides the development of a new scale measuring the construct need for sensation. The current study posits need for sensation as the manner by which consumers extract value through multiple sensory inputs, both focal and non-focal. This new need for sensation scale encompasses two dimensions, namely sensory enjoyment and sensory avoidance, which can both be administered simultaneously to reflect different facets of need for sensation. The scale is validated as part of an experimental design to examine how different environments and levels of sensory stimulation impact consumers. Findings show that high-intensity sensory environments lower the consumer's ability to accurately complete perceptual and cognitive tasks. However, these high-intensity surroundings also elevate hedonic value, leading to a more positive and value-added consumption experience.
With regard to need for sensation, high need-for-sensation individuals express higher levels of hedonic value, satisfaction, and positive affect in stimulating environments, confirming the validity of the new scale for detecting individual differences across consumers. Results further affirm that while high need-for-sensation individuals gain more pleasure from a highly stimulating sensory experience, their performance is not negatively impacted. Overall, this research integrates atmospherics, services, and sensory marketing research to advance the marketing discipline. Key findings provide a starting point for an extensive stream of research focusing on sensory value-added consumption experiences.

    The effect of simultaneously presented words and auditory tones on visuomotor performance

    The experiment reported here used a variation of the spatial cueing task to examine the effects of unimodal and bimodal attention-orienting primes on target identification latencies and eye gaze movements. The primes were a nonspatial auditory tone and words known to drive attention consistent with the dominant writing and reading direction, as well as introducing a semantic, temporal bias (past–future) on the horizontal dimension. As expected, past-related (visual) word primes gave rise to shorter response latencies in the left hemifield and future-related words in the right. This congruency effect was differentiated by an asymmetric performance in the right space following future words, driven by the left-to-right trajectory of scanning habits that facilitated search times and eye gaze movements to lateralized targets. The auditory tone prime alone acted as an alarm signal, boosting visual search and reducing response latencies. Bimodal priming, i.e., temporal visual words paired with the auditory tone, impaired performance by delaying visual attention and response times relative to the unimodal visual word condition. We conclude that bimodal primes were no more effective in capturing participants' spatial attention than the unimodal auditory and visual primes. Their contribution to the literature on multisensory integration is discussed.

    Age-Related Differences in Multimodal Information Processing and Their Implications for Adaptive Display Design.

    In many data-rich, safety-critical environments, such as driving and aviation, multimodal displays (i.e., displays that present information in visual, auditory, and tactile form) are employed to support operators in dividing their attention across numerous tasks and sources of information. However, the limitations of this approach are not well understood. Specifically, most research on the effectiveness of multimodal interfaces has examined the processing of only two concurrent signals in different modalities, primarily vision and hearing. Also, nearly all studies to date have involved only young participants. The goals of this dissertation were therefore to (1) determine the extent to which people can notice and process three unrelated concurrent signals in vision, hearing, and touch, (2) examine how aging modulates this ability, and (3) develop countermeasures to overcome observed performance limitations. Adults aged 65+ years were of particular interest because they represent the fastest growing segment of the U.S. population, are known to suffer from various declines in sensory abilities, and experience difficulties with divided attention. Response times and incorrect response rates to singles, pairs, and triplets of visual, auditory, and tactile stimuli were significantly higher for older adults compared to younger participants. In particular, elderly participants often failed to notice the tactile signal when all three cues were combined. They also frequently falsely reported the presence of a visual cue when presented with a combination of auditory and tactile cues. These performance breakdowns were observed both in the absence and presence of a concurrent visual/manual (driving) task. Performance on the driving task also suffered most for older participants and under combined visual-auditory-tactile stimulation. Introducing a half-second delay between two stimuli significantly increased response accuracy for older adults.
This work adds to the knowledge base in multimodal information processing, the perceptual and attentional abilities and limitations of the elderly, and adaptive display design. From an applied perspective, these results can inform the design of multimodal displays and enable aging drivers to cope with increasingly data-rich in-vehicle technologies. The findings are expected to generalize and thus contribute to improved overall public safety in a wide range of complex environments.

PhD dissertation, Industrial and Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
http://deepblue.lib.umich.edu/bitstream/2027.42/133203/1/bjpitts_1.pd