    Bodily awareness and novel multisensory features

    According to the decomposition thesis, perceptual experiences resolve without remainder into their different modality-specific components. Contrary to this view, I argue that certain cases of multisensory integration give rise to experiences representing features of a novel type. Through the coordinated use of bodily awareness (understood here as encompassing both proprioception and kinaesthesis) and the exteroceptive sensory modalities, one becomes perceptually responsive to spatial features whose instances couldn’t be represented by any of the contributing modalities functioning in isolation. I develop an argument for this conclusion focusing on two cases: 3D shape perception in haptic touch and experiencing an object’s egocentric location in crossmodally accessible, environmental space.

    Age-Related Differences in Multimodal Information Processing and Their Implications for Adaptive Display Design.

    In many data-rich, safety-critical environments, such as driving and aviation, multimodal displays (i.e., displays that present information in visual, auditory, and tactile form) are employed to support operators in dividing their attention across numerous tasks and sources of information. However, the limitations of this approach are not well understood. Specifically, most research on the effectiveness of multimodal interfaces has examined the processing of only two concurrent signals in different modalities, primarily vision and hearing. Also, nearly all studies to date have involved only young participants. The goals of this dissertation were therefore to (1) determine the extent to which people can notice and process three unrelated concurrent signals in vision, hearing, and touch, (2) examine how aging modulates this ability, and (3) develop countermeasures to overcome observed performance limitations. Adults aged 65+ years were of particular interest because they represent the fastest growing segment of the U.S. population, are known to suffer from various declines in sensory abilities, and experience difficulties with divided attention. Response times and incorrect response rates to singles, pairs, and triplets of visual, auditory, and tactile stimuli were significantly higher for older adults compared to younger participants. In particular, elderly participants often failed to notice the tactile signal when all three cues were combined. They also frequently falsely reported the presence of a visual cue when presented with a combination of auditory and tactile cues. These performance breakdowns were observed both in the absence and in the presence of a concurrent visual/manual (driving) task. Performance on the driving task suffered most for older participants and with combined visual-auditory-tactile stimulation. Introducing a half-second delay between two stimuli significantly increased response accuracy for older adults. This work adds to the knowledge base in multimodal information processing, the perceptual and attentional abilities and limitations of the elderly, and adaptive display design. From an applied perspective, these results can inform the design of multimodal displays and enable aging drivers to cope with increasingly data-rich in-vehicle technologies. The findings are expected to generalize and thus contribute to improved overall public safety in a wide range of complex environments. (PhD dissertation, Industrial and Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies; http://deepblue.lib.umich.edu/bitstream/2027.42/133203/1/bjpitts_1.pd)

    A Comparison Of Attentional Reserve Capacity Across Three Sensory Modalities

    There are two theoretical approaches to the nature of attentional resources. One proposes a single, flexible pool of cognitive resources; the other posits multiple resources. This study was designed to systematically examine whether there is evidence for multiple resource theory, using a counting task consisting of visual, auditory, and tactile signals across two experiments. The goal of the first experiment was to validate a multi-modal secondary loading task. Thirty-two participants performed nine variations of a multi-modal counting task incorporating three modalities and three demand levels. Performance and subjective ratings of workload were measured for each of the nine conditions of the within-subjects design. Significant differences were found on the basis of task demand level, irrespective of modality. Moreover, the perceived workload associated with the tasks differed by task demand level and not by modality. These results suggest the counting task is a valid means of imposing task demands across multiple modalities. The second experiment used the same counting task as a secondary load to a primary visual monitoring task, the system monitoring component of the Multi-Attribute Task Battery (MATB). The experimental conditions consisted of performing the system monitoring task alone as a reference and performing system monitoring combined with visual, auditory, or tactile counting. Thirty-one participants were exposed to all four experimental conditions in a within-subjects design. Performance on the primary and secondary tasks was measured, and subjective workload was assessed for each condition. Participants were instructed to maintain performance on the primary task irrespective of condition, which they did effectively. Secondary task performance for the visual-auditory and visual-tactile conditions was significantly better than for the visual-visual dual-task condition. Subjective workload ratings were also consistent with the performance measures. These results clearly indicate that there is less interference for cross-modal tasks than for intramodal tasks, adding evidence in support of multiple resource theory. Finally, these results have practical implications that include human performance assessment for display and alarm development, assessment of attentional reserve capacity for adaptive automation systems, and training.

    Multisensory Perception and Learning: Linking Pedagogy, Psychophysics, and Human–Computer Interaction

    In this review, we discuss how specific sensory channels can mediate the learning of properties of the environment. In recent years, schools have increasingly been using multisensory technology for teaching; however, this practice is not yet sufficiently grounded in neuroscientific and pedagogical evidence. Recent research has renewed our understanding of the role of communication between sensory modalities during development. In the current review, we outline four principles, based on theoretical models of multisensory development and embodiment, that can guide technological development to foster in-depth perceptual and conceptual learning of mathematics. We also discuss how a multidisciplinary approach offers a unique contribution to the development of new practical solutions for learning in school. Scientists, engineers, and pedagogical experts offer their interdisciplinary points of view on this topic. At the end of the review, we present our own results, showing that multiple sensory inputs and sensorimotor associations in multisensory technology can improve the discrimination of angles and may also serve broader educational purposes. Finally, we present an application, the ‘RobotAngle’, developed for primary (i.e., elementary) school children, which uses sounds and body movements to teach angles.
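
    As an aside on how such an application might couple body movement to sound, the sketch below maps the size of an angle, measured from two arm-segment vectors, onto the pitch of a tone so that larger angles sound higher. It is a hypothetical illustration only; the actual RobotAngle mapping, sensor input, and audio pipeline are not described here.

        import math

        # Hypothetical mapping from an angle (degrees) to a tone frequency (Hz):
        # larger angles produce higher pitches, giving an auditory cue that
        # tracks the felt/seen size of the angle.
        def angle_to_frequency(angle_deg, f_min=220.0, f_max=880.0):
            angle = max(0.0, min(180.0, angle_deg))  # clamp to a half-turn
            return f_min + (f_max - f_min) * angle / 180.0

        # Angle between two 2D vectors (e.g., arm segments from a motion
        # sensor), so a traced angle can be sonified in real time.
        def angle_between(v1, v2):
            dot = v1[0] * v2[0] + v1[1] * v2[1]
            norm = math.hypot(*v1) * math.hypot(*v2)
            return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

        if __name__ == "__main__":
            right_angle = angle_between((1, 0), (0, 1))
            print(f"{right_angle:.0f} degrees -> {angle_to_frequency(right_angle):.0f} Hz")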

    Sensthetics: a crossmodal approach to the perception, and conception, of our environments

    This dissertation counters the visual bias and the simplistic approach to the senses in architectural thought by investigating the connections among different sense modalities (sight, sound, smell, taste, and touch). Literature from the cognitive sciences shows that sensory modalities are connected perceptually: what we see affects what we hear, what we smell affects what we taste, and so on. This has a direct impact on the perceptual choices we make in our day-to-day lives. A case study conducted in an urban plaza investigates the perceptual choices people make (or what they attend to) as they explore their physical environment. Results show that people construct subjective and embodied mental maps of their environments in which sensory impressions are integrated with cognitive concepts such as emotions or object recognition. Furthermore, when one sense is muted (such as closing the eyes), other senses are prioritized. A theoretical framework termed the "Sensthetic Model" is developed, illustrating the interdependence of sensory, kinesthetic, and cognitive factors and the hierarchical and lateral relationships between sense modalities. These relationships are the focus of studies with architecture students in abstract thinking exercises: a) Hierarchical: students perceive a hierarchy of senses (sensory order) when they think about different places. Vision is primary, but not always. Touch, classically relegated to the bottom of the hierarchy, is often higher in the hierarchy and coupled with sound. b) Lateral: students associate colors with different sounds, smells, textures, temperatures, emotions, and objects, crossing over modalities conceptually with a degree of consistency. There are more associations with emotions and objects (which are not constrained to a single sense modality) than with purely sensory images. Finally, the theoretical model is further developed as a tool to think "across" modalities (crossmodally) based on the identification of sensory orders and sensory correspondences. By focusing on the sensory modalities (nodes) and the relationships among them (connections), the model serves as a conceptual tool for professionals to create sensory environments. This dissertation is an initial step beyond the aesthetics of appearance, towards the Sensthetics of experience.
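
    To make the nodes-and-connections idea concrete in computational terms, one could represent the model as a small weighted graph in which modalities are nodes and crossmodal associations are weighted edges. The sketch below is purely illustrative; the weights are placeholders, not data from the dissertation.

        from collections import defaultdict

        # Illustrative graph of sense modalities (nodes) and crossmodal
        # associations (weighted edges), echoing the Sensthetic Model's
        # nodes-and-connections framing. Weights are placeholders only.
        class SensoryGraph:
            def __init__(self):
                self.edges = defaultdict(dict)

            def connect(self, a, b, weight):
                self.edges[a][b] = weight
                self.edges[b][a] = weight

            def strongest_partner(self, modality):
                partners = self.edges[modality]
                return max(partners, key=partners.get) if partners else None

        g = SensoryGraph()
        g.connect("sight", "sound", 0.7)   # placeholder association strengths
        g.connect("touch", "sound", 0.6)
        g.connect("smell", "taste", 0.9)
        print(g.strongest_partner("sound"))  # -> sight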

    Advanced Multimodal Solutions for Information Presentation

    High-workload, fast-paced, and degraded sensory environments are the likeliest candidates to benefit from multimodal information presentation. For example, during EVA (Extra-Vehicular Activity) and telerobotic operations, the sensory restrictions associated with a space environment provide a major challenge to maintaining the situation awareness (SA) required for safe operations. Multimodal displays hold promise to enhance situation awareness and task performance by utilizing different sensory modalities and maximizing their effectiveness based on appropriate interaction between modalities. During EVA, the visual and auditory channels are likely to be the most utilized, with tasks such as monitoring the visual environment, attending visual and auditory displays, and maintaining multichannel auditory communications. Previous studies have shown that compared to unimodal displays (spatial auditory or 2D visual), bimodal presentation of information can improve operator performance during simulated extravehicular activity on planetary surfaces for tasks as diverse as orientation, localization, or docking, particularly when the visual environment is degraded or workload is increased. Tactile displays offer a third sensory channel that may both offload information processing effort and provide a means to capture attention when urgently required. For example, recent studies suggest that including tactile cues may result in increased orientation and alerting accuracy, improved task response time, and decreased workload, as well as provide self-orientation cues in microgravity on the ISS (International Space Station). An important overall issue is that context-dependent factors like task complexity, sensory degradation, peripersonal vs. extrapersonal space operations, workload, experience level, and operator fatigue tend to vary greatly in complex real-world environments, and it will be difficult to design a multimodal interface that performs well under all conditions. As a possible solution, adaptive systems have been proposed in which the information presented to the user changes as a function of task/context-dependent factors. However, this presupposes that adequate methods for detecting and/or predicting such factors are developed. Further, research in adaptive systems for aviation suggests that they can sometimes serve to increase workload and reduce situational awareness. It will be critical to develop multimodal display guidelines that include consideration of smart systems that can select the best display method for a particular context/situation. The scope of the current work is an analysis of potential multimodal display technologies for long duration missions and, in particular, will focus on their potential role in EVA activities. The review will address multimodal (combined visual, auditory, and/or tactile) displays investigated by NASA, industry, and DoD (Dept. of Defense). It also considers the need for adaptive information systems to accommodate a variety of operational contexts such as crew status (e.g., fatigue, workload level) and task environment (e.g., EVA, habitat, rover, spacecraft). Current approaches to guidelines and best practices for combining modalities for the most effective information displays are also reviewed. Potential issues in developing interface guidelines for the Exploration Information System (EIS) are briefly considered.
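
    As a rough illustration of the adaptive-selection idea discussed above, the sketch below chooses a presentation channel from a few context factors such as workload, visual degradation, and urgency. It is a hypothetical rule-based example, not a NASA, DoD, or EIS design; a real system would need validated thresholds and predictive models of operator state.

        from dataclasses import dataclass

        # Hypothetical context snapshot an adaptive display might react to.
        @dataclass
        class Context:
            workload: float        # 0 (idle) .. 1 (saturated)
            visual_degraded: bool  # e.g., glare, dust, poor lighting
            urgent: bool           # time-critical alert

        def choose_modality(ctx: Context) -> str:
            """Pick a display channel from simple, assumed rules."""
            if ctx.urgent:
                # Tactile cues can capture attention when vision/audio are busy.
                return "tactile + auditory"
            if ctx.visual_degraded or ctx.workload > 0.7:
                # Offload a degraded or saturated visual channel.
                return "auditory"
            return "visual"

        print(choose_modality(Context(workload=0.9, visual_degraded=False, urgent=False)))
        # -> auditory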

    Visual experience is not necessary for efficient survey spatial cognition: Evidence from blindness

    This study investigated whether the lack of visual experience affects the ability to create spatial inferential representations of the survey type. We compared the performance of persons with congenital blindness and that of blindfolded sighted persons on four survey representation-based tasks (Experiment 1). Results showed that persons with blindness performed better than blindfolded sighted controls. We repeated the same tests introducing a third group of persons with late blindness (Experiment 2). This last group performed better than blindfolded sighted participants, whereas differences between participants with late and congenital blindness were nonsignificant. The present findings are compatible with results of other studies, which found that when visual perception is lacking, skill in gathering environmental spatial information provided by nonvisual modalities may contribute to proper spatial encoding. It is concluded that, although it cannot be asserted that total lack of visual experience incurs no cost, our findings are further evidence that visual experience is not a necessary condition for the development of complex spatial inferential representations. There is a general consensus on the crucial role of visual perception in guiding many of our daily movements in large- and small-scale environments.

    Audiotactile interactions in temporal perception

    Digitizing the chemical senses: possibilities & pitfalls

    Many people are understandably excited by the suggestion that the chemical senses can be digitized, be it to deliver ambient fragrances (e.g., in virtual reality or health-related applications) or else to transmit flavour experiences via the internet. However, to date, progress in this area has been surprisingly slow. Furthermore, the majority of attempts at commercialization have failed, often in the face of consumer ambivalence over the perceived benefits/utility. In this review, with the focus squarely on the domain of Human-Computer Interaction (HCI), we summarize the state of the art in the area. We highlight the key possibilities and pitfalls as far as stimulating the so-called ‘lower’ senses of taste, smell, and the trigeminal system are concerned. Ultimately, we suggest that mixed reality solutions are currently the most plausible as far as delivering (or rather modulating) flavour experiences digitally is concerned. The key problems with digital fragrance delivery relate to attention and attribution. People often fail to detect fragrances when they are concentrating on something else; and even when they do detect that their chemical senses have been stimulated, there is always a danger that they attribute their experience (e.g., pleasure) to one of the other senses, which is what we call ‘the fundamental attribution error’. We conclude with an outlook on digitizing the chemical senses and summarize a set of open-ended questions that the HCI community will need to address in future explorations of smell and taste as interaction modalities.