
    Using Sound to Enhance Users’ Experiences of Mobile Applications

    The latest smartphones with GPS, electronic compass, directional audio, touch screens, etc. hold potential for location-based services that are easier to use than traditional tools. Rather than interpreting maps, users may focus on their activities and the environment around them. Interfaces may be designed that let users search for information simply by pointing in a direction. Database queries can be created from GPS location and compass direction data. Users can get guidance to locations through pointing gestures, spatial sound and simple graphics. This article describes two studies testing prototype applications with multimodal user interfaces built on spatial audio, graphics and text. Tests show that users appreciated the applications for their ease of use, for being fun and effective to use, and for allowing users to interact directly with the environment rather than with abstractions of it. The multimodal user interfaces contributed significantly to the overall user experience.

    An integrated approach to rotorcraft human factors research

    As the potential of civil and military helicopters has increased, more complex and demanding missions in increasingly hostile environments have been required. Users, designers, and manufacturers have an urgent need for information about human behavior and function to create systems that take advantage of human capabilities without overloading them. Because there is a large gap between what is known about human behavior and the information needed to predict pilot workload and performance in the complex missions projected for pilots of advanced helicopters, Army and NASA scientists are actively engaged in human factors research at Ames. The research ranges from laboratory experiments to computational modeling, simulation evaluation, and in-flight testing. Information obtained in highly controlled but simpler environments generates predictions which can be tested in more realistic situations. These results are used, in turn, to refine theoretical models, provide the focus for subsequent research, and ensure operational relevance, while maintaining predictive advantages. The advantages and disadvantages of each type of research are described along with examples of experimental results.

    Mouse frontal cortex mediates additive multisensory decisions

    The brain can combine auditory and visual information to localize objects. However, the cortical substrates underlying audiovisual integration remain uncertain. Here, we show that mouse frontal cortex combines auditory and visual evidence; that this combination is additive, mirroring behavior; and that it evolves with learning. We trained mice in an audiovisual localization task. Inactivating frontal cortex impaired responses to either sensory modality, while inactivating visual or parietal cortex affected only responses to visual stimuli. Recordings from >14,000 neurons indicated that after task learning, activity in the anterior part of frontal area MOs (secondary motor cortex) additively encodes visual and auditory signals, consistent with the mice's behavioral strategy. An accumulator model applied to these sensory representations reproduced the observed choices and reaction times. These results suggest that frontal cortex adapts through learning to combine evidence across sensory cortices, providing a signal that is transformed into a binary decision by a downstream accumulator.
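    The accumulator model referred to in the abstract can be illustrated with a minimal bounded drift-diffusion sketch (a generic illustration, not the authors' fitted model; the weights, bound, and noise level are made-up parameters). Auditory and visual evidence are combined additively into a drift term, and noisy integration to a bound yields both a binary choice and a reaction time:

```python
import random

def accumulate(aud, vis, w_aud=1.0, w_vis=1.0, bound=30.0, noise=1.0,
               dt=1.0, max_steps=10_000, rng=None):
    """Additively combine auditory and visual evidence, then integrate to a bound.
    Returns (choice, reaction_time): choice is +1 at the upper bound, -1 at the lower."""
    rng = rng or random.Random(0)  # fixed seed so the example is reproducible
    drift = w_aud * aud + w_vis * vis  # additive combination, as in the abstract
    x = 0.0
    for step in range(1, max_steps + 1):
        x += drift * dt + noise * rng.gauss(0.0, 1.0)
        if x >= bound:
            return +1, step * dt
        if x <= -bound:
            return -1, step * dt
    return 0, max_steps * dt  # no decision within the time limit

# Both modalities weakly favor the same side, so the decision reaches the upper bound.
choice, rt = accumulate(aud=0.5, vis=0.5)
```

    Stronger combined evidence raises the drift, producing faster and more reliable decisions, which is how such models link sensory representations to choices and reaction times.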

    A Comparison Of Attentional Reserve Capacity Across Three Sensory Modalities

    There are two theoretical approaches to the nature of attentional resources. One proposes a single, flexible pool of cognitive resources; the other posits that there are multiple resources. This study was designed to systematically examine whether there is evidence for multiple resource theory, using a counting task consisting of visual, auditory, and tactile signals across two experiments. The goal of the first experiment was the validation of a multi-modal secondary loading task. Thirty-two participants performed nine variations of a multi-modal counting task incorporating three modalities and three demand levels. Performance and subjective ratings of workload were measured for each of the nine conditions of the within-subjects design. Significant differences were found on the basis of task demand level, irrespective of modality. Moreover, the perceived workload associated with the tasks differed by task demand level and not by modality. These results suggest the counting task is a valid means of imposing task demands across multiple modalities. The second experiment used the same counting task as a secondary load to a primary visual monitoring task, the system monitoring component of the Multi-Attribute Task Battery (MATB). The experimental conditions consisted of performing the system monitoring task alone as a reference and performing system monitoring combined with visual, auditory, or tactile counting. Thirty-one participants were exposed to all four experimental conditions in a within-subjects design. Performance on the primary and secondary tasks was measured, and subjective workload was assessed for each condition. Participants were instructed to maintain performance on the primary task irrespective of condition, which they did effectively. Secondary task performance for the visual-auditory and visual-tactile conditions was significantly better than for the visual-visual dual-task condition.
Subjective workload ratings were also consistent with the performance measures. These results clearly indicate that there is less interference for cross-modal tasks than for intramodal tasks, adding evidence for multiple resource theory. Finally, these results have practical implications that include human performance assessment for display and alarm development, assessment of attentional reserve capacity for adaptive automation systems, and training.

    Visual selective attention is equally functional for individuals with low and high working memory capacity: Evidence from accuracy and eye movements

    Selective attention and working memory capacity (WMC) are related constructs, but debate about the manner in which they are related remains active. One elegant explanation of variance in WMC is that the efficiency of filtering irrelevant information is the crucial determining factor, rather than differences in capacity per se. We examined this hypothesis by relating WMC (as measured by complex span tasks) to accuracy and eye movements during visual change detection tasks with different degrees of attentional filtering and allocation requirements. Our results did not indicate strong filtering differences between high- and low-WMC groups, and where differences were observed, they were counter to those predicted by the strongest attentional filtering hypothesis. Bayes factors indicated evidence favoring positive or null relationships between WMC and correct responses to unemphasized information, as well as between WMC and the time spent looking at unemphasized information. These findings are consistent with the hypothesis that individual differences in storage capacity, not only filtering efficiency, underlie individual differences in working memory.

    Development of tests for measurement of primary perceptual-motor performance

    Tests were developed for measuring primary perceptual-motor performance in order to assess the effects of the space environment on human performance.

    Psychophysical and electrophysiological investigations into the mechanisms supporting everyday communication

    Thesis (Ph.D.)--Boston University
    Humans solve the so-called "cocktail party problem" with relative ease, and are generally able to selectively direct their attention to process and recall acoustic information from one sound source in the presence of other irrelevant stimuli that are competing for cognitive resources. This ability depends on a variety of factors, including volitional control of selective attention, the ability to store information in memory for recall at a later time, and the ability to integrate information across multiple sensory modalities. Here, psychophysical and electroencephalography (EEG) experiments were conducted to study these three factors. The effects of selective attention on cortical and subcortical structures were examined using EEG recorded during a dichotic listening task. Cortical potentials showed robust effects of attention (demonstrated by the ability to classify responses to attended and ignored speech based on short segments of EEG responses); however, potentials originating in the brainstem did not, even though stimuli were engineered to maximize the separability of the neural representation of the competing sources in the auditory periphery and thus the possibility of seeing attention-specific modulation of subcortical responses. In another study, the relationship between object formation and memory processing was explored in a psychophysical experiment examining how sequences of nonverbal auditory stimuli are stored and recalled from short-term memory. The results of this study support the notion that auditory short-term memory, like visual short-term memory, can be explained in terms of object formation. In particular, short-term memory performance is affected by stream formation and the perceptual costs involved in switching attention between multiple streams. Finally, effects of audiovisual integration were studied in a psychophysical experiment using complex speech-like stimuli (zebra finch songs).
Results show that visual cues improve performance differently depending on whether target identification is limited by energetic masking or by object formation difficulties and uncertainty about when a target occurs. Together, these studies support the idea that everyday communication depends on an interplay of many mechanisms including attention, memory, and multisensory integration, each of which is influenced by perceptual organization.

    Vision for Augmented Humans

