Reconfigurable Auditory-Visual Display
System and method for visual and audible communication between a central operator and N mobile communicators (N ≥ 2), including an operator transceiver and interface configured to receive and display, for the operator, visually perceptible and audibly perceptible signals from each of the mobile communicators. The interface (1) presents the audible signal from each communicator as if it were received from a distinct location relative to the operator and (2) allows the operator to select, assign priority to, and display the visual and audible signals received from a specified communicator. Each communicator has an associated signal transmitter configured to transmit at least one of the visual signals and the audible signal associated with the communicator, where at least one of the signal transmitters includes at least one sensor that senses and transmits a sensor value representing a selected environmental or physiological parameter associated with the communicator.
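The per-communicator spatial presentation described above can be illustrated with a minimal constant-power stereo panning sketch. The patent does not specify a rendering method, so the panning law, azimuth spread, and all names here are illustrative assumptions:

```python
import numpy as np

def pan_mix(signals):
    """Pan N mono streams to distinct stereo positions so each talker is
    heard from a different apparent location (constant-power pan law;
    an illustrative sketch, not the patent's actual rendering)."""
    n = len(signals)
    pans = np.linspace(0.0, 1.0, n)          # 0 = hard left, 1 = hard right
    length = max(len(s) for s in signals)
    mix = np.zeros((length, 2))
    for s, p in zip(signals, pans):
        gl, gr = np.cos(p * np.pi / 2), np.sin(p * np.pi / 2)  # constant power
        mix[:len(s), 0] += gl * s
        mix[:len(s), 1] += gr * s
    return mix

# Two hypothetical communicator streams, panned hard left and hard right
stereo = pan_mix([np.ones(4), 0.5 * np.ones(4)])
```

With more communicators, `np.linspace` spreads them evenly between the two extremes, which is one simple way to keep each stream spatially separable.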
Fast Modal Sounds with Scalable Frequency-Domain Synthesis
Audio rendering of impact sounds, such as those caused by falling objects or explosion debris, adds realism to interactive 3D audiovisual applications, and can be convincingly achieved using modal sound synthesis. Unfortunately, mode-based computations can become prohibitively expensive when many objects, each with many modes, are impacted simultaneously. We introduce a fast sound synthesis approach, based on short-time Fourier Transforms, that exploits the inherent sparsity of modal sounds in the frequency domain. For our test scenes, this "fast mode summation" can give speedups of 5-8 times compared to a time-domain solution, with slight degradation in quality. We discuss different reconstruction windows, affecting the quality of impact sound "attacks". Our Fourier-domain processing method allows us to introduce a scalable, real-time, audio processing pipeline for both recorded and modal sounds, with auditory masking and sound source clustering. To avoid abrupt computation peaks, such as during the simultaneous impacts of an explosion, we use crossmodal perception results on audiovisual synchrony to effect temporal scheduling. We also conducted a pilot perceptual user evaluation of our method. Our implementation results show that we can treat complex audiovisual scenes in real time with high quality.
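The modal model underlying this approach represents each impact sound as a sum of exponentially damped sinusoids. A minimal time-domain sketch of that baseline (the paper's contribution is evaluating the modes in the Fourier domain instead; the mode parameters below are made up for illustration):

```python
import numpy as np

def modal_impact(freqs, dampings, amps, duration=1.0, sr=44100):
    """Time-domain modal synthesis: sum of damped sinusoids.
    This is the reference computation the paper speeds up via STFT-domain
    summation; frequencies/dampings/amplitudes are illustrative."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs, dampings, amps):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out

# Hypothetical modes for a small struck object
sound = modal_impact(freqs=[440.0, 982.0, 1760.0],
                     dampings=[6.0, 9.0, 14.0],
                     amps=[1.0, 0.5, 0.25])
```

Each mode contributes a narrow peak in the spectrum, which is the sparsity the frequency-domain method exploits: summing a few spectral bins per mode is cheaper than summing every time-domain sample per mode.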
Information Presentation: Human Research Program - Space Human Factors and Habitability, Space Human Factors Engineering Project
The goal of the Information Presentation Directed Research Project (DRP) is to address design questions related to the presentation of information to the crew. The major areas of work, or subtasks, within this DRP are: 1) Displays, 2) Controls, 3) Electronic Procedures and Fault Management, and 4) Human Performance Modeling. This DRP is a collaborative effort between researchers at Johnson Space Center and Ames Research Center.
Information Presentation
The goal of the Information Presentation Directed Research Project (DRP) is to address design questions related to the presentation of information to the crew on flight vehicles, surface landers and habitats, and during extra-vehicular activities (EVA). Designers of displays and controls for exploration missions must be prepared to select the text formats, label styles, alarms, electronic procedure designs, and cursor control devices that provide for optimal crew performance on exploration tasks. The major areas of work, or subtasks, within the Information Presentation DRP are: 1) Controls, 2) Displays, 3) Procedures, and 4) EVA Operations.
Designing informative warning signals: Effects of indicator type, modality, and task demand on recognition speed and accuracy
An experiment investigated the assumption that natural indicators, which exploit existing learned associations between a signal and an event, make more effective warnings than previously unlearned symbolic indicators. Signal modality (visual, auditory) and task demand (low, high) were also manipulated. Warning effectiveness was indexed by accuracy and reaction time (RT) recorded during training and dual-task test phases. Thirty-six participants were trained to recognize 4 natural and 4 symbolic indicators, either visual or auditory, paired with critical incidents from an aviation context. As hypothesized, accuracy was greater and RT was faster in response to natural indicators during the training phase. This pattern of responding was upheld in test-phase conditions with respect to accuracy, but was observed in RT only in test-phase conditions involving high demand and the auditory modality. Using the experiment as a specific example, we argue for the importance of considering the cognitive contribution of the user (viz., prior learned associations) in the warning design process. Drawing on semiotics and cognitive psychology, we highlight the indexical nature of so-called auditory icons or natural indicators and argue that the cogniser is an indispensable element in the tripartite nature of signification.
Depth cues and perceived audiovisual synchrony of biological motion
Due to their different propagation times, visual and auditory signals from external events arrive at the human sensory receptors with a disparate delay. This delay varies consistently with distance but, despite such variability, most events are perceived as synchronous. There is, however, contradictory data and there are contradictory claims regarding the existence of compensatory mechanisms for distance in simultaneity judgments.
Principal Findings:
In this paper we used familiar audiovisual events – a visual walker and footstep sounds – and manipulated the number of depth cues. In a simultaneity judgment task we presented a large range of stimulus onset asynchronies corresponding to distances of up to 35 meters. We found an effect of distance on the simultaneity estimates, with greater distances requiring larger stimulus onset asynchronies and vision always leading. This effect was stronger when both visual and auditory cues were present but, interestingly, was absent when depth cues were impoverished.
Significance:
These findings reveal that there should be an internal mechanism to compensate for audiovisual delays, which critically depends on the depth information available.
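The physical delay such a mechanism would compensate for follows directly from the speed of sound; light's travel time is negligible at these scales. A minimal sketch (the 343 m/s figure assumes air at roughly room temperature):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed)

def audio_lag_ms(distance_m):
    """Milliseconds by which sound arrives after light for an event
    at distance_m; light's ~0.1 µs travel time is ignored."""
    return distance_m / SPEED_OF_SOUND * 1000.0

lag_at_35m = audio_lag_ms(35.0)  # ≈ 102 ms at the study's maximum distance
```

At the study's maximum distance of 35 m the natural audio lag is on the order of 100 ms, well within the range that simultaneity judgments can detect, which is why distance-dependent compensation is a meaningful hypothesis.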