10 research outputs found

    Attentional processes in auditory discriminations

    A set of experiments is described which assessed the ability of human observers to monitor two earphone channels in order to perform one or two independent frequency discriminations. Performance (d′) was significantly poorer with dichotic stimulus presentation than in monaural control conditions. A detailed analysis of the data suggested that two factors were involved in the dichotic performance deficits. The first factor was a limited ability of the observers to perceptually separate the stimuli presented to the two earphone channels. Under certain stimulus conditions, this channel-separation factor was strong enough to produce a low performance ceiling that overshadowed additional deficits caused by the second factor. This second factor was a limited ability to time-share between two channels, i.e., to perform the frequency discriminations with the same efficiency as when attention was directed toward a single channel. When overshadowed by the performance ceiling imposed by channel-separation limitations, the time-sharing limitations could be examined only by an analysis of performance conditional upon contralateral stimulus events. Providing the observers with stimulus parameters that made the channels more dissimilar (i.e., spectral or temporal separation) improved performance when attention was directed to only one channel. Sequential stimulus presentation was partially effective in increasing performance in selective- (or focused-) attention tasks, owing to a unidirectional pattern of temporal interference consistent with data on pitch-recognition analogues of masking phenomena. The present investigation appears to make a significant step toward resolving an apparent controversy between several groups of researchers with regard to the nature of performance deficits in two-channel signal-detection paradigms.
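    The sensitivity index d′ used above is the standard signal-detection measure: the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch (the specific rates below are illustrative, not taken from the study):

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate: float, fa_rate: float) -> float:
        """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

        Both rates must lie strictly between 0 and 1 (extreme values are
        usually corrected before computing d' in practice).
        """
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        return z(hit_rate) - z(fa_rate)

    # Illustrative values: a hit rate of 0.84 and false-alarm rate of 0.16
    # give d' close to 2.0, i.e., good but imperfect discrimination.
    print(round(d_prime(0.84, 0.16), 2))
    ```

    A drop in d′ under dichotic presentation, as reported above, means the same observers discriminated the frequencies less reliably than in the monaural controls.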

    Tinnitus and Neural Activity

    No full text

    Analysis of army-wide hearing conservation database for hearing profiles related to crew-served and individual weapon systems

    No full text
    Damage-risk criteria (DRC) for noise exposures are designed to protect 95% of the exposed population from hearing injuries caused by those exposures. The current DRC used by the US military follows OSHA guidelines for continuous noise. The current military DRC for impulse exposures follows the recommendations of the National Academy of Sciences - National Research Council Committee on Hearing, Bioacoustics, and Biomechanics (CHABA) and is contained in the current military standard, MIL-STD-1474D "Noise Limits." Arguing that the MIL-STD limits for impulse exposure are too stringent, various individuals have proposed that the DRC for exposure to high-level impulses be relaxed. The purpose of this study is to evaluate the current hearing status of US Army Soldiers, some of whom can, by their military occupational specialties (MOS), reasonably be expected to be routinely exposed to high-level impulses from weapon systems. The Defense Occupational and Environmental Health Readiness System - Hearing Conservation (DOEHRS-HC) was queried for the hearing status of enlisted Soldiers in 32 different MOSs. The results indicated that fewer than 95% of the Soldiers in the DOEHRS-HC database were classified as having normal hearing. In other words, the goal of the DRC used for limiting noise injuries (from continuous and impulse exposures) was not met: the criteria were not stringent enough to prevent hearing injuries in all but the most susceptible Soldiers. These results suggest that the current military noise DRC should not be relaxed.
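    The study's test reduces to a simple proportion check: does the fraction of exposed Soldiers with normal hearing reach the 95% protection goal the DRC is designed to achieve? A minimal sketch (the counts below are hypothetical, not figures from the DOEHRS-HC query):

    ```python
    def drc_goal_met(n_normal_hearing: int, n_exposed: int,
                     target: float = 0.95) -> tuple[float, bool]:
        """Compare the observed proportion with normal hearing against the
        DRC protection goal (95% of the exposed population by default).

        Returns (proportion_normal, goal_met).
        """
        proportion = n_normal_hearing / n_exposed
        return proportion, proportion >= target

    # Hypothetical MOS cohort: 880 of 1000 exposed Soldiers classified
    # as having normal hearing -> the 95% goal is not met.
    p, ok = drc_goal_met(880, 1000)
    print(p, ok)
    ```

    Under the study's logic, any cohort for which `goal_met` is false is evidence that the existing criteria are already insufficiently protective, and hence should not be relaxed further.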