
    The effect of spatial–temporal audiovisual disparities on saccades in a complex scene

    In a previous study we quantified the effect of multisensory integration on the latency and accuracy of saccadic eye movements toward spatially aligned audiovisual (AV) stimuli within a rich AV background (Corneil et al. in J Neurophysiol 88:438–454, 2002). In those experiments both stimulus modalities belonged to the same object, and subjects were instructed to foveate that source, irrespective of modality. Under natural conditions, however, subjects have no prior knowledge as to whether visual and auditory events originate from the same object or from different objects in space and time. In the present experiments we included these possibilities by introducing various spatial and temporal disparities between the visual and auditory events within the AV background. Subjects had to orient fast and accurately to the visual target, thereby ignoring the auditory distractor. We show that this task poses a dichotomy, as it was quite difficult to produce fast responses (<250 ms) that were not aurally driven. Subjects therefore made many erroneous saccades. Interestingly, for the spatially aligned events the inability to ignore auditory stimuli produced shorter reaction times, but also more accurate responses, than for the unisensory target conditions. These findings, which demonstrate effective multisensory integration, are similar to those of the previous study, and the same multisensory integration rules apply (Corneil et al. in J Neurophysiol 88:438–454, 2002). In contrast, with increasing spatial disparity, integration gradually broke down, as the subjects’ responses became bistable: saccades were directed either to the auditory stimulus (fast responses) or to the visual stimulus (late responses). Interestingly, also in these cases responses were faster and more accurate than to the respective unisensory stimuli.
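
    The latency benefit reported above for spatially aligned AV stimuli typically shows up as a leftward shift of the saccade-latency distribution relative to the unisensory condition. The sketch below (Python, with invented latency samples and a hypothetical latency_cdf helper; not the authors' analysis) illustrates one way to quantify such a shift by comparing empirical cumulative latency distributions.

    import numpy as np

    def latency_cdf(latencies_ms, grid_ms):
        """Empirical cumulative distribution of saccade latencies (ms)."""
        latencies_ms = np.sort(np.asarray(latencies_ms, dtype=float))
        return np.searchsorted(latencies_ms, grid_ms, side="right") / latencies_ms.size

    # Invented latency samples for illustration only (ms).
    rng = np.random.default_rng(0)
    visual_only = rng.normal(230, 40, 200)   # unisensory visual target
    aligned_av = rng.normal(200, 35, 200)    # spatially aligned AV stimulus

    grid = np.arange(100, 401, 5)
    gain = latency_cdf(aligned_av, grid) - latency_cdf(visual_only, grid)
    print("peak CDF gain of aligned AV over V:", gain.max())  # > 0: AV responses are faster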

    Spectral-temporal processing of naturalistic sounds in monkeys and humans

    Human speech and vocalizations in animals are rich in joint spectrotemporal (S-T) modulations, wherein acoustic changes in both frequency and time are functionally related. In principle, the primate auditory system could process these complex dynamic sounds based on either an inseparable representation of S-T features or, alternatively, a separable representation. The separability hypothesis implies independent processing of spectral and temporal modulations. We collected comparative data on the S-T hearing sensitivity of humans and macaque monkeys to a wide range of broadband dynamic spectrotemporal ripple stimuli, employing a yes-no signal-detection task. Ripples were systematically varied as a function of density (spectral modulation frequency), velocity (temporal modulation frequency), or modulation depth, to cover a listener's full S-T modulation sensitivity, derived from a total of 87 psychometric ripple-detection curves. Audiograms were measured to control for normal hearing. We determined hearing thresholds, reaction-time distributions, and S-T modulation transfer functions (MTFs), both at the ripple-detection thresholds and at suprathreshold modulation depths. Our psychophysically derived MTFs are consistent with the hypothesis that monkeys and humans employ analogous perceptual strategies: S-T acoustic information is primarily processed separably. Singular value decomposition (SVD), however, revealed a small but consistent inseparable spectral-temporal interaction. Finally, SVD analysis of the known visual spatiotemporal contrast sensitivity function (CSF) highlights that human vision is space-time inseparable to a much larger extent than is the case for S-T sensitivity in hearing. Thus, the specificity with which the primate brain encodes natural sounds appears to be less strict than is required to adequately deal with natural images.
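
    The separability analysis described above can be made concrete with a small sketch (Python; toy data and an assumed separability_index helper, not the authors' code): the MTF is arranged as a density-by-velocity matrix, its singular values are computed, and the fraction of energy carried by the first singular value measures how well a fully separable (rank-1) model accounts for the surface.

    import numpy as np

    def separability_index(mtf):
        """Fraction of the variance of a density x velocity MTF matrix that is
        captured by its best rank-1 (fully separable) approximation, via SVD."""
        s = np.linalg.svd(np.asarray(mtf, dtype=float), compute_uv=False)
        return s[0] ** 2 / np.sum(s ** 2)

    # Toy MTF: rows = spectral densities (cyc/oct), columns = velocities (Hz).
    spectral = np.exp(-np.linspace(0.25, 4.0, 8) / 2.0)    # low-pass in density
    temporal = np.exp(-np.linspace(2.0, 64.0, 10) / 32.0)  # low-pass in velocity
    mtf = np.outer(spectral, temporal)                     # perfectly separable by construction

    print(separability_index(mtf))  # 1.0 for a separable surface; values < 1 indicate S-T interaction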

    Improved Horizontal Directional Hearing in Bone Conduction Device Users with Acquired Unilateral Conductive Hearing Loss

    We examined horizontal directional hearing in patients with acquired severe unilateral conductive hearing loss (UCHL). All patients (n = 12) had been fitted with a bone conduction device (BCD) to restore bilateral hearing. The patients were tested in the unaided (monaural) and aided (binaural) hearing conditions. Five listeners without hearing loss were tested as a control group while listening with a monaural plug and earmuff, or with both ears (binaural). We randomly varied stimulus presentation levels to assess whether listeners relied on the acoustic head-shadow effect (HSE) for horizontal (azimuth) localization. Moreover, to prevent sound localization on the basis of monaural spectral-shape cues from the head and pinna, subjects were exposed to narrowband (1/3-octave) noises. We demonstrate that the BCD significantly improved sound localization in 8/12 of the UCHL patients. Interestingly, under monaural hearing (BCD off), we observed fairly good unaided azimuth localization performance in 4/12 of the patients. Our multiple regression analysis shows that all patients relied on the ambiguous HSE for localization. In contrast, acutely plugged control listeners did not employ the HSE. Our data confirm and further extend results of recent studies on the use of sound localization cues in chronic and acute monaural listening.
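
    The regression logic mentioned above, relating localization responses to both target azimuth and the roved presentation level, can be sketched as follows. This is a minimal Python illustration with an assumed localization_regression helper and synthetic numbers, not the study's actual analysis; because level is varied independently of location, a non-zero level coefficient indicates that a listener maps loudness (the head-shadow effect) onto perceived azimuth.

    import numpy as np

    def localization_regression(response_az, target_az, level_db):
        """Fit response = gain * target azimuth + beta * level + bias (ordinary least squares).
        gain ~ 1 and beta ~ 0 indicate veridical localization; a non-zero beta
        suggests reliance on the level-dependent head-shadow effect."""
        X = np.column_stack([target_az, level_db, np.ones_like(target_az)])
        coef, *_ = np.linalg.lstsq(X, response_az, rcond=None)
        return dict(zip(["gain", "beta", "bias"], coef))

    # Synthetic example: a listener partly driven by sound level.
    rng = np.random.default_rng(1)
    target = rng.uniform(-75, 75, 120)        # deg
    level = rng.uniform(45, 65, 120)          # dB SPL, roved independently of location
    response = 0.4 * target + 1.5 * (level - 55) + rng.normal(0, 8, 120)
    print(localization_regression(response, target, level))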

    Recent developments in genetics and medically assisted reproduction : from research to clinical applications

    Two leading European professional societies, the European Society of Human Genetics and the European Society for Human Reproduction and Embryology, have worked together since 2004 to evaluate the impact of rapid research advances at the interface of assisted reproduction and genetics, including their application in clinical practice. In September 2016, the expert panel met for the third time. The topics discussed highlighted important issues covering the impact of expanded carrier screening, direct-to-consumer genetic testing, the voiding of the presumed anonymity of gamete donors by advanced genetic testing, advances in research into the genetic causes underlying male and female infertility, the use of massively parallel sequencing in preimplantation genetic testing and non-invasive prenatal screening, mitochondrial replacement in human oocytes, and, additionally, issues related to cross-generational epigenetic inheritance following IVF and germline genome editing. The resulting paper represents a consensus of both professional societies involved.

    Neural encoding of instantaneous kinematics of eye-head gaze shifts in monkey superior colliculus

    The midbrain superior colliculus is a crucial sensorimotor stage for programming and generating saccadic eye-head gaze shifts. Although it is well established that superior colliculus cells encode a neural command that specifies the amplitude and direction of the upcoming gaze-shift vector, there is controversy about the role of the firing-rate dynamics of these neurons during saccades. In our earlier work, we proposed a simple quantitative model that explains how the recruited superior colliculus population may specify the detailed kinematics (trajectories and velocity profiles) of head-restrained saccadic eye movements. Here we show that the same principles may apply to a wide range of saccadic eye-head gaze shifts with strongly varying kinematics, despite the substantial nonlinearities and redundancy involved in programming and executing rapid goal-directed eye-head gaze shifts to peripheral targets. Our findings could provide additional evidence for an important role of the superior colliculus in the optimal control of saccades.
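
    A minimal sketch of the linear ensemble-coding idea referred to above (each spike from a recruited collicular cell adds a fixed, site-specific mini-vector, so that the desired gaze trajectory is the running sum of all spike contributions) is given below in Python. Variable names and data shapes are assumptions for illustration; this is not the published model implementation.

    import numpy as np

    def cumulative_gaze_trajectory(spike_times, site_vectors, t_grid):
        """Desired gaze displacement over time as the cumulative sum of fixed spike vectors.

        spike_times  : list of 1-D arrays, spike times (s) of each recruited cell
        site_vectors : (n_cells, 2) array, mini-vector (deg) added per spike of each cell
        t_grid       : 1-D array of time points (s) at which to evaluate the trajectory
        """
        trajectory = np.zeros((t_grid.size, 2))
        for times, vec in zip(spike_times, site_vectors):
            # number of spikes this cell has fired up to each time point
            counts = np.searchsorted(np.sort(np.asarray(times)), t_grid, side="right")
            trajectory += counts[:, None] * np.asarray(vec, dtype=float)
        return trajectory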

    200 years Franciscus Cornelis Donders


    Microstimulation in a spiking neural network model of the midbrain superior colliculus.

    The midbrain superior colliculus (SC) generates a rapid saccadic eye movement to a sensory stimulus by recruiting a population of cells in its topographically organized motor map. Suprathreshold electrical microstimulation in the SC produces a normometric saccade whose vector is determined by the site of stimulation, with little effect of the stimulation parameters. Moreover, electrically evoked saccades (E-saccades) have kinematic properties that strongly resemble natural, visually evoked saccades (V-saccades). These findings support models in which the saccade vector is determined by a center-of-gravity computation over the activated neurons, while its trajectory and kinematics arise from downstream feedback circuits in the brainstem. Recent single-unit recordings, however, have indicated that the SC population also specifies the instantaneous kinematics. These results support an alternative model, in which the desired saccade trajectory, including its kinematics, follows from instantaneous summation of the movement effects of all SC spike trains. But how can this model be reconciled with the microstimulation results? Although microstimulation is thought to activate a large population of SC neurons, the mechanism through which this population response arises is unknown. We developed a spiking neural network model of the SC, in which microstimulation directly activates a relatively small set of neurons around the electrode tip, which subsequently sets up a large population response through lateral synaptic interactions. We show that through this mechanism the population drives an E-saccade with near-normal kinematics that are largely independent of the stimulation parameters. Only at very low stimulation intensities does the network recruit a population with low firing rates, resulting in abnormally slow saccades.
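
    The center-of-gravity scheme contrasted above with the spike-summation account can be written down compactly. The sketch below (Python; hypothetical names and illustrative numbers, not the model code) computes the saccade vector as the activity-weighted average of the movement vectors encoded at the recruited motor-map sites, which is why the evoked vector depends on the stimulation site rather than on the stimulation parameters.

    import numpy as np

    def center_of_gravity_vector(activity, site_vectors):
        """Saccade vector as the activity-weighted mean (center of gravity) of the
        movement vectors encoded at each recruited SC motor-map site."""
        activity = np.asarray(activity, dtype=float)          # (n_sites,)
        site_vectors = np.asarray(site_vectors, dtype=float)  # (n_sites, 2)
        return (activity[:, None] * site_vectors).sum(axis=0) / activity.sum()

    # Illustrative population: three recruited sites encoding nearby vectors.
    print(center_of_gravity_vector([5.0, 20.0, 5.0],
                                   [[8.0, 2.0], [10.0, 0.0], [12.0, -2.0]]))  # ~[10, 0] deg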
