
    Building and Testing of an Adaptive Optics System for Optical Microscopy

    Adaptive optics (AO), the technology of compensating for wavefront distortion, can significantly improve the performance of existing optical systems. An adaptive optics system is used to correct wavefront distortion caused by imperfections in optical elements and in the environment. It was originally developed for military and astronomy applications to mitigate the adverse effects of wavefront distortion caused by turbulence in Earth's atmosphere. With a closed-loop AO system, distortions caused by the environment can be reduced dramatically. As the technology matures, AO systems can be integrated into a wide variety of optical systems to improve their performance. The goal of this project is to build such an AO system that can be integrated into high-resolution optical microscopy. A Thorlabs Adaptive Optics Kit was set up: a Shack-Hartmann wavefront sensor, a deformable mirror, and other necessary optical hardware were combined on a breadboard, and control software was implemented to form the feedback loop.
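    As a rough illustration of the closed-loop principle described in this abstract, the sketch below shows one common way such a feedback loop can be structured: estimate wavefront slopes from the Shack-Hartmann spot displacements, reconstruct a correction through a pre-calibrated interaction matrix, and apply it to the deformable mirror with a simple integrator controller. All names, array shapes, and the gain value are illustrative assumptions, not the Thorlabs kit's actual API.

```python
import numpy as np

# Hypothetical closed-loop AO control sketch (not the Thorlabs kit API).
# Assumes a pre-measured interaction matrix M mapping deformable-mirror (DM)
# actuator commands to Shack-Hartmann slope measurements: slopes ~= M @ commands.

def build_reconstructor(interaction_matrix, cond_cutoff=1e-2):
    """Least-squares reconstructor via a truncated pseudo-inverse."""
    return np.linalg.pinv(interaction_matrix, rcond=cond_cutoff)

def closed_loop_step(slopes, commands, reconstructor, gain=0.3):
    """One iteration of a simple integrator controller.

    slopes        : measured Shack-Hartmann x/y spot displacements (flattened)
    commands      : current DM actuator commands
    reconstructor : pseudo-inverse of the interaction matrix
    gain          : loop gain < 1 for stability
    """
    correction = reconstructor @ slopes      # wavefront error -> actuator space
    return commands - gain * correction      # integrate the correction

# Example with synthetic data: 128 slope measurements, 37 actuators.
rng = np.random.default_rng(0)
M = rng.normal(size=(128, 37))               # fake interaction matrix
R = build_reconstructor(M)
cmd = np.zeros(37)
true_aberration = rng.normal(size=37)
for _ in range(20):
    slopes = M @ (true_aberration + cmd)     # residual wavefront seen by the sensor
    cmd = closed_loop_step(slopes, cmd, R)
print("residual RMS:", np.linalg.norm(true_aberration + cmd))
```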

    Timing of finger tapping to frequency modulated acoustic stimuli

    This study examined the timing of synchronous finger tapping to continuous frequency modulation (FM) and to click trains. Tapping to click trains was found to anticipate the acoustic stimulus by a statistically significant margin. Tapping to continuous FM occurred before the instantaneous frequency rose through its mean value (i.e., at zero phase of the sinusoidal FM). The anticipation of the zero phase of the FM was similar in magnitude to the anticipation of the click stimuli. However, there was a systematic departure from this timing when the FM depth was varied, the cause of which is unclear. The perceived timing of acoustic stimuli will influence the timing of motor responses to those stimuli. These results may therefore be relevant to the timing of perceptual centres of acoustic stimuli, including speech.
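    A minimal sketch of the timing reference used above, assuming the FM is sinusoidal with instantaneous frequency f(t) = fc + Δf·sin(2π·fm·t): the zero-phase points are the times at which the instantaneous frequency rises through its mean value (t = k/fm), and tap asynchrony can be expressed as the signed offset of each tap from the nearest such point (negative values indicating anticipation). Function names, the tap times, and the 2 Hz modulation rate are illustrative, not the study's parameters.

```python
import numpy as np

def zero_phase_times(fm_rate, duration):
    """Times at which a sinusoidal FM's instantaneous frequency rises
    through its mean value, i.e. zero phase of sin(2*pi*fm_rate*t)."""
    n_cycles = int(np.floor(duration * fm_rate))
    return np.arange(n_cycles + 1) / fm_rate

def tap_asynchronies(tap_times, fm_rate, duration):
    """Signed offset of each tap from the nearest zero-phase point.
    Negative values indicate anticipation of the reference point."""
    refs = zero_phase_times(fm_rate, duration)
    idx = np.argmin(np.abs(tap_times[:, None] - refs[None, :]), axis=1)
    return tap_times - refs[idx]

# Example: taps slightly ahead of each cycle of a 2 Hz FM (500 ms period).
taps = np.array([0.48, 0.97, 1.46, 1.95])
print(tap_asynchronies(taps, fm_rate=2.0, duration=2.5))   # ~ -0.02 .. -0.05 s
```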

    Modelling visual change detection and identification under free viewing conditions

    We examined whether the abilities of observers to perform an analogue of a real-world monitoring task involving detection and identification of changes to items in a visual display could be explained better by models based on signal detection theory (SDT) or high threshold theory (HTT). Our study differed from most previous studies in that observers were allowed to inspect the initial display for 3 s, simulating the long inspection times typical of natural viewing, and their eye movements were not constrained. For the majority of observers, combined change detection and identification performance was best modelled by an SDT-based process that assumed that memory resources were distributed across all eight items in our displays. Some observers required an additional parameter to allow for occasionally making random guesses at the identities of changes they had missed. However, the performance of a small proportion of observers was best explained by an HTT-based model that allowed for lapses of attention.
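    One common way to instantiate the kind of SDT-based, distributed-resource model referred to above (not necessarily the authors' exact formulation) is a max-rule observer whose per-item sensitivity shrinks as memory is spread over more items, here via a simple sample-size assumption d'item = d'total/√N. The parameter values and decision rule below are illustrative.

```python
import numpy as np

def simulate_change_detection(d_total=3.0, n_items=8, criterion=1.5,
                              n_trials=20000, seed=0):
    """Monte-Carlo sketch of a max-rule SDT observer whose memory resource
    is split across n_items (per-item d' = d_total / sqrt(n_items))."""
    rng = np.random.default_rng(seed)
    d_item = d_total / np.sqrt(n_items)

    # Change trials: one item carries a signal of strength d_item.
    noise = rng.normal(size=(n_trials, n_items))
    change_evidence = noise.copy()
    change_evidence[:, 0] += d_item
    hits = np.mean(change_evidence.max(axis=1) > criterion)

    # No-change trials: evidence is noise only.
    false_alarms = np.mean(noise.max(axis=1) > criterion)
    return hits, false_alarms

print(simulate_change_detection())
```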

    Sound localisation during illusory self-rotation

    Auditory elevation localisation was investigated under conditions of illusory self-rotation (i.e., vection) induced by movement of wide-field visual stimuli around participants' z-axes. Contrary to previous findings which suggest that auditory cues to sound-source elevation are discounted during vection, we found little evidence that vection affects judgements of source elevation. Our results indicate that the percept of auditory space during vection is generally consistent with the available head-centred auditory cues to source elevation. Auditory information about the head-centred location of a source appears to be integrated, without modification, with visual information about head motion to determine the perceived exocentric location of the source.

    Metacognitive monitoring and control in visual change detection: Implications for situation awareness and cognitive control

    Metacognitive monitoring and control of situation awareness (SA) are important for a range of safety-critical roles (e.g., air traffic control, military command and control). We examined the factors affecting these processes using a visual change detection task that included representative tactical displays. SA was assessed by asking novice observers to detect changes to a tactical display. Metacognitive monitoring was assessed by asking observers to estimate the probability that they would correctly detect a change, either after study of the display and before the change (judgement of learning; JOL) or after the change and detection response (judgement of performance; JOP). In Experiment 1, observers failed to detect some changes to the display, indicating imperfect SA, but JOPs were reasonably well calibrated to objective performance. Experiment 2 examined JOLs and JOPs in two task contexts: with study-time limits imposed by the task or with self-pacing to meet specified performance targets. JOPs were well calibrated in both conditions, as were JOLs for high performance targets. In summary, observers had limited SA but good insight into their performance and learning for high performance targets, and allocated study time appropriately.
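    Calibration of JOPs and JOLs, as discussed above, is commonly summarised by comparing the mean judged probability of success with the proportion of correct responses; the sketch below uses that simple over/underconfidence index, which may differ from the exact measure used in the study. All numbers are illustrative.

```python
import numpy as np

def calibration_index(judged_probabilities, outcomes):
    """Mean judged probability of success minus mean proportion correct.
    Positive values indicate overconfidence, negative values underconfidence."""
    judged = np.asarray(judged_probabilities, dtype=float)
    correct = np.asarray(outcomes, dtype=float)   # 1 = change detected, 0 = missed
    return judged.mean() - correct.mean()

# Illustrative data: judgements of performance (JOPs) and detection outcomes.
jops = [0.9, 0.8, 0.6, 0.7, 0.95, 0.5]
hits = [1, 1, 0, 1, 1, 0]
print(f"calibration index: {calibration_index(jops, hits):+.3f}")   # ~ +0.075
```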

    Variability in the headphone-to-ear-canal transfer function

    Headphone-to-ear-canal transfer functions (HpTFs) for 20 headphone placements were measured for each ear of three participants and an acoustic manikin. Head-related transfer functions (HRTFs) were measured for nine sound-source locations within a 14.5° radius of each of eight representative locations. Noises were convolved with these functions and passed through a cochlear filter model to estimate cochlear excitation. The variability of the magnitudes of the filtered HpTFs was much less than the variability of the magnitudes of the unfiltered HpTFs. It was also considerably less than the variability of the magnitudes of the filtered HRTFs. In addition, the variability of the group delays of the HpTFs for the three human participants was considerably less than the minimum discriminable interaural time difference. It follows that much of the information in HRTFs that could provide a cue to sound-source location will not be masked by the variability of HpTFs across headphone placements. The spatial fidelity of an individualized virtual audio display, therefore, will not necessarily be compromised by variability in HpTFs.
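    A minimal sketch of the analysis pipeline described above, under simplifying assumptions: each measured impulse response is reduced to a power spectrum, smoothed in roughly ERB-wide bands as a crude stand-in for a cochlear filter model, and across-placement variability is summarised as the standard deviation of band levels in dB. The ERB formula is the standard Glasberg and Moore approximation; the array shapes, band spacing, and synthetic data are illustrative.

```python
import numpy as np

def erb(f_hz):
    """Equivalent rectangular bandwidth (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def band_levels(impulse_response, fs, centre_freqs):
    """Power in roughly ERB-wide Gaussian bands around each centre frequency,
    as a crude stand-in for excitation from a cochlear filter model."""
    spectrum = np.abs(np.fft.rfft(impulse_response)) ** 2
    freqs = np.fft.rfftfreq(len(impulse_response), d=1.0 / fs)
    levels = []
    for fc in centre_freqs:
        w = np.exp(-0.5 * ((freqs - fc) / erb(fc)) ** 2)   # Gaussian, ~ERB wide
        levels.append(10 * np.log10(np.sum(w * spectrum) / np.sum(w)))
    return np.array(levels)

def placement_variability(impulse_responses, fs, centre_freqs):
    """Across-placement standard deviation of band levels (dB)."""
    all_levels = np.array([band_levels(ir, fs, centre_freqs)
                           for ir in impulse_responses])
    return all_levels.std(axis=0)

# Example with synthetic 'HpTF' impulse responses for 20 placements.
fs = 48000
rng = np.random.default_rng(1)
irs = rng.normal(size=(20, 1024)) * np.exp(-np.arange(1024) / 64)
fcs = np.geomspace(200, 16000, 30)
print(placement_variability(irs, fs, fcs).round(2))
```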

    Phase effects in forward masking of the compound action potential: A comparison of responses to stimulus and distortion frequencies

    When a complex stimulus is presented, new frequencies (distortion products, DPs) are generated within the cochlea. The most intense DPs are lower in frequency than the stimulus tones (primaries). It is not clear whether the relative phase of stimuli is encoded by neural channels tuned to the primaries or by channels tuned to the DPs. We estimated the response of auditory nerve fibres tuned to each of these channels as a function of the relative phase of harmonic stimuli. The compound action potential (CAP) evoked by probes at the primary or distortion frequencies was masked by harmonic 2-tone maskers and cochlear-generated DPs. The degree of masking reflected the response to the masker of fibres tuned to the probe. Changes in the relative phase of the primaries resulted in a large modulation of the response of fibres tuned to the DPs. Except for a primary frequency ratio of 1:2, the response of fibres tuned to the primaries was only shallowly modulated by changes in relative phase. However, the level of response to the DPs was much lower than the response to the stimulus tones.

    Spectral hyperacuity in the cat: neural response to frequency modulated tone pairs

    When two tones are presented to the ear, distortion products are generated which are lower in frequency than the presented (primary) tones. We studied the responses of neurons from the inferior colliculus of the cat to small frequency modulations (FM) of primary tone combinations which gave rise to distortion products within the neuron's response area. Neural discharges were modulated in response to the FM of the distortion product in a similar manner to the modulation of discharges by FM of a pure tone to which these neurons were sensitive. However, very shallow, neurally insignificant FM of high-frequency primaries could be transposed into significant FM of lower-frequency distortion products. Because the sensitivity of a low-frequency neuron to a transposed FM exceeds that of neurons sensitive to a single tone with the same FM, the effect is termed hyperacuity.
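    The transposition effect described above has a simple arithmetic core, illustrated below for the cubic difference tone fDP = 2·f1 − f2: modulating f1 by ±Δf modulates the DP by ±2Δf, and because the DP lies at a lower frequency than the primaries, the same excursion is a much larger fraction of the local carrier frequency. The frequencies and modulation depth are illustrative only; the study's actual stimulus parameters are not given in this abstract.

```python
# Illustrative numbers only; not the study's actual stimulus parameters.
f1, f2 = 6000.0, 10000.0        # primary tones (Hz)
df = 10.0                       # FM excursion applied to f1 (Hz)

f_dp = 2 * f1 - f2              # cubic difference tone: 2000 Hz
df_dp = 2 * df                  # excursion transposed onto the DP: 20 Hz

print(f"relative FM of primary f1     : {df / f1:.3%}")       # 0.167%
print(f"relative FM of DP (2*f1 - f2) : {df_dp / f_dp:.3%}")   # 1.000%
```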

    Data for Fitts' task with audio feedback

    Dataset for McAnally & Wallis, Effects of Auditory Feedback on Visually-guided Movement in Real and Virtual Space.

    Spectral integration time of the auditory localisation system

    For the elevation and front-versus-back hemifield of a sound source to be accurately determined, the sound must contain a broad range of frequencies. Experiment 1 of this study examined the spectral integration time of the auditory localisation system by measuring the accuracy with which frequency-modulated (FM) tones of modulation periods ranging from 0.5 to 200 ms can be localised. For each of the four participants, judgements of sound-source elevation and front-back hemifield were most accurate for a modulation period of 5 ms. Accuracy levels for the 5 ms modulation period approached those for a pink-noise stimulus. This suggests that the spectral integration time of the auditory localisation system is around 5 ms. Supporting evidence for this conclusion was sought in experiment 2, in which two participants localised noise stimuli that had magnitude spectra identical to those of 5 ms equivalent-rectangular-duration samples of the FM tones from experiment 1. For both participants, functions relating localisation error measures (i.e., elevation error and frequency of front-back confusion) to modulation period for spectrally matched noises were similar to those for FM tones.
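    A minimal sketch of the two stimulus types referred to above, with illustrative parameters (the abstract does not give the carrier frequency, FM depth, or sample rate): a sinusoidally frequency-modulated tone with a chosen modulation period, and a spectrally matched noise built by keeping a tone segment's magnitude spectrum and randomising its phase.

```python
import numpy as np

def fm_tone(f_carrier, f_dev, mod_period_s, dur_s, fs):
    """Sinusoidally frequency-modulated tone.
    Instantaneous frequency: f_carrier + f_dev * sin(2*pi*t/mod_period_s)."""
    t = np.arange(int(dur_s * fs)) / fs
    fm_rate = 1.0 / mod_period_s
    phase = 2 * np.pi * (f_carrier * t
                         - f_dev / (2 * np.pi * fm_rate) * np.cos(2 * np.pi * fm_rate * t))
    return np.sin(phase)

def spectrally_matched_noise(segment, seed=0):
    """Noise with the same magnitude spectrum as `segment` but random phase."""
    rng = np.random.default_rng(seed)
    mag = np.abs(np.fft.rfft(segment))
    phase = rng.uniform(0, 2 * np.pi, size=mag.shape)
    phase[0] = 0.0                                  # keep the DC component real
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(segment))

fs = 44100
tone = fm_tone(f_carrier=4000, f_dev=2000, mod_period_s=0.005, dur_s=0.5, fs=fs)
noise = spectrally_matched_noise(tone[:int(0.005 * fs)])
```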