Perceived Blur in Naturally Contoured Images Depends on Phase
Perceived blur is an important measure of image quality and clinical visual function. The magnitude of image blur varies across space and time under natural viewing conditions, owing to changes in pupil size and accommodation. Blur is frequently studied in the laboratory with a variety of digital filters, without comparing how the choice of filter affects blur perception. We examine the perception of image blur in synthetic images composed of contours whose orientation and curvature spatial properties matched those of natural images but whose blur could be directly controlled. The images were blurred by manipulating the slope of the amplitude spectrum, by Gaussian low-pass filtering, or by filtering with a Sinc function, which, unlike slope or Gaussian filtering, introduces periodic phase reversals similar to those in optically blurred images. For slope-filtered images, blur discrimination thresholds for over-sharpened images were extremely high, and perceived blur could not be matched with either Gaussian- or Sinc-filtered images, suggesting that directly manipulating the amplitude spectrum slope does not simulate the perception of blur. For Gaussian- and Sinc-blurred images, blur discrimination thresholds were dipper-shaped and were well fit by a simple variance discrimination model and by a contrast detection threshold model, although the latter required different contrast sensitivity functions for different types of blur. Blur matches between Gaussian- and Sinc-blurred images were used to test several models of blur perception and were in good agreement with models based on luminance slope, but not with spatial-frequency-based models. Collectively, these results show that the relative phases of image components, in addition to their relative amplitudes, determine perceived blur.
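The three manipulations differ only in the transfer function applied to the image spectrum. Below is a minimal NumPy sketch of each, as we read them from the abstract; the parameter names (delta_slope, sigma_f, f0) are illustrative, not taken from the paper. Note how only the Sinc transfer function changes sign, producing the contrast reversals that mimic optical defocus.

```python
# Minimal sketch of the three blur manipulations (assumes a grayscale float image).
import numpy as np

def radial_freq(shape):
    """Radial spatial-frequency magnitude (cycles/image) on an FFT grid."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.hypot(fy, fx)

def slope_blur(img, delta_slope):
    """Steepen the amplitude-spectrum slope; phase unchanged.
    delta_slope > 0 blurs (steeper spectrum); < 0 over-sharpens."""
    F = np.fft.fft2(img)
    f = radial_freq(img.shape)
    f[0, 0] = 1.0                        # avoid divide-by-zero at DC
    return np.real(np.fft.ifft2(F * f ** (-delta_slope)))

def gaussian_blur(img, sigma_f):
    """Gaussian low-pass: non-negative transfer function, no phase reversals."""
    F = np.fft.fft2(img)
    f = radial_freq(img.shape)
    return np.real(np.fft.ifft2(F * np.exp(-0.5 * (f / sigma_f) ** 2)))

def sinc_blur(img, f0):
    """Sinc transfer function: changes sign periodically, so some frequency
    bands are contrast-reversed -- the phase reversals noted above."""
    F = np.fft.fft2(img)
    f = radial_freq(img.shape)
    return np.real(np.fft.ifft2(F * np.sinc(f / f0)))
```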
Can Neuromorphic Computer Vision Inform Vision Science? Disparity Estimation as a Case Study
The primate visual system efficiently and effectively solves a multitude of tasks, from orientation detection to motion detection. The Computer Vision community is therefore beginning to implement algorithms that mimic the processing hierarchies of the primate visual system, in the hope of achieving flexible and robust artificial vision systems. Here, we reappropriate the neuroscience “borrowed” by the Computer Vision community and ask whether neuromorphic computer vision solutions may give us insight into the functioning of the primate visual system. Specifically, we implement a neuromorphic algorithm for disparity estimation and compare its performance against that of human observers. When its parameters are tuned to compete with non-neural approaches to disparity estimation on benchmark stereo image datasets, the algorithm greatly outperforms human subjects. Conversely, when the algorithm is implemented with biologically plausible receptive field sizes, spatial selectivity, phase tuning, and neural noise, its performance is directly relatable to that of human observers. The receptive field size and the number of spatial scales largely determine the range of spatial frequencies in which the algorithm operates successfully, while its phase tuning and neural noise determine its peak disparity sensitivity. When included, retino-cortical mapping strongly degrades disparity estimation in the model’s periphery, bringing human and algorithm performance closer still. Hence, a neuromorphic computer vision algorithm can be reappropriated to model human behavior, and can provide interesting insights into which aspects of human visual perception have been, or are yet to be, explained by vision science.
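To make concrete how phase tuning enters such a model: in phase-based (binocular energy) schemes, disparity can be read out from the interocular phase difference of quadrature Gabor responses. The sketch below is a generic 1-D phase-based estimator under that assumption, not the authors' implementation; the filter parameters f0 and sigma are illustrative.

```python
# Minimal 1-D sketch of phase-based disparity estimation.
import numpy as np

def gabor_response(signal, f0, sigma):
    """Complex (quadrature-pair) Gabor filtering via convolution."""
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    g = np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * x)
    return np.convolve(signal, g, mode="same")

def phase_disparity(left, right, f0=0.05, sigma=8.0):
    """Disparity from the interocular phase difference: d = dphi / (2*pi*f0).
    Valid for |d| less than half the filter wavelength 1/f0."""
    rl = gabor_response(left, f0, sigma)
    rr = gabor_response(right, f0, sigma)
    dphi = np.angle(rl * np.conj(rr))    # wrapped phase difference
    return dphi / (2 * np.pi * f0)

# Usage: a random texture shifted by 3 samples between the eyes.
rng = np.random.default_rng(0)
left = rng.standard_normal(512)
right = np.roll(left, 3)
print(np.median(phase_disparity(left, right)[50:-50]))  # approx. 3
```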
Evaluation of the Tobii EyeX Eye tracking controller and Matlab toolkit for research
The Tobii EyeX Controller is a new low-cost binocular eye tracker marketed for integration in gaming and consumer applications. The manufacturer claims that the system was conceived for natural eye-gaze interaction, does not require continuous recalibration, and tolerates moderate head movements. The Controller ships with an SDK to foster the development of new eye tracking applications. We review the characteristics of the device for its possible use in scientific research. We develop and evaluate an open-source Matlab Toolkit that can be employed to interface with the EyeX device for gaze recording in behavioral experiments. The Toolkit provides calibration procedures tailored to both binocular and monocular experiments, as well as procedures to evaluate other eye tracking devices. The observed performance of the EyeX (accuracy < 0.6°, precision < 0.25°, latency < 50 ms, sampling frequency ≈ 55 Hz) is sufficient for some classes of research application. The device can be successfully employed to measure fixation parameters and saccadic, smooth-pursuit, and vergence eye movements. However, the relatively low sampling rate and moderate precision limit the suitability of the EyeX for monitoring microsaccadic eye movements or for real-time gaze-contingent stimulus control; for these applications, research-grade, high-cost eye tracking technology may still be necessary. Despite its limitations with respect to high-end devices, the EyeX has the potential to further the dissemination of eye tracking technology to a broad audience, and could be a valuable asset in consumer and gaming applications as well as in a subset of basic and clinical research settings.
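The accuracy and precision figures quoted above follow the standard definitions: mean angular offset of gaze from the fixation target, and RMS sample-to-sample dispersion. A minimal Python sketch of these metrics (the Toolkit itself is Matlab; this is an illustration only, assuming gaze samples already expressed in degrees):

```python
# Minimal sketch of standard eye-tracking quality metrics.
import numpy as np

def gaze_quality(gx, gy, tx, ty):
    """gx, gy: arrays of gaze samples (deg); tx, ty: fixation target (deg)."""
    # Accuracy: mean angular offset between gaze samples and the target.
    accuracy = np.mean(np.hypot(gx - tx, gy - ty))
    # Precision: RMS of sample-to-sample angular displacement.
    precision = np.sqrt(np.mean(np.diff(gx) ** 2 + np.diff(gy) ** 2))
    return accuracy, precision
```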
A Space-Variant Model for Motion Interpretation across the Visual Field
We implement a neural model for the estimation of the focus of radial motion (FRM) at different retinal locations, and we assess the model by comparing its precision with that of human observers estimating the FRM in naturalistic, moving dead-leaves stimuli. The proposed neural model describes the deep hierarchy of the first stages of the dorsal visual pathway [Solari et al., 2014]. The model is space-variant, since it takes into account the retino-cortical transformation of the primate visual system through a log-polar mapping that produces a cortical representation of the visual signal reaching the retina. The log-polar transform of the retinal image is the input to the cortical motion estimation stage, where optic flow is computed by a three-layer population of cells. A population of spatio-temporal oriented Gabor filters approximates the simple cells of area V1 (first layer), which are combined into complex cells as motion energy units (second layer). The responses of the complex cells are pooled (third layer) to encode the magnitude and direction of velocities, as in the extrastriate motion pathway between areas MT and MST. The sensitivity to complex motion patterns found in area MST is modeled through a population of adaptive templates, and from the responses of such a population a first-order description of optic flow is derived. Information about self-motion (e.g., direction of heading) is estimated by combining these first-order descriptors computed in the cortical domain.
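The space-variant front end amounts to resampling the retinal image onto a (log-eccentricity, polar-angle) grid. A minimal sketch of such a mapping; the grid sizes and the minimum eccentricity rho0 are illustrative, not the model's actual parameters:

```python
# Minimal sketch of a log-polar retino-cortical mapping.
import numpy as np

def logpolar_sample(img, n_ecc=64, n_ang=128, rho0=3.0):
    """Sample a retinal image onto a cortical (log-eccentricity, angle) grid."""
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    r_max = min(cy, cx)
    # Log-spaced eccentricities and uniformly spaced polar angles:
    # equal cortical steps cover exponentially growing retinal areas.
    rho = np.exp(np.linspace(np.log(rho0), np.log(r_max), n_ecc))
    theta = np.linspace(0, 2 * np.pi, n_ang, endpoint=False)
    yy = cy + rho[:, None] * np.sin(theta[None, :])
    xx = cx + rho[:, None] * np.cos(theta[None, :])
    # Nearest-neighbour sampling keeps the sketch dependency-free.
    return img[np.round(yy).astype(int), np.round(xx).astype(int)]
```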
Modelling Short-Latency Disparity-Vergence Eye Movements Under Dichoptic Unbalanced Stimulation
Vergence eye movements align the optical axes of our two eyes on an object of interest, thus facilitating the binocular fusion of the images projected onto the left and right retinae into a single percept. Both the computational substrate and the functional behaviour of vergence eye movements have been the topic of in-depth investigation. Here, we attempt to bring together what is known about the computation and function of the vergence mechanism. To this aim, we evaluated a biologically inspired model of horizontal and vertical vergence control based on a network of V1 simple and complex cells. The model's performance was compared with that of human observers on dichoptic stimuli characterized by varying amounts of interocular correlation, interocular contrast, and vertical disparity.
The model provides a qualitative explanation of the psychophysical data. Nevertheless, the human vergence response to interocular contrast differs from the model's behavior, suggesting that the proposed disparity-vergence model needs refinement to account for human behavior. Moreover, this observation highlights how dichoptic unbalanced stimulation can be used to investigate the significant but neglected role of sensory processing in the motor planning of eye movements in depth.
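The interocular-correlation manipulation can be made concrete: mixing a texture shared by both eyes with an eye-private one sets the expected correlation between the two retinal images. A minimal sketch of such dichoptic stimuli, illustrative rather than the authors' stimulus code:

```python
# Minimal sketch of dichoptic stimuli with controlled interocular
# correlation and interocular contrast.
import numpy as np

def dichoptic_pair(shape, corr, contrast_ratio=1.0, seed=0):
    """corr in [-1, 1]: expected interocular correlation;
    contrast_ratio scales the right eye's contrast relative to the left."""
    rng = np.random.default_rng(seed)
    common = rng.standard_normal(shape)    # texture seen by both eyes
    private = rng.standard_normal(shape)   # texture seen by one eye only
    left = common
    # Mixing with an independent field sets the expected correlation to corr.
    right = corr * common + np.sqrt(1 - corr**2) * private
    return left, contrast_ratio * right
```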
Acuity, crowding, reading and fixation stability
People with age-related macular disease frequently experience reading difficulty that could be attributed to poor acuity, elevated crowding or unstable fixation associated with peripheral visual field dependence. We examine how the size, location, spacing and instability of retinal images affect the visibility of letters and words at different eccentricities. Fixation instability was simulated in normally sighted observers by randomly jittering single or crowded letters or words along a circular arc of fixed eccentricity. Visual performance was assessed at different levels of instability with forced-choice measurements of acuity, crowding and reading speed in a rapid serial visual presentation paradigm. In the periphery: (1) acuity declined; (2) crowding increased for acuity- and eccentricity-corrected targets; and (3) the rate of reading fell with acuity-, crowding- and eccentricity-corrected targets. Acuity and crowding were unaffected by even high levels of image instability. However, reading speed decreased with image instability, even though the visibility of the component letters was unaffected. The results show that reading performance cannot be standardised across the visual field by correcting the size, spacing and eccentricity of letters or words. The results suggest that unstable fixation may contribute to reading difficulties in people with low vision, and therefore that rehabilitation may benefit from fixation training.
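The jitter manipulation keeps retinal eccentricity fixed by displacing targets along an iso-eccentric arc, so only position varies. A minimal sketch under that reading; the jitter range and its uniform distribution are assumptions:

```python
# Minimal sketch of position jitter along an iso-eccentric circular arc.
import numpy as np

def arc_jitter(ecc_deg, base_angle_deg, jitter_deg, n_frames, seed=0):
    """Per-frame (x, y) target positions on a circle of radius ecc_deg.
    Angular offsets are random, so eccentricity never changes."""
    rng = np.random.default_rng(seed)
    ang = np.deg2rad(base_angle_deg
                     + rng.uniform(-jitter_deg, jitter_deg, n_frames))
    return ecc_deg * np.cos(ang), ecc_deg * np.sin(ang)
```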
Border distinctness in amblyopia
On the basis of the contrast sensitivity loss in amblyopia, which mainly affects higher spatial frequencies, one would expect amblyopes to perceive sharp edges as blurred. We show that they perceive sharp edges as sharp and have veridical edge-blur perception. Contrary to the currently accepted view, this suggests that the amblyopic visual system is not characterized by a blurred visual representation.
Impact of Natural Blind Spot Location on Perimetry.
We study the spatial distribution of natural blind spot location (NBSL) and its impact on perimetry. Pattern deviation (PD) values of 11,449 reliable visual fields (VFs), defined as clinically unaffected based on summary indices, were extracted from 11,449 glaucoma patients. We modeled the NBSL distribution using a two-dimensional non-linear regression approach and correlated NBSL with spherical equivalent (SE). Additionally, we compared PD values between groups with NBSL-to-fixation distances longer and shorter than the median, and between groups with NBSL-to-fixation angles larger and smaller than the median. The mean and standard deviation of horizontal and vertical NBSL were 14.33° ± 1.37° and -2.06° ± 1.27°, respectively. SE decreased with increasing NBSL distance (r = -0.14, p < 0.001). For NBSL distances longer than the median distance (14.32°), average PD values decreased in the upper central VF region (average difference for significant points (ADSP): -0.18 dB) and increased in the lower nasal VF region (ADSP: 0.14 dB). For angles in the direction of the upper hemifield relative to the median angle (-8.13°), PD values decreased in lower nasal (ADSP: -0.11 dB) and increased in upper temporal VF areas (ADSP: 0.19 dB). In conclusion, we demonstrate that NBSL has a systematic effect on the spatial distribution of VF sensitivity.
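The group comparison is a per-location median split on blind-spot distance. A minimal sketch of that analysis; the array names and shapes (n_eyes x n_vf_points) are assumptions, not the study's code:

```python
# Minimal sketch of the median-split pattern-deviation comparison.
import numpy as np

def median_split_pd(pd, nbsl_dist):
    """pd: (n_eyes, n_points) pattern-deviation values (dB);
    nbsl_dist: (n_eyes,) blind-spot distance from fixation (deg).
    Returns the per-point mean PD difference, long minus short group."""
    long_group = nbsl_dist > np.median(nbsl_dist)
    return pd[long_group].mean(axis=0) - pd[~long_group].mean(axis=0)
```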
Integrating retinotopic features in spatiotopic coordinates
The receptive fields of early visual neurons are anchored in retinotopic coordinates (Hubel and Wiesel, 1962). Eye movements shift these receptive fields and therefore require that different populations of neurons encode an object's constituent features across saccades. Whether feature groupings are preserved across successive fixations or processing starts anew with each fixation has been hotly debated (Melcher and Morrone, 2003; Melcher, 2005, 2010; Knapen et al., 2009; Cavanagh et al., 2010a,b; Morris et al., 2010). Here we show that feature integration initially occurs within retinotopic coordinates, but is then conserved within a spatiotopic coordinate frame independent of where the features fall on the retinas. With human observers, we first found that the relative timing of visual features plays a critical role in determining the spatial area over which features are grouped. We exploited this temporal dependence of feature integration to show that features co-occurring within 45 ms remain grouped across eye movements. Our results thus challenge purely feedforward models of feature integration (Pelli, 2008; Freeman and Simoncelli, 2011) that begin de novo after every eye movement, and implicate the involvement of brain areas beyond early visual cortex. The strong temporal dependence we quantify, and its link with trans-saccadic object perception, instead suggest that feature integration depends, at least in part, on feedback from higher brain areas (Mumford, 1992; Rao and Ballard, 1999; Di Lollo et al., 2000; Moore and Armstrong, 2003; Stanford et al., 2010).