
    Cholinergic enhancement reduces orientation-specific surround suppression but not visual crowding

    Acetylcholine (ACh) reduces the spatial spread of excitatory fMRI responses in early visual cortex and the receptive field size of V1 neurons. We investigated the perceptual consequences of these physiological effects of ACh with surround suppression and crowding, two phenomena that involve spatial interactions between visual field locations. Surround suppression refers to the reduction in perceived stimulus contrast caused by a high-contrast surround stimulus. For grating stimuli, surround suppression is selective for the relative orientations of the center and surround, suggesting that it results from inhibitory interactions in early visual cortex. Crowding refers to impaired identification of a peripheral stimulus in the presence of flankers and is thought to result from excessive integration of visual features. We increased synaptic ACh levels by administering the cholinesterase inhibitor donepezil to healthy human subjects in a placebo-controlled, double-blind design. In Experiment 1, we measured surround suppression of a central grating using a contrast discrimination task with three conditions: (1) a surround grating with the same orientation as the center (parallel), (2) a surround orthogonal to the center, or (3) no surround. Contrast discrimination thresholds were higher in the parallel than in the orthogonal condition, demonstrating orientation-specific surround suppression (OSSS). Cholinergic enhancement decreased thresholds only in the parallel condition, thereby reducing OSSS. In Experiment 2, subjects performed a crowding task in which they reported the identity of a peripheral letter flanked by letters on either side. We measured the critical spacing between the target and flanking letters that allowed reliable identification. Cholinergic enhancement with donepezil had no effect on critical spacing. Our findings suggest that ACh reduces spatial interactions in tasks involving segmentation of visual field locations, but that these effects may be limited to early visual cortical processing.
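
    As a concrete illustration of the three surround conditions in Experiment 1, the NumPy sketch below builds a low-contrast center grating with a parallel surround, an orthogonal surround, or no surround. This is our reconstruction for illustration, not the authors' stimulus code; every parameter value (image size, spatial frequency, radius, contrasts) is an assumption.

        # Illustrative sketch (assumed parameters): center-surround grating
        # stimuli for the parallel, orthogonal, and no-surround conditions.
        import numpy as np

        def grating(size, sf, ori_deg, contrast):
            """Sine grating, size x size pixels; sf in cycles/pixel."""
            xv, yv = np.meshgrid(np.arange(size) - size / 2,
                                 np.arange(size) - size / 2)
            theta = np.deg2rad(ori_deg)
            ramp = xv * np.cos(theta) + yv * np.sin(theta)
            return contrast * np.sin(2 * np.pi * sf * ramp)

        def stimulus(surround_ori=0.0, surround_contrast=1.0, size=512,
                     sf=0.02, center_radius=80, center_contrast=0.3):
            """Center grating (vertical, low contrast) with an optional surround."""
            xv, yv = np.meshgrid(np.arange(size) - size / 2,
                                 np.arange(size) - size / 2)
            inside = np.hypot(xv, yv) < center_radius
            img = grating(size, sf, surround_ori, surround_contrast)
            img[inside] = grating(size, sf, 0.0, center_contrast)[inside]
            return 0.5 + 0.5 * img  # map from [-1, 1] to [0, 1] luminance

        parallel = stimulus(surround_ori=0.0)          # same orientation: strongest suppression
        orthogonal = stimulus(surround_ori=90.0)       # orthogonal surround: weaker suppression
        no_surround = stimulus(surround_contrast=0.0)  # center grating alone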

    Visual motion shifts saccade targets.

    Saccades are made thousands of times a day and are the principal means of localizing objects in our environment. However, the saccade system faces the challenge of accurately localizing objects, as they are constantly moving relative to the eye and head. Any delays in processing could cause errors in saccadic localization. To compensate for these delays, the saccade system might use one or more sources of information to predict future target locations, including changes in the position of the object over time, or its motion. Another possibility is that motion influences the represented position of the object for saccadic targeting, without requiring an actual change in target position. We tested whether the saccade system can use motion-induced position shifts to update the represented spatial location of a saccade target, using drifting Gabor patches with static envelopes and either a soft or a hard aperture as saccade targets. In both conditions, the aperture always remained at a fixed retinal location. The soft-aperture Gabor patch resulted in an illusory position shift, whereas the hard-aperture stimulus maintained the motion signals but resulted in a smaller illusory position shift. Thus, motion energy and target location were equated, but a position shift was generated in only one condition. We measured saccadic localization of these targets and found that saccades were indeed shifted, but only with a soft-aperture Gabor patch. Our results suggest that motion shifts the programmed locations of saccade targets, and that this remapped location guides saccadic localization.
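
    The soft- versus hard-aperture manipulation can be made concrete with a short sketch: the same drifting carrier is windowed by either a Gaussian (soft) or a disc (hard) envelope that never moves, so motion energy is matched across conditions. This is an illustrative reconstruction with assumed parameter values, not the study's stimulus code.

        # Illustrative sketch (assumed parameters): one frame of a drifting
        # Gabor whose envelope stays fixed while only the carrier moves.
        import numpy as np

        def drifting_gabor(t, size=256, sf=0.05, speed=2.0, sigma=40.0, hard=False):
            xv, yv = np.meshgrid(np.arange(size) - size / 2,
                                 np.arange(size) - size / 2)
            carrier = np.sin(2 * np.pi * sf * (xv - speed * t))  # drifts rightward
            if hard:
                aperture = (np.hypot(xv, yv) < 2 * sigma).astype(float)    # hard disc edge
            else:
                aperture = np.exp(-(xv ** 2 + yv ** 2) / (2 * sigma ** 2))  # soft Gaussian edge
            return carrier * aperture  # envelope fixed in place; only the carrier moves

        soft_frame = drifting_gabor(t=10.0)             # yields an illusory position shift
        hard_frame = drifting_gabor(t=10.0, hard=True)  # same motion signals, smaller shift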

    View to the U: an eye on UTM Research

    This is an audio recording from the podcast series "View to the U: An eye on UTM research". To kick off this new season of VIEW to the U, we are picking up where we left off – with representation from UTM’s Department of Psychology – but this time around, the featured guests are two new faculty members, Professors Anna Kosovicheva and Benjamin Wolfe, co-directors of the Applied Perception and Psychophysics Lab, or APPLY Lab, which was recently established at UTM. Anna and Ben are helping me launch the new season: “Without further ado” is the theme for the year, and throughout this season, I will introduce some of the new people from UTM’s vibrant and ever-growing research community. Over the course of this interview, Anna and Ben talk about their research in the APPLY Lab, which focuses on how we take in information, particularly visual perception and how vision works overall, and its applications for activities such as driving and reading. We also talk about some of their out-of-the-lab pursuits and the creative ways they spend some of their free time.

    A dichoptic feedback-based oculomotor training method to manipulate interocular alignment

    Strabismus is a prevalent impairment of binocular alignment that is associated with a spectrum of perceptual deficits and social disadvantages. Current treatments for strabismus involve ocular alignment through surgical or optical methods and may include vision therapy exercises. In the present study, we explore the potential of real-time dichoptic visual feedback that may be used to quantify and manipulate interocular alignment. A gaze-contingent ring was presented independently to each eye of 11 normally sighted observers as they fixated a target dot presented only to their dominant eye. Their task was to center the rings within 2° of the target for at least 1 s, with feedback provided by the sizes of the rings. By offsetting the ring in the non-dominant eye temporally or nasally, the task required convergence or divergence, respectively, of the non-dominant eye. Eight of 11 observers attained 5° of asymmetric convergence, and 3 of 11 attained 3° of asymmetric divergence. The results suggest that real-time gaze-contingent feedback may be used to quantify and transiently simulate strabismus, and holds promise as a method to augment existing therapies for oculomotor alignment disorders.
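
    A minimal sketch of the feedback rule described above, assuming a simple linear mapping from gaze error to ring size (our reconstruction, not the authors' implementation; the function names, gain, and base radius are illustrative, while the 2° tolerance and 1 s hold criterion come from the abstract):

        # Sketch of the dichoptic feedback loop (assumed gain and base radius).
        TOLERANCE_DEG = 2.0  # rings must be centered within 2 deg of the target...
        HOLD_SEC = 1.0       # ...for at least 1 s, per the abstract

        def ring_radius(gaze_xy, target_xy, base_radius=0.5, gain=1.0):
            """Each eye's ring grows with that eye's gaze error (deg), so a
            shrinking ring signals progress. For the non-dominant eye,
            target_xy includes the nasal or temporal offset."""
            err = ((gaze_xy[0] - target_xy[0]) ** 2 +
                   (gaze_xy[1] - target_xy[1]) ** 2) ** 0.5
            return base_radius + gain * err, err

        def alignment_reached(errors, frame_dt):
            """errors: per-frame (left_err, right_err) in deg. True once both
            eyes have stayed inside the tolerance for HOLD_SEC."""
            needed = max(1, int(round(HOLD_SEC / frame_dt)))
            recent = errors[-needed:]
            return (len(recent) == needed and
                    all(l < TOLERANCE_DEG and r < TOLERANCE_DEG
                        for l, r in recent))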

    Foveal input is not required for perception of crowd facial expression

    The visual system extracts average features from groups of objects (Ariely, 2001; Dakin & Watt, 1997; Watamaniuk & Sekuler, 1992), including high-level stimuli such as faces (Haberman & Whitney, 2007, 2009). This phenomenon, known as ensemble perception, implies a covert process, which would not require fixation of individual stimulus elements. However, some evidence suggests that ensemble perception may instead be a process of averaging foveal input across sequential fixations (Ji, Chen, & Fu, 2013; Jung, Bülthoff, Thornton, Lee, & Armann, 2013). To test directly whether foveating objects is necessary, we measured observers' sensitivity to average facial emotion in the absence of foveal input. Subjects viewed arrays of 24 faces, either in the presence or absence of a gaze-contingent foveal occluder, and adjusted a test face to match the average expression of the array. We found no difference in accuracy between the occluded and non-occluded conditions, demonstrating that foveal input is not required for ensemble perception. Unsurprisingly, without foveal input, subjects spent significantly less time directly fixating faces, but this did not translate into any difference in sensitivity to ensemble expression. Next, we varied the number of faces visible from the set to test whether subjects average multiple faces from the crowd. In both conditions, subjects' performance improved as more faces were presented, indicating that subjects integrated information from multiple faces in the display regardless of whether they had access to foveal information. Our results demonstrate that ensemble perception can be a covert process that does not require access to direct foveal information.
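
    To make the averaging measure concrete, the sketch below parameterizes each face by a single value on an expression morph continuum and scores the observer's adjustment against the mean of the faces actually shown. The parameterization and all values are assumptions for illustration, not the study's materials.

        # Illustrative sketch (assumed morph-unit parameterization of expression).
        import numpy as np

        rng = np.random.default_rng(0)

        def make_array(n_faces=24, mean_expr=50.0, spread=15.0):
            """Expression values in morph units (e.g., 0 = sad, 100 = happy)."""
            return rng.normal(mean_expr, spread, size=n_faces)

        def ensemble_error(shown_faces, response):
            """Signed adjustment error relative to the mean of the visible set."""
            return response - shown_faces.mean()

        faces = make_array()
        visible = rng.choice(faces, size=8, replace=False)  # vary the visible set size
        print(ensemble_error(visible, response=52.0))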

    Detection of brake lights while distracted: Separating peripheral vision from cognitive load

    Drivers rarely focus exclusively on driving, even with the best of intentions. They are distracted by passengers, navigation systems, smartphones, and driver assistance systems. Driving itself requires performing simultaneous tasks, including lane keeping, looking for signs, and avoiding pedestrians. The dangers of multitasking while driving, and efforts to combat it, often focus on the distraction itself, rather than on how a distracting task can change what the driver can perceive. Critically, some distracting tasks require the driver to look away from the road, which forces the driver to use peripheral vision to detect driving-relevant events. As a consequence, both looking away and being distracted may degrade driving performance. To assess the relative contributions of these factors, we conducted a laboratory experiment in which we separately varied cognitive load and point of gaze. Subjects performed a visual 0-back or 1-back task at one of four fixation locations superimposed on a real-world driving video, while simultaneously monitoring for brake lights in their lane of travel. Subjects were able to detect brake lights in all conditions, but as the eccentricity of the brake lights increased, they responded more slowly and missed more braking events. However, our cognitive load manipulation had minimal effects on detection performance, reaction times, or miss rates for brake lights. These results suggest that, for tasks that require the driver to look off-road, the observed decrements may be due to the need to use peripheral vision to monitor the road, rather than to the distraction itself.
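
    The load manipulation can be summarized with a short sketch of the generic n-back decision rule, under our reading of the design: a 0-back task asks whether each item matches a fixed target, whereas a 1-back task asks whether it matches the immediately preceding item, adding working-memory load at the same fixation location. The stimuli and function below are hypothetical.

        # Generic n-back rule (hypothetical stimuli; not the experiment's code).
        def nback_targets(stream, n, zero_back_target=None):
            """Return indices of items that require a 'match' response."""
            hits = []
            for i, item in enumerate(stream):
                if n == 0:
                    if item == zero_back_target:
                        hits.append(i)
                elif i >= n and item == stream[i - n]:
                    hits.append(i)
            return hits

        stream = ["A", "B", "B", "A", "A", "C"]
        print(nback_targets(stream, n=0, zero_back_target="A"))  # [0, 3, 4]
        print(nback_targets(stream, n=1))                        # [2, 4]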