Recommended from our members
Binocular integration using stereo motion cues to drive behavior in mice
The visual system presents an opportunity to study how two signals converge to generate a novel representation of the world: depth. Because the two eyes occupy slightly different positions, they encode slightly different images, generating disparity signals. Another way to generate depth signals is to present different motion signals to the two eyes. Even though the binocular visual system has been studied for a long time, the mechanisms behind binocular integration when objects move in depth are largely unknown. In this dissertation, I demonstrate a new model for studying motion-in-depth signals using mice. Mice are an attractive animal in which to study the binocular visual system, not only because they share a common visual pathway with primates and other mammals, but also because genetic tools are available for studying the circuitry underlying binocular integration of motion-in-depth cues. Thus far there have been very few studies of binocularity in mice. This dissertation focuses on the behavioral output elicited by stereoscopic motion-in-depth signals in mice and investigates the visual areas involved in these behaviors. In the first section, I investigate whether mice, like primates, discriminate motion-in-depth signals, using disparity and motion signals presented to each eye. I find that mice are able to discriminate towards from away stimuli and that binocular neurons in the visual cortex are critical for the computation of this signal. In the second section, I measured the optokinetic eye movements generated by motion-in-depth stimuli. I found that vergence eye movements in mice are driven primarily by the motion signals presented to each eye. This phenomenon can be explained largely by a summation of the monocular motor signals of the two eyes that occurs subcortically. Both experiments show clear behavioral outputs that can only be generated when binocular motion-in-depth signals are presented.
I find both cortical and subcortical components of binocular integration that are responsible for generating these behavioral outputs, which demonstrates the complicated nature of binocular integration associated with motion-in-depth signals. My work in this dissertation provides a foundation for studying binocular integration in rodents.
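The subcortical summation account above can be illustrated with a toy linear model: predicted vergence is the difference of the two eyes' independent monocular optokinetic responses. The gain value and all names below are illustrative assumptions, not quantities from the dissertation.

```python
# Toy sketch of the subcortical summation account: vergence is predicted as
# the difference of the two eyes' independent monocular optokinetic responses.
# The gain (0.6) and all names here are illustrative, not measured values.

def monocular_okr(stim_velocity_deg_s, gain=0.6):
    """Monocular optokinetic eye velocity as a simple linear gain model."""
    return gain * stim_velocity_deg_s

def predicted_vergence(left_stim, right_stim, gain=0.6):
    """Vergence velocity = left-eye minus right-eye response (deg/s)."""
    return monocular_okr(left_stim, gain) - monocular_okr(right_stim, gain)

# Opposite-direction stimuli (motion in depth) drive vergence ...
print(predicted_vergence(+10.0, -10.0))  # 12.0
# ... while same-direction stimuli predict pure conjugate tracking instead.
print(predicted_vergence(+10.0, +10.0))  # 0.0
```

Under this sketch, only dichoptic stimuli with opposite motion directions produce a vergence signal, matching the behavioral observation that vergence is driven by the monocular motion presented to each eye.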
Visual Attention in Jumping Spiders
The different ways that animals extract and analyze visual information from their environment are of interest to sensory ecologists. Jumping spiders, well known for visually guided mating and hunting behavior, are an interesting model for the study of visual attention because they quickly and efficiently integrate information from eight eyes with a small brain. Stimuli in front of the spider are examined by two functionally and morphologically distinct pairs of forward-facing eyes. The principal eyes discern fine detail but have small retinas and thus a small visual field; however, their position at the back of moveable tubes within the cephalothorax expands this field. The anterolateral eyes, one of the three pairs of secondary eyes, have lower spatial acuity and a larger visual field that overlaps with that of the principal eyes. They act as motion detectors, directing the principal eyes to objects appearing in their visual field. In Chapter 1, using a salticid-specific eyetracker, I explore how the characteristics of a stimulus influence whether the secondary eyes redirect the gaze of the principal eyes from a principal stimulus to a new stimulus appearing in the visual field. I found that spiders suppressed redirection of the principal eyes when engaged by a salient stimulus, and redirected to moving peripheral stimuli more frequently than to stationary peripheral stimuli.
The principal eyes are also known to engage in a complex behavior called “scanning,” involving both dorsoventral and rotational movement. One hypothesis regarding scanning’s function is that it helps spiders identify important lines and angles in stimuli. However, scanning routines are not well understood. In Chapter 2, I measured scanning behavior while spiders watched quickly moving versus still or slowly moving images. I found that spiders spent more time overall looking at still or slowly moving images, and that stimulus speed does not appear to affect rotational movement of the retinas. Overall, I conclude that motion in an appearing stimulus elicits the attention of the principal eyes, but it remains unclear whether and how scanning functions in the extraction of detail from moving stimuli.
Dummy eye measurements of microsaccades: testing the influence of system noise and head movements on microsaccade detection in a popular video-based eye tracker
Whereas early studies of microsaccades predominantly relied on custom-built eye trackers and manual tagging of microsaccades, more recent work tends to use video-based eye tracking and automated algorithms for microsaccade detection. While data from these newer studies suggest that microsaccades can be reliably detected with video-based systems, this has not been systematically evaluated. I here present a method and data examining microsaccade detection in an often-used video-based system (the EyeLink II) with a commonly used detection algorithm (Engbert & Kliegl, 2003; Engbert & Mergenthaler, 2006). Recordings from human participants were compared with recordings from a pair of dummy eyes mounted on glasses that were either worn by a human participant (i.e., with head motion) or placed on a dummy head (no head motion). Three experiments were conducted. The first suggests that when measurements use the pupil-detection mode, spurious microsaccade detections (in the absence of any eye movement) are sparse as long as the head is still, but frequent once the head moves (despite the use of a chin rest). The second demonstrates that with measurements that combine corneal-reflection and pupil detection, false microsaccade detections can be largely avoided as long as a binocular criterion is used. The third examines whether past results may have been affected by incorrect detections due to small head movements. It shows that, despite the many detections caused by head movements, the typical modulation of microsaccade rate after stimulus onset is found only when recording from the participants’ eyes.
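For readers unfamiliar with the detection algorithm referenced above, a minimal sketch of the Engbert & Kliegl (2003) velocity-threshold method follows. The window length, λ = 6 and all function names are conventional choices for illustration, not a transcription of the authors' code.

```python
import numpy as np

# Sketch of velocity-threshold microsaccade detection (Engbert & Kliegl, 2003).
# Velocities are estimated with a 5-sample moving window; the threshold is a
# multiple (lam) of a median-based, outlier-robust velocity SD per component.

def velocity(x, dt):
    """5-sample moving-average velocity estimate of a 1-D position trace."""
    v = np.zeros_like(x, dtype=float)
    v[2:-2] = (x[4:] + x[3:-1] - x[1:-3] - x[:-4]) / (6.0 * dt)
    return v

def detect_microsaccades(x, y, dt, lam=6.0):
    """Flag samples whose 2-D velocity exceeds the elliptic threshold
    lam * sigma in each component (True = candidate microsaccade sample)."""
    vx, vy = velocity(x, dt), velocity(y, dt)
    # robust (median-based) estimate of the velocity SD per component
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    return (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0

# Synthetic demo: fixation noise with one injected fast displacement.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.002, 1000)  # horizontal position, deg
y = rng.normal(0.0, 0.002, 1000)  # vertical position, deg
x[500:] += 1.0                    # fast 1-deg shift at sample 500
mask = detect_microsaccades(x, y, dt=0.002)
print(mask[496:505].any())        # the injected event is flagged
```

A full pipeline would additionally impose a minimum-duration criterion on runs of flagged samples and, as the second experiment above suggests, a binocular criterion: a candidate counts only if it overlaps in time in both eyes.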
Spectral and ocellar inputs to honeybee motion-sensitive descending neurons
Optomotor reflexes have been observed in many insects, and in some cases the neural pathways that mediate these reflexes have been identified physiologically and anatomically. In honeybees, Kaiser (1975) established that the spectral sensitivity of the optomotor response almost exactly matched that of the green photoreceptors, suggesting an exclusive input from green photoreceptors. However, physiological studies showed that the motion detectors in the optic lobes have a secondary response peak in the UV region of the spectrum, suggesting that more than one photoreceptor type may be involved in the optomotor response. Thus, in this thesis, I investigate the neural basis of motion and spectral-wavelength processing in motion-sensitive descending neurons, which lie on the optomotor response pathway, to reveal the neural contributions from other spectral receptor types. In this study, intracellular recording techniques were utilised. The stimuli consisted of a wide-field LED (light-emitting diode) display in which green (peak 530 nm) and short-wavelength (peak 380 nm) LEDs were mounted in pairs across a wide visual area. Six types of motion-sensitive descending neurons were recorded and anatomically identified, including two pitch-sensitive neurons (Locth3, DNII2), two roll-sensitive neurons (DNIV2 and DNIV3) and two yaw-sensitive neurons (DNVII1 and DNVII2). The results show that the vertically sensitive (pitch and roll) neurons have equal-sized excitatory responses to short-wavelength and green motion stimulation. For the horizontally sensitive (yaw-sensitive) neurons, however, excitatory responses occurred only for the green stimulus in the preferred direction; the short-wavelength stimulus induced clear inhibitory responses for all tested motion directions.
The results suggest that, besides green photoreceptors, the motion-sensitive descending neurons also receive input from short-wavelength photoreceptors, but only for motion detectors tuned to vertical motion. Honeybees, like most flying insects, have three ocelli (simple eyes) on the top of the head in addition to the compound eyes. However, the exact function of the bee ocelli, and how information is combined between the ocelli, compound eyes and central brain, remain unclear. In this thesis, I investigate the properties of the ocelli morphologically, anatomically and physiologically. Semi-thin sections and focal-length measurements were performed on both the median and lateral ocelli, and a three-dimensional reconstruction of the honeybee ocellar lenses and retinas was developed to establish the visual fields of the ocelli. Intracellular electrophysiology experiments were carried out on descending neurons to understand the information processing between the ocelli and compound eyes. Cell responses to different stimuli were recorded with and without the ocelli covered. It is shown that ocellar input provides a faster response to motion stimuli than compound-eye stimulation alone, and also increases the amplitude of responses to flashed stimuli. In the case of the DNII2 neuron, it is also shown that the ocelli provide a directional contribution to the responses.
The Practice of the Circle
I. Recognition: Drawing the Circle. The metaphor of the circle is here taken into account as a typifying image of the Emersonian moral. In this paper I shall attempt to provide an explanatory synthesis of the concept of “circle,” in its acceptation as “the outlined,” “path,” “itinerary.” The circle will not be investigated as a mere geometrical figure, but as a condition of motion, an occasion of processuality. It will be presented as a route which unravels itself before the subject’s eyes and…
Perceptual models for high-refresh-rate rendering
Rendering realistic images requires substantial computational power. With new high-refresh-rate displays as well as the renaissance of virtual reality (VR) and augmented reality (AR), one cannot expect that GPU performance will scale fast enough to meet the requirements of immersive photo-realistic rendering with current rendering techniques.
In this dissertation, I follow the dual of the well-known computer-vision maxim that vision is inverse graphics: to improve graphics algorithms, I consider the operation of the human visual system. I propose to model and exploit the limitations of the visual system in the context of novel high-refresh-rate displays; specifically, I focus on spatio-temporal perception, a topic that has so far received markedly less attention than spatial-only perception.
I present three main contributions. First, I demonstrate the validity of the perceptual approach with a conceptually simple rendering technique, motivated by our eyes' limited sensitivity to rapid spatio-temporal change, which reduces the rendering load and transmission requirements of current-generation VR headsets without introducing perceivable visual artefacts. Second, I present two visual models related to motion perception: (a) a metric for detecting flicker; and (b) a comprehensive visual model that predicts perceived motion quality on monitors of arbitrary refresh rate and resolution. Third, I propose an adaptive rendering algorithm that utilises the proposed models. All algorithms operate on physical colorimetric units (instead of display-referenced pixel values), for which I provide the appropriate display measurements and models. All proposed algorithms and visual models are calibrated and validated with psychophysical experiments.
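As an illustration of what operating in physical colorimetric units rather than display-referenced pixel values involves, a common gain-offset-gamma display model can be sketched as follows. The peak-luminance, black-level and gamma values are illustrative assumptions, not measurements from the dissertation.

```python
# Minimal sketch of a gain-offset-gamma (GOG) display model, mapping
# display-referenced pixel values to physical luminance and back.
# The parameter values below are illustrative, not measured.

def pixel_to_luminance(v, l_peak=250.0, l_black=0.2, gamma=2.2):
    """Map a normalised pixel value v in [0, 1] to luminance in cd/m^2."""
    return (l_peak - l_black) * (v ** gamma) + l_black

def luminance_to_pixel(l, l_peak=250.0, l_black=0.2, gamma=2.2):
    """Inverse mapping: luminance in cd/m^2 back to a pixel value."""
    return ((l - l_black) / (l_peak - l_black)) ** (1.0 / gamma)

print(pixel_to_luminance(0.0))  # 0.2, the black level
print(pixel_to_luminance(1.0))  # ~250 (peak white)
```

Calibrating such a model from display measurements lets a visual model reason about contrast and flicker in physical units, independent of any particular display's encoding.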