Grasping Deficits and Adaptations in Adults with Stereo Vision Losses
PURPOSE. To examine the effects of permanent versus brief reductions in binocular stereo vision on reaching and grasping (prehension) skills.
METHODS. The first experiment compared prehension proficiency in 20 visually normal adults and 20 adults with long-term stereo-deficiency (10 with coarse and 10 with undetectable disparity sensitivities) when using binocular vision or just the dominant or nondominant eye. The second experiment examined the effects of temporarily mimicking similar stereoacuity losses in normal adults, by placing defocusing low- or high-plus lenses over one eye, compared with their control (neutral lens) binocular performance. Kinematic and error measures of prehension planning and execution were quantified from movements of the subjects’ preferred hand, recorded while they reached for, precision-grasped, and lifted cylindrical objects (two sizes, four locations) on 40 to 48 trials under each viewing condition.
RESULTS. Performance was faster and more accurate with normal compared with reduced binocular vision and least accomplished under monocular conditions. Movement durations were extended (up to ∼100 ms) whenever normal stereo vision was permanently (ANOVA P < 0.05) or briefly (ANOVA P < 0.001) reduced, with a doubling of error rates in executing the grasp (ANOVA P < 0.001). Binocular deficits in reaching occurred during its end phase (prolonged final approach, more velocity corrections, poorer coordination with object contact) and generally increased with the existing loss of disparity sensitivity. Binocular grasping was more uniformly impaired by stereoacuity loss and influenced by its duration. Adults with long-term stereo-deficiency showed increased variability in digit placement at initial object contact, and they adapted by prolonging (by ∼25%) the time spent subsequently applying their grasp (ANOVA P < 0.001). Brief stereoreductions caused systematic shifts in initial digit placement and two to three times more postcontact adjustments in grip position (ANOVA P < 0.01).
CONCLUSIONS. High-grade binocular stereo vision is essential for skilled precision grasping. Reduced disparity sensitivity results in inaccurate grasp-point selection and greater reliance on nonvisual (somesthetic) information from object contact to control grip stability
Case Studies in Invertebrate Visual Processing: I. Spectral and Spatial Processing in the Early Visual System of Drosophila melanogaster II. Binocular Stereopsis in Sepia officinalis
This thesis addresses two aspects of visual processing in two different invertebrate organisms.
The fruit fly, Drosophila melanogaster, has emerged as a key model for invertebrate vision research. Despite extensive characterisation of motion vision, very little is known about how flies process colour information, or how the spectral content of light affects other visual modalities. With the aim of accurately dissecting the different components of the Drosophila visual system responsible for processing colour, I have developed a versatile visual stimulation setup to probe combinations of spatial, temporal, and spectral visual response properties. Using flies that express neural activity indicators, I can track visual responses to a colour stimulus (i.e. narrow bands of light across the spectrum) via a two-photon imaging system. The visual stimulus is projected on a specialised screen material that scatters wavelengths of light across the spectrum equally at all locations of the screen, thus enabling presentation of spatially structured stimuli. Using this setup, I have characterised spectral responses, intensity-response relationships, and receptive fields of neurons in the early visual system of a variety of genetically modified strains of Drosophila. Specifically, I compared visual responses in the medulla of flies expressing either a subset or all photoreceptor opsins, with differing levels of screening pigment present in the eye. I found layer-specific shifts of spectral response properties correlating with projection regions of photoreceptor terminals. I also
found that a reduction in screening pigment shifts the general spectral response in the neuropil towards the longer wavelengths of light. I have also mapped receptive fields across the different layers of the medulla for the peak spectral response wavelength. My results suggest that receptive field dimensions match the expected size predicted by the conservation of a columnar organisation in the medulla, with little variation from layer to layer. In a subset of these cells, we see an elongated receptive field suggestive of static orientation selectivity with an apparent split in the preferred axis of orientation of these receptive fields, with a near-orthogonal angle between the summed vectors of the split populations.
The camera-type eyes of vertebrates and cephalopods exhibit remarkable convergence, but it is currently unknown whether the mechanisms for visual information processing in these brains, which exhibit wildly disparate architectures, are also shared. I chose to investigate the visual processing mechanism known as stereopsis in the cuttlefish Sepia officinalis. Stereoscopic vision is used to assess depth information by comparing the disparity between the left and right visual fields. This strategy is commonplace in vertebrates, having evolved multiple times independently, but has only been demonstrated in one invertebrate: the praying mantis. Cuttlefish require precise distance estimation during their predatory hunt, when they extend two tentacles in a ballistic strike to catch their target. Using a 3D perception paradigm in which the cuttlefish were fitted with anaglyph glasses, I show that these animals use stereopsis to resolve the distance to their prey. Although this is not an exclusive depth perception mechanism for hunting, it does shorten the time and distance covered prior to striking at a target. Furthermore, stereopsis in cuttlefish works differently from that in vertebrates, as cuttlefish can extract stereopsis cues from anti-correlated stimuli.
BBSRC Doctoral Training Partnership
Ongoing Emergence: A Core Concept in Epigenetic Robotics
We propose ongoing emergence as a core concept in epigenetic robotics. Ongoing emergence refers to the continuous development and integration of new skills and is exhibited when six criteria are satisfied: (1) continuous skill acquisition, (2) incorporation of new skills with existing skills, (3) autonomous development of values and goals, (4) bootstrapping of initial skills, (5) stability of skills, and (6) reproducibility. In this paper we: (a) provide a conceptual synthesis of ongoing emergence based on previous theorizing, (b) review current research in epigenetic robotics in light of ongoing emergence, (c) provide prototypical examples of ongoing emergence from infant development, and (d) outline computational issues relevant to creating robots exhibiting ongoing emergence
Near-optimal combination of disparity across a log-polar scaled visual field
The human visual system is foveated: we can see fine spatial detail in central vision, whereas resolution is poor in our peripheral visual field, and this loss of resolution follows an approximately logarithmic decrease. Additionally, our brain organizes visual input in polar coordinates. Therefore, the image projection occurring between the retina and primary visual cortex can be mathematically described by the log-polar transform. Here, we test and model how this space-variant visual processing affects binocular disparity processing, a key component of human depth perception. We observe that the fovea preferentially processes disparities at fine spatial scales, whereas the visual periphery is tuned for coarse spatial scales, in line with the naturally occurring distributions of depths and disparities in the real world. We further show that the visual system integrates disparity information across the visual field in a near-optimal fashion. We develop a foveated, log-polar model that mimics the processing of depth information in primary visual cortex and that can process disparity directly in the cortical domain representation. This model takes real images as input and recreates the observed topography of human disparity sensitivity. Our findings support the notion that our foveated, binocular visual system has been moulded by the statistics of our visual environment
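The retino-cortical mapping invoked in this abstract has a standard textbook form that can be sketched in a few lines of code. The following is an illustrative implementation of the generic log-polar transform, not the paper's actual model; the scale parameter `rho_0` is a hypothetical choice introduced here for the example.

```python
import numpy as np

def retina_to_cortex(x, y, rho_0=1.0):
    """Map a retinal position (x, y), in degrees from fixation, to
    log-polar 'cortical' coordinates (log-eccentricity, polar angle).
    rho_0 sets the scale of the logarithm (an illustrative parameter;
    real models also handle the singularity at the fovea, omitted here)."""
    eccentricity = np.hypot(x, y)           # radial distance from fixation
    log_ecc = np.log(eccentricity / rho_0)  # logarithmic radial compression
    angle = np.arctan2(y, x)                # polar angle
    return log_ecc, angle

# Equal steps along the cortical radius correspond to multiplicative
# steps in eccentricity, so the periphery is coarsely sampled:
for ecc in [1.0, 2.0, 4.0, 8.0]:
    log_ecc, _ = retina_to_cortex(ecc, 0.0)
    print(f"eccentricity {ecc:>4} deg -> cortical radius {log_ecc:.3f}")
```

Doubling the eccentricity adds a constant amount of cortical distance, which is the space-variant compression the abstract describes: fine scales dominate near the fovea, coarse scales in the periphery.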
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task
Choosing Your Poison: Optimizing Simulator Visual System Selection as a Function of Operational Tasks
Although current-technology simulator visual systems can achieve extremely high levels of realism, they do not completely replicate the experience of a pilot sitting in the cockpit, looking at the outside world. Some differences in experience are due to visual artifacts, or perceptual features that would not be present in a naturally viewed scene. Others are due to features that are missing from the simulated scene. In this paper, these differences will be defined and discussed. The significance of these differences will be examined as a function of several particular operational tasks. A framework to facilitate the choice of visual system characteristics based on operational task requirements will be proposed
Engineering Data Compendium. Human Perception and Performance, Volume 1
The concept underlying the Engineering Data Compendium was the product of an R&D program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is Volume 1, which contains sections on Visual Acquisition of Information, Auditory Acquisition of Information, and Acquisition of Information by Other Senses
Neural mechanisms for reducing uncertainty in 3D depth perception
In order to navigate and interact within their environment, animals must process and interpret sensory information to generate a representation, or ‘percept’, of that environment. However, sensory information is invariably noisy, ambiguous, or incomplete due to the constraints of the sensory apparatus, and this leads to uncertainty in perceptual interpretation. To overcome these problems, sensory systems have evolved multiple strategies for reducing perceptual uncertainty in the face of such input, thus optimizing goal-oriented behaviours. Two of these strategies have been observed even in the simplest of neural systems and are represented in Bayesian formulations of perceptual inference: sensory integration and prior experience. In this thesis, I present a series of studies that examine these processes, and the neural mechanisms underlying them, in the primate visual system by studying depth perception in human observers. Chapters 2 & 3 used functional brain imaging to localize cortical areas involved in integrating multiple visual depth cues, which enhances observers’ ability to judge depth. Specifically, we tested which of two possible computational methods the brain uses to combine depth cues. Based on the results, we applied disruption techniques to examine whether these select brain regions are critical for depth cue integration. Chapters 4 & 5 addressed the question of how memory systems operating over different time scales interact to resolve perceptual ambiguity when the retinal signal is compatible with more than one 3D interpretation of the world. Finally, we examined the role of higher cortical regions (parietal cortex) in depth perception and the resolution of ambiguous visual input by testing patients with brain lesions
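The Bayesian cue-integration idea referred to in this abstract has a standard textbook form: two independent Gaussian depth estimates are fused by inverse-variance weighting, which pulls the combined estimate toward the more reliable cue and reduces the overall uncertainty. The sketch below illustrates that generic rule with made-up numbers; it is not claimed to be either of the specific computational methods tested in the thesis.

```python
def combine_cues(mu_a, var_a, mu_b, var_b):
    """Fuse two independent Gaussian estimates of depth (e.g. a disparity
    cue and a perspective cue) using the textbook maximum-likelihood rule:
    each cue is weighted by its reliability (inverse variance)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)  # reliability weight, cue A
    w_b = 1.0 - w_a                              # reliability weight, cue B
    mu = w_a * mu_a + w_b * mu_b                 # combined depth estimate
    var = 1.0 / (1 / var_a + 1 / var_b)          # combined (reduced) variance
    return mu, var

# Illustrative numbers: a noisy cue (variance 4) and a reliable cue
# (variance 1); the fused estimate sits closer to the reliable cue and
# its variance falls below that of either cue alone.
mu, var = combine_cues(mu_a=10.0, var_a=4.0, mu_b=12.0, var_b=1.0)
print(mu, var)  # fused mean ≈ 11.6, fused variance ≈ 0.8
```

The reduction in variance is the formal sense in which sensory integration "reduces uncertainty": the fused percept is always at least as reliable as the best single cue.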