Gaze-grasp coordination in obstacle avoidance: differences between binocular and monocular viewing
Most adults can skillfully avoid potential obstacles when acting in everyday cluttered scenes. We examined how gaze and hand movements are normally coordinated for obstacle avoidance and whether these are altered when binocular depth information is unavailable. Visual fixations and hand movement kinematics were recorded simultaneously while 13 right-handed subjects reached to precision-grasp a cylindrical household object, presented alone or with a potential obstacle (a wine glass) located to its left (the thumb's grasp side), to its right, or just behind it (both closer to the finger's grasp side), using binocular or monocular vision. Gaze and hand movement strategies differed significantly by view and obstacle location. With binocular vision, initial fixations were near the target's centre of mass (COM) around the time of hand movement onset, but usually shifted to end just above the thumb's grasp site at initial object contact, which was mainly made by the thumb, consistent with selecting this digit for guiding the grasp. This strategy was associated with faster hand movements and better end-point grip precision across all trials than monocular viewing, during which subjects usually continued to fixate the target closer to its COM despite a similar prevalence of thumb-first contacts. While subjects looked directly at the obstacle at each location on a minority of trials, and their overall fixations on the target were somewhat biased towards the grasp side nearest to it, these gaze behaviours were particularly marked on monocular, obstacle-behind trials, which also commonly ended in finger-first contact. Subjects avoided colliding with the wine glass under both views when it was on the right (finger side) of the workspace by producing slower and straighter reaches, with this and the behind-obstacle location also resulting in 'safer' (i.e. narrower) peak grip apertures and longer deceleration times than when the goal object was alone or the obstacle was on its thumb side. But monocular reach paths were more variable, and deceleration times were selectively prolonged on finger-side and behind-obstacle trials, with the latter condition further resulting in selectively increased grip closure times and corrections. Binocular vision thus provided added advantages for collision avoidance, known to require intact dorsal cortical stream processing mechanisms, particularly when the target of the grasp and the potential obstacle to it were fairly closely separated in depth. Different accounts of the altered monocular gaze behaviour converged on the conclusion that additional perceptual and/or attentional resources are likely engaged compared to when continuous binocular depth information is available. Implications for people lacking binocular stereopsis are briefly considered.
First- and second-order contributions to depth perception in anti-correlated random dot stereograms.
The binocular energy model of neural responses predicts that depth from binocular disparity might be perceived in the reversed direction when the contrast of dots presented to one eye is reversed. While reversed depth has been found using anti-correlated random-dot stereograms (ACRDS), the findings are inconsistent across studies. The mixed findings may be accounted for by the presence of a gap between the target and surround, or by overlap of dots around the vertical edges of the stimuli. To test this, we assessed whether (1) the gap size (0, 19.2 or 38.4 arc min), (2) the correlation of dots, or (3) the border orientation (circular target, or horizontal or vertical edge) affected the perception of depth. Reversed depth from ACRDS (circular, no-gap condition) was seen by a minority of participants, but this effect diminished as the gap size increased. Depth was mostly perceived in the correct direction for ACRDS edge stimuli, with the effect increasing with the gap size. The inconsistency across conditions can be accounted for by the relative reliability of first- and second-order depth detection mechanisms, and the coarse spatial resolution of the latter.
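The core stimulus manipulation can be illustrated directly: an anti-correlated stereogram is a random-dot stereogram in which the disparity-defined target is also contrast-reversed in one eye's image. A minimal NumPy sketch follows; the image size, dot count, and 4-pixel disparity are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_acrds(size=128, n_dots=600, disparity=4, anti_correlated=True):
    """Sketch of a random-dot stereogram with a central square target.

    Dots are +1 (bright) or -1 (dark) on a zero (mid-grey) background.
    The target region is shifted horizontally by `disparity` pixels in
    the right eye's image; in the anti-correlated case the target dots
    in that eye also have their contrast inverted.
    """
    left = np.zeros((size, size))
    ys = rng.integers(0, size, n_dots)
    xs = rng.integers(0, size, n_dots)
    left[ys, xs] = rng.choice([-1.0, 1.0], n_dots)

    right = left.copy()
    # Central square target: shift it in the right eye to create disparity.
    lo, hi = size // 4, 3 * size // 4
    target = left[lo:hi, lo:hi]
    if anti_correlated:
        target = -target  # contrast-reverse the target dots in this eye
    right[lo:hi, lo:hi] = 0.0
    right[lo:hi, lo + disparity:hi + disparity] = target
    return left, right

left, right = make_acrds()
```

Setting `anti_correlated=False` yields an ordinary correlated stereogram with the same disparity, which is the comparison the correlation manipulation above turns on.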
The Time Course of Segmentation and Cue-Selectivity in the Human Visual Cortex
Texture discontinuities are a fundamental cue by which the visual system segments objects from their background. The neural mechanisms supporting texture-based segmentation are therefore critical to visual perception and cognition. In the present experiment we employ an EEG source-imaging approach to study the time course of texture-based segmentation in the human brain. Visual Evoked Potentials were recorded to four types of stimuli in which periodic temporal modulation of a central 3° figure region could either support figure-ground segmentation, or produce identical local texture modulations without changing global image segmentation. The image discontinuities were defined either by orientation or phase differences across image regions. Evoked responses to these four stimuli were analyzed both at the scalp and on the cortical surface, in retinotopic and functional regions of interest (ROIs) defined separately using fMRI on a subject-by-subject basis. Texture segmentation (tsVEP: segmenting versus non-segmenting) and cue-specific (csVEP: orientation versus phase) responses exhibited distinctive patterns of activity. Alternations between uniform and segmented images produced highly asymmetric responses that were larger after transitions from the uniform to the segmented state. Texture modulations that signaled the appearance of a figure evoked a pattern of increased activity starting at ∼143 ms that was larger in V1 and LOC ROIs, relative to identical modulations that did not signal figure-ground segmentation. This segmentation-related activity occurred after an initial response phase that did not depend on the global segmentation structure of the image. The two cue types evoked similar tsVEPs up to 230 ms, after which they differed in the V4 and LOC ROIs. The evolution of the response proceeded largely in the feed-forward direction, with only weak evidence for feedback-related activity.
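The segmenting versus non-segmenting contrast hinges on whether a local orientation change also creates a global figure-ground boundary. A minimal sketch of an orientation-defined version is below; the grid size, element size, and central figure region are hypothetical choices for illustration, not the experiment's stimuli.

```python
import numpy as np

def texture_stimulus(grid=16, cell=8, segmenting=True):
    """Sketch of an orientation-defined texture (hypothetical parameters).

    Each cell of a grid holds a short line element. In the segmenting
    image, elements inside the central figure region are rotated 90 deg
    relative to the background, creating a texture-defined figure. In
    the non-segmenting control, every element shares one orientation,
    so modulating the whole image changes local texture identically
    but no figure emerges.
    """
    size = grid * cell
    img = np.zeros((size, size))
    lo, hi = grid // 4, 3 * grid // 4  # central figure region, in cells
    for gy in range(grid):
        for gx in range(grid):
            in_figure = lo <= gy < hi and lo <= gx < hi
            vertical = in_figure if segmenting else True
            y0, x0 = gy * cell, gx * cell
            c = cell // 2
            if vertical:
                img[y0 + 1:y0 + cell - 1, x0 + c] = 1.0  # vertical element
            else:
                img[y0 + c, x0 + 1:x0 + cell - 1] = 1.0  # horizontal element
    return img
```

Alternating each frame between two such images (e.g. by swapping orientations) gives identical local modulation in both conditions, while only the segmenting condition alternates between uniform and segmented global states.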
Sparse EEG/MEG source estimation via a group lasso
This work was supported by EY018875, National Institutes of Health; EY015790, National Institutes of Health; DMS-1007719, National Science Foundation; and RO1-EB001988-15, National Institutes of Health.

Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencephalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse method that has been adapted to take advantage of functionally defined regions of interest for the definition of physiologically meaningful groups within a functionally based common space. Detailed simulations using realistic source geometries, and data from a human Visual Evoked Potential experiment, demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest improves the accuracy of source estimates for both the group-lasso and minimum-norm approaches.
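The group-lasso prior penalizes the summed ℓ2 norms of the source amplitudes within each group (here, the dipoles of one region of interest), so whole ROIs are shrunk toward zero and switched off together, unlike the ℓ2 minimum norm, which spreads energy across all sources. A minimal NumPy sketch of the penalty's proximal operator and an ISTA-style solver, under illustrative assumptions (a small random lead-field, two groups, and a hand-picked regularization weight, not the paper's pipeline):

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||x_g||_2.

    `groups` is a list of index arrays, one per group. Groups whose
    l2 norm falls below lam are zeroed as a unit; the rest are shrunk
    radially, which is what yields group-level sparsity.
    """
    out = np.zeros_like(x, dtype=float)
    for idx in groups:
        g = x[idx]
        norm = np.linalg.norm(g)
        if norm > lam:
            out[idx] = (1.0 - lam / norm) * g
    return out

def group_lasso_ista(L, y, groups, lam, n_iter=200):
    """Minimal ISTA sketch for min_s ||y - L s||^2 / 2 + lam * sum_g ||s_g||_2.

    L is the (sensors x sources) lead-field matrix and y the sensor data.
    The step size 1 / ||L||_2^2 keeps the gradient step contractive.
    """
    s = np.zeros(L.shape[1])
    step = 1.0 / np.linalg.norm(L, 2) ** 2
    for _ in range(n_iter):
        grad = L.T @ (L @ s - y)               # gradient of the data fit
        s = group_soft_threshold(s - step * grad, groups, step * lam)
    return s
```

On a toy problem where only one group of sources is truly active, the inactive group's estimate collapses to (near) zero while the minimum-norm solution would leave it diffusely nonzero.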