Optimality of Human Contour Integration
For processing and segmenting visual scenes, the brain must combine a multitude of features and sensory channels. It is known neither whether these complex tasks involve optimal integration of information, nor according to which objectives such computations might be performed. Here, we investigate whether optimal inference can explain contour integration in human subjects. We performed experiments in which observers detected contours of curvilinearly aligned edge configurations embedded in randomly oriented distractors. The key feature of our framework is the use of a generative process for creating the contours, for which it is possible to derive a class of ideal detection models. This allowed us to compare human detection of contours with different statistical properties to the corresponding ideal detection models for the same stimuli. We then subjected the detection models to realistic constraints and required them to reproduce human decisions for every stimulus as closely as possible. By independently varying the four model parameters, we identified a single detection model which quantitatively captures all correlations of human decision behaviour for more than 2000 stimuli from 42 contour ensembles with greatly varying statistical properties. This model reveals specific interactions between edges that closely match independent findings from physiology and psychophysics. These interactions imply contour statistics for which edge stimuli are indeed optimally integrated by the visual system, with the objective of inferring the presence of contours in cluttered scenes. The recurrent algorithm of our model makes testable predictions about the temporal dynamics of neuronal populations engaged in contour integration, and it suggests a strong directionality of the underlying functional anatomy.
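The ideal-observer logic summarised above can be illustrated with a toy log-likelihood-ratio detector. This is a minimal sketch, not the authors' model: it assumes that edge deviations from the local contour tangent follow a von Mises distribution (with an illustrative concentration `kappa`) under the contour hypothesis, and a uniform distribution under the distractor hypothesis.

```python
import math

def loglik_ratio(orientation_diffs, kappa=2.0):
    """Summed log-likelihood ratio for 'contour' vs 'random distractors'.

    orientation_diffs: deviations (radians) of each edge from the local
        contour tangent; orientations are treated with period pi, so the
        angle is doubled inside the cosine.
    kappa: concentration of the assumed alignment prior (hypothetical value).
    """
    # log I0(kappa) via the Bessel-function power series (converges quickly)
    log_i0 = math.log(sum((kappa / 2) ** (2 * k) / math.factorial(k) ** 2
                          for k in range(20)))
    # per-edge log ratio: von Mises log-density minus uniform log-density;
    # the normalising constants reduce to -log I0(kappa)
    return sum(kappa * math.cos(2 * d) - log_i0 for d in orientation_diffs)
```

An observer of this kind would report a contour whenever the summed log-likelihood ratio is positive; the full model described in the abstract additionally includes recurrent edge interactions and four fitted parameters, which this sketch omits.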
Low level constraints on dynamic contour path integration
Contour integration is a fundamental visual process. The constraints on integrating discrete contour elements, and the associated neural mechanisms, have typically been investigated using static contour paths. However, in our dynamic natural environment, objects and scenes vary over space and time. With the aim of investigating the parameters affecting spatiotemporal contour path integration, we measured human contrast detection performance for a briefly presented foveal target embedded in dynamic collinear stimulus sequences (comprising five short 'predictor' bars appearing consecutively towards the fovea, followed by the 'target' bar) in four experiments. The data showed that participants' target detection performance was relatively unchanged when individual contour elements were separated by up to a 2° spatial gap or a 200 ms temporal gap. Randomising the luminance contrast or colour of the predictors, on the other hand, had a similarly detrimental effect on grouping of the dynamic contour path and on subsequent target detection performance. Randomising the orientation of the predictors reduced target detection performance more than introducing misalignment relative to the contour path did. The results suggest that the visual system integrates dynamic path elements to bias target detection even when the continuity of the path is disrupted in terms of spatial (2°), temporal (200 ms), colour (over 10 colours) and luminance (-25% to 25%) information. We discuss how the findings can be largely reconciled with the functioning of V1 horizontal connections.
Combining S-cone and luminance signals adversely affects discrimination of objects within backgrounds
The visual system processes objects embedded in complex scenes that vary in both luminance and colour. In such scenes, colour contributes to the segmentation of objects from backgrounds, but does it also affect perceptual organisation of object contours that are already defined by luminance signals, or are these processes unaffected by colour's presence? We investigated whether luminance and chromatic signals comparably sustain processing of objects embedded in backgrounds, by varying contrast along the luminance dimension and along the two cone-opponent colour directions. In the first experiment, thresholds for object/non-object discrimination of Gaborised shapes were obtained in the presence and absence of background clutter. Contrast of the component Gabors was modulated along single colour/luminance dimensions or co-modulated along multiple dimensions simultaneously. Background clutter elevated discrimination thresholds only for combined S-(L + M) and L + M signals. The second experiment replicated and extended this finding by demonstrating that the effect depended on the presence of relatively high S-(L + M) contrast. These results indicate that S-(L + M) signals impair spatial vision when combined with luminance. Since S-(L + M) signals are characterised by relatively large receptive fields, this is likely to be due to an increase in the size of the integration field over which contour-defining information is summed.
Contrast Adaptation Contributes to Contrast-Invariance of Orientation Tuning of Primate V1 Cells
BACKGROUND: Studies in rodents and carnivores have shown that the orientation tuning width of single neurons does not change when stimulus contrast is modified. However, in these studies, stimuli were presented for a relatively long duration (e.g., 4 seconds), making it possible that contrast adaptation contributed to contrast-invariance of orientation tuning. Our first purpose was to determine, in marmoset area V1, whether orientation tuning remains contrast-invariant when the stimulus duration is comparable to that of a visual fixation. METHODOLOGY/PRINCIPAL FINDINGS: We performed extracellular recordings and examined orientation tuning of single units using static sine-wave gratings that were flashed for 200 msec. Sixteen orientations and three contrast levels, representing low, medium and high values in the range of effective contrasts for each neuron, were randomly intermixed. Contrast adaptation being a slow phenomenon, cells did not have enough time to adapt to each contrast individually. With this stimulation protocol, we found that the tuning width obtained at intermediate contrast was reduced to 89% (median), and that at low contrast to 76%, of that obtained at high contrast. Therefore, when probed with briefly flashed stimuli, orientation tuning is not contrast-invariant in marmoset V1. Our second purpose was to determine whether contrast adaptation contributes to contrast-invariance of orientation tuning. Stationary gratings were presented, as previously, for 200 msec with randomly varying orientations, but the contrast was kept constant within stimulation blocks lasting >20 sec, allowing for adaptation to the single contrast in use. Under these conditions, tuning widths obtained at low contrast were still significantly narrower than at high contrast (median 85%). However, tuning widths obtained with medium and high contrast stimuli no longer differed significantly.
CONCLUSIONS/SIGNIFICANCE: Orientation tuning does not appear to be contrast-invariant when briefly flashed stimuli vary in both contrast and orientation, but contrast adaptation partially restores contrast-invariance of orientation tuning.
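As a rough illustration of the tuning-width comparison, the sketch below estimates the half-width at half-height of a tuning curve sampled at 16 orientations, as in the stimulation protocol. It is a hypothetical analysis sketch, not the study's actual fitting method. It also shows why the result is informative: a purely multiplicative contrast gain leaves this width unchanged, so genuinely narrower tuning at low contrast implies something beyond simple response scaling.

```python
import math

def hwhh(orientations_deg, responses):
    """Half-width at half-height (degrees) of a single-peaked tuning curve.

    Walks from the peak towards higher orientations and linearly
    interpolates the half-height crossing; assumes the peak and the
    crossing both lie within the sampled range.
    """
    peak_i = max(range(len(responses)), key=lambda i: responses[i])
    half = responses[peak_i] / 2.0
    for i in range(peak_i, len(responses) - 1):
        r0, r1 = responses[i], responses[i + 1]
        if r1 <= half:
            frac = (r0 - half) / (r0 - r1)  # fraction of the step to the crossing
            step = orientations_deg[i + 1] - orientations_deg[i]
            return orientations_deg[i] + frac * step - orientations_deg[peak_i]
    return None

# 16 orientations spanning 180 degrees; illustrative Gaussian tuning
# centred on 90 degrees with a 20-degree standard deviation
thetas = [i * 11.25 for i in range(16)]
high = [math.exp(-(t - 90.0) ** 2 / (2 * 20.0 ** 2)) for t in thetas]
low = [0.3 * r for r in high]  # pure multiplicative gain change
```

Because `hwhh(thetas, low)` equals `hwhh(thetas, high)`, any measured narrowing at low contrast cannot be produced by response scaling alone, which is what makes the reported 76% and 89% width reductions diagnostic.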