16 research outputs found

    No local cancellation between directionally opposed first-order and second-order motion signals

    Despite strong converging evidence that there are separate mechanisms for the processing of first-order and second-order motion, the issue remains controversial. Qian, Andersen and Adelson (J. Neurosci., 14 (1994), 7357–7366) have shown that first-order motion signals cancel if locally balanced. Here we show that this is also the case for second-order motion signals, but not for a mixture of first-order and second-order motion, even when the visibility of the two types of stimulus is equated. Our motion sequence consisted of a dynamic binary noise carrier divided into horizontal strips of equal height, each of which was spatially modulated in either contrast or luminance by a 1.0 c/deg sinusoid. The modulation moved leftward or rightward (3.75 Hz) in alternate strips. The single-interval task was to identify the direction of motion of the central strip. Three conditions were tested: all second-order strips, all first-order strips, and spatially alternated first-order and second-order strips. In the first condition, a threshold strip height for the second-order strips was obtained at a contrast modulation depth of 100%. In the second condition, this height was used for the first-order strips, and a threshold was obtained in terms of luminance contrast. These two previously obtained threshold values were used to equate the visibility of the first-order and second-order components in the third condition. Direction identification, instead of being at threshold, was near-perfect for all observers. We argue that the first two conditions demonstrate local cancellation of motion signals, whereas in the third condition this does not occur. We attribute this non-cancellation to separate processing of first-order and second-order motion inputs.
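    The stimulus construction described above can be sketched in code: a dynamic binary-noise carrier whose contrast (second-order) or luminance (first-order) is modulated by a drifting 1.0 c/deg sinusoid at 3.75 Hz. The frame rate, pixel scale, strip geometry and modulation handling below are illustrative assumptions, not values taken from the paper:

    ```python
    import numpy as np

    def make_strip(n_frames=32, width=256, height=32, px_per_deg=64,
                   spatial_freq=1.0, temporal_freq=3.75, frame_rate=60.0,
                   mode="contrast", depth=1.0, seed=0):
        """One horizontal strip: dynamic binary noise carrying a drifting sinusoid.

        mode="contrast"  -> second-order strip (contrast-modulated noise)
        mode="luminance" -> first-order strip (luminance-modulated noise)
        """
        rng = np.random.default_rng(seed)
        x = np.arange(width) / px_per_deg                          # position in degrees
        frames = []
        for f in range(n_frames):
            noise = rng.choice([-1.0, 1.0], size=(height, width))  # fresh carrier each frame
            phase = 2.0 * np.pi * temporal_freq * f / frame_rate
            mod = np.sin(2.0 * np.pi * spatial_freq * x - phase)   # drifts rightward over frames
            if mode == "contrast":
                frame = noise * (1.0 + depth * mod) / 2.0          # envelope scales local contrast
            else:
                frame = 0.5 * noise + depth * mod                  # sinusoid adds to luminance
            frames.append(frame)
        return np.asarray(frames)                                  # (n_frames, height, width)
    ```

    Stacking such strips with alternating drift directions (and alternating modes for the mixed condition) would reproduce the overall display layout.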

    Probabilistic Motion Estimation Based on Temporal Coherence

    We develop a theory for the temporal integration of visual motion motivated by psychophysical experiments. The theory proposes that input data are temporally grouped and used to predict and estimate the motion flows in the image sequence. This temporal grouping can be considered a generalization of the data association techniques used by engineers to study motion sequences. Our temporal-grouping theory is expressed in terms of the Bayesian generalization of standard Kalman filtering. To implement the theory we derive a parallel network which shares some properties of cortical networks. Computer simulations of this network demonstrate that our theory qualitatively accounts for psychophysical experiments on motion occlusion and motion outliers.
    Comment: 40 pages, 7 figures
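    The predict-then-correct cycle that the theory generalizes can be illustrated with a minimal one-dimensional constant-velocity Kalman filter. This is the standard textbook filter, not the paper's full Bayesian temporal-grouping network, and all numeric settings are assumptions:

    ```python
    import numpy as np

    def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
        """Track position and velocity from noisy position measurements."""
        F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
        H = np.array([[1.0, 0.0]])              # only position is measured
        Q = q * np.eye(2)                       # process noise covariance
        R = np.array([[r]])                     # measurement noise covariance
        x = np.array([[measurements[0]], [0.0]])
        P = np.eye(2)
        estimates = []
        for z in measurements:
            x = F @ x                           # predict the next state...
            P = F @ P @ F.T + Q
            y = np.array([[z]]) - H @ x         # ...then correct with the new datum
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
            estimates.append(x[:, 0].copy())
        return np.array(estimates)              # rows of (position, velocity)
    ```

    On a noiseless linear trajectory the velocity estimate converges to the true slope, which is the sense in which temporal integration "predicts" the motion flow.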

    Integration across time determines path deviation discrimination for moving objects.

    Background: Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects, a common task which can allow us either to intercept moving objects or to avoid them if they pose a threat. Methodology/Principal Findings: Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of the change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of the motion path and object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a 'scale invariant' model in which any two stimuli that possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different: irrespective of path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds. Conclusions/Significance: Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgments of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects.
    Wellcome Trust, Leverhulme Trust, NI
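    The contrast between the two rules reported here can be made concrete with a toy calculation: a duration rule predicts identical deviation thresholds for any two motion paths that stay on screen for the same time, regardless of their geometry. The threshold function below is a made-up illustration, not the authors' fitted model:

    ```python
    def deviation_threshold(path_len_deg, speed_deg_s, k=2.0):
        """Hypothetical duration rule: threshold depends only on path duration.

        k is an arbitrary constant; the 1/duration form is assumed for
        illustration, not taken from the paper.
        """
        duration_s = path_len_deg / speed_deg_s   # time the object is in motion
        return k / duration_s

    # Two very different stimuli with the same 0.5 s duration predict the
    # same threshold, despite different path lengths and speeds:
    short_slow = deviation_threshold(path_len_deg=4.0, speed_deg_s=8.0)
    long_fast = deviation_threshold(path_len_deg=10.0, speed_deg_s=20.0)
    ```

    A scale-invariant rule, by contrast, would equate stimuli with the same length/width geometry rather than the same duration.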

    Collinear motion strengthens local context in visual detection

    Detection of elongated objects in the visual scene can be improved by additional elements flanking the object along the collinear axis. This is the collinear context effect (CE), and it is mediated by the long-range horizontal connection plexus in V1. The aim of this study was to test whether collinear visual motion can enhance the CE. In the three experiments of this study, the flank was presented with different types of motion: collinear motion aligned with the longitudinal axis of the to-be-detected object, either toward or away from it, and orthogonal motion with a direction perpendicular to the collinear axis. Only collinear motion toward the target produced a robust and replicable strengthening of the CE. This dynamic modulation of the CE is likely implemented in the long-range horizontal connection plexus in V1, but, because it also conveys the timing information of the motion, there must be direct feedback to V1 from higher visual areas where motion perception is implemented, such as the middle temporal area (MT). Elongated visual objects moving along their longitudinal axis favor a propagation of activation ahead of them via a network of interconnected units, which allows the visual system to predict future positions of relevant items in the visual scene.

    Performance Characterization of Watson Ahumada Motion Detector Using Random Dot Rotary Motion Stimuli

    The performance of Watson & Ahumada's model of human visual motion sensing is compared against human psychophysical performance. The stimulus consists of random dots undergoing rotary motion, displayed in a circular annulus. The model matches psychophysical observer performance with respect to most parameters. It replicates key psychophysical findings such as the invariance of observer performance to dot density in the display, and the decrease of observer performance with the frame duration of the display.
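    A minimal version of the rotary random-dot stimulus can be sketched as follows. The annulus radii, rotation step and coherence handling are assumptions, since the abstract does not give display parameters:

    ```python
    import numpy as np

    def rotary_dot_frames(n_dots=100, n_frames=10, r_inner=2.0, r_outer=5.0,
                          deg_per_frame=3.0, coherence=1.0, seed=0):
        """Random dots in a circular annulus undergoing rotary motion.

        A fraction `coherence` of dots rotates by deg_per_frame each frame;
        the remainder are replotted at random angles (one common way to
        dilute the motion signal; the paper may have done this differently).
        Returns a list of (n_dots, 2) arrays of x, y positions.
        """
        rng = np.random.default_rng(seed)
        theta = rng.uniform(0.0, 2.0 * np.pi, n_dots)
        # sqrt sampling makes the dots uniform over the annulus area
        radius = np.sqrt(rng.uniform(r_inner**2, r_outer**2, n_dots))
        frames = []
        for _ in range(n_frames):
            frames.append(np.column_stack([radius * np.cos(theta),
                                           radius * np.sin(theta)]))
            signal = rng.random(n_dots) < coherence
            theta[signal] += np.deg2rad(deg_per_frame)                   # coherent rotation
            theta[~signal] = rng.uniform(0.0, 2.0 * np.pi, (~signal).sum())  # noise dots
        return frames
    ```

    With coherence 1.0, every dot keeps its radius and advances by the same angular step between frames, which is the pure rotary signal the detector is probed with.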

    Identification of everyday objects on the basis of kinetic contours

    Using kinetic contours derived from everyday objects, we investigated how motion affects object identification. So that they would not be distinguishable when static, the kinetic contours were made from random dot displays consisting of two regions, inside and outside the object contour. In Experiment 1, the dots moved in only one of the two regions. The objects were identified nearly equally well as soon as the dots either in the figure or in the background started to move. RTs decreased with increasing motion coherence levels and were shorter for complex, less compact objects than for simple, more compact objects. In Experiment 2, objects could be identified when the dots were moving both in the figure and in the background, with speed and direction differences between the two. A linear increase in either the speed difference or the direction difference caused a linear decrease in RT for correct identification. In addition, the combination of speed and direction differences appeared to be super-additive.
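    The display construction used here, in which a contour is visible only through a speed or direction difference between dots inside and outside it, can be sketched like this. Dot density, the background velocity and the parameter values are illustrative assumptions:

    ```python
    import numpy as np

    def kinetic_contour_frame_pair(contour_mask, speed_diff=1.0,
                                   dir_diff_deg=90.0, density=0.05, seed=0):
        """Two frames of a kinetic-contour display.

        contour_mask: (h, w) array, nonzero inside the object contour.
        Dots inside the mask move with the given speed and direction
        differences relative to the background; statically, inside and
        outside dots are indistinguishable.
        """
        rng = np.random.default_rng(seed)
        h, w = contour_mask.shape
        n = int(h * w * density)
        xy = rng.uniform([0, 0], [w, h], size=(n, 2))           # random dot positions
        inside = contour_mask[xy[:, 1].astype(int),
                              xy[:, 0].astype(int)].astype(bool)
        v_bg = np.array([1.0, 0.0])                             # background drifts right
        ang = np.deg2rad(dir_diff_deg)
        v_fig = (1.0 + speed_diff) * np.array([np.cos(ang), np.sin(ang)])
        xy2 = xy + np.where(inside[:, None], v_fig, v_bg)       # region-dependent motion
        return xy, xy2, inside
    ```

    Sweeping `speed_diff` and `dir_diff_deg` toward zero recovers the case where figure and background move identically and the contour disappears.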

    Detecting motion trajectories: How do perception and action use visual information?

    This item is only available electronically.
    Whenever a person moves to intercept an object, they engage in a complex set of predictions about the object's trajectory and about the set of motions required to intercept it. However, the way that people use perceptual information to intercept rapidly moving objects is currently not well understood, because the problem is multifaceted: there are delays in receptor transduction, neural conduction, processing and muscle activation. There is considerable uncertainty as to how the two systems interact, although there is some evidence that they do (Watamaniuk & Heinen, 2003). In order to assess the differences between trajectory prediction for perceptual judgments and for pointing movements, we examined participants using the same stimulus, a moving random dot cinematogram (Watamaniuk & Heinen, 1999; Williams & Sekuler, 1984), which was manipulated across conditions. We used a within-subjects repeated-measures design to compare participants' performance on two tasks, a perceptual (two-alternative forced-choice) task and a pointing task (N = 6). For both tasks we assessed participants' precision in extrapolating the trajectory of the cinematogram, as well as their response latency. If the two systems use the same visual information, we would expect precision on each task to change similarly across conditions. We found similar patterns of error for both tasks, with shorter durations and higher-bandwidth motion signals producing greater directional error. This provides further insight into how we use visual information to guide movement, and in particular into how differences in motion perception affect interceptive movements.
    Thesis (B.PsychSc(Hons)) -- University of Adelaide, School of Psychology, 202

    Entwicklung eines impulskodierenden neuronalen Netzes für die Segmentierung bewegter Szenen (Development of a spike-coding neural network for the segmentation of moving scenes)


    Mental and sensorimotor extrapolation fare better than motion extrapolation in the offset condition

    Evidence for motion extrapolation at motion offset is scarce. In contrast, there is abundant evidence that subjects mentally extrapolate the future trajectory of weak motion signals at motion offset. Further, pointing movements overshoot at motion offset. We believe that mental and sensorimotor extrapolation are sufficient to solve the problem of perceptual latencies. Both have the advantage of being much more flexible than motion extrapolation.