16,055 research outputs found
Motion clouds: model-based stimulus synthesis of natural-like random textures for the study of motion perception
Choosing an appropriate set of stimuli is essential to characterize the
response of a sensory system to a particular functional dimension, such as
eye movements following the motion of a visual scene. Here, we describe a
framework to generate random texture movies with controlled information
content, i.e., Motion Clouds. These stimuli are defined using a generative
model that is based on controlled experimental parametrization. We show that
Motion Clouds correspond to dense mixing of localized moving gratings with
random positions. Their global envelope is similar to natural-like stimulation
with an approximate full-field translation corresponding to a retinal slip. We
describe the construction of these stimuli mathematically and propose an
open-source Python-based implementation. Examples of the use of this framework
are shown. We also propose extensions to other modalities such as color vision,
touch, and audition.
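The generative model can be sketched in a few lines of NumPy: a Motion Cloud is random-phase noise whose spatiotemporal Fourier envelope is concentrated around a preferred spatial frequency and a speed plane (the "retinal slip"). The parameter names (sf_0, B_sf, V_X, B_V) and Gaussian envelope shapes below are simplifying assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

# Hypothetical minimal sketch of a Motion Cloud: band-pass filtered
# random-phase noise whose spatiotemporal Fourier envelope is centred
# on a plane corresponding to a full-field translation.
N, T = 64, 32                      # spatial size and number of frames
fx = np.fft.fftfreq(N)[:, None, None]
fy = np.fft.fftfreq(N)[None, :, None]
ft = np.fft.fftfreq(T)[None, None, :]

sf_0, B_sf = 0.15, 0.05            # preferred spatial frequency and bandwidth
V_X, B_V = 1.0, 0.2                # horizontal speed and speed bandwidth

f_r = np.sqrt(fx**2 + fy**2)       # radial spatial frequency
# Gaussian ring around sf_0 in spatial frequency...
env_sf = np.exp(-0.5 * (f_r - sf_0)**2 / B_sf**2)
# ...times a Gaussian around the speed plane ft + V_X * fx = 0.
env_v = np.exp(-0.5 * (ft + V_X * fx)**2 / (B_V * (f_r + 1e-6))**2)
envelope = env_sf * env_v

rng = np.random.default_rng(0)
phase = np.exp(2j * np.pi * rng.random((N, N, T)))
movie = np.fft.ifftn(envelope * phase).real
movie /= np.abs(movie).max()       # normalise contrast to [-1, 1]
```

Because all randomness lives in the phases while the envelope is fixed, every draw is a statistically equivalent stimulus with the same controlled information content.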
Optimal measurement of visual motion across spatial and temporal scales
Sensory systems use limited resources to mediate the perception of a great
variety of objects and events. Here a normative framework is presented for
exploring how the problem of efficient allocation of resources can be solved in
visual perception. Starting with a basic property of every measurement,
captured by Gabor's uncertainty relation about the location and frequency
content of signals, prescriptions are developed for optimal allocation of
sensors for reliable perception of visual motion. This study reveals that a
large-scale characteristic of human vision (the spatiotemporal contrast
sensitivity function) is similar to the optimal prescription, and it suggests
that some previously puzzling phenomena of visual sensitivity, adaptation, and
perceptual organization have simple principled explanations.
Comment: 28 pages, 10 figures, 2 appendices; in press in Favorskaya MN and
Jain LC (Eds), Computer Vision in Advanced Control Systems using Conventional
and Intelligent Paradigms, Intelligent Systems Reference Library,
Springer-Verlag, Berlin
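Gabor's uncertainty relation states that a signal's spreads in time and frequency obey Δt·Δf ≥ 1/(4π), with equality for a Gaussian envelope. A quick numerical check (the discretisation details are illustrative, not from the paper):

```python
import numpy as np

# Verify numerically that a Gaussian window attains the minimum
# time-frequency uncertainty product 1/(4*pi).
dt = 1e-3
t = np.arange(-5, 5, dt)
g = np.exp(-t**2 / (2 * 0.5**2))           # Gaussian window, sigma_t = 0.5

p_t = g**2 / np.sum(g**2)                  # energy density in time
delta_t = np.sqrt(np.sum(p_t * t**2))

G = np.fft.fftshift(np.fft.fft(g))
f = np.fft.fftshift(np.fft.fftfreq(len(t), dt))
p_f = np.abs(G)**2 / np.sum(np.abs(G)**2)  # energy density in frequency
delta_f = np.sqrt(np.sum(p_f * f**2))

product = delta_t * delta_f                # ~ 1/(4*pi) ~ 0.0796
```

Any non-Gaussian window substituted for `g` yields a strictly larger product, which is the constraint the normative framework starts from.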
Efficient spiking neural network model of pattern motion selectivity in visual cortex
Simulating large-scale models of biological motion perception is challenging due to the memory required to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts is publicly available. © 2014 Springer Science+Business Media New York
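The CDS-to-PDS pooling stage can be illustrated with a toy rate model. Everything below (16 preferred directions, half-rectified cosine tuning, cosine pooling weights) is a simplifying assumption for illustration, not the authors' GPU spiking implementation.

```python
import numpy as np

n_dirs = 16
prefs = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)

def cds_response(grating_dir):
    # Half-rectified cosine tuning: each CDS cell responds to the
    # component of motion along its preferred direction.
    return np.maximum(np.cos(prefs - grating_dir), 0.0)

def pds_response(component_dirs):
    # Sum the CDS population responses to each grating, then pool them
    # with broad cosine weights centred on each PDS cell's preference.
    r = sum(cds_response(d) for d in component_dirs)
    return np.array([max(np.cos(prefs - p) @ r, 0.0) for p in prefs])

# A plaid whose two component gratings drift at +/-60 degrees: CDS cells
# peak at the component directions, while the pooled PDS population peaks
# at the pattern direction (0 degrees).
plaid = pds_response([np.pi / 3, -np.pi / 3])
pattern_dir = prefs[np.argmax(plaid)]
```

The broad pooling is what converts component selectivity into pattern selectivity: the cosine weights sum the two component bumps into a single peak at the plaid's true direction of motion.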
Analysis of the visual spatiotemporal properties of American Sign Language.
Careful measurements of the temporal dynamics of speech have provided important insights into phonetic properties of spoken languages, which are important for understanding auditory perception. By contrast, analytic quantification of the visual properties of signed languages is still largely uncharted. Exposure to sign language is a unique experience that could shape and modify low-level visual processing for those who use it regularly (i.e., what we refer to as the Enhanced Exposure Hypothesis). The purpose of the current study was to characterize the visual spatiotemporal properties of American Sign Language (ASL) so that future studies can test the Enhanced Exposure Hypothesis in signers, with the prediction that altered vision should be observed within, more so than outside, the range of properties found in ASL. Using an ultrasonic motion tracking system, we recorded the hand position in 3-dimensional space over time during sign language production of signs, sentences, and narratives. From these data, we calculated several metrics: hand position and eccentricity in space and hand motion speed. For individual signs, we also measured total distance travelled by the dominant hand and total duration of each sign. These metrics were found to fall within a selective range, suggesting that exposure to signs is a specific and unique visual experience, which might alter visual perceptual abilities in signers for visual information within the experienced range, even for non-language stimuli.
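The reported kinematic metrics are straightforward to compute from sampled 3-D positions. A minimal sketch, assuming a 240 Hz capture rate and a synthetic circular trajectory in place of real motion-tracking data:

```python
import numpy as np

fs = 240.0                                   # assumed samples per second
t = np.arange(0, 1.0, 1.0 / fs)              # one second of data
# Synthetic trajectory: hand moving on a 10 cm radius circle in x-y.
pos = np.stack([0.1 * np.cos(2 * np.pi * t),
                0.1 * np.sin(2 * np.pi * t),
                np.zeros_like(t)], axis=1)   # shape (n_samples, 3), metres

step = np.diff(pos, axis=0)                  # displacement between samples
speed = np.linalg.norm(step, axis=1) * fs    # instantaneous speed, m/s
total_distance = np.linalg.norm(step, axis=1).sum()   # path length, m
duration = len(t) / fs                       # total duration, s
```

With real capture data, `pos` would be the tracked coordinates of the dominant hand, and the same three lines yield the speed, total distance, and duration measures described above.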
Impaired perception of biological motion in Parkinson’s disease
OBJECTIVE: We examined biological motion perception in Parkinson’s disease (PD). Biological motion perception is related to one’s own motor function and depends on the integrity of brain areas affected in PD, including posterior superior temporal sulcus. If deficits in biological motion perception exist, they may be specific to perceiving natural/fast walking patterns that individuals with PD can no longer perform, and may correlate with disease-related motor dysfunction. METHOD: Twenty-six nondemented individuals with PD and 24 control participants viewed videos of point-light walkers and scrambled versions that served as foils, and indicated whether each video depicted a human walking. Point-light walkers varied by gait type (natural, parkinsonian) and speed (0.5, 1.0, 1.5 m/s). Participants also completed control tasks (object motion, coherent motion perception), a contrast sensitivity assessment, and a walking assessment. RESULTS: The PD group demonstrated significantly less sensitivity to biological motion than the control group (p < .001, Cohen’s d = 1.22), regardless of stimulus gait type or speed, with a less substantial deficit in object motion perception (p = .02, Cohen’s d = .68). There was no group difference in coherent motion perception. Although individuals with PD had slower walking speed and shorter stride length than control participants, gait parameters did not correlate with biological motion perception. Contrast sensitivity and coherent motion perception also did not correlate with biological motion perception. CONCLUSION: PD leads to a deficit in perceiving biological motion, which is independent of gait dysfunction and low-level vision changes, and may therefore arise from difficulty perceptually integrating form and motion cues in posterior superior temporal sulcus.
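Sensitivity in a walker-versus-foil task of this kind is conventionally quantified with the signal-detection measure d'. A sketch using only the Python standard library; the trial counts are made up, and the log-linear correction is one common convention, not necessarily the paper's exact analysis:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction to avoid infinite z-scores at rates of exactly 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hr) - z(far)

# Hypothetical example: 40 walker trials (36 "yes" responses) and
# 40 scrambled-foil trials (8 "yes" responses).
sensitivity = d_prime(36, 4, 8, 32)
```

Because d' separates sensitivity from response bias, a group difference in d' reflects genuinely poorer discrimination of biological motion rather than a shift in willingness to say "walker."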
Basic gestures as spatiotemporal reference frames for repetitive dance/music patterns in samba and charleston
The goal of the present study is to gain better insight into how dancers establish, through dancing, a spatiotemporal reference frame in synchrony with musical cues. With the aim of achieving this, repetitive dance patterns of samba and Charleston were recorded using a three-dimensional motion capture system. Geometric patterns then were extracted from each joint of the dancer's body. The method uses a body-centered reference frame and decomposes the movement into non-orthogonal periodicities that match periods of the musical meter. Musical cues (such as meter and loudness) as well as action-based cues (such as velocity) can be projected onto the patterns, thus providing spatiotemporal reference frames, or 'basic gestures,' for action-perception couplings. Conceptually speaking, the spatiotemporal reference frames control minimum effort points in action-perception couplings. They reside as memory patterns in the mental and/or motor domains, ready to be dynamically transformed in dance movements. The present study raises a number of hypotheses related to spatial cognition that may serve as guiding principles for future dance/music studies.
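The decomposition of a joint trajectory into periodicities that can be matched against the musical meter can be illustrated with a simple Fourier analysis of one coordinate. The synthetic trajectory and 100 Hz capture rate below are assumptions for illustration, not the study's data:

```python
import numpy as np

fs = 100.0                                  # assumed capture rate, Hz
t = np.arange(0, 8, 1 / fs)                 # 8 s of movement
beat = 2.0                                  # beat frequency, Hz (120 BPM)
# One joint coordinate oscillating at the beat plus a half-tempo component,
# mimicking a repetitive dance pattern nested in the metric hierarchy.
x = np.sin(2 * np.pi * beat * t) + 0.4 * np.sin(2 * np.pi * beat / 2 * t)

spectrum = np.abs(np.fft.rfft(x - x.mean()))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
dominant = freqs[np.argmax(spectrum)]       # recovers the beat period
```

With motion-capture data, the same analysis applied per joint yields the periods whose ratios to the beat define the repeated 'basic gesture'.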
Integrated 2-D Optical Flow Sensor
I present a new focal-plane analog VLSI sensor that estimates optical flow in two visual dimensions. The chip significantly improves on previous approaches both with respect to the applied model of optical flow estimation and the actual hardware implementation. Its distributed computational architecture consists of an array of locally connected motion units that collectively solve for the unique optimal optical flow estimate. The novel gradient-based motion model assumes visual motion to be translational, smooth, and biased. The model guarantees that the estimation problem is computationally well-posed regardless of the visual input. Model parameters can be globally adjusted, leading to a rich output behavior. Varying the smoothness strength, for example, can provide a continuous spectrum of motion estimates, ranging from normal to global optical flow. Unlike approaches that rely on the explicit matching of brightness edges in space or time, the applied gradient-based model assures spatiotemporal continuity of visual information. The non-linear coupling of the individual motion units improves the resulting optical flow estimate because it reduces spatial smoothing across large velocity differences. Extended measurements of a 30 × 30 array prototype sensor under real-world conditions demonstrate the validity of the model and the robustness and functionality of the implementation.
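The described gradient-based model with an adjustable smoothness constraint is closely related to Horn-Schunck optical flow. A software analogue (a sketch of that family of models, not the chip's analog circuit) showing how a smoothness term couples locally connected motion units:

```python
import numpy as np

# Iteratively solve for flow (u, v) minimising brightness-constancy error
# plus alpha^2 * smoothness. Raising alpha moves the estimate from local
# "normal" flow toward a single global flow, as the abstract describes.
N = 32
x = np.arange(N)
frame0 = np.tile(np.sin(2 * np.pi * x / N), (N, 1))   # vertical grating
frame1 = np.roll(frame0, 1, axis=1)                   # moved +1 px in x

Iy, Ix = np.gradient((frame0 + frame1) / 2)           # spatial gradients
It = frame1 - frame0                                  # temporal gradient
u = np.zeros((N, N))
v = np.zeros((N, N))
alpha = 0.5                                           # smoothness strength

def local_mean(a):
    # 4-neighbour average: the "locally connected motion units".
    p = np.pad(a, 1, mode='edge')
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4

for _ in range(500):
    ub, vb = local_mean(u), local_mean(v)
    common = (Ix * ub + Iy * vb + It) / (alpha**2 + Ix**2 + Iy**2)
    u, v = ub - Ix * common, vb - Iy * common

mean_u = u.mean()   # converges near the true speed of 1 px/frame
```

The smoothness coupling is what makes the problem well-posed even where the local gradient vanishes: units with no brightness gradient simply inherit their neighbours' estimate.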