Investigating the Roles of Mechanoreceptive Channels in Tactile Apparent Motion Perception: A Vibrotactile Study
Tactile apparent motion (TAM) is a perceptual phenomenon in which the consecutive presentation of multiple tactile stimuli creates an illusion of motion. Employing a novel tactile display device, the Latero, we investigated the Rapidly Adapting (RA) and Slowly Adapting I (SAI) mechanoreceptive channels on the index finger. The experiment used vibrotactile masking stimuli to target these channels, with the goal of gaining better insight into their involvement in the perception of TAM. Masking stimuli were used because previous studies have employed them to differentiate between channels: a given masking stimulus affects one mechanoreceptive channel more than the others. The experiment began by measuring participants' thresholds for TAM stimuli by varying stimulus intensity in a two-choice task (left vs. right); participants received test trials consisting of TAM stimuli at 25 Hz and 6 Hz, testing the RA and SAI channels, respectively. Next, participants performed a series of test trials in which vibrotactile masking stimuli preceded the TAM stimuli described above. The masking stimulus varied in duration (4 s vs. 8 s) and intensity (two vs. three times the intensity of the TAM stimuli). The results suggest that there was no difference in accuracy between the RA and SAI conditions, and that the introduction of the masking stimuli significantly lowered accuracy. Overall, neither the RA nor the SAI channel appears to be uniquely involved in TAM perception; however, further refinement of the current design may help isolate each channel and clarify its role in TAM perception.
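The abstract does not specify the psychophysical procedure beyond a two-choice (left vs. right) task with varying intensity, so the following is only a minimal sketch of one common way to estimate such a threshold: a 1-up/2-down adaptive staircase run against a simulated observer. The observer model, step size, and reversal count are illustrative assumptions, not details from the study.

```python
import math
import random

def simulated_observer(intensity, threshold=0.5, slope=8.0):
    """Toy observer: probability of a correct left/right judgment rises
    with stimulus intensity along a logistic psychometric function."""
    p_correct = 0.5 + 0.5 / (1.0 + math.exp(-slope * (intensity - threshold)))
    return random.random() < p_correct

def staircase_threshold(start=1.0, step=0.05, n_reversals=10):
    """1-up/2-down staircase: intensity decreases after two consecutive
    correct responses and increases after each error, converging near
    the ~70.7%-correct point; the threshold estimate is the mean of the
    intensities at which the staircase reversed direction."""
    intensity, correct_run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_observer(intensity):
            correct_run += 1
            if correct_run == 2:          # two correct in a row: step down
                correct_run = 0
                if direction == +1:
                    reversals.append(intensity)
                direction = -1
                intensity = max(0.0, intensity - step)
        else:                              # one error: step up
            correct_run = 0
            if direction == -1:
                reversals.append(intensity)
            direction = +1
            intensity += step
    return sum(reversals) / len(reversals)

print("estimated threshold:", staircase_threshold())
```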
Computing optical flow in the primate visual system
Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We show how gradient models, a well-known class of motion algorithms, can be implemented within the magnocellular pathway of the primate's visual system. Our cooperative algorithm computes optical flow in two steps. In the first stage, assumed to be located in primary visual cortex, local motion is measured, while spatial integration occurs in the second stage, assumed to be located in the middle temporal area (MT). The final optical flow is extracted in this second stage using population coding, such that the velocity is represented by the vector sum of neurons coding for motion in different directions. Our theory, relating the single-cell to the perceptual level, accounts for a number of psychophysical and electrophysiological observations and illusions.
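As a concrete illustration of the population-coding step, here is a minimal sketch assuming von Mises-style direction tuning (the unit count, tuning width, and scaling are assumptions, not parameters from the paper): velocity is decoded as the vector sum of preferred-direction vectors weighted by each unit's response.

```python
import numpy as np

# Direction-tuned units with preferred directions spaced around the circle.
n_units = 16
preferred = np.linspace(0, 2 * np.pi, n_units, endpoint=False)

def unit_responses(true_direction, true_speed, kappa=2.0):
    """Von Mises tuning: each unit fires most for motion near its
    preferred direction, with overall rate scaled by stimulus speed."""
    return true_speed * np.exp(kappa * (np.cos(preferred - true_direction) - 1.0))

def population_vector(rates):
    """Decode velocity as the vector sum of preferred-direction vectors
    weighted by firing rate (population coding)."""
    vx = np.sum(rates * np.cos(preferred))
    vy = np.sum(rates * np.sin(preferred))
    return np.arctan2(vy, vx), np.hypot(vx, vy)

rates = unit_responses(true_direction=np.pi / 3, true_speed=1.0)
direction, magnitude = population_vector(rates)
print(f"decoded direction: {np.degrees(direction):.1f} deg")
```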
Neural Models of Motion Integration, Segmentation, and Probabilistic Decision-Making
How do brain mechanisms carry out motion integration and segmentation processes that compute unambiguous global motion percepts from ambiguous local motion signals? Consider, for example, a deer running at variable speeds behind forest cover. The forest cover is an occluder that creates apertures through which fragments of the deer's motion signals are intermittently experienced. The brain coherently groups these fragments into a trackable percept of the deer along its trajectory. Form and motion processes are needed to accomplish this, using feedforward and feedback interactions both within and across cortical processing streams. All of the cortical areas V1, V2, MT, and MST are involved in these interactions. Figure-ground processes in the form stream through V2, such as the separation of occluding boundaries of the forest cover from the boundaries of the deer, select the motion signals that determine global object motion percepts in the motion stream through MT. Sparse, but unambiguous, feature-tracking signals are amplified before they propagate across position and are integrated with far more numerous ambiguous motion signals. Figure-ground and integration processes together determine the global percept. A neural model predicts the processing stages that embody these form and motion interactions. Model concepts and data are summarized about motion grouping across apertures in response to a wide variety of displays, and about probabilistic decision making in parietal cortex in response to random-dot displays. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
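The parietal decision-making data referenced here are commonly modeled as bounded accumulation of noisy motion evidence. The sketch below is a generic drift-diffusion simulation of a random-dot decision, not the paper's specific circuit; the drift gain, noise level, and bound are illustrative assumptions.

```python
import numpy as np

def drift_diffusion_trial(coherence, drift_gain=0.8, noise=1.0,
                          bound=1.0, dt=0.001, rng=None):
    """Accumulate noisy motion evidence until a decision bound is hit,
    as in standard models of random-dot decisions in parietal cortex.
    Returns (choice, reaction_time_in_seconds)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift_gain * coherence * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else -1), t

rng = np.random.default_rng(0)
trials = [drift_diffusion_trial(0.1, rng=rng) for _ in range(500)]
accuracy = np.mean([choice == 1 for choice, _ in trials])
print(f"accuracy at 10% coherence: {accuracy:.2f}")
```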
Contrast sensitivity of insect motion detectors to natural images
How do animals regulate self-movement despite large variation in the luminance contrast of the environment? Insects are capable of regulating flight speed based on the velocity of image motion, but the mechanisms for this are unclear. The Hassenstein–Reichardt correlator model and its elaborations can accurately predict responses of motion-detecting neurons under many conditions but fail to explain the apparent lack of spatial pattern and contrast dependence observed in freely flying bees and flies. To investigate this apparent discrepancy, we recorded intracellularly from horizontal-sensitive (HS) motion-detecting neurons in the hoverfly while displaying moving images of natural environments. Contrary to results obtained with grating patterns, we show these neurons encode the velocity of natural images largely independently of the particular image used despite a threefold range of contrast. This invariance in response to natural images is observed in both strongly and minimally motion-adapted neurons but is sensitive to artificial manipulations in contrast. Current models of these cells account for some, but not all, of the observed insensitivity to image contrast. We conclude that fly visual processing may be matched to commonalities between natural scenes, enabling accurate estimates of velocity largely independent of the particular scene.
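For reference, here is a minimal sketch of the Hassenstein–Reichardt correlator mentioned above, using a first-order low-pass filter as the delay stage; the time constant and test stimulus are illustrative assumptions, not values from the recordings.

```python
import numpy as np

def reichardt_correlator(signal_a, signal_b, tau=5):
    """Minimal Hassenstein–Reichardt detector for two neighboring
    photoreceptor signals sampled over time. Each arm multiplies one
    input with a low-pass-filtered (delayed) copy of the other; the
    opponent subtraction yields a direction-selective output."""
    def lowpass(x, tau):
        y = np.zeros_like(x, dtype=float)
        alpha = 1.0 / tau
        for t in range(1, len(x)):
            y[t] = y[t - 1] + alpha * (x[t] - y[t - 1])
        return y
    return lowpass(signal_a, tau) * signal_b - lowpass(signal_b, tau) * signal_a

# A pattern moving from A toward B produces a net positive response.
t = np.arange(200)
a = np.sin(2 * np.pi * t / 40)          # photoreceptor A
b = np.sin(2 * np.pi * (t - 5) / 40)    # photoreceptor B sees it later
print("mean response:", reichardt_correlator(a, b).mean())
```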
Temporal Dynamics of Binocular Disparity Processing with Corticogeniculate Interactions
A neural model is developed to probe how corticogeniculate feedback may contribute to the dynamics of binocular vision. Feedforward and feedback interactions among retinal, lateral geniculate, and cortical simple and complex cells are used to simulate psychophysical and neurobiological data concerning the dynamics of binocular disparity processing, including correct registration of disparity in response to dynamically changing stimuli, binocular summation of weak stimuli, and fusion of anticorrelated stimuli when they are delayed, but not when they are simultaneous. The model exploits dynamic rebounds between opponent ON and OFF cells that are due to imbalances in habituative transmitter gates. It shows how corticogeniculate feedback can carry out a top-down matching process that inhibits incorrect disparity responses and reduces persistence of previously correct responses to dynamically changing displays. Air Force Office of Scientific Research (F49620-92-J-0499, F49620-92-J-0334, F49620-92-J-0225); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409, N00014-92-J-4015); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-1-0657)
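The habituative transmitter gates and opponent rebounds can be made concrete with a minimal sketch. The gating equation follows the standard Grossberg form dz/dt = A(1 − z) − B·s·z, but the rate constants, arousal level, and stimulus timing are assumptions chosen only to demonstrate the offset rebound.

```python
import numpy as np

def habituative_gate(inputs, recovery=0.05, depletion=0.5, dt=0.1):
    """Habituative transmitter gate: the signal is multiplied by a
    transmitter level z that depletes with use and slowly recovers,
    dz/dt = recovery*(1 - z) - depletion*input*z."""
    z, out = 1.0, []
    for s in inputs:
        out.append(s * z)
        z += dt * (recovery * (1.0 - z) - depletion * s * z)
    return np.array(out)

# Opponent ON/OFF pair driven by a step of input plus tonic arousal:
# when the step shuts off, the less-depleted OFF channel transiently
# wins, producing an antagonistic rebound.
steps = 400
arousal = 0.2
on_input = arousal + np.where(np.arange(steps) < 200, 1.0, 0.0)
off_input = arousal * np.ones(steps)
on_out = habituative_gate(on_input)
off_out = habituative_gate(off_input)
rebound = np.maximum(off_out - on_out, 0.0)
print("peak OFF rebound after input offset:", rebound[200:].max())
```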
Neural Dynamics of Motion Grouping: From Aperture Ambiguity to Object Speed and Direction
A neural network model of visual motion perception and speed discrimination is developed to simulate data concerning the conditions under which components of moving stimuli cohere or not into a global direction of motion, as in barberpole and plaid patterns (both Type 1 and Type 2). The model also simulates how the perceived speed of lines moving in a prescribed direction depends upon their orientation, length, duration, and contrast. Motion direction and speed both emerge as part of an interactive motion grouping or segmentation process. The model proposes a solution to the global aperture problem by showing how information from feature tracking points, namely locations from which unambiguous motion directions can be computed, can propagate to ambiguous motion direction points, and capture the motion signals there. The model does this without computing intersections of constraints or parallel Fourier and non-Fourier pathways. Instead, the model uses orientationally-unselective cell responses to activate directionally-tuned transient cells. These transient cells, in turn, activate spatially short-range filters and competitive mechanisms over multiple spatial scales to generate speed-tuned and directionally-tuned cells. Spatially long-range filters and top-down feedback from grouping cells are then used to track motion of featural points and to select and propagate correct motion directions to ambiguous motion points. Top-down grouping can also prime the system to attend a particular motion direction. The model hereby links low-level automatic motion processing with attention-based motion processing. Homologs of model mechanisms have been used in models of other brain systems to simulate data about visual grouping, figure-ground separation, and speech perception. Earlier versions of the model have simulated data about short-range and long-range apparent motion, second-order motion, and the effects of parvocellular and magnocellular LGN lesions on motion perception. Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100, N00014-95-1-0657, N00014-95-1-0409, N00014-91-J-0597); Air Force Office of Scientific Research (F49620-92-J-0225, F49620-92-J-0499); National Science Foundation (IRI-90-00530)
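To make the capture idea concrete, here is a toy 1-D sketch of how a sparse, unambiguous feature-tracking signal can propagate to, and capture, ambiguous motion-direction estimates. This confidence-weighted diffusion merely stands in for the model's long-range filters and grouping feedback; all parameters are illustrative assumptions.

```python
import numpy as np

def propagate_feature_signals(directions, confidence, n_iter=50, gain=0.3):
    """Toy 1-D motion capture: a few positions carry unambiguous
    feature-tracking direction estimates (confidence 1); ambiguous
    positions (confidence 0) iteratively adopt an average of their
    neighbors, so the unambiguous signals spread across space."""
    d = directions.copy()
    for _ in range(n_iter):
        neighbor_avg = 0.5 * (np.roll(d, 1) + np.roll(d, -1))
        # High-confidence (feature-tracking) points stay fixed;
        # low-confidence (aperture) points drift toward their neighbors.
        d = confidence * directions + (1 - confidence) * (
            (1 - gain) * d + gain * neighbor_avg)
    return d

# One feature-tracking point among otherwise ambiguous positions.
directions = np.zeros(21)
directions[10] = 1.0          # unambiguous direction, e.g. at a line end
confidence = np.zeros(21)
confidence[10] = 1.0
print(np.round(propagate_feature_signals(directions, confidence), 2))
```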
Computing motion in the primate's visual system
Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We will show how one well-known gradient-based computer algorithm for estimating visual motion can be implemented within the primate's visual system. This relaxation algorithm computes the optical flow field by minimizing a variational functional of a form commonly encountered in early vision, and is performed in two steps. In the first stage, local motion is computed, while in the second stage spatial integration occurs. Neurons in the second stage represent the optical flow field via a population-coding scheme, such that the vector sum of all neurons at each location codes for the direction and magnitude of the velocity at that location. The resulting network maps onto the magnocellular pathway of the primate visual system, in particular onto cells in the primary visual cortex (V1) as well as onto cells in the middle temporal area (MT). Our algorithm mimics a number of psychophysical phenomena and illusions (perception of coherent plaids, motion capture, motion coherence) as well as electrophysiological recordings. Thus, a single unifying principle, 'the final optical flow should be as smooth as possible' (except at isolated motion discontinuities), explains a large number of phenomena and links single-cell behavior with perception and computational theory.
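The relaxation scheme described here is of the Horn–Schunck type: iterate between local brightness-constancy constraints and neighborhood smoothing until the flow field minimizes the functional. A minimal sketch follows; the regularization weight, iteration count, and test stimulus are assumptions, not values from the paper.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=200):
    """Relaxation on the smoothness-constrained optical flow functional
    (Horn & Schunck, 1981): each iteration averages the flow over a
    neighborhood, then corrects it toward the brightness-constancy
    constraint Ix*u + Iy*v + It = 0."""
    Iy, Ix = np.gradient(I1)          # spatial derivatives (rows=y, cols=x)
    It = I2 - I1                      # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    neighbor_avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                              np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(n_iter):
        u_bar, v_bar = neighbor_avg(u), neighbor_avg(v)
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * common       # pull flow toward the data term
        v = v_bar - Iy * common       # while staying close to neighbors
    return u, v

# A sinusoidal pattern shifted one pixel rightward: u should be roughly 1.
I1 = np.sin(np.linspace(0, 8 * np.pi, 64))[None, :] * np.ones((64, 1))
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)
print("mean horizontal flow:", u.mean())
```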
