
    Interaction of cortical networks mediating object motion detection by moving observers

    Published in final edited form as: Exp Brain Res. 2012 August; 221(2): 177–189. doi:10.1007/s00221-012-3159-8. The task of parceling perceived visual motion into self- and object-motion components is critical to safe and accurate visually guided navigation. In this paper, we used functional magnetic resonance imaging to identify the cortical areas functionally active in this task and the pattern of connectivity among them, in order to characterize the networks that allow subjects to detect object motion separately from induced self-motion. Subjects were presented with nine textured objects during simulated forward self-motion and were asked to identify the target object, which had an additional, independent motion component toward or away from the observer. Cortical activation was distributed among occipital, intra-parietal and fronto-parietal areas. We performed a network analysis of connectivity data derived from partial correlation and multivariate Granger causality analyses among functionally active areas. This revealed four coarsely separated network clusters: bilateral V1 and V2; visually responsive occipito-temporal areas, including bilateral LO, V3A, KO (V3B) and hMT; bilateral VIP, DIPSM and right precuneus; and a cluster of higher, primarily left-hemispheric regions, including the central sulcus, post-, pre- and sub-central sulci, the pre-central gyrus, and FEF. We suggest that the visually responsive networks are involved in forming the representation of the visual stimulus, while the higher, left-hemisphere cluster mediates the interpretation of the stimulus for action. Our main focus was on the relationships among activations of the visually responsive areas during our task.
    To determine the properties of the mechanism underlying the visual processing networks, we compared subjects' psychophysical performance to a model of object motion detection based solely on relative motion among objects and found the model inconsistent with observer performance. Our results support the use of scene context (e.g., eccentricity, depth) in the detection of object motion. We suggest that the cortical activations and visually responsive networks provide a potential substrate for this computation.
    This work was supported by NIH grant RO1NS064100 to L.M.V. We thank Victor Solo for discussions regarding models of functional connectivity and our subjects for participating in the psychophysical and fMRI experiments. This research was carried out in part at the Athinoula A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital, using resources provided by the Center for Functional Neuroimaging Technologies, P41RR14075, a P41 Regional Resource supported by the Biomedical Technology Program of the National Center for Research Resources (NCRR), National Institutes of Health. This work also involved the use of instrumentation supported by the NCRR Shared Instrumentation Grant Program and/or High-End Instrumentation Grant Program; specifically, grant number S10RR021110. (RO1NS064100 - NIH; National Center for Research Resources (NCRR), National Institutes of Health; S10RR021110 - NCRR) Accepted manuscript.
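    The Granger causality analyses described above ask whether one region's past activity improves prediction of another region's activity beyond its own past. A minimal pairwise sketch on synthetic data (the AR order, signal strengths, and function name are illustrative assumptions, not the paper's multivariate fMRI pipeline):

```python
import numpy as np

def granger_score(x, y, order=2):
    """log(SSE_restricted / SSE_full) for predicting y[t].
    The restricted model uses only y's own past; the full model adds
    x's past. Positive scores mean x helps predict y, i.e. x
    'Granger-causes' y in this toy pairwise sense."""
    t = np.arange(order, len(y))
    y_lags = np.column_stack([y[t - k] for k in range(1, order + 1)])
    x_lags = np.column_stack([x[t - k] for k in range(1, order + 1)])
    ones = np.ones((len(t), 1))

    def sse(design):
        coef, *_ = np.linalg.lstsq(design, y[t], rcond=None)
        resid = y[t] - design @ coef
        return float(resid @ resid)

    return np.log(sse(np.hstack([ones, y_lags])) /
                  sse(np.hstack([ones, y_lags, x_lags])))

# Synthetic example: x drives y with a one-sample lag.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.zeros_like(x)
y[1:] = 0.8 * x[:-1] + 0.2 * rng.standard_normal(1999)
print(granger_score(x, y))  # clearly positive: x -> y
print(granger_score(y, x))  # near zero: no influence of y on x
```

    Because the full model nests the restricted one, the score is never negative; in practice a significance test (e.g. an F-test on the variance ratio) decides which connections to keep.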

    Long-range coupling of prefrontal cortex and visual (MT) or polysensory (STP) cortical areas in motion perception

    To investigate how, where and when moving auditory cues interact with the perception of object motion during self-motion, we conducted psychophysical, MEG, and fMRI experiments in which subjects viewed nine textured objects during simulated forward self-motion. On each trial, one object was randomly assigned its own looming motion within the scene. Subjects reported which of four labeled objects had independent motion within the scene in two conditions: (1) visual information only and (2) with an additional moving auditory cue. In MEG, comparison of the two conditions showed: (i) MT activity is similar across conditions; (ii) late after stimulus presentation there is additional activity, ventral to MT, in the auditory-cue condition; (iii) with the auditory cue, the right auditory cortex (AC) shows early activity together with STS; (iv) these two activations have different time courses, with the STS signals occurring later in the epoch together with frontal activity in the right hemisphere; (v) in the visual-only condition, activity in posterior parietal cortex (PPC) is stronger than in the auditory-cue condition. fMRI conducted for the visual-only condition reveals activations in a network of parietal and frontal areas and in MT. In addition, dynamic Granger causality analysis showed that, with auditory cues, AC is strongly connected with STP (the superior temporal polysensory area) but not with MT, suggesting binding of visual and auditory information at STP. Also, while PFC is connected with MT in the visual-only condition, it is connected with STP in the auditory-cue condition. These results indicate that PFC allocates attention to the "object" as a whole: in STP to a moving visual-auditory object, and in MT to a moving visual object. Accepted manuscript.

    Two mechanisms for optic flow and scale change processing of looming

    Published in final edited form as: J Vis. 11(3). doi:10.1167/11.3.5. The detection of looming, the motion of objects in depth, underlies many behavioral tasks, including the perception of self-motion and time-to-collision. A number of studies have demonstrated that one of the most important cues for looming detection is optic flow, the pattern of motion across the retina. Schrater et al. have suggested that changes in spatial frequency over time, or scale changes, may also support looming detection in the absence of optic flow (P. R. Schrater, D. C. Knill, & E. P. Simoncelli, 2001). Here we used an adaptation paradigm to determine whether the perception of looming from optic flow and scale changes is mediated by a single mechanism or by separate mechanisms. We show first that when the adaptation and test stimuli were the same (both optic flow or both scale change), observer performance was significantly impaired compared to a dynamic (non-motion, non-scale-change) null adaptation control. Second, we found no evidence of cross-cue adaptation, either from optic flow to scale change or vice versa. Taken together, our data suggest that optic flow and scale changes are processed by separate mechanisms, providing multiple pathways for the detection of looming.
    We thank Jonathan Victor and the anonymous reviewers of the paper for feedback and suggestions regarding the stimuli used here. This work was supported by NIH grant R01NS064100 to LMV. (R01NS064100 - NIH) Accepted manuscript.
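    Although the two cues are processed separately, they carry the same underlying quantity: for an object approaching at constant speed, the relative rate of image expansion and the relative rate of spatial-frequency decrease both equal the inverse time-to-collision. A small numeric sketch under small-angle assumptions (the radius, speed, and texture-frequency values are made up for illustration, not the stimuli used in the paper):

```python
import numpy as np

# An object of physical radius R approaches at constant speed v from
# distance Z0. Under small-angle assumptions its angular radius is
# theta ~ R / Z (the optic-flow cue grows), while a texture of fixed
# physical frequency projects to a retinal spatial frequency
# proportional to Z (the scale-change cue shrinks).
R, v, Z0 = 0.5, 2.0, 10.0
dt = 0.01
t = np.arange(0.0, 1.0, dt)
Z = Z0 - v * t

theta = R / Z        # optic-flow cue: image size over time
f_ret = 4.0 * Z      # scale-change cue: retinal frequency (4 cycles/m texture)

# Relative rates of change: both cues encode v / Z = 1 / time-to-collision.
rate_flow = np.gradient(theta, dt) / theta
rate_scale = -np.gradient(f_ret, dt) / f_ret
ttc_true = Z / v
```

    Either cue alone therefore suffices for looming detection, consistent with the separate-mechanisms result above.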

    Different Motion Cues Are Used to Estimate Time-to-arrival for Frontoparallel and Looming Trajectories

    Estimation of time-to-arrival for moving objects is critical to obstacle interception and avoidance, as well as to timing actions such as reaching for and grasping moving objects. The source of motion information that conveys arrival time varies with the trajectory of the object, raising the question of whether multiple context-dependent mechanisms are involved in this computation. To address this question, we conducted a series of psychophysical studies measuring observers' performance on time-to-arrival estimation when the object trajectory was specified by angular motion ("gap closure" trajectories in the frontoparallel plane), by looming (colliding trajectories, time-to-collision: TTC), or by both (passage trajectories, time-to-passage: TTP). We measured performance on time-to-arrival judgments in the presence of irrelevant motion, in which a perpendicular motion vector was added to the object trajectory. Data were compared to models of expected performance based on the use of different components of optical information. Our results demonstrate that for gap closure, performance depended only on the angular motion, whereas for TTC and TTP, both angular and looming motion affected performance. This dissociation of inputs suggests that gap closures are mediated by a mechanism separate from that used for the detection of time-to-collision and time-to-passage. We show that existing models of TTC and TTP estimation make systematic errors in predicting subject performance, and suggest that a model that weights motion cues by their relative time-to-arrival provides a better account of performance.
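    The classical optical variable for the looming component is tau = theta / (d theta / dt), Lee's tau, which recovers time-to-collision from retinal quantities alone. The weighted-cue model above goes beyond this, but a toy sketch of the basic quantity helps fix ideas (all numbers illustrative):

```python
import numpy as np

# Hypothetical worked example of Lee's tau: an object of radius R
# approaching at constant speed v subtends theta = 2*atan(R / Z), and
# tau = theta / theta_dot approximates the true time-to-collision
# Z / v without knowing either Z or v individually.
R, v, dt = 0.4, 3.0, 0.005
Z = 12.0 - v * np.arange(0.0, 2.0, dt)   # distance over 2 s of approach
theta = 2.0 * np.arctan(R / Z)           # angular size (radians)
tau = theta / np.gradient(theta, dt)     # optical TTC estimate
ttc_true = Z / v                         # ground truth
```

    For frontoparallel gap closure no such expansion is available, which is why angular motion alone must carry the estimate there.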

    Integration Mechanisms for Heading Perception

    Previous studies of heading perception suggest that human observers employ spatiotemporal pooling to accommodate noise in optic flow stimuli. Here, we investigated how spatial and temporal integration mechanisms are used in judgments of heading through a psychophysical experiment involving three different types of noise. Furthermore, we developed two ideal observer models to study the components of the spatial information used by observers when performing the heading task. In the psychophysical experiment, we applied three types of direction noise to optic flow stimuli to differentiate the involvement of spatial and temporal integration mechanisms. The results indicate that temporal integration mechanisms play a role in heading perception, though their contribution is weaker than that of the spatial integration mechanisms. To elucidate how observers process spatial information to extract heading from a noisy optic flow field, we compared psychophysical performance under random-walk direction noise with that of two ideal observer models (IOMs). One model relied on 2D screen-projected flow information (2D-IOM), while the other used environmental, i.e., 3D, flow information (3D-IOM). The results suggest that, for modest amounts of noise, human observers compensate for the information lost in the 2D retinal projection of the visual scene. This points to the likelihood of a 3D reconstruction during heading perception, a strategy that breaks down only under extreme levels of noise.
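    The 2D side of such models can be made concrete: for pure observer translation T = (Tx, Ty, Tz) past points at depths Z, the retinal flow of an image point (x, y) under a pinhole projection (focal length 1) is u = (-Tx + x*Tz)/Z, v = (-Ty + y*Tz)/Z, and every vector radiates from the focus of expansion (Tx/Tz, Ty/Tz). A minimal noiseless sketch of recovering heading from that geometry (an assumed setup, not the paper's IOMs):

```python
import numpy as np

# Generate translational flow for a random cloud of points, then
# recover the focus of expansion (FOE), i.e. the heading direction.
rng = np.random.default_rng(1)
T = np.array([0.2, -0.1, 1.0])          # true FOE at (0.2, -0.1)
x = rng.uniform(-1, 1, 300)
y = rng.uniform(-1, 1, 300)
Z = rng.uniform(2, 10, 300)
u = (-T[0] + x * T[2]) / Z
v = (-T[1] + y * T[2]) / Z

# Each vector constrains the FOE to the line through (x, y) along (u, v):
# (x - fx) * v - (y - fy) * u = 0  ->  [-v, u] @ [fx, fy] = u*y - v*x
A = np.column_stack([-v, u])
b = u * y - v * x
foe, *_ = np.linalg.lstsq(A, b, rcond=None)
print(foe)  # ~ [0.2, -0.1], the true heading Tx/Tz, Ty/Tz
```

    Adding direction noise to (u, v) degrades this least-squares estimate, which is the regime the psychophysical experiment probes; the 3D-IOM additionally exploits the depth structure that the retinal projection discards.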

    Cross-modal cue effects in motion processing

    The everyday environment brings to our sensory systems competing inputs from different modalities. The ability to filter these multisensory inputs in order to identify and efficiently utilize useful spatial cues is necessary to detect and process the relevant information. In the present study, we investigated how feature-based attention affects the detection of motion across sensory modalities. In particular, we sought to determine how subjects use intramodal, cross-modal auditory, and combined audiovisual motion cues to attend to specific visual motion signals. The results showed that in most cases, both the visual and the auditory cues enhance feature-based orienting to a transparent visual motion pattern presented among distractor motion patterns. Whereas previous studies have shown cross-modal effects of spatial attention, our results demonstrate a spread of cross-modal feature-based attention cues, which were matched for the detection threshold of the visual target. These effects were very robust in comparisons of valid vs. invalid cues, as well as in comparisons between cued and uncued valid trials. The effect of intramodal visual, cross-modal auditory, and bimodal cues also increased as a function of motion-cue salience. Our results suggest that orienting to visual motion patterns among distractors can be facilitated not only by intramodal priors, but also by feature-based cross-modal information from the auditory system. First author draft.

    Deficit of temporal dynamics of detection of a moving object during egomotion in a stroke patient: a psychophysical and MEG study

    To investigate the temporal dynamics underlying object motion detection during egomotion, we used psychophysics and MEG with a motion discrimination task. The display contained nine spheres moving for 1 second: eight moved consistently with forward observer translation, and one (the target) had independent motion within the scene (approaching or receding). The observers' task was to detect the target. Seven healthy subjects (7HS) and patient PF, who had an infarct involving the left occipito-temporal cortex, participated in both the psychophysical and MEG studies. Psychophysical results showed that PF was severely impaired on this task. He was also impaired on the discrimination of radial motion (with even poorer performance on contraction) and 2D direction, as well as on detecting motion discontinuity. We used anatomically constrained MEG and dynamic Granger causality to investigate the direction and dynamics of connectivity between the functional areas involved in the object-motion task, and compared the results of the 7HS and PF. The dynamics of the causal connections among the motion-responsive cortical areas (MT, STS, IPS) during the first 200 ms of the stimulus were similar in all subjects. However, in the later part of the stimulus (>200 ms), PF did not show significant causal connections among these areas. The 7HS also had a strong, probably attention-modulatory, connection between MPFC and MT that was completely absent in PF. In PF and the 7HS, analysis of onset latencies revealed two stages of activation: early after motion onset (200–400 ms), bilateral activations in MT, IPS, and STS, followed (>500 ms) by activity in the postcentral sulcus and middle prefrontal cortex (MPFC). We suggest that the interaction of these early- and late-onset areas is critical to object motion detection during self-motion, and that disrupted connections among late-onset areas may have contributed to the perceptual deficits of patient PF. Published version.

    Reorganization of retinotopic maps after occipital lobe infarction

    Published in final edited form as: J Cogn Neurosci. 2014 June; 26(6): 1266–1282. doi:10.1162/jocn_a_00538. We studied patient JS, who had a right occipital infarct that encroached on visual areas V1, V2v, and VP. When tested psychophysically, he was very impaired at detecting the direction of motion in random-dot displays in which a variable proportion of dots moving in one direction (signal) were embedded in masking motion noise (noise dots). The impairment on this motion coherence task was especially marked when the display was presented to the upper left (affected) visual quadrant, contralateral to his lesion. However, with extensive training, by 11 months his threshold had fallen to the level of healthy participants. Training on the motion coherence task generalized to another motion task, the motion discontinuity task, in which he had to detect the presence of an edge defined by a difference in the direction of the coherently moving dots (signal) within the display. He was much better at this task at 8 months than at 3 months, and this improvement was associated with an increase in the activation of the human MT complex (hMT+) and of the kinetic occipital region, as shown by repeated fMRI scans. We also used fMRI to perform retinotopic mapping at 3, 8, and 11 months after the infarct. We quantified the retinotopy and areal shifts by measuring the distances between the centers of mass of functionally defined areas, computed in spherical surface-based coordinates. The functionally defined retinotopic areas V1, V2v, V2d, and VP were initially smaller in the lesioned right hemisphere, but they increased in size between 3 and 11 months. This change was not found in the normal, left hemisphere of the patient or in either hemisphere of the healthy control participants. We were interested in whether practice on the motion coherence task promoted the changes in the retinotopic maps.
    We compared the results for patient JS with those from another patient (PF) who had a comparable lesion but had not been given such practice. We found similar changes in the maps in the lesioned hemisphere of PF. However, PF was only scanned at 3 and 7 months, and the biggest shifts in patient JS were found between 8 and 11 months. Thus, it is important to carry out a prospective study with a trained and an untrained group so as to determine whether the patterns of reorganization that we have observed can be further promoted by training.
    This work was supported by NIH grant R01NS064100 to L. M. V. Lucia M. Vaina dedicates this article to Charlie Gross, who has been a long-time collaborator and friend. I met him at the INS meeting in Beaune (France), and since then we often discussed the relationship between several aspects of high-level visual processing described in his work in monkey physiology and my work in neuropsychology. In particular, his pioneering study of biological motion in the monkey superior temporal lobe has influenced my own work on biological motion and has led us to coauthor a paper on this topic. Working with Charlie was a uniquely enjoyable experience. Alan Cowey and I often spoke fondly about Charlie, a dear friend and close colleague to us both, whose work, exquisite sense of humor, and unbounded zest for living we both deeply admired and loved. (R01NS064100 - NIH) Accepted manuscript.
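    The motion coherence task used for training can be made concrete with a sketch of one frame update of a random-dot display (the function name, speed, and aperture handling are illustrative assumptions, not the actual stimulus code):

```python
import numpy as np

# One frame update: a fraction `coherence` of dots steps in the signal
# direction, the rest step in random directions, and dots leaving the
# unit aperture wrap around. The psychophysical threshold is the lowest
# coherence at which direction is judged reliably.
def update_dots(pos, coherence, direction, speed=0.02, extent=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n = len(pos)
    signal = rng.random(n) < coherence         # which dots carry the signal
    angles = rng.uniform(0.0, 2.0 * np.pi, n)  # noise-dot directions
    angles[signal] = direction                 # signal dots share one direction
    step = speed * np.column_stack([np.cos(angles), np.sin(angles)])
    return (pos + step) % extent               # wrap at aperture edges

rng = np.random.default_rng(2)
pos = rng.random((100, 2))                     # dots in the unit square
new = update_dots(pos, coherence=0.5, direction=0.0, rng=rng)
```

    Lowering `coherence` toward the patient's threshold makes the global direction progressively harder to extract, which is the quantity tracked over the months of training.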