
    Men and Women Exhibit a Differential Bias for Processing Movement versus Objects

    Sex differences in many spatial and verbal tasks appear to reflect an inherent low-level processing bias for movement in males and objects in females. We explored this potential movement/object bias in men and women using a computer task that measured targeting performance and/or color recognition. The targeting task showed a ball moving vertically towards a horizontal line. Before reaching the line, the ball disappeared behind a masking screen, requiring the participant to imagine the movement vector and identify the intersection point. For the color recognition task, the ball briefly changed color before disappearing beneath the mask, and participants were required only to identify the color shade. Results showed that targeting accuracy for slow- and fast-moving balls was significantly better in males than in females. No sex difference was observed for color shade recognition. We also studied a third, dual-attention task comprising the first two, in which the moving ball briefly changed color at random just before passing beneath the masking screen. When the ball changed color, participants were required only to identify the color shade; if it did not change color, participants estimated the intersection point. Participants in this dual-attention condition were first tested with the targeting and color tasks alone and showed results similar to those of the previous groups tested on a single task. However, under the dual-attention condition, male accuracy in both targeting and color shade recognition declined significantly compared with performance when the tasks were tested alone. No significant changes were found in female performance. Finally, reaction times for targeting and color choices in both sexes correlated strongly with ball speed, but not with accuracy. Overall, these results provide evidence of a sex-related bias in processing objects versus movement, which may reflect sex differences in bottom-up versus top-down analytical strategies.
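    To make the targeting measure concrete, the sketch below extrapolates an occluded, constant-velocity trajectory to the target line and scores a response against the true crossing point. It is an illustration only, assuming the ball's last visible position and velocity are known; the function and variable names are hypothetical and are not taken from the study's stimulus code.

```python
# Illustration only: extrapolating an occluded, constant-velocity trajectory to
# the target line and scoring a response against the true crossing point.
# Function and variable names are hypothetical, not the study's stimulus code.

def predicted_intersection_x(last_x: float, last_y: float,
                             vx: float, vy: float, line_y: float) -> float:
    """Extrapolate the hidden trajectory to the horizontal target line."""
    t_to_line = (line_y - last_y) / vy     # time remaining until the line is reached
    return last_x + vx * t_to_line         # x coordinate at the crossing

def targeting_error(response_x: float, true_x: float) -> float:
    """Absolute horizontal error of the participant's intersection estimate."""
    return abs(response_x - true_x)

# Example trial: the ball vanishes behind the mask at (300, 250), moving mostly
# downward at 250 px/s toward the target line at y = 0.
true_x = predicted_intersection_x(last_x=300.0, last_y=250.0,
                                  vx=20.0, vy=-250.0, line_y=0.0)
print(targeting_error(response_x=315.0, true_x=true_x))   # -> 5.0
```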

    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to guide search more efficiently in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as the scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.
    CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011)
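    The sketch below is not the ARTSCENE Search model itself; under strong simplifying assumptions it only illustrates the kind of spatial and object evidence accumulation the abstract describes: a gist-based prior over scene locations that is sharpened as objects are fixated, alongside an object prior built from the history of viewed objects. All names and parameters are assumptions made for this sketch.

```python
import numpy as np

# Toy illustration (not the ARTSCENE Search equations) of accumulating spatial
# and object evidence across fixations: a coarse gist-based prior over scene
# locations is sharpened whenever a fixated object looks target-like, and an
# object prior is built up from the history of viewed objects.

def fixate(spatial_prior: np.ndarray, object_prior: dict,
           location: int, object_id: str, target_similarity: float) -> None:
    """Update spatial and object evidence after one saccade.

    target_similarity in [0, 1] says how target-like the fixated object is.
    """
    # Where stream: boost the fixated location in proportion to similarity.
    spatial_prior[location] *= 1.0 + target_similarity
    spatial_prior /= spatial_prior.sum()          # renormalize in place
    # What stream: accumulate evidence for the viewed object identity.
    object_prior[object_id] = object_prior.get(object_id, 0.0) + target_similarity

n_locations = 16                                          # e.g., a 4 x 4 scene grid
spatial_prior = np.full(n_locations, 1.0 / n_locations)   # initial "gist" hypothesis
object_prior: dict = {}                                   # evidence per viewed object

# Simulate a short scan path over a familiar kitchen-like scene.
for loc, obj, sim in [(5, "counter", 0.2), (6, "faucet", 0.9), (9, "window", 0.1)]:
    fixate(spatial_prior, object_prior, loc, obj, sim)

print("most likely target location:", int(spatial_prior.argmax()))
print("strongest object evidence:", max(object_prior, key=object_prior.get))
```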

    View-Invariant Object Category Learning, Recognition, and Search: How Spatial and Object Attention Are Coordinated Using Surface-Based Attentional Shrouds

    Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)

    Learning RGB-D Salient Object Detection using background enclosure, depth contrast, and top-down features

    Recently, deep Convolutional Neural Networks (CNNs) have demonstrated strong performance on RGB salient object detection. Although depth information can help improve detection results, the exploration of CNNs for RGB-D salient object detection remains limited. Here we propose a novel deep CNN architecture for RGB-D salient object detection that exploits high-level, mid-level, and low-level features. Further, we present novel depth features that capture the ideas of background enclosure and depth contrast and that are suitable for a learned approach. We show improved results compared to state-of-the-art RGB-D salient object detection methods, and we show that both the low-level and mid-level depth features contribute to these improvements. In particular, the F-score of our method is 0.848 on the RGBD1000 dataset, 10.7% better than the second-best method.
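    The abstract does not spell out how these depth cues are computed, so the sketch below only illustrates, under simplifying assumptions, the two hand-crafted cues it names: a local depth-contrast map and a rough background-enclosure score for a candidate region. The window size, the enclosure heuristic, and all names are assumptions chosen for readability; the learned CNN that would consume these maps alongside RGB features is omitted.

```python
import numpy as np

# Illustration only of the two depth cues named in the abstract: a local
# depth-contrast map and a crude background-enclosure score for a candidate
# region. Window size, heuristics, and names are assumptions for this sketch.

def depth_contrast(depth: np.ndarray, win: int = 5) -> np.ndarray:
    """Difference between each pixel's depth and the mean depth of its window."""
    pad = win // 2
    padded = np.pad(depth, pad, mode="edge")
    out = np.empty_like(depth, dtype=float)
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = depth[i, j] - padded[i:i + win, j:j + win].mean()
    return out

def background_enclosure(depth: np.ndarray, region: np.ndarray) -> float:
    """Fraction of the region's border pixels that are closer to the camera than
    the mean depth of the surrounding background (a crude enclosure proxy)."""
    interior = (np.roll(region, 1, 0) & np.roll(region, -1, 0)
                & np.roll(region, 1, 1) & np.roll(region, -1, 1))
    border = region & ~interior
    surround_depth = depth[~region].mean()
    return float((depth[border] < surround_depth).mean())

# Synthetic example: a candidate region that pops out toward the camera.
rng = np.random.default_rng(0)
depth = rng.uniform(2.0, 3.0, size=(32, 32))
region = np.zeros((32, 32), dtype=bool)
region[10:20, 12:22] = True
depth[region] -= 1.0

print(depth_contrast(depth).min(), background_enclosure(depth, region))
```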