    Object Segmentation from Motion Discontinuities and Temporal Occlusions–A Biologically Inspired Model

    BACKGROUND: Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it. METHODOLOGY/PRINCIPAL FINDINGS: From a technical point of view, the detection of motion boundaries for segmentation based on optic flow is a difficult task, because the flow detected along such boundaries is generally not reliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for the detection of motion discontinuities and of occlusion regions, based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information from different model components of visual processing via feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions considerably improve kinetic boundary detection. CONCLUSIONS/SIGNIFICANCE: A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that by combining these results for motion discontinuities and object occlusion, object segmentation within the model can be improved. This idea could also be applied in other models for object segmentation. In addition, we discuss how this model is related to neurophysiological findings. The model was successfully tested both with artificial and real sequences including self and object motion.
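The abstract's core idea is that kinetic boundaries coincide with spatial contrast in the optic-flow field. The following minimal sketch (not the paper's V1/MT/MSTl model; the scene and function names are made up for illustration) shows how local flow contrast peaks along a motion boundary:

```python
import numpy as np

def motion_discontinuity_map(u, v):
    """Per-pixel measure of local flow contrast.

    u, v : 2-D arrays holding the horizontal/vertical flow components.
    The measure is the gradient magnitude summed over both components,
    which is large along kinetic (motion) boundaries and near zero in
    regions of uniform flow.
    """
    du_y, du_x = np.gradient(u)
    dv_y, dv_x = np.gradient(v)
    return np.hypot(du_x, du_y) + np.hypot(dv_x, dv_y)

# Toy scene: a 20x20 flow field in which a central "object" moves
# rightward while the background is static.
u = np.zeros((20, 20))
v = np.zeros((20, 20))
u[5:15, 5:15] = 1.0  # object translating rightward

contrast = motion_discontinuity_map(u, v)
# contrast is high on the object's boundary and ~0 well inside or
# outside it, marking the kinetic boundary.
```

The paper's contribution is precisely that such a naive local measure is unreliable where the flow itself is unreliable, which is why the model adds occlusion detection and feedback between the two cues.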

    Neural Representations of Personally Familiar and Unfamiliar Faces in the Anterior Inferior Temporal Cortex of Monkeys

    To investigate the neural representations of faces in primates, particularly in relation to personal familiarity, neuronal activity was chronically recorded from the ventral portion of the anterior inferior temporal cortex (AITv) of macaque monkeys while they performed a facial identification task using either personally familiar or unfamiliar faces as stimuli. We calculated correlation coefficients between the neuronal responses to all possible pairs of faces in the task and used these coefficients as population-based similarity measures to analyze the similarity/dissimilarity relationships among the faces, as potentially represented by the activity of the population of face-responsive neurons recorded in area AITv. The results showed that, for personally familiar faces, different identities were represented by different activity patterns of the AITv population irrespective of the view (e.g., front, 90° left, etc.), while different views were not represented independently of facial identity, consistent with our previous report. In contrast, personally unfamiliar faces possessing different identities but presented in the same frontal view were represented as similar. Taken together, these results outline the neuronal representations of personally familiar and unfamiliar faces in the AITv neuronal population.
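The population-similarity analysis described above can be sketched in a few lines. This is an illustration only, with made-up response vectors, not the study's data or pipeline: rows are face stimuli, columns are hypothetical neurons, and the Pearson correlation between two response vectors serves as the population-based similarity between the corresponding faces.

```python
import numpy as np

def population_similarity(responses):
    """responses: (n_faces, n_neurons) array of response rates.
    Returns an (n_faces, n_faces) matrix of Pearson correlation
    coefficients between the faces' population response patterns."""
    return np.corrcoef(responses)

rng = np.random.default_rng(0)
base = rng.normal(size=8)                   # one identity, one view
same_id = base + 0.1 * rng.normal(size=8)   # same identity, similar pattern
other_id = rng.normal(size=8)               # a different identity

sim = population_similarity(np.vstack([base, same_id, other_id]))
# sim[0, 1] (same identity) exceeds sim[0, 2] (different identity),
# i.e. the population represents the two views of one face as similar.
```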

    Neural Correlates of Face and Object Perception in an Awake Chimpanzee (Pan Troglodytes) Examined by Scalp-Surface Event-Related Potentials

    BACKGROUND: The neural system of our closest living relative, the chimpanzee, is a topic of increasing research interest. However, electrophysiological examinations of neural activity during visual processing in awake chimpanzees are currently lacking. METHODOLOGY/PRINCIPAL FINDINGS: In the present report, skin-surface event-related brain potentials (ERPs) were measured while a fully awake chimpanzee observed photographs of faces and objects in two experiments. In Experiment 1, human faces and stimuli composed of scrambled face images were displayed. In Experiment 2, three types of pictures (faces, flowers, and cars) were presented. The waveforms evoked by face stimuli were distinguished from those evoked by the other stimulus types, as reflected by an enhanced early positivity appearing before 200 ms post-stimulus and an enhanced late negativity after 200 ms, around posterior and occipito-temporal sites. Face-sensitive activity was clearly observed in both experiments. However, in contrast to the robustly observed face-evoked N170 component in humans, faces did not elicit a peak in the latency range of 150-200 ms in either experiment. CONCLUSIONS/SIGNIFICANCE: Although this pilot study examined a single subject and requires further examination, the observed scalp voltage patterns suggest that selective processing of faces in the chimpanzee brain can be detected by recording surface ERPs. In addition, this non-invasive method for examining an awake chimpanzee can be used to extend our knowledge of the characteristics of visual cognition in other primate species.
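For readers unfamiliar with ERPs, the basic computation is simple: average many stimulus-locked EEG epochs so that activity time-locked to the stimulus survives while unrelated background activity averages out. The sketch below uses synthetic data (the sampling rate, component latency, and trial counts are invented), not the study's recordings:

```python
import numpy as np

def erp(epochs):
    """epochs: (n_trials, n_samples) array of stimulus-locked EEG segments.
    Returns the event-related potential, i.e. the mean across trials."""
    return epochs.mean(axis=0)

rng = np.random.default_rng(1)
t = np.arange(300)                          # 300 samples at 1 kHz -> 0-299 ms
signal = np.exp(-((t - 170) ** 2) / 200.0)  # hypothetical component near 170 ms
trials = signal + rng.normal(scale=2.0, size=(200, t.size))

avg = erp(trials)
# Averaging 200 noisy trials recovers a clear peak near the component's
# latency, even though single trials are dominated by noise.
```

This is why component questions like "is there an N170 analogue in the chimpanzee?" are posed in terms of peaks in the averaged waveform within a latency window, not in single trials.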

    Combining Feature Selection and Integration—A Neural Model for MT Motion Selectivity

    Background: The computation of pattern motion in visual area MT based on motion input from area V1 has been investigated in many experiments and models attempting to replicate the main mechanisms. Two different core conceptual approaches were developed to explain the findings. In integrationist models, the key mechanism for achieving pattern selectivity is the nonlinear integration of V1 motion activity. In contrast, selectionist models focus on the motion computation at positions with 2D features. Methodology/Principal Findings: Recent experiments revealed that neither of the two concepts alone is sufficient to explain all experimental data, and that most of the existing models cannot account for the complex behaviour found. For stimuli such as type II plaids, MT pattern selectivity changes over time from the vector average to the direction computed with an intersection-of-constraints rule or by feature tracking. Also, the spatial arrangement of the stimulus within the receptive field of an MT cell plays a crucial role. We propose a recurrent neural model showing how feature integration and selection can be combined into one common architecture to explain these findings. The key features of the model are the computation of 1D and 2D motion in model area V1 subpopulations that are integrated in model MT cells using feedforward and feedback processing. Our results are also in line with findings concerning the solution of the aperture problem. Conclusions/Significance: We propose a new neural model for MT pattern computation and motion disambiguation that i
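The two readouts the abstract contrasts, vector average and intersection of constraints (IOC), can be made concrete for a plaid of two drifting gratings. Each grating i constrains the 2-D pattern velocity v only along its normal n_i (the aperture problem), i.e. v · n_i = s_i. This is a generic textbook sketch with made-up grating parameters, not the paper's model:

```python
import numpy as np

def ioc(n1, s1, n2, s2):
    """Intersection of constraints: solve v . n_i = s_i for the 2-D
    pattern velocity v (the two constraint lines intersect in one point)."""
    return np.linalg.solve(np.array([n1, n2]), np.array([s1, s2]))

def vector_average(n1, s1, n2, s2):
    """Vector average of the two component (normal) velocities s_i * n_i."""
    return 0.5 * (s1 * np.asarray(n1) + s2 * np.asarray(n2))

# Type II-like configuration: both grating normals lie on the same side,
# so the two readouts predict clearly different pattern directions.
n1 = np.array([1.0, 0.0])
n2 = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])  # normals 30 deg apart
v_ioc = ioc(n1, 1.0, n2, 1.2)
v_avg = vector_average(n1, 1.0, n2, 1.2)
# v_ioc satisfies both constraints exactly; v_avg points in a different
# direction, mirroring the reported shift of MT pattern selectivity from
# the vector average toward the IOC solution over time.
```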

    The visual back-up to the VOR: ocular tracking systems with ultra-short latencies and the role of the MST area


    Monkey hippocampal neurons related to spatial and nonspatial functions
