5 research outputs found

    A Solution of the Figure-ground Problem for Biological Vision

    Full text link
    A neural network model of 3-D visual perception and figure-ground separation by visual cortex is introduced. The theory provides a unified explanation of how a 2-D image may generate a 3-D percept; how figures pop out from cluttered backgrounds; how spatially sparse disparity cues can generate continuous surface representations at different perceived depths; how representations of occluded regions can be completed and recognized without usually being seen; how occluded regions can sometimes be seen during percepts of transparency; how high spatial frequency parts of an image may appear closer than low spatial frequency parts; how sharp targets are detected better against a figure and blurred targets are detected better against a background; how low spatial frequency parts of an image may be fused while high spatial frequency parts are rivalrous; how sparse blue cones can generate vivid blue surface percepts; how 3-D neon color spreading, visual phantoms, and tissue contrast percepts are generated; and how conjunctions of color-and-depth may rapidly pop out during visual search. These explanations are derived from an ecological analysis of how monocularly viewed parts of an image inherit the appropriate depth from contiguous binocularly viewed parts, as during DaVinci stereopsis. The model predicts the functional role and ordering of multiple interactions within and between the two parvocellular processing streams that join LGN to prestriate area V4. Interactions from cells representing larger scales and disparities to cells representing smaller scales and disparities are of particular importance.
    Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100)
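    As a minimal numpy sketch (not the model from the paper; the data and function are invented for illustration) of the one idea highlighted above, the following shows monocularly viewed pixels inheriting depth from the nearest contiguous binocularly matched pixels, in the spirit of the da Vinci stereopsis analysis.

```python
# Toy illustration (not the paper's model): monocular pixels inherit the
# depth of the nearest binocularly matched pixel along an image row,
# loosely in the spirit of the da Vinci stereopsis analysis above.
import numpy as np

def inherit_depth(depth, known):
    """depth: 1-D array of depth estimates (NaN where monocular).
    known: boolean mask of binocularly matched pixels.
    Monocular pixels copy the depth of the nearest known pixel."""
    idx = np.arange(depth.size)
    known_idx = idx[known]
    nearest = known_idx[np.argmin(np.abs(idx[:, None] - known_idx[None, :]), axis=1)]
    filled = depth.copy()
    filled[~known] = depth[nearest[~known]]
    return filled

# sparse disparity cues: only pixels 2 and 7 carry reliable depth
depth = np.array([np.nan, np.nan, 1.0, np.nan, np.nan,
                  np.nan, np.nan, 3.0, np.nan, np.nan])
known = ~np.isnan(depth)
print(inherit_depth(depth, known))
# -> [1. 1. 1. 1. 1. 3. 3. 3. 3. 3.]
```

    A full surface model would of course propagate depth only within regions bounded by contours, rather than to the globally nearest cue; the sketch only shows the inheritance step.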

    Neural Dynamics of Motion Perception: Direction Fields, Apertures, and Resonant Grouping

    Full text link
    A neural network model of global motion segmentation by visual cortex is described. Called the Motion Boundary Contour System (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyse how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The Motion BCS describes how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative grouping mechanisms in a Motion Cooperative-Competitive Loop (MOCC Loop) to control phenomena such as motion capture. The Motion BCS is computed in parallel with the Static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the Motion BCS and the Static BCS, specialized to process movement directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions are made about microscopic computational differences between the parallel cortical streams V1 --> MT and V1 --> V2 --> MT, notably the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the Motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions-of-contrast. Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions.
    Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-91-J-4100)
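    For readers unfamiliar with the aperture problem mentioned above, here is a small illustrative sketch (a standard least-squares intersection of constraints in the Lucas-Kanade style, not the Motion BCS; the arrays are invented): a single oriented edge constrains only the velocity component normal to it, and only by pooling constraints from differently oriented edges on the same figure can the full 2-D motion be recovered.

```python
# Toy illustration of the aperture problem (standard least-squares
# pooling of local motion constraints, NOT the Motion BCS).
import numpy as np

def pooled_velocity(gx, gy, gt):
    """Solve gx*u + gy*v + gt = 0 in the least-squares sense over all
    local measurements (intersection-of-constraints estimate)."""
    A = np.stack([gx, gy], axis=1)
    b = -gt
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

true_uv = np.array([2.0, -1.0])
# local edges with different orientations (unit normals = spatial gradients)
normals = np.array([[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071]])
gt = -(normals @ true_uv)   # temporal derivatives from brightness constancy
print(pooled_velocity(normals[:, 0], normals[:, 1], gt))   # ~ (2.0, -1.0)
```

    As the abstract notes, the Motion BCS performs this combination with cooperative-competitive grouping rather than passive least squares; the sketch only shows why signals from multiple orientations must be combined at all.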

    Cortical Dynamics of 3-D Vision and Figure-Ground Pop-Out

    Full text link
    Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-91-J-4100)

    Cortical Dynamics of 3-D Figure-Ground Perception of 2-D Pictures

    Full text link
    This article develops the FACADE theory of 3-D vision and figure-ground separation to explain data concerning how 2-D pictures give rise to 3-D percepts of occluding and occluded objects. These percepts include pop-out of occluding figures and amodal completion of occluded figures in response to line drawings; to Bregman-Kanizsa displays in which the relative contrasts of occluding and occluded surfaces are reversed; to White displays from which either transparent or opaque occlusion percepts can obtain; to Egusa and Kanizsa square displays in which brighter regions look closer; and to Kanizsa stratification displays in which bistable reversals of occluding and occluded surfaces occur, and in which real contours and illusory contours compete to alter the reversal percept. The model describes how changes in contrast can alter a percept without a change in geometry, and conversely. More generally, it shows how geometrical and contrastive properties of a picture can either cooperate or compete when forming the boundaries and surface representations that subserve conscious percepts. Spatially long-range cooperation and spatially short-range competition work together to separate the boundaries of occluding figures from their occluded neighbors. This boundary ownership process is sensitive to image T-junctions at which occluded figures contact occluding figures, but there are no explicit T-junction detectors in the network. Rather, the contextual balance of boundary cooperation and competition strengthens some boundaries while breaking others. These boundaries control the filling-in of color within multiple, depth-sensitive surface representations. Feedback between surface and boundary representations strengthens consistent boundaries while inhibiting inconsistent ones. It is suggested how both the boundary and the surface representations of occluded objects may be amodally completed, even while the surface representations of unoccluded objects become visible through modal completion. Distinct functional roles for conscious modal and amodal representations in object recognition, spatial attention, and reaching behaviors are discussed. Model interactions are interpreted in terms of visual, temporal, and parietal cortex. Model concepts provide a mechanistic neural explanation and revision of such Gestalt principles as good continuation, stratification, and non-accidental solution.
    Office of Naval Research (N00014-91-J-4100, N00014-95-I-0409, N00014-95-I-0657, N00014-92-J-11015)
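    A minimal, assumption-laden caricature (not FACADE itself; grid sizes, rates, and iteration counts are arbitrary) of the boundary-gated filling-in idea described above: a sparse feature signal diffuses across a surface lattice, and the diffusion is blocked wherever a boundary signal stands between two cells.

```python
# Caricature of boundary-gated filling-in (NOT the FACADE model): a
# sparse brightness signal spreads by local diffusion, but exchange
# between two cells is gated off where a boundary is present.
import numpy as np

def fill_in(feature, boundary, iters=200, rate=0.2):
    """feature: 2-D array of initial surface signals.
    boundary: 2-D array in [0, 1]; 1 marks a hard boundary at that cell."""
    f = feature.astype(float).copy()
    perm = 1.0 - boundary                      # permeability of each cell
    for _ in range(iters):
        new = f.copy()
        # horizontal exchange between each cell and its right neighbor
        gate = perm[:, :-1] * perm[:, 1:]
        flow = rate * gate * (f[:, 1:] - f[:, :-1])
        new[:, :-1] += flow
        new[:, 1:] -= flow
        # vertical exchange between each cell and the cell below it
        gate = perm[:-1, :] * perm[1:, :]
        flow = rate * gate * (f[1:, :] - f[:-1, :])
        new[:-1, :] += flow
        new[1:, :] -= flow
        f = new
    return f

feature = np.zeros((8, 8))
feature[2, 2] = 1.0        # a single contrast-driven feature signal
boundary = np.zeros((8, 8))
boundary[:, 4] = 1.0       # a vertical boundary contour
print(fill_in(feature, boundary).round(2))   # signal spreads only left of the boundary
```

    FACADE adds depth-sensitive copies of such surfaces and feedback between surface and boundary representations; this sketch shows only the gating of spread by boundaries.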

    Bio-Inspired Computer Vision: Towards a Synergistic Approach of Artificial and Biological Vision

    Get PDF
    To appear in CVIU. Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence with biological vision studies, ranging from purely functional inspiration to methods that utilise models primarily developed for explaining biological observations. Even though it seems well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at a task level. In this paper we aim to bridge this gap by providing a computer vision task-centric presentation of models primarily originating in biological vision studies. Not only do we revisit some of the main features of biological vision and discuss the foundations of existing computational studies modelling biological vision, but we also consider three classical computer vision tasks from a biological perspective: image sensing, segmentation and optical flow. Using this task-centric approach, we discuss well-known biological functional principles and compare them with approaches taken by computer vision. Based on this comparative analysis of computer and biological vision, we present some recent models in biological vision and highlight a few models that we think are promising for future investigations in computer vision. To this end, this paper provides new insights and a starting point for investigators interested in the design of biology-based computer vision algorithms, and paves the way for much-needed interaction between the two communities, leading to the development of synergistic models of artificial and biological vision.
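    As one concrete instance of the "image sensing" principles such a survey draws on, here is a small illustrative sketch (not taken from the paper; the sigma values and test image are arbitrary) of a difference-of-Gaussians centre-surround filter, the classical caricature of retinal ganglion cell receptive fields: it emphasizes local contrast and discounts slow illumination gradients.

```python
# Illustrative difference-of-Gaussians (DoG) centre-surround filter,
# a textbook caricature of retinal ganglion cell receptive fields.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma_center=1.0, sigma_surround=3.0):
    """ON-centre response: narrow excitatory centre minus broad
    inhibitory surround."""
    center = gaussian_filter(image.astype(float), sigma_center)
    surround = gaussian_filter(image.astype(float), sigma_surround)
    return center - surround

# toy input: a bright square sitting on a smooth illumination ramp
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
img[24:40, 24:40] += 1.0
resp = dog_response(img)
peak = np.unravel_index(np.abs(resp).argmax(), resp.shape)
print(peak)   # strongest response lies near the square's border, not in its uniform interior
```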