
    Cortical Dynamics of 3-D Surface Perception: Binocular and Half-Occluded Scenic Images

    Previous models of stereopsis have concentrated on the task of binocularly matching left and right eye primitives uniquely. A disparity smoothness constraint is often invoked to limit the number of possible matches. These approaches neglect the fact that surface discontinuities are both abundant in natural everyday scenes and provide a useful cue for scene segmentation. da Vinci stereopsis refers to the more general problem of dealing with surface discontinuities and their associated unmatched monocular regions within binocular scenes. This study develops a mathematical realization of a neural network theory of biological vision, called FACADE Theory, that shows how early cortical stereopsis processes are related to later cortical processes of 3-D surface representation. The mathematical model demonstrates through computer simulation how the visual cortex may generate 3-D boundary segmentations and use them to control filling-in of 3-D surface properties in response to visual scenes. Model mechanisms correctly match disparate binocular regions while filling-in monocular regions with the correct depth within a binocularly viewed scene. This achievement required the introduction of a new multiscale binocular filter for stereo matching, which clarifies how cortical complex cells match image contours of like contrast polarity while pooling signals from opposite contrast polarities. Competitive interactions among filter cells suggest how false binocular matches and unmatched monocular cues, which contain eye-of-origin information, are automatically handled across multiple spatial scales. This network also helps to explain data concerning context-sensitive binocular matching. Pooling of signals from even-symmetric and odd-symmetric simple cells at complex cells helps to eliminate spurious activity peaks in matchable signals. 
    Later stages of cortical processing by the blob and interblob streams, including refined concepts of cooperative boundary grouping and reciprocal stream interactions between boundary and surface representations, are modeled to provide a complete simulation of the da Vinci stereopsis percept. Office of Naval Research (N00014-95-1-0409, N00014-85-1-0657, N00014-92-J-4015, N00014-91-J-4100); Air Force Office of Scientific Research (90-0175); National Science Foundation (IRI-90-00530); The James S. McDonnell Foundation (94-40)
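The polarity constraint described in this abstract can be illustrated with a toy sketch: 1-D luminance edges are matched across the two eyes only when their contrast polarity (dark-to-light vs. light-to-dark) agrees. The synthetic signals and the `max_disp` search window below are invented for illustration; this is not the model's actual cortical binocular filter.

```python
# Toy illustration of like-polarity binocular matching on 1-D signals.

def edges(signal):
    """(position, polarity) of each luminance step in a 1-D signal."""
    steps = ((i, signal[i + 1] - signal[i]) for i in range(len(signal) - 1))
    return [(i, 1 if d > 0 else -1) for i, d in steps if d != 0]

def match_disparities(left, right, max_disp=5):
    """For each left-eye edge, disparity to the nearest same-polarity right-eye edge."""
    matches = []
    for pos_l, pol_l in edges(left):
        candidates = [pos_l - pos_r for pos_r, pol_r in edges(right)
                      if pol_r == pol_l and abs(pos_l - pos_r) <= max_disp]
        if candidates:
            matches.append((pos_l, min(candidates, key=abs)))
    return matches
```

With a bright bar shifted two pixels between the eyes, both its leading (positive) and trailing (negative) edges match at disparity 2, while opposite-polarity pairings are never considered.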

    Neural Dynamics of 3-D Surface Perception: Figure-Ground Separation and Lightness Perception

    This article develops the FACADE theory of three-dimensional (3-D) vision to simulate data concerning how two-dimensional (2-D) pictures give rise to 3-D percepts of occluded and occluding surfaces. The theory suggests how geometrical and contrastive properties of an image can either cooperate or compete when forming the boundary and surface representations that subserve conscious visual percepts. Spatially long-range cooperation and short-range competition work together to separate boundaries of occluding figures from their occluded neighbors, thereby providing sensitivity to T-junctions without the need to assume that T-junction "detectors" exist. Both boundary and surface representations of occluded objects may be amodally completed, while the surface representations of unoccluded objects become visible through modal processes. Computer simulations include Bregman-Kanizsa figure-ground separation, Kanizsa stratification, and various lightness percepts, including the Munker-White, Benary cross, and checkerboard percepts. Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI 94-01659, IRI 97-20333); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657)

    Neural Models of Seeing and Thinking

    Air Force Office of Scientific Research (F49620-01-1-0397); Office of Naval Research (N00014-01-1-0624)

    Computational multi-depth single-photon imaging

    We present an imaging framework that is able to accurately reconstruct multiple depths at individual pixels from single-photon observations. Our active imaging method models the single-photon detection statistics from multiple reflectors within a pixel, and it also exploits the fact that a multi-depth profile at each pixel can be expressed as a sparse signal. We interpret the multi-depth reconstruction problem as a sparse deconvolution problem using single-photon observations, create a convex problem through discretization and relaxation, and use a modified iterative shrinkage-thresholding algorithm to efficiently solve for the optimal multi-depth solution. We experimentally demonstrate that the proposed framework is able to accurately reconstruct the depth features of an object that is behind a partially-reflecting scatterer and 4 m away from the imager with root mean-square error of 11 cm, using only 19 signal photon detections per pixel in the presence of moderate background light. In terms of root mean-square error, this is a factor of 4.2 improvement over the conventional method of Gaussian-mixture fitting for multi-depth recovery. This material is based upon work supported in part by a Samsung Scholarship, the US National Science Foundation under Grant No. 1422034, and the MIT Lincoln Laboratory Advanced Concepts Committee. We thank Dheera Venkatraman for his assistance with the experiments. (Samsung Scholarship; 1422034 - US National Science Foundation; MIT Lincoln Laboratory Advanced Concepts Committee) Accepted manuscript
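The sparse-deconvolution idea in this abstract can be sketched with plain ISTA (the paper uses a modified iterative shrinkage-thresholding algorithm; this minimal version, with an invented asymmetric pulse shape and toy `lam`/`step` parameters, only illustrates the technique, not the paper's actual method or values):

```python
# Minimal sketch: multi-depth recovery as sparse deconvolution via ISTA.

def convolve(signal, kernel):
    """Full discrete convolution of two lists (pure Python)."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

def soft_threshold(x, t):
    """Elementwise shrinkage toward zero: the proximal map of t*||x||_1."""
    return [(abs(v) - t) * (1.0 if v > 0 else -1.0) if abs(v) > t else 0.0
            for v in x]

def ista_deconvolve(y, kernel, n, lam=0.005, step=0.5, iters=500):
    """Recover a sparse depth profile x (length n) from y ~ kernel * x."""
    x = [0.0] * n
    m = len(kernel)
    for _ in range(iters):
        residual = [a - b for a, b in zip(convolve(x, kernel), y)]
        # gradient of 0.5*||Ax - y||^2 is the correlation of the
        # residual with the kernel (A^T applied to the residual)
        grad = [sum(kernel[j] * residual[i + j] for j in range(m))
                for i in range(n)]
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, grad)],
                           step * lam)
    return x

# Toy demo: two reflectors in a 10-bin depth profile, blurred by an
# invented asymmetric "pulse" with a trailing tail.
pulse = [0.6, 0.3, 0.1]
x_true = [0.0] * 10
x_true[2], x_true[6] = 1.0, 0.7
x_hat = ista_deconvolve(convolve(x_true, pulse), pulse, 10)
```

The L1 penalty encodes the sparsity prior on the per-pixel depth profile, and the step size is kept below the reciprocal of the Lipschitz constant of the data-fit gradient so the iteration converges.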

    Coding of details in very low bit-rate video systems

    In this paper, the importance of including small image features at the initial levels of a progressive second generation video coding scheme is presented. It is shown that a number of meaningful small features called details should be coded, even at very low data bit-rates, in order to match their perceptual significance to the human visual system. We propose a method for extracting, perceptually selecting and coding visual details in a video sequence using morphological techniques. Its application in the framework of a multiresolution segmentation-based coding algorithm yields better results than pure segmentation techniques at higher compression ratios, if the selection step fits some main subjective requirements. Details are extracted and coded separately from the region structure and included in the reconstructed images in a later stage. The benefit of considering the local background of a given detail for its perceptual selection breaks the concept of
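The extraction of small "details" with morphological techniques can be illustrated by a white top-hat: the opening removes structures smaller than the structuring element, so signal minus opening isolates small bright features while leaving large regions untouched. The 1-D signal and sizes below are invented for illustration (the paper operates on 2-D images, and its actual operators may differ):

```python
# Sketch of small-detail extraction via a morphological white top-hat.

def erode(signal, size):
    """Flat erosion: local minimum over a window of `size` samples."""
    h = size // 2
    return [min(signal[max(0, i - h):i + h + 1]) for i in range(len(signal))]

def dilate(signal, size):
    """Flat dilation: local maximum over a window of `size` samples."""
    h = size // 2
    return [max(signal[max(0, i - h):i + h + 1]) for i in range(len(signal))]

def top_hat(signal, size):
    """White top-hat: signal minus its morphological opening."""
    opening = dilate(erode(signal, size), size)
    return [s - o for s, o in zip(signal, opening)]
```

On a flat background containing one narrow spike and one broad plateau, a size-3 top-hat keeps only the spike: exactly the kind of perceptually significant small feature the scheme codes separately from the region structure.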

    Cortical Computation of Stereo Disparity

    Our ability to see the world in depth is a major accomplishment of the brain. Previous models of how positionally disparate cues to the two eyes are binocularly matched limit possible matches by invoking uniqueness and continuity constraints. These approaches cannot explain data wherein uniqueness fails and changes in contrast alter depth percepts, or where surface discontinuities cause surfaces to be seen in depth although they are registered by only one eye (da Vinci stereopsis). A new stereopsis model explains these depth percepts by proposing how cortical complex cells binocularly filter their inputs and how monocular and binocular complex cells compete to determine the winning depth signals. Defense Advanced Research Projects Agency (N00014-92-J-4015); Air Force Office of Scientific Research (90-0175); Office of Naval Research (N00014-91-J-4100); James S. McDonnell Foundation (94-40); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)

    Laminar Cortical Dynamics of Visual Form and Motion Interactions During Coherent Object Motion Perception

    How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses. Air Force Office of Scientific Research (F49620-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (BCS-02-35398, SBE-0354378); Office of Naval Research (N00014-95-1-0409, N00014-01-1-0624)

    Seeing Tree Structure from Vibration

    Humans recognize object structure from both their appearance and motion; often, motion helps to resolve ambiguities in object structure that arise when we observe object appearance only. There are particular scenarios, however, where neither appearance nor spatial-temporal motion signals are informative: occluding twigs may look connected and have almost identical movements, though they belong to different, possibly disconnected branches. We propose to tackle this problem through spectrum analysis of motion signals, because vibrations of disconnected branches, though visually similar, often have distinctive natural frequencies. We propose a novel formulation of tree structure based on a physics-based link model, and validate its effectiveness by theoretical analysis, numerical simulation, and empirical experiments. With this formulation, we use nonparametric Bayesian inference to reconstruct tree structure from both spectral vibration signals and appearance cues. Our model performs well in recognizing hierarchical tree structure from real-world videos of trees and vessels. Comment: ECCV 2018. The first two authors contributed equally to this work. Project page: http://tree.csail.mit.edu
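The core spectral cue can be sketched by comparing dominant vibration frequencies of tracked points: points on one rigid branch share a natural frequency, while visually similar twigs on disconnected branches do not. All signals, frequencies, and the tolerance below are synthetic illustrations; the paper's actual inference is a nonparametric Bayesian model, not this heuristic.

```python
# Toy sketch: group tracked points by dominant vibration frequency.
import math

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the largest non-DC bin of a naive DFT."""
    n = len(signal)
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

def same_branch(track_a, track_b, fs, tol_hz=0.5):
    """Heuristic: two points on one rigid branch share a natural frequency."""
    return abs(dominant_frequency(track_a, fs) - dominant_frequency(track_b, fs)) < tol_hz
```

Two tracks oscillating at 2 Hz are grouped together even if their amplitudes and phases differ, while a 5 Hz track is correctly separated despite looking similar frame-to-frame.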

    Visual Concepts and Compositional Voting

    It is very attractive to formulate vision in terms of pattern theory \cite{Mumford2010pattern}, where patterns are defined hierarchically by compositions of elementary building blocks. But applying pattern theory to real world images is currently less successful than discriminative methods such as deep networks. Deep networks, however, are black-boxes which are hard to interpret and can easily be fooled by adding occluding objects. It is natural to wonder whether by better understanding deep networks we can extract building blocks which can be used to develop pattern theoretic models. This motivates us to study the internal representations of a deep network using vehicle images from the PASCAL3D+ dataset. We use clustering algorithms to study the population activities of the features and extract a set of visual concepts which we show are visually tight and correspond to semantic parts of vehicles. To analyze this we annotate these vehicles by their semantic parts to create a new dataset, VehicleSemanticParts, and evaluate visual concepts as unsupervised part detectors. We show that visual concepts perform fairly well but are outperformed by supervised discriminative methods such as Support Vector Machines (SVM). We next give a more detailed analysis of visual concepts and how they relate to semantic parts. Following this, we use the visual concepts as building blocks for a simple pattern theoretical model, which we call compositional voting. In this model several visual concepts combine to detect semantic parts. We show that this approach is significantly better than discriminative methods like SVM and deep networks trained specifically for semantic part detection. Finally, we return to studying occlusion by creating an annotated dataset with occlusion, called VehicleOcclusion, and show that compositional voting outperforms even deep networks when the amount of occlusion becomes large. Comment: It is accepted by Annals of Mathematical Sciences and Applications
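The "visual concept" extraction step, clustering feature-vector population activities so each cluster centre acts as an unsupervised part detector, can be sketched with plain k-means. This is a generic stand-in for the paper's clustering algorithm, and the 2-D points below are synthetic stand-ins for deep-network feature vectors:

```python
# Sketch: extract "visual concepts" as cluster centres of feature vectors.

def kmeans(points, k, iters=20):
    """Naive k-means; deterministic init from the first k points."""
    centres = [points[i] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centre (squared distance)
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centres[c])))
            clusters[j].append(p)
        # move each centre to the mean of its cluster (keep empty ones)
        centres = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centres[j]
                   for j, cl in enumerate(clusters)]
    return centres
```

With two well-separated groups of feature vectors, the centres converge to the group means; in the paper's setting each such centre would then be evaluated as a detector for one semantic part.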

    Cortical Dynamics of 3-D Figure-Ground Perception of 2-D Pictures

    This article develops the FACADE theory of 3-D vision and figure-ground separation to explain data concerning how 2-D pictures give rise to 3-D percepts of occluding and occluded objects. These percepts include pop-out of occluding figures and amodal completion of occluded figures in response to line drawings, to Bregman-Kanizsa displays in which the relative contrasts of occluding and occluded surfaces are reversed, to White displays from which either transparent or opaque occlusion percepts can be obtained, to Egusa and Kanizsa square displays in which brighter regions look closer, and to Kanizsa stratification displays in which bistable reversals of occluding and occluded surfaces occur, and in which real contours and illusory contours compete to alter the reversal percept. The model describes how changes in contrast can alter a percept without a change in geometry, and conversely. More generally, it shows how geometrical and contrastive properties of a picture can either cooperate or compete when forming the boundary and surface representations that subserve conscious percepts. Spatially long-range cooperation and spatially short-range competition work together to separate the boundaries of occluding figures from their occluded neighbors. This boundary ownership process is sensitive to image T-junctions at which occluded figures contact occluding figures, but there are no explicit T-junction detectors in the network. Rather, the contextual balance of boundary cooperation and competition strengthens some boundaries while breaking others. These boundaries control the filling-in of color within multiple, depth-sensitive surface representations. Feedback between surface and boundary representations strengthens consistent boundaries while inhibiting inconsistent ones. 
    It is suggested how both the boundary and the surface representations of occluded objects may be amodally completed, even while the surface representations of unoccluded objects become visible through modal completion. Distinct functional roles for conscious modal and amodal representations in object recognition, spatial attention, and reaching behaviors are discussed. Model interactions are interpreted in terms of visual, temporal, and parietal cortex. Model concepts provide a mechanistic neural explanation and revision of such Gestalt principles as good continuation, stratification, and non-accidental solution. Office of Naval Research (N00014-91-J-4100, N00014-95-1-0409, N00014-95-1-0657, N00014-92-J-11015)