23,812 research outputs found

    The effect of transparency on recognition of overlapping objects

    Are overlapping objects easier to recognize when the objects are transparent or opaque? It is important to know whether the transparency of X-ray images of luggage contributes to the difficulty of searching those images for targets. Transparency provides extra information about objects that would normally be occluded but creates potentially ambiguous depth relations at the region of overlap. Two experiments investigated the threshold durations at which adult participants could accurately name pairs of overlapping objects that were opaque or transparent. In Experiment 1, the transparent displays included monocular cues to relative depth. Recognition of the back object was possible at shorter durations for transparent displays than for opaque displays. In Experiment 2, the transparent displays had no monocular depth cues. There was no difference in the duration at which the back object was recognized across transparent and opaque displays. The results of the two experiments suggest that transparent displays, even though less familiar than opaque displays, do not make object recognition more difficult, and may even confer a benefit. These findings call into question the importance of edge junctions in object recognition.
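
A threshold duration of this kind is typically estimated with an adaptive psychophysical procedure. The sketch below is a minimal illustration only, assuming a standard 1-up/2-down staircase with a simulated observer; the step sizes, starting duration, stopping rule, and the `run_trial` stand-in are hypothetical and are not the procedure reported in the abstract.

```python
import math
import random

def run_trial(duration_ms):
    """Stand-in for a real trial: present the overlapping pair for
    duration_ms and report whether the back object was named correctly.
    The observer is simulated here with a soft threshold near 120 ms."""
    p_correct = 1.0 / (1.0 + math.exp(-(duration_ms - 120.0) / 20.0))
    return random.random() < p_correct

def staircase_threshold(start_ms=300.0, step_ms=20.0, reversals_needed=8):
    """1-up/2-down staircase: two correct responses shorten the display,
    one error lengthens it; the threshold is taken as the mean duration
    at the reversal points (converges near 70.7% correct)."""
    duration = start_ms
    correct_streak = 0
    reversals = []
    last_direction = None
    while len(reversals) < reversals_needed:
        if run_trial(duration):
            correct_streak += 1
            if correct_streak < 2:
                continue
            correct_streak = 0
            direction = -1                      # harder: shorter display
            new_duration = max(10.0, duration - step_ms)
        else:
            correct_streak = 0
            direction = +1                      # easier: longer display
            new_duration = duration + step_ms
        if last_direction is not None and direction != last_direction:
            reversals.append(duration)
        last_direction = direction
        duration = new_duration
    return sum(reversals) / len(reversals)

if __name__ == "__main__":
    random.seed(1)
    print(f"Estimated back-object naming threshold: {staircase_threshold():.0f} ms")
```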

    Neural Dynamics of 3-D Surface Perception: Figure-Ground Separation and Lightness Perception

    This article develops the FACADE theory of three-dimensional (3-D) vision to simulate data concerning how two-dimensional (2-D) pictures give rise to 3-D percepts of occluded and occluding surfaces. The theory suggests how geometrical and contrastive properties of an image can either cooperate or compete when forming the boundary and surface representations that subserve conscious visual percepts. Spatially long-range cooperation and short-range competition work together to separate the boundaries of occluding figures from their occluded neighbors, thereby providing sensitivity to T-junctions without the need to assume that T-junction "detectors" exist. Both boundary and surface representations of occluded objects may be amodally completed, while the surface representations of unoccluded objects become visible through modal processes. Computer simulations include Bregman-Kanizsa figure-ground separation, Kanizsa stratification, and various lightness percepts, including the Munker-White, Benary cross, and checkerboard percepts. Funding: Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI 94-01659, IRI 97-20333); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657).
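
As a rough illustration of the cooperation/competition idea, and not of the FACADE model itself, the toy sketch below shows how long-range collinear support combined with cross-orientation competition can weaken an occluded contour exactly where it abuts a well-supported occluding contour, giving T-junction sensitivity without an explicit junction detector. The grid size, kernel reach, and weights are arbitrary assumptions.

```python
import numpy as np

H, W = 9, 9
horiz = np.zeros((H, W))   # responses of horizontally tuned boundary cells
vert = np.zeros((H, W))    # responses of vertically tuned boundary cells

horiz[2, :] = 1.0          # occluding contour: long horizontal edge (top of the T)
vert[2:8, 4] = 1.0         # occluded contour: vertical edge ending on it (stem)

def cooperate(resp, axis, reach=3, weight=0.5):
    """Long-range collinear support: each active cell is boosted by
    same-orientation activity up to `reach` cells away along its axis."""
    out = resp.copy()
    for d in range(1, reach + 1):
        out += (weight / d) * (np.roll(resp, d, axis=axis) +
                               np.roll(resp, -d, axis=axis))
    return out * (resp > 0)            # only existing boundary cells are boosted

def compete(resp, rival, weight=0.8):
    """Cross-orientation competition at the same location: activity in the
    rival orientation channel suppresses the cell."""
    return np.clip(resp - weight * rival, 0.0, None)

h_coop = cooperate(horiz, axis=1)      # horizontal cells cooperate along rows
v_coop = cooperate(vert, axis=0)       # vertical cells cooperate along columns

h_final = compete(h_coop, v_coop)
v_final = compete(v_coop, h_coop)

# The occluded (vertical) boundary is weakened exactly where it abuts the
# strongly supported occluding (horizontal) contour: an "end cut" at the T.
print("vertical boundary at the junction:  ", round(float(v_final[2, 4]), 2))
print("vertical boundary along the stem:   ", round(float(v_final[5, 4]), 2))
print("horizontal boundary at the junction:", round(float(h_final[2, 4]), 2))
```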

    CAD-model-based vision for space applications

    A pose acquisition system operating in space must be able to perform well in a variety of different applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system being designed and implemented must convert CAD models to vision models, predict from the vision models the features visible from a given viewpoint, construct view classes representing views of the objects, and use the view-class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.
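
The sketch below is a minimal, assumption-laden rendering of those pipeline stages: a CAD mesh is reduced to a `VisionModel`, per-viewpoint visibility is predicted with simple back-face culling (self-occlusion ignored), and viewpoints exposing the same face set are grouped into view classes. The data structures and visibility test are illustrative choices, not the project's actual representations.

```python
from dataclasses import dataclass
from itertools import product
import numpy as np

@dataclass
class VisionModel:
    vertices: np.ndarray          # (N, 3) vertex coordinates
    faces: list                   # each face: tuple of vertex indices

def face_normal(model, face):
    a, b, c = (model.vertices[i] for i in face[:3])
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def visible_faces(model, view_dir):
    """A face counts as visible if it roughly faces the camera
    (back-face culling only; self-occlusion is ignored in this sketch)."""
    return frozenset(i for i, f in enumerate(model.faces)
                     if np.dot(face_normal(model, f), view_dir) < 0)

def build_view_classes(model, view_dirs):
    """Group viewpoints that expose the same set of faces."""
    classes = {}
    for v in view_dirs:
        classes.setdefault(visible_faces(model, v), []).append(v)
    return classes

# Minimal "CAD model": a unit cube with outward-facing faces.
verts = np.array(list(product([0.0, 1.0], repeat=3)))
cube_faces = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
              (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
cube = VisionModel(verts, cube_faces)

# Sample random view directions and group them into view classes.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
view_classes = build_view_classes(cube, dirs)
print(f"{len(view_classes)} view classes from {len(dirs)} sampled directions")
```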

    Algorithmic Perception of Vertices in Sketched Drawings of Polyhedral Shapes

    In this article, visual perception principles were used to build an artificial perception model aimed at developing an algorithm for detecting junctions in line drawings of polyhedral objects that are vectorized from hand-drawn sketches. The detection is performed in two dimensions (2D), before any 3D model is available, and uses minimal information about the shape depicted by the sketch. The goal of this approach is to detect junctions not only in careful sketches created by skilled engineers and designers but also when skilled people draw casually to quickly convey rough ideas. Current approaches for extracting junctions from digital images are mostly incomplete, as they simply merge endpoints that are near each other, ignoring both that different vertices may be represented by different (but close) junctions and that the endpoints of lines depicting edges that share a common vertex may not be close to each other, particularly in quickly sketched drawings. We describe and validate a new algorithm that uses these perceptual findings to merge tips of line segments into 2D junctions that are assumed to depict 3D vertices.
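
The sketch below illustrates the general endpoint-merging idea, not the authors' validated algorithm: rather than clustering tips purely by proximity, each pair of stroke tips is tested for whether their prolongations converge within a small reach, so close but unrelated tips stay separate while distant tips of edges sharing a vertex can still be merged. The `reach` tolerance, the pairwise test, and the absence of a final clustering step are illustrative assumptions.

```python
import numpy as np

def line_intersection(p, d, q, e):
    """Intersection of the lines p + t*d and q + s*e, or None if parallel."""
    A = np.array([d, -e]).T
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    t, s = np.linalg.solve(A, q - p)
    return p + t * d, t, s

def junction_candidates(segments, reach=25.0):
    """segments: list of ((x1, y1), (x2, y2)) strokes in pixel units.
    Returns points where two stroke tips plausibly meet at a junction."""
    tips = []   # (tip point, unit direction pointing outward past the tip)
    for s in segments:
        a, b = np.asarray(s[0], float), np.asarray(s[1], float)
        d = (b - a) / np.linalg.norm(b - a)
        tips.append((a, -d))      # prolongation beyond endpoint a
        tips.append((b, d))       # prolongation beyond endpoint b
    junctions = []
    for i in range(len(tips)):
        for j in range(i + 1, len(tips)):
            p, d = tips[i]
            q, e = tips[j]
            hit = line_intersection(p, d, q, e)
            if hit is None:
                continue
            x, t, s = hit
            # Accept only if both prolongations reach the meeting point
            # in the outward direction and within `reach` pixels.
            if 0 <= t <= reach and 0 <= s <= reach:
                junctions.append(x)
    return junctions

# Two sketched edges whose tips are ~14 px apart but whose prolongations meet:
strokes = [((0, 0), (40, 0)), ((50, 10), (50, 45))]
for pt in junction_candidates(strokes):
    print("junction near", pt.round(1))
```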