
    Effects of Highlights on Gloss Perception

    The perception of a glossy surface in a static monochromatic image can occur when a bright highlight is embedded in a compatible context of shading and a bounding contour. Some images naturally give rise to the impression that a surface has a uniform reflectance, characteristic of a shiny object, even though the highlight may cover only a small portion of the surface. Nonetheless, an observer may adopt an attitude of scrutiny in viewing a glossy surface, whereby the impression of gloss is partial and nonuniform at image regions outside of a highlight. Using a rating scale and small probe points to indicate image locations, the present study investigates differential perception of gloss within a single object. Observers' gloss ratings are not uniform across the surface, but decrease as a function of distance from the highlight. When, by design, the distance from a highlight is uncoupled from the luminance value at corresponding probe points, the decrease in rated gloss correlates more with the distance than with the luminance change. Experiments also indicate that gloss ratings change as a function of estimated surface distance, rather than as a function of image distance. Surface continuity affects gloss ratings, suggesting that apprehension of 3D surface structure is crucial for gloss perception.
    Air Force Office of Scientific Research (F49620-98-1-0108); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IIS-97-20333); Office of Naval Research (N00014-95-1-0657, N00014-01-1-0624); Whitaker Foundation (RG-99-0186)
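    The key comparison in this design, whether rated gloss tracks distance from the highlight or local luminance, can be sketched as a simple correlation analysis. The arrays below are fabricated for illustration, not the paper's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical probe-point data (not from the paper): each probe has a
# surface distance from the nearest highlight, a local image luminance,
# and an observer's gloss rating on a 1-7 scale.
surface_dist = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
luminance    = np.array([0.9, 0.6, 0.7, 0.5, 0.6, 0.4, 0.5, 0.3])
gloss_rating = np.array([6.8, 6.1, 5.5, 4.9, 4.2, 3.8, 3.1, 2.6])

# Pearson correlations: if distance and luminance are experimentally
# uncoupled, comparing these coefficients indicates which variable
# better predicts the falloff in rated gloss.
r_dist, p_dist = stats.pearsonr(surface_dist, gloss_rating)
r_lum,  p_lum  = stats.pearsonr(luminance,    gloss_rating)
print(f"gloss ~ surface distance: r = {r_dist:+.2f} (p = {p_dist:.3f})")
print(f"gloss ~ luminance:        r = {r_lum:+.2f} (p = {p_lum:.3f})")
```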

    MeshAdv: Adversarial Meshes for Visual Recognition

    Highly expressive models such as deep neural networks (DNNs) have been widely applied to various applications. However, recent studies show that DNNs are vulnerable to adversarial examples, which are carefully crafted inputs aiming to mislead the predictions. To date, the majority of these studies have focused on perturbations added to image pixels, but such manipulations are not physically realistic. Some works have tried to overcome this limitation by attaching printable 2D patches or painting patterns onto surfaces, but these attacks can potentially be defended against because the underlying 3D shape features remain intact. In this paper, we propose meshAdv to generate "adversarial 3D meshes" from objects that have rich shape features but minimal textural variation. To manipulate the shape or texture of the objects, we make use of a differentiable renderer to compute accurate shading on the shape and propagate the gradient. Extensive experiments show that the generated 3D meshes are effective in attacking both classifiers and object detectors, and we evaluate the attack under different viewpoints. In addition, we design a pipeline to perform a black-box attack on a photorealistic renderer with unknown rendering parameters.
    Comment: Published in IEEE CVPR 2019
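    The core mechanism, propagating the classifier's gradient through a differentiable renderer back to the mesh vertices, can be sketched as follows. Everything here is a toy stand-in: the `render` and `classifier` functions below are illustrative placeholders, not the paper's pipeline.

```python
import torch

def render(vertices, faces, light_dir):
    # Toy differentiable "renderer": per-face Lambertian shading.
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    normals = torch.cross(v1 - v0, v2 - v0, dim=1)
    normals = normals / (normals.norm(dim=1, keepdim=True) + 1e-8)
    return (normals @ light_dir).clamp(min=0.0)  # per-face intensities

def classifier(image):
    # Hypothetical stand-in for a pretrained DNN (not a real model).
    return torch.stack([image.sum(), -image.sum()])

vertices = torch.randn(12, 3)                    # toy mesh geometry
faces = torch.randint(0, 12, (20, 3))
light_dir = torch.tensor([0.0, 0.0, 1.0])

delta = torch.zeros(12, 3, requires_grad=True)   # adversarial vertex shift
optimizer = torch.optim.Adam([delta], lr=1e-2)
target = torch.tensor(1)                         # class to mislead toward

for _ in range(100):
    image = render(vertices + delta, faces, light_dir)
    logits = classifier(image)
    # Cross-entropy toward the target plus a penalty keeping the
    # geometric change small (a crude stand-in for a smoothness prior).
    loss = torch.nn.functional.cross_entropy(logits[None], target[None]) \
           + 0.1 * delta.pow(2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

    Because the shading computation is differentiable, the loss gradient flows all the way back to the vertex perturbation; this is what distinguishes shape-based attacks from pixel- or patch-based ones.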

    On the Computational Modeling of Human Vision


    The Perception of Lightness in 3-D Curved Objects

    Lightness constancy requires the visual system to somehow "parse" the input scene into illumination and reflectance components. Experiments on the perception of lightness for 3-D curved objects show that human observers are able to perform such a decomposition for some scenes but not for others. Lightness constancy was quite good when a rich local gray-level context was provided. Deviations occurred when both illumination and reflectance changed along the surface of the objects. Does the perception of a 3-D surface and illuminant layout help calibrate lightness judgements? Our results showed a small but consistent improvement in lightness matches for ellipsoid shapes compared to flat rectangular shapes under similar illumination conditions. Illumination change over 3-D forms is therefore taken into account in lightness perception.
    COPPE/UFRJ, Brazil; Air Force Office of Scientific Research (F49620-92-J-0334); Office of Naval Research (N00014-J-4100, N00014-94-1-0597)
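    The decomposition referred to above is the intrinsic-image factorization L(x) = I(x) · R(x). A minimal sketch, assuming illumination varies smoothly over a curved surface while reflectance changes abruptly (an assumption of this illustration, not the authors' model):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Luminance is the product of illumination and reflectance:
# L(x) = I(x) * R(x). In the log domain the product becomes a sum,
# so a low-pass filter gives a crude estimate of the smooth term.
x = np.linspace(0, np.pi, 256)
illumination = 0.2 + 0.8 * np.sin(x)              # smooth shading, ellipsoid-like
reflectance = np.where(x < np.pi / 2, 0.3, 0.7)   # abrupt change in surface gray
luminance = illumination * reflectance

log_illum_est = gaussian_filter(np.log(luminance), sigma=30)
reflectance_est = np.exp(np.log(luminance) - log_illum_est)
# reflectance_est recovers the 0.3 / 0.7 step up to a scale factor.
```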

    Redefining A in RGBA: Towards a Standard for Graphical 3D Printing

    Advances in multimaterial 3D printing have the potential to reproduce various visual appearance attributes of an object in addition to its shape. Since many existing 3D file formats encode color and translucency by RGBA textures mapped to 3D shapes, RGBA information is particularly important for practical applications. In contrast to color (encoded by RGB), which is specified by the object's reflectance, selected viewing conditions and a standard observer, translucency (encoded by A) is not linked to any measurable physical or perceptual quantity. Thus, reproducing translucency encoded by A is open to interpretation. In this paper, we propose a rigorous definition for A suitable for use in graphical 3D printing, which is independent of the 3D printing hardware and software, and which links both optical material properties and perceptual uniformity for human observers. By deriving our definition from the absorption and scattering coefficients of virtual homogeneous reference materials with an isotropic phase function, we achieve two important properties. First, a simple adjustment of A is possible, which preserves the translucency appearance if an object is re-scaled for printing. Second, the value of A for a real (potentially non-homogeneous) material can be determined by minimizing a distance function between light transport measurements of this material and simulated measurements of the reference materials. Such measurements can be conducted by commercial spectrophotometers used in graphic arts. Finally, we conduct visual experiments employing the method of constant stimuli, and derive from them an embedding of A into a nearly perceptually uniform scale of translucency for the reference materials.
    Comment: 20 pages (incl. appendices), 20 figures. Version with higher-quality images: https://cloud-ext.igd.fraunhofer.de/s/pAMH67XjstaNcrF (main article) and https://cloud-ext.igd.fraunhofer.de/s/4rR5bH3FMfNsS5q (appendix). Supplemental material including code: https://cloud-ext.igd.fraunhofer.de/s/9BrZaj5Uh5d0cOU/downloa
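    The fitting step described above can be sketched with a strongly simplified transport model. The Beer-Lambert transmittance and the mapping from A to an extinction coefficient below are assumptions made for illustration; the paper uses full absorption/scattering simulations of the reference materials.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def reference_transmittance(alpha, thickness_mm):
    # Hypothetical mapping from A in [0, 1] to a per-mm extinction
    # coefficient: alpha = 1 is fully transparent, alpha = 0 opaque.
    sigma_t = -np.log(max(alpha, 1e-6))
    return np.exp(-sigma_t * thickness_mm)  # Beer-Lambert (assumed)

# Stand-in "spectrophotometer" measurements of a real material at
# several sample thicknesses (fabricated numbers for illustration).
thicknesses = np.array([1.0, 2.0, 4.0])
measured = np.array([0.60, 0.36, 0.13])

def distance(alpha):
    # Distance between measured and simulated light transport.
    simulated = reference_transmittance(alpha, thicknesses)
    return np.sum((simulated - measured) ** 2)

result = minimize_scalar(distance, bounds=(0.0, 1.0), method="bounded")
print(f"estimated A = {result.x:.3f}")
```

    Under this simplified model, re-scaling an object by a factor s would be compensated by A' = A^(1/s), which mirrors the paper's first property that a simple adjustment of A can preserve translucency appearance after re-scaling.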

    A Similarity Measure for Material Appearance

    We present a model to measure the similarity in appearance between different materials, which correlates with human similarity judgments. We first create a database of 9,000 rendered images depicting objects with varying materials, shape and illumination. We then gather data on perceived similarity from crowdsourced experiments; our analysis of over 114,840 answers suggests that a shared perception of appearance similarity does indeed exist. We feed this data to a deep learning architecture with a novel loss function, which learns a feature space for materials that correlates with such perceived appearance similarity. Our evaluation shows that our model outperforms existing metrics. Lastly, we demonstrate several applications enabled by our metric, including appearance-based search for material suggestions, database visualization, clustering and summarization, and gamut mapping.
    Comment: 12 pages, 17 figures
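    A plausible sketch of such a metric-learning setup follows. The encoder architecture and the triplet loss are illustrative placeholders, not the paper's actual network or its novel loss function: the point is only that images are embedded so that feature-space distance agrees with crowdsourced similarity judgments.

```python
import torch
import torch.nn as nn

class MaterialEncoder(nn.Module):
    """Maps rendered material images to a unit-norm feature space."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=1)

encoder = MaterialEncoder()
loss_fn = nn.TripletMarginLoss(margin=0.2)

# Dummy batch standing in for (reference, judged-more-similar,
# judged-less-similar) rendered images from the crowdsourced data.
anchor, positive, negative = (torch.randn(8, 3, 64, 64) for _ in range(3))
loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
```

    Once trained, the Euclidean distance between embeddings serves as the similarity measure, enabling applications such as appearance-based search and database clustering.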

    Form Perception

    National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)