
    A Similarity Measure for Material Appearance

    We present a model to measure the similarity in appearance between different materials, which correlates with human similarity judgments. We first create a database of 9,000 rendered images depicting objects with varying materials, shapes, and illumination. We then gather data on perceived similarity from crowdsourced experiments; our analysis of over 114,840 answers suggests that a shared perception of appearance similarity indeed exists. We feed this data to a deep learning architecture with a novel loss function, which learns a feature space for materials that correlates with such perceived appearance similarity. Our evaluation shows that our model outperforms existing metrics. Finally, we demonstrate several applications enabled by our metric, including appearance-based search for material suggestions, database visualization, clustering and summarization, and gamut mapping. (12 pages, 17 figures.)
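
    The paper's loss function and training data are not reproduced here. As a hedged sketch of the general idea, learning an embedding whose distances track perceived material similarity, the snippet below implements a standard triplet margin loss in NumPy; the embeddings, batch size, and margin value are illustrative placeholders, not values from the paper.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Generic triplet margin loss: push the anchor closer to the perceived-similar
    (positive) material than to the dissimilar (negative) one by at least `margin`.
    This is a standard stand-in, not the paper's actual loss function."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)  # anchor-positive distances
    d_neg = np.linalg.norm(anchor - negative, axis=1)  # anchor-negative distances
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))

# Toy usage: random 128-D embeddings for a batch of 8 image triplets.
rng = np.random.default_rng(0)
a, p, n = (rng.normal(size=(8, 128)) for _ in range(3))
print(triplet_margin_loss(a, p, n))
```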

    The joint role of geometry and illumination on material recognition

    Observing and recognizing materials is a fundamental part of our daily life. Under typical viewing conditions, we are capable of effortlessly identifying the objects that surround us and recognizing the materials they are made of. Nevertheless, understanding the underlying perceptual processes that take place to accurately discern the visual properties of an object is a long-standing problem. In this work, we perform a comprehensive and systematic analysis of how the interplay of geometry, illumination, and their spatial frequencies affects human performance on material recognition tasks. We carry out large-scale behavioral experiments in which participants are asked to recognize different reference materials among a pool of candidate samples. In the different experiments, we carefully sample the information in the frequency domain of the stimuli. From our analysis, we find significant first-order interactions between the geometry and the illumination of both the reference and the candidate samples. In addition, we observe that simple image statistics and higher-order image histograms do not correlate with human performance. Therefore, we perform a high-level comparison of highly nonlinear statistics by training a deep neural network on material recognition tasks. Our results show that such models can accurately classify materials, which suggests that they are capable of defining a meaningful representation of material appearance from labeled proximal image data. Last, we find preliminary evidence that these highly nonlinear models and humans may use similar high-level factors for material recognition tasks.
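
    As a hedged illustration of the kind of comparison described above (simple image statistics versus human performance), the NumPy sketch below computes first-order luminance statistics per stimulus and correlates them with per-stimulus recognition accuracy. The images, accuracies, and choice of statistics are hypothetical stand-ins, not the study's stimuli or data.

```python
import numpy as np

def luminance_stats(img):
    """First-order statistics of a grayscale image in [0, 1]:
    mean, standard deviation, skewness, and (non-excess) kurtosis."""
    x = img.ravel()
    mu, sd = x.mean(), x.std()
    z = (x - mu) / (sd + 1e-8)
    return np.array([mu, sd, (z**3).mean(), (z**4).mean()])

def correlate_with_humans(images, human_accuracy):
    """Pearson correlation between each image statistic and per-stimulus
    human recognition accuracy (hypothetical arrays, not the study's data)."""
    stats = np.array([luminance_stats(im) for im in images])  # (n_stimuli, 4)
    return [float(np.corrcoef(stats[:, k], human_accuracy)[0, 1])
            for k in range(stats.shape[1])]

# Toy usage: 20 random "stimuli" and made-up accuracies.
rng = np.random.default_rng(1)
imgs = rng.random((20, 64, 64))
acc = rng.random(20)
print(correlate_with_humans(imgs, acc))
```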

    Perceptual Modeling and Reproduction of Gloss

    The reproduction of gloss on displays is generally not based on perception and, as a consequence, does not guarantee the best visualization of a real material. The reproduction is composed of four different steps: measurement, modeling, rendering, and display. The minimum number of measurements required to approximate a real material is unknown. The error metrics used to approximate measurements with analytical BRDF models are not based on perception, and the best visual approximation is not always obtained. Finally, the gloss perception difference between real objects and objects seen on displays has not been sufficiently studied and might influence observers' judgments. This thesis proposes a systematic, scalable, and perceptually based workflow to represent real materials on displays. First, the gloss perception difference between real objects and objects seen on displays was studied. Second, the perceptual performance of the error metrics currently in use was evaluated. Third, a projection into a perceptual gloss space was defined, enabling the computation of a perceptual gloss distance measure. Fourth, the uniformity of the gloss space was improved by defining a new gloss difference equation. Finally, a systematic, scalable, and perceptually based workflow was defined using cost-effective instruments.
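
    As a hedged sketch of how a perceptual gloss distance can be computed once measurements have been projected into a gloss space, the snippet below takes a weighted Euclidean distance between two gloss-space coordinates. The projection, the weights, and the coordinates are placeholders; the thesis's actual gloss-difference equation is not reproduced here.

```python
import numpy as np

def gloss_distance(g1, g2, weights=None):
    """Weighted Euclidean distance between two coordinates in a perceptual
    gloss space. The weights model a non-uniform space; they are illustrative
    placeholders, not the thesis's gloss-difference equation."""
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    w = np.ones_like(g1) if weights is None else np.asarray(weights, float)
    return float(np.sqrt(np.sum(w * (g1 - g2) ** 2)))

# Toy usage with 3-D gloss-space coordinates for two materials.
print(gloss_distance([0.8, 0.2, 0.5], [0.6, 0.3, 0.5]))
```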

    Acquisition and modeling of material appearance

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 131-143). In computer graphics, the realistic rendering of synthetic scenes requires a precise description of surface geometry, lighting, and material appearance. While 3D geometry scanning and modeling have advanced significantly in recent years, measurement and modeling of accurate material appearance have remained critical challenges. Analytical models are the main tools to describe material appearance in most current applications. They provide compact and smooth approximations to real materials but lack the expressiveness to represent complex materials. Data-driven approaches based on exhaustive measurements are fully general, but the measurement process is difficult and the storage requirement is very high. In this thesis, we propose the use of hybrid representations that are more compact and easier to acquire than exhaustive measurements, while preserving much of the generality of a data-driven approach. To represent complex bidirectional reflectance distribution functions (BRDFs), we present a new method to estimate a general microfacet distribution from measured data. We show that this representation is able to reproduce complex materials that are impossible to model with purely analytical models. We also propose a new method that significantly reduces the measurement cost and time of the bidirectional texture function (BTF) through a statistical characterization of texture appearance. Our reconstruction method combines naturally aligned images and alignment-insensitive statistics to produce visually plausible results. We demonstrate our acquisition system, which is able to capture intricate materials like fabrics in less than ten minutes with commodity equipment. In addition, we present a method to facilitate effective user design in the space of material appearance. We introduce a metric in the space of reflectance which corresponds roughly to perceptual measures. The main idea of our approach is to evaluate reflectance differences in terms of their induced rendered images, instead of the reflectance function itself defined in the angular domain. With rendered images, we show that even a simple computational metric can provide good perceptual spacing and enable intuitive navigation of the reflectance space. By Wai Kit Addy Ngan, Ph.D.
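
    As a hedged sketch of the image-driven idea described at the end of this abstract (comparing reflectances via the images they induce rather than in the angular BRDF domain), the snippet below shades a toy sphere with a simple Blinn-Phong lobe and reports an RMS image difference between two roughness settings. The renderer, BRDF, lighting, and roughness mapping are simplified placeholders, not the thesis's actual setup.

```python
import numpy as np

def shade_sphere(roughness, albedo=0.5, res=128):
    """Render a toy sphere with a simple Blinn-Phong lobe; a stand-in for the
    rendered-image comparison (the thesis uses a more general renderer and BRDFs)."""
    y, x = np.mgrid[-1:1:res * 1j, -1:1:res * 1j]
    mask = x**2 + y**2 <= 1.0
    z = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None))
    n = np.stack([x, y, z], axis=-1)                        # surface normals
    l = np.array([0.5, 0.5, 1.0]); l /= np.linalg.norm(l)   # light direction
    v = np.array([0.0, 0.0, 1.0])                           # view direction
    h = (l + v) / np.linalg.norm(l + v)                     # half vector
    ndotl = np.clip(n @ l, 0.0, None)
    ndoth = np.clip(n @ h, 0.0, None)
    shininess = 2.0 / max(roughness**2, 1e-4)               # crude roughness mapping
    img = albedo * ndotl + ndoth**shininess * ndotl
    return np.where(mask, img, 0.0)

def image_space_difference(rough_a, rough_b):
    """RMS difference between renderings of two reflectance settings, i.e. a
    difference taken in image space rather than in the angular domain."""
    a, b = shade_sphere(rough_a), shade_sphere(rough_b)
    return float(np.sqrt(np.mean((a - b) ** 2)))

print(image_space_difference(0.1, 0.3))
```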

    Depth Recovery with Rectification using Single-Lens Prism based Stereovision System

    Ph.D. thesis (Doctor of Philosophy).

    Towards Predictive Rendering in Virtual Reality

    The generation of predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations for spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying them to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming that problems remain to be solved before truly predictive image generation is achieved.
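
    BTF compression in this line of work is commonly posed as low-rank matrix factorization. As a hedged sketch under that assumption (the thesis's actual compression scheme may differ), the snippet below truncates an SVD of a BTF arranged as a texel-by-sample matrix.

```python
import numpy as np

def compress_btf(btf_matrix, rank=16):
    """Generic low-rank factorization of a BTF arranged as a matrix whose rows are
    texels and whose columns are (view, light) samples: store U_k and (S_k V_k^T)
    instead of the full matrix. Illustrative only, not the thesis's scheme."""
    u, s, vt = np.linalg.svd(btf_matrix, full_matrices=False)
    u_k = u[:, :rank]
    basis_k = s[:rank, None] * vt[:rank, :]
    return u_k, basis_k            # reconstruction is u_k @ basis_k

# Toy usage: a synthetic low-rank "BTF" with 1,024 texels and 600 view/light samples.
rng = np.random.default_rng(2)
btf = rng.random((1024, 8)) @ rng.random((8, 600))
u_k, basis_k = compress_btf(btf, rank=16)
print(btf.size, u_k.size + basis_k.size, float(np.abs(btf - u_k @ basis_k).max()))
```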

    Mapping Spikes to Sensations

    Single-unit recordings conducted during perceptual decision-making tasks have yielded tremendous insights into the neural coding of sensory stimuli. In such experiments, detection or discrimination behavior (the psychometric data) is observed in parallel with spike trains in sensory neurons (the neurometric data). Frequently, candidate neural codes for information read-out are pitted against each other by transforming the neurometric data in some way and asking which code's performance most closely approximates the psychometric performance. The code that matches the psychometric performance best is retained as a viable candidate and the others are rejected. In following this strategy, psychometric data are often considered to provide an unbiased measure of perceptual sensitivity. It is rarely acknowledged that psychometric data result from a complex interplay of sensory and non-sensory processes and that neglect of these processes may result in misestimating psychophysical sensitivity. This, in turn, may lead to erroneous conclusions regarding the adequacy of candidate neural codes. In this review, we first discuss requirements on the neural data for a subsequent neurometric-psychometric comparison. We then focus on different psychophysical tasks for the assessment of detection and discrimination performance and the cognitive processes that may underlie their execution. We discuss further factors that may compromise psychometric performance and how they can be detected or avoided. We believe that these considerations point to shortcomings in our understanding of the processes underlying perceptual decisions, and therefore offer potential for future research.
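
    As a hedged sketch of one common way to transform neurometric data for such a comparison (not necessarily the exact procedure discussed in the review), the snippet below computes an ROC area from single-neuron spike counts on stimulus and blank trials and places it next to a made-up psychometric proportion correct.

```python
import numpy as np

def neurometric_auc(counts_stim, counts_blank):
    """ROC area from a neuron's spike counts on stimulus vs. blank trials:
    the fraction of trial pairs in which the stimulus trial had the higher
    count (ties split), i.e. an ideal-observer detection probability."""
    s = np.asarray(counts_stim, float)
    b = np.asarray(counts_blank, float)
    greater = (s[:, None] > b[None, :]).sum()
    ties = (s[:, None] == b[None, :]).sum()
    return float((greater + 0.5 * ties) / (s.size * b.size))

# Toy usage: synthetic spike counts; the psychometric value is made up.
rng = np.random.default_rng(3)
stim = rng.poisson(12, size=100)    # spike counts with the stimulus present
blank = rng.poisson(8, size=100)    # spike counts on blank trials
print(neurometric_auc(stim, blank), "vs. psychometric p(correct) = 0.84 (hypothetical)")
```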

    Material Visualisation for Virtual Reality: The Perceptual Investigations

    Material representation plays a significant role in design visualisation and evaluation. On the one hand, the simulated material properties determine the appearance of product prototypes in digitally rendered scenes. On the other hand, those properties are perceived by viewers in order to make important design decisions. As an approach to simulating a more realistic environment, Virtual Reality (VR) gives users a vivid impression of depth and embodies them in an immersive environment. However, the scientific understanding of material perception and its applications in VR is still fairly limited. This leads to this thesis's research question of whether material perception in VR differs from that in traditional 2D displays, as well as the potential of using VR as a design tool to facilitate material evaluation.

    This thesis starts by studying the perceptual difference of rendered materials between VR and traditional 2D viewing modes. Firstly, a pilot study confirms that users have different perceptual experiences of the same material in the two viewing modes. Following that initial finding, the research investigates the perceptual difference in more detail with psychophysical methods, which help quantify the users' perceptual responses. Using the perceptual scale as a measuring means, the research analyses the users' judgment and recognition of material properties under VR and traditional 2D display environments. In addition, the research elicits perceptual evaluation criteria to analyse the emotional aspects of materials. The six perceptual criteria are in semantic form: rigidity, formality, fineness, softness, modernity, and irregularity.

    The results showed that VR could support users in making more refined judgments of material properties; that is, users better perceive minute changes of material properties under immersive viewing conditions. In terms of emotional aspects, VR is advantageous in conveying the effects induced by visual textures, while the 2D viewing mode is more effective for expressing the characteristics of plain surfaces. This thesis contributes to a deeper understanding of users' perception of material appearance in Virtual Reality, which is critical to achieving effective design visualisation with such a display medium.
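
    As a hedged sketch of how a perceptual scale can be derived from paired-comparison judgments, the snippet below applies Thurstone Case V scaling to a matrix of choice proportions. The scaling method, the proportions, and the four-stimulus setup are illustrative assumptions, not the exact procedure or data reported in the thesis.

```python
import numpy as np
from scipy.stats import norm

def thurstone_scale(pref_matrix):
    """Thurstone Case V scaling: turn a matrix of paired-comparison proportions
    P[i, j] = fraction of observers judging stimulus i above stimulus j into a
    one-dimensional perceptual scale (row means of the z-transformed matrix).
    A generic psychophysical scaling recipe, offered here as an assumption."""
    p = np.clip(np.asarray(pref_matrix, float), 0.01, 0.99)  # avoid infinite z-scores
    z = norm.ppf(p)
    scale = z.mean(axis=1)
    return scale - scale.min()      # anchor the lowest stimulus at zero

# Toy usage: 4 material renderings compared pairwise by a panel of observers.
P = np.array([[0.50, 0.70, 0.80, 0.90],
              [0.30, 0.50, 0.65, 0.80],
              [0.20, 0.35, 0.50, 0.70],
              [0.10, 0.20, 0.30, 0.50]])
print(thurstone_scale(P))
```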