
    A study of user perceptions of the relationship between bump-mapped and non-bump-mapped materials, and lighting intensity in a real-time virtual environment

    The video and computer games industry has taken full advantage of the human sense of vision by producing games that use complex high-resolution textures, materials, and lighting techniques. This results in the creation of almost life-like real-time 3D virtual environments that can immerse end-users. One of the visual techniques used is the real-time display of bump-mapped materials. However, this visual phenomenon has yet to be fully utilized for 3D design visualization in the architecture and construction domain. Virtual environments developed in the architecture and construction domain are often basic and use low-resolution images, which under-represent the real physical environment. Such virtual environments are seen as non-realistic by users, resulting in a misconception of their actual potential as tools for 3D design visualization. A study was conducted to evaluate whether subjects can see the difference between bump-mapped and non-bump-mapped materials under different lighting conditions. The study utilized a real-time 3D virtual environment created using a custom-developed software application tool called BuildITC4, which was developed upon the C4Engine, classified as a next-generation 3D game engine. A total of thirty-five subjects were exposed to the virtual environment and asked to compare the various types of material under different lighting conditions. The number of lights activated, the lighting intensity, and the materials used in the virtual environment were all interactive and changeable in real time. The goal was to study how subjects perceived bump-mapped and non-bump-mapped materials, and how different lighting conditions affect realistic representation. Results from this study indicate that subjects could tell the difference between bump-mapped and non-bump-mapped materials, and could see how different materials react to different lighting conditions.
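    The perceptual comparison above hinges on how bump mapping changes shading: rather than altering geometry, it perturbs surface normals from a height map before lighting is evaluated. A minimal sketch of that idea (illustrative only; the function names and the simple Lambertian shading model are assumptions, not part of BuildITC4 or the C4Engine):

```python
import numpy as np

def bump_normals(height, strength=1.0):
    """Perturb flat-surface normals with height-map gradients
    (finite differences): the core idea behind bump mapping."""
    dhdy, dhdx = np.gradient(height)  # axis 0 = rows (y), axis 1 = cols (x)
    n = np.dstack([-strength * dhdx, -strength * dhdy, np.ones_like(height)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def lambert_shade(normals, light_dir, intensity=1.0):
    """Diffuse shading: brightness = intensity * max(0, n . l)."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return intensity * np.clip(normals @ l, 0.0, None)
```

    With a flat height map every normal stays (0, 0, 1) and shading is uniform; any relief in the map breaks that uniformity under an oblique light, which is exactly the contrast the study asked subjects to perceive at different lighting intensities.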

    A Novel Framework for Highlight Reflectance Transformation Imaging

    We propose a novel pipeline and related software tools for processing the multi-light image collections (MLICs) acquired in different application contexts to obtain shape and appearance information of captured surfaces, as well as to derive compact relightable representations of them. Our pipeline extends the popular Highlight Reflectance Transformation Imaging (H-RTI) framework, which is widely used in the Cultural Heritage domain. We support, in particular, perspective camera modeling, per-pixel interpolated light direction estimation, and light normalization correcting vignetting and uneven non-directional illumination. Furthermore, we propose two novel easy-to-use software tools to simplify all processing steps. The tools, in addition to supporting easy processing and encoding of pixel data, implement a variety of visualizations, as well as multiple reflectance-model-fitting options. Experimental tests on synthetic and real-world MLICs demonstrate the usefulness of the novel algorithmic framework and the potential benefits of the proposed tools for end-user applications.
    Terms: "European Union (EU)" & "Horizon 2020" / Action: H2020-EU.3.6.3. - Reflective societies - cultural heritage and European identity / Acronym: Scan4Reco / Grant number: 665091; DSURF project (PRIN 2015) funded by the Italian Ministry of University and Research; Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa
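    Classical RTI fits a per-pixel reflectance model to the observed luminances; a common choice is the six-term biquadratic Polynomial Texture Map (PTM) basis in the projected light direction (lu, lv). A hedged sketch of such a least-squares fit (the function names are ours, and the paper's tools support multiple fitting options beyond this one):

```python
import numpy as np

def ptm_basis(lu, lv):
    """Biquadratic PTM basis in the projected light direction."""
    lu, lv = np.asarray(lu, float), np.asarray(lv, float)
    return np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=-1)

def fit_ptm(light_dirs, lums):
    """Least-squares fit, for one pixel, of
    L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5,
    given luminances under >= 6 known light positions (rows of light_dirs)."""
    A = ptm_basis(light_dirs[:, 0], light_dirs[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(lums, float), rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Evaluate the fitted model under a new light direction."""
    return ptm_basis(lu, lv) @ coeffs
```

    The six fitted coefficients per pixel are what makes the representation compact and relightable: a new light direction costs only one basis evaluation and a dot product.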

    The Iray Light Transport Simulation and Rendering System

    While ray tracing has become increasingly common and path tracing is well understood by now, a major challenge lies in crafting an easy-to-use and efficient system implementing these technologies. Following a purely physically-based paradigm while still allowing for artistic workflows, the Iray light transport simulation and rendering system renders complex scenes at the push of a button and thus makes accurate light transport simulation widely available. In this document we discuss the challenges and implementation choices that follow from our primary design decisions, demonstrating that such a rendering system can be made a practical, scalable, and efficient real-world application that has been adopted by various companies across many fields and is in use by many industry professionals today.
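    At the heart of the path tracing such systems build on is Monte Carlo estimation of hemisphere integrals, where importance sampling drives efficiency. A toy illustration (not Iray code; the names and the constant-radiance setup are our assumptions) estimating irradiance E = ∫ Le·cosθ dω = π·Le under uniform versus cosine-weighted sampling:

```python
import numpy as np

def estimate_irradiance_uniform(le, n, rng):
    """Monte Carlo estimate of E = integral of Le*cos(theta) over the
    hemisphere with uniform direction sampling (pdf = 1/(2*pi)).
    For directions uniform on the upper hemisphere, cos(theta) is
    itself uniform on [0, 1]; the exact answer for constant Le is pi*Le."""
    cos_theta = rng.uniform(0.0, 1.0, n)
    return np.mean(le * cos_theta * 2.0 * np.pi)

def estimate_irradiance_cosine(le, n):
    """Cosine-weighted importance sampling (pdf = cos(theta)/pi): each
    sample contributes Le*cos(theta)/pdf = pi*Le, so for constant Le
    the estimator has zero variance."""
    return np.mean(np.full(n, le * np.pi))
```

    The gap between the two estimators' variances is one reason production renderers invest heavily in sampling strategies rather than raw sample counts.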

    Single-image RGB Photometric Stereo With Spatially-varying Albedo

    We present a single-shot system to recover surface geometry of objects with spatially-varying albedos, from images captured under a calibrated RGB photometric stereo setup---with three light directions multiplexed across different color channels in the observed RGB image. Since the problem is ill-posed point-wise, we assume that the albedo map can be modeled as piece-wise constant with a restricted number of distinct albedo values. We show that under ideal conditions, the shape of a non-degenerate local constant-albedo surface patch can theoretically be recovered exactly. Moreover, we present a practical and efficient algorithm that uses this model to robustly recover shape from real images. Our method first reasons about shape locally in a dense set of patches in the observed image, producing shape distributions for every patch. These local distributions are then combined to produce a single consistent surface normal map. We demonstrate the efficacy of the approach through experiments on both synthetic renderings as well as real captured images.
    Comment: 3DV 2016. Project page at http://www.ttic.edu/chakrabarti/rgbps
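    Under the Lambertian model the abstract relies on, a pixel with known albedo yields three measurements i_c = ρ (l_c · n), one per color channel, so the albedo-scaled normal follows from a 3×3 linear solve. A minimal per-pixel sketch (illustrative only; the paper's contribution is the much harder case where the albedo varies spatially and is not known in advance):

```python
import numpy as np

def rgb_ps_pixel(L, i_rgb):
    """Per-pixel Lambertian photometric stereo: with the three
    color-multiplexed light directions stacked as rows of L,
    i = rho * (L @ n), so g = L^-1 @ i = rho * n; the albedo is
    |g| and the unit normal is g / |g|."""
    g = np.linalg.solve(L, np.asarray(i_rgb, dtype=float))
    rho = np.linalg.norm(g)
    return rho, g / rho
```

    When the albedo is unknown and varies per pixel, this solve is under-determined point-wise, which is exactly why the paper restricts the albedo map to a small set of distinct values and reasons over patches.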

    Cross-Platform Presentation of Interactive Volumetric Imagery

    Volume data is useful across many disciplines, not just medicine. Thus, it is very important that researchers have a simple and lightweight method of sharing and reproducing such volumetric data. In this paper, we explore some of the challenges associated with volume rendering, both in the classical sense and in the context of Web3D technologies. We describe and evaluate the proposed X3D Volume Rendering Component and its associated styles for their suitability in the visualization of several types of image data. Additionally, we examine the ability of a minimal X3D node set to capture provenance and semantic information from outside ontologies in metadata and integrate it with the scene graph.
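    Whatever rendering style an X3D volume node declares, direct volume rendering ultimately reduces to compositing samples along each viewing ray. A minimal sketch of front-to-back alpha compositing for one ray (the function name is ours; real renderers add 3D sampling, transfer functions, and lighting):

```python
def composite_ray(colors, alphas):
    """Front-to-back alpha compositing along one viewing ray, the core
    accumulation loop of direct volume rendering: each sample adds
    only what the already-accumulated opacity lets through."""
    c_out, a_out = 0.0, 0.0
    for c, a in zip(colors, alphas):
        c_out += (1.0 - a_out) * a * c  # attenuated color contribution
        a_out += (1.0 - a_out) * a      # accumulated opacity
        if a_out >= 1.0:                # early ray termination
            break
    return c_out, a_out
```

    Front-to-back ordering is what makes early ray termination possible: once accumulated opacity saturates, samples behind it cannot contribute, which matters for the lightweight cross-platform delivery the paper targets.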