    The CMS Integration Grid Testbed

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Grid-wide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent-based MonALISA. Domain-specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two-month span in the fall of 2002, over one million official CMS GEANT-based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids. Comment: CHEP 2003 MOCT01
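
    As a concrete illustration of the DAGMan/Condor-G front end described above, a minimal Condor-G submit description routing one job through a Globus gatekeeper might look like the following sketch (the hostname and script names are hypothetical, not taken from the testbed):

```
# Submit one grid job through Globus via the Condor-G front end
universe        = globus
globusscheduler = gatekeeper.example.edu/jobmanager-condor
executable      = run_cms_mc.sh
output          = job.out
error           = job.err
log             = job.log
queue
```

    DAGMan would then chain many such submit files into a dependency graph of production jobs.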

    Scalable And Heterogeneous Rendering Of Subsurface Scattering Materials

    In many natural materials, such as skin, minerals, and plastics, light scatters inside the material and gives them their distinctive appearance. The accurate reproduction of these materials requires new rendering algorithms that can simulate these subsurface interactions. Unfortunately, adding subsurface scattering dramatically increases the rendering cost. To achieve efficiency, recent approaches have used an approximate scattering model and have two significant limitations: they scale poorly to complex scenes and they are limited to homogeneous materials. This thesis proposes two new algorithms without these limitations. The first is a scalable subsurface renderer for homogeneous scattering. Using a canonical model of subsurface light paths, the new algorithm can judiciously determine a small set of important paths. By clustering the unimportant paths and approximating the contributions of these clusters, the new algorithm significantly reduces computation. In complex scenes, this new approach can achieve up to a three-hundred-fold speedup over the most efficient previous algorithms. The second is the first general, efficient, and high-quality renderer for heterogeneous subsurface scattering. It is based on a carefully derived formulation of the heterogeneous scattering problem using the diffusion equation, and it solves that problem quickly and accurately using the finite element method. The new algorithm is designed for high-quality rendering applications, producing, in minutes, images nearly identical to exact solutions produced in hours.
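
    The cluster-and-approximate idea in this abstract can be sketched in a few lines: evaluate the few important paths exactly, but represent each cluster of unimportant paths by one sampled member scaled by the cluster size. The numbers and clustering below are illustrative, not the thesis's actual path model:

```python
import random

random.seed(7)

# Hypothetical per-path contributions: a few "important" paths carry most
# of the energy; many "unimportant" paths carry little each.
important = [random.uniform(0.5, 1.0) for _ in range(10)]
unimportant = [random.uniform(0.0, 0.01) for _ in range(10000)]

# Exact answer: evaluate every path.
exact = sum(important) + sum(unimportant)

# Approximation: important paths exactly, each unimportant cluster by a
# single representative scaled by the cluster's size.
cluster_size = 100
approx = sum(important)
for start in range(0, len(unimportant), cluster_size):
    cluster = unimportant[start:start + cluster_size]
    representative = random.choice(cluster)  # one evaluation per cluster
    approx += representative * len(cluster)

# 10 + 100 evaluations instead of 10,010; the error stays small because
# members of a cluster are similar in magnitude.
rel_error = abs(approx - exact) / exact
print(round(rel_error, 3))
```

    The speedup comes from replacing per-path evaluation with per-cluster evaluation wherever the clusters are homogeneous enough.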

    Pre-Processing Environment Maps for Dynamic Hardware Shadows

    Environment maps are a popular method of reproducing complex natural lighting. However, current methods for hardware environment map shadows depend on significant pre-computation and cannot support dynamic objects. This work presents a pre-process that decomposes an environment map into two components: a set of area lights and an ambient map. Once the map is split into these components, each is rendered with an appropriate mechanism. The area lights are rendered using an existing hardware-accelerated soft-shadow algorithm; for our implementation we use penumbra wedges. The ambient region is rendered using pre-integrated irradiance mapping. Using an NVidia 6800 on a standard desktop, we demonstrate high-quality environment map shadows for dynamic scenes at interactive rates.
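
    The decomposition step can be sketched as a simple threshold pass: bright texels become explicit area lights, and the dim remainder forms the ambient map. The 1D map and cutoff below are toy assumptions (a real map is a 2D HDR image, and the paper's split is more sophisticated):

```python
# Toy 1D "environment map" of texel radiances.
env = [0.1, 0.2, 9.0, 0.1, 7.5, 0.2, 0.1, 0.3]

THRESHOLD = 1.0  # hypothetical cutoff separating "lights" from "ambient"

area_lights = []   # bright texels become explicit area lights
ambient_map = []   # the dim remainder is kept for irradiance mapping
for i, radiance in enumerate(env):
    if radiance > THRESHOLD:
        area_lights.append((i, radiance))  # (texel index, power)
        ambient_map.append(0.0)            # removed from the ambient part
    else:
        ambient_map.append(radiance)

# Energy is conserved across the split: lights + ambient == original map.
total = sum(r for _, r in area_lights) + sum(ambient_map)
assert abs(total - sum(env)) < 1e-9
```

    After the split, each component can be fed to the renderer best suited to it: the area lights to a soft-shadow algorithm, the ambient map to pre-integrated irradiance mapping.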

    Diffusion Formulation for Heterogeneous Subsurface Scattering

    Materials with visually important heterogeneous subsurface scattering, including marble, skin, leaves, and minerals, are common in the real world. However, general, accurate, and efficient rendering of these materials is an open problem. In this short report, we describe the heterogeneous diffusion equation (DE) formulation that solves this problem. This formulation has two key results: an accurate model of the reduced intensity (RI) source, and the diffusive source boundary condition (DSBC) with its associated render query function. Using these results, we can render subsurface scattering nearly as accurately as Monte Carlo (MC) algorithms. At the end of this report, we demonstrate this accuracy by comparing our new formulation to other methods used in previous work.
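
    For reference, the classical diffusion approximation on which such heterogeneous formulations build can be written with spatially varying coefficients (standard notation, not the report's exact symbols or boundary conditions):

```latex
% Heterogeneous diffusion equation for the fluence \phi(x):
%   D(x)         spatially varying diffusion coefficient
%   \sigma_a(x)  absorption coefficient
%   \sigma_s'(x) reduced scattering coefficient
%   Q(x)         (reduced intensity) source term
\nabla \cdot \bigl( D(x)\,\nabla\phi(x) \bigr)
  \;-\; \sigma_a(x)\,\phi(x) \;+\; Q(x) \;=\; 0,
\qquad
D(x) \;=\; \frac{1}{3\bigl(\sigma_a(x) + \sigma_s'(x)\bigr)}
```

    The report's contribution lies in how the source term and the boundary condition are modeled so that this equation remains accurate in heterogeneous media.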

    A radiative transfer framework for rendering materials with anisotropic structure

    The radiative transfer framework that underlies all current rendering of volumes is limited to scattering media whose properties are invariant to rotation. Many systems allow for "anisotropic scattering," in the sense that scattered intensity depends on the scattering angle, but the standard equation assumes that the structure of the medium is isotropic. This limitation impedes physics-based rendering of volume models of cloth, hair, skin, and other important volumetric or translucent materials that do have anisotropic structure. This paper presents an end-to-end formulation of physics-based volume rendering of anisotropic scattering structures, allowing these materials to become full participants in global illumination simulations. We begin with a generalized radiative transfer equation, derived from scattering by oriented non-spherical particles. Within this framework, we propose a new volume scattering model analogous to the well-known family of microfacet surface reflection models; we derive an anisotropic diffusion approximation, including the weak form required for finite element solution and a way to compute the diffusion matrix from the parameters of the scattering model; and we also derive a new anisotropic dipole BSSRDF for anisotropic translucent materials. We demonstrate results from Monte Carlo, finite element, and dipole simulations. All these contributions are readily implemented in existing rendering systems for volumes and translucent materials, and they all reduce to the standard practice in the isotropic case. National Science Foundation (grants CCF-0347303 and CCF-0541105), Unilever Corporation.
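
    The key departure from the standard RTE is that extinction depends on the ray direction, not just position. A toy model (my assumption, not the paper's scattering model) of a medium filled with aligned flat "platelet" particles makes this concrete: the extinction a ray sees is proportional to the particles' projected area along that ray:

```python
# Toy anisotropically structured medium: flat platelets all sharing one
# normal. Extinction is proportional to projected particle area, so it
# depends on ray direction -- unlike the scalar sigma_t of the standard
# (isotropic-structure) radiative transfer equation.
def sigma_t(direction, normal, density=1.0, area=1.0):
    # Projected area of an oriented platelet: |cos| of the angle between
    # the ray direction and the platelet normal.
    cos_theta = sum(d * n for d, n in zip(direction, normal))
    return density * area * abs(cos_theta)

normal = (0.0, 0.0, 1.0)
head_on = (0.0, 0.0, 1.0)   # looking along the platelet normal
grazing = (1.0, 0.0, 0.0)   # looking edge-on

print(sigma_t(head_on, normal))  # maximal extinction: 1.0
print(sigma_t(grazing, normal))  # platelets present no area: 0.0
```

    Spherical particles would give the same projected area in every direction, collapsing sigma_t back to a constant, which mirrors the paper's claim that its contributions reduce to standard practice in the isotropic case.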

    Multidimensional Lightcuts (to appear, SIGGRAPH 2006)

    Multidimensional lightcuts is a new scalable method for efficiently rendering rich visual effects such as motion blur, participating media, depth of field, and spatial anti-aliasing in complex scenes. It introduces a flexible, general rendering framework that unifies the handling of such effects by discretizing the integrals into large sets of gather and light points and adaptively approximating the sum of all possible gather-light pair interactions. We create an implicit hierarchy, the product graph, over the gather-light pairs to rapidly and accurately approximate the contribution from hundreds of millions of pairs per pixel while only evaluating a tiny fraction (e.g., 200–1,000). We build upon the techniques of the prior Lightcuts method for complex illumination at a point; however, by considering the complete pixel integrals, we achieve much greater efficiency and scalability. Our example results demonstrate efficient handling of volume scattering, camera focus, and motion of lights, cameras, and geometry. For example, enabling high-quality motion blur with 256 × temporal sampling requires only a 6.7 × increase in shading cost in a scene with complex moving geometry, materials, and illumination.
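
    The underlying Lightcuts-style machinery can be sketched as a cluster tree over light contributions, with a "cut" refined only where a conservative error bound is too loose. The node layout, representative choice, and bound below are illustrative assumptions, not the paper's implementation:

```python
import random

random.seed(1)

# Each light: (intensity, attenuation); attenuation stands in for the
# per-shading-point BRDF/visibility/distance factors.
lights = [(random.uniform(0.0, 1.0), random.uniform(0.4, 0.6))
          for _ in range(1024)]
exact = sum(i * a for i, a in lights)

def build(nodes):
    # Cluster node: (intensity_sum, representative_attenuation, children).
    while len(nodes) > 1:
        nxt = []
        for k in range(0, len(nodes) - 1, 2):
            l, r = nodes[k], nodes[k + 1]
            nxt.append((l[0] + r[0], l[1], [l, r]))  # left child is rep
        if len(nodes) % 2:
            nxt.append(nodes[-1])
        nodes = nxt
    return nodes[0]

def cut(node, threshold):
    isum, rep_a, children = node
    # Attenuations lie in [0.4, 0.6], so cluster error <= isum * 0.2.
    if not children or isum * 0.2 <= threshold:
        return isum * rep_a, 1        # approximate whole cluster by its rep
    v1, n1 = cut(children[0], threshold)
    v2, n2 = cut(children[1], threshold)
    return v1 + v2, n1 + n2           # refine: bound was too loose

root = build([(i, a, []) for i, a in lights])
approx, evaluated = cut(root, threshold=5.0)
rel_err = abs(approx - exact) / exact
print(evaluated, round(rel_err, 3))
```

    Multidimensional lightcuts extends this idea from a tree over lights to a product graph over gather-light pairs, so a single cut covers the full pixel integral.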