5 research outputs found

    Perceptually-Driven Decision Theory for Interactive Realistic Rendering

    Get PDF
    In this paper we introduce a new approach to realistic rendering at interactive rates on commodity graphics hardware. The approach uses efficient perceptual metrics within a decision-theoretic framework to optimally order rendering operations, producing images of the highest visual quality within system constraints. We demonstrate the usefulness of this approach for applications such as diffuse texture caching, environment map prioritization, and radiosity mesh simplification. Although we address here the problem of realistic rendering at interactive rates, the perceptually-based decision-theoretic methodology we introduce can be usefully applied in many areas of computer graphics.
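
    The ordering step this abstract describes can be pictured as a budgeted greedy scheduler. The sketch below is a minimal illustration rather than the authors' algorithm: the operation records, their 'cost_ms' and 'benefit' fields, and the frame budget are hypothetical stand-ins for the paper's perceptual metrics and decision-theoretic machinery.

import heapq

def schedule_operations(operations, frame_budget_ms):
    """Greedy, budgeted ordering of rendering operations.

    Each operation is a dict with hypothetical fields:
      'cost_ms'  - estimated execution time on the graphics hardware
      'benefit'  - estimated reduction in perceived image error
    Operations with the best benefit-per-cost ratio are chosen first,
    until the per-frame time budget is exhausted.
    """
    # Max-heap keyed on benefit per millisecond (negated for heapq's min-heap).
    heap = [(-op['benefit'] / op['cost_ms'], i, op)
            for i, op in enumerate(operations)]
    heapq.heapify(heap)

    spent, chosen = 0.0, []
    while heap:
        _, _, op = heapq.heappop(heap)
        if spent + op['cost_ms'] <= frame_budget_ms:
            chosen.append(op)
            spent += op['cost_ms']
    return chosen

# Example: three candidate updates competing for a 10 ms frame budget.
ops = [{'name': 'refine_env_map', 'cost_ms': 6.0, 'benefit': 3.0},
       {'name': 'upload_texture', 'cost_ms': 2.0, 'benefit': 2.5},
       {'name': 'refine_mesh',    'cost_ms': 5.0, 'benefit': 1.0}]
print([op['name'] for op in schedule_operations(ops, 10.0)])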

    A Perceptually-Based Texture Caching Algorithm for Hardware-Based Rendering

    No full text
    The performance of hardware-based interactive rendering systems is often constrained by polygon fill rates and texture map capacity rather than polygon count alone. We present a new software texture caching algorithm that optimizes the use of texture memory in current graphics hardware by dynamically allocating more memory to the textures that have the greatest visual importance in the scene. The algorithm employs a resource allocation scheme that decides which resolution to use for each texture held in board memory. The allocation scheme estimates the visual importance of textures using a perceptually-based metric that takes into account viewpoint and vertex illumination as well as texture contrast and frequency content. This approach provides high frame rates while maximizing image quality.
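
    As a rough illustration of the resource allocation scheme described above (not the paper's actual metric or algorithm), the following sketch upgrades textures one mip level at a time wherever the estimated importance gain per byte of board memory is largest; the 'levels' and 'importance' fields are assumed inputs supplied by some perceptual metric.

def allocate_texture_memory(textures, memory_budget):
    """Choose a resolution (mip level index) for every texture.

    textures: list of dicts with hypothetical fields
      'levels'     - memory cost in bytes of each level, low to high resolution
      'importance' - perceptual importance score at each level
    Every texture starts at its lowest level; levels are then upgraded
    greedily by importance gained per extra byte until memory runs out.
    """
    chosen = [0] * len(textures)
    used = sum(t['levels'][0] for t in textures)

    while True:
        best_i, best_gain = None, 0.0
        for i, t in enumerate(textures):
            nxt = chosen[i] + 1
            if nxt >= len(t['levels']):
                continue                      # already at full resolution
            extra = t['levels'][nxt] - t['levels'][chosen[i]]
            gain = (t['importance'][nxt] - t['importance'][chosen[i]]) / extra
            if used + extra <= memory_budget and gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:                    # no affordable upgrade remains
            return chosen
        nxt = chosen[best_i] + 1
        used += textures[best_i]['levels'][nxt] - textures[best_i]['levels'][chosen[best_i]]
        chosen[best_i] = nxt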

    On expressing different concurrency paradigms on virtual execution environment

    Get PDF
    Virtual execution environments (VEEs) such as the Java Virtual Machine (JVM) and the Microsoft Common Language Runtime (CLR) were designed when the dominant computer architecture presented a von Neumann interface to programs: a single processor hiding all the complexity of parallel computation inside its design. Programs are expressed in an intermediate form executed by the VEE, which defines an abstract computational model whose concurrency model was shaped by these design choices and essentially exposes the multi-threading model of the underlying operating system. Recently, computer systems have introduced computational units in which concurrency is explicit and under program control. Relevant examples are graphics processing units (GPUs, such as those from Nvidia or AMD) and the Cell BE architecture, which allow explicit control of individual processing units, local memories, and communication channels. Unfortunately, programs designed for virtual machines cannot access these resources, since they are not available through the abstractions provided by the VEE. A major redesign of VEEs seems necessary to bridge this gap. In this thesis we study the problem of exposing non-von Neumann computing resources within the virtual machine without redesigning the whole execution infrastructure. We express parallel computations by relying on extensible meta-data and reflection to encode information, and then use meta-programming techniques to rewrite the program into an equivalent one that uses the special-purpose underlying architecture. We provide a case study in which this approach is applied to compiling Common Intermediate Language (CIL) methods to multi-core GPUs; we show that it is possible to access these non-standard computing resources without any change to the virtual machine design.
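
    A minimal sketch of the tagging-and-rewriting idea, using Python decorators and reflection as stand-ins for CIL custom attributes and meta-programming: the 'offload' attribute, the 'rewrite' step, and the fake GPU backend are all hypothetical and only illustrate the shape of the approach, not the thesis implementation.

import inspect

def offload(target):
    """Hypothetical attribute marking a method as a candidate for 'target'."""
    def tag(fn):
        fn.__offload_target__ = target
        return fn
    return tag

class Kernels:
    @offload('gpu')
    def saxpy(self, a, x, y):
        # Reference (CPU) semantics; a rewriter may substitute this body.
        return [a * xi + yi for xi, yi in zip(x, y)]

def rewrite(obj, backends):
    """Reflectively replace tagged methods with backend-specific versions."""
    for name, fn in inspect.getmembers(obj, callable):
        target = getattr(fn, '__offload_target__', None)
        if target in backends:
            setattr(obj, name, backends[target](fn))

def fake_gpu_backend(fn):
    # Stand-in for generating and dispatching accelerator code.
    def wrapper(*args, **kwargs):
        print(f'dispatching {fn.__name__} to GPU code')
        return fn(*args, **kwargs)
    return wrapper

k = Kernels()
rewrite(k, {'gpu': fake_gpu_backend})
print(k.saxpy(2.0, [1, 2], [3, 4]))   # -> [5.0, 8.0]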