Feature-Based Textures
This paper introduces feature-based textures, a new image representation that combines features and texture samples for high-quality texture mapping. Features identify boundaries within a texture where samples change discontinuously. They can be extracted from vector graphics representations or explicitly added to raster images to improve sharpness. Texture lookups are then interpolated from samples while respecting these boundaries. We present results from a software implementation of this technique, demonstrating quality, efficiency, and low memory overhead.
Vector Graphics for Real-time 3D Rendering
Algorithms are presented that enable the use of vector graphics representations
of images in texture maps for real-time 3D rendering.
Vector graphics images are resolution independent and
can be zoomed arbitrarily without losing detail
or crispness. Many important types of images, including text and
other symbolic information, are best represented in vector form. Vector
graphics textures can also be used as transparency mattes to augment
geometric detail in models via trim curves.
Spline curves are used to represent boundaries around regions
in standard vector graphics representations, such as PDF and SVG.
Antialiased rendering of such content can be obtained by thresholding
implicit representations of these curves.
The distance function is an especially useful implicit representation.
Accurate distance function computations would also allow the implementation
of special effects such as embossing.
Unfortunately, computing the true distance to higher order spline curves
is too expensive for real-time rendering.
Therefore, normally either the distance is approximated
by normalizing some other implicit representation
or the spline curves are approximated with simpler primitives.
In this thesis, three methods for
rendering vector graphics textures in real time are introduced,
based on various approximations of the distance computation.
The first and simplest approach to the distance computation
approximates curves with line segments.
Unfortunately, approximation with line segments gives only C0 continuity.
In order to improve smoothness, spline curves can also be approximated
with circular arcs.
This approximation has C1 continuity and computing the distance
to a circular arc is only slightly more expensive than
computing the distance to a line segment.
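As a concrete illustration, the two point-to-primitive distance queries these approximations rely on can be sketched as follows (a minimal sketch, not the thesis's GPU code; the arc's angle-range test is simplified and ignores wraparound):

```python
import math

def dist_to_segment(p, a, b):
    """Distance from point p to line segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    # Project p onto the segment and clamp the parameter to [0, 1].
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def dist_to_arc(p, center, radius, a0, a1):
    """Distance from point p to a circular arc spanning angles a0..a1
    (a0 < a1, in radians) around `center`."""
    px, py = p; cx, cy = center
    ang = math.atan2(py - cy, px - cx)
    if a0 <= ang <= a1:
        # Nearest point lies on the arc: distance to the full circle.
        return abs(math.hypot(px - cx, py - cy) - radius)
    # Otherwise the nearest point is one of the arc's endpoints.
    e0 = (cx + radius * math.cos(a0), cy + radius * math.sin(a0))
    e1 = (cx + radius * math.cos(a1), cy + radius * math.sin(a1))
    return min(math.hypot(px - e0[0], py - e0[1]),
               math.hypot(px - e1[0], py - e1[1]))
```

The arc query costs only one extra square root and trigonometric comparison over the segment query, which is the point made above.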
Finally, an iterative algorithm
is discussed that has good performance in practice and can compute the
distance to any parametrically differentiable curve
(including polynomial splines of any order)
robustly. This algorithm is demonstrated in the context of a system
capable of real-time rendering of SVG content in a texture map on a GPU.
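The iterative idea can be caricatured as minimising the squared distance f(t) = |B(t) - p|² with clamped Newton steps restarted from several seed parameters (a simplified sketch under my own naming, not the thesis's algorithm verbatim; shown for a quadratic Bézier, but any parametrically differentiable curve with first and second derivatives would fit):

```python
def bezier2(P0, P1, P2):
    """Return a quadratic Bézier curve and its first two derivatives."""
    def B(t):
        u = 1.0 - t
        return (u*u*P0[0] + 2*u*t*P1[0] + t*t*P2[0],
                u*u*P0[1] + 2*u*t*P1[1] + t*t*P2[1])
    def dB(t):
        u = 1.0 - t
        return (2*u*(P1[0]-P0[0]) + 2*t*(P2[0]-P1[0]),
                2*u*(P1[1]-P0[1]) + 2*t*(P2[1]-P1[1]))
    def d2B(t):
        return (2*(P2[0] - 2*P1[0] + P0[0]), 2*(P2[1] - 2*P1[1] + P0[1]))
    return B, dB, d2B

def dist_to_curve(p, B, dB, d2B, seeds=8, iters=20):
    """Newton iteration on f(t) = |B(t)-p|^2, restarted from several seeds."""
    best = float('inf')
    for i in range(seeds + 1):
        t = i / seeds
        for _ in range(iters):
            bx, by = B(t); dx, dy = dB(t); ddx, ddy = d2B(t)
            rx, ry = bx - p[0], by - p[1]
            g = 2 * (rx*dx + ry*dy)                      # f'(t)
            h = 2 * (dx*dx + dy*dy + rx*ddx + ry*ddy)    # f''(t)
            if h <= 0:
                break                                    # step unreliable here
            t = min(1.0, max(0.0, t - g / h))            # clamp to the curve
        bx, by = B(t)
        best = min(best, ((bx - p[0])**2 + (by - p[1])**2) ** 0.5)
    return best
```

The multiple seeds are what buys robustness: Newton alone can converge to a distant local minimum of f.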
Data structures and acceleration algorithms in the context of massively
parallel GPU architectures are also discussed.
These data structures and acceleration structures allow arbitrary vector
content (with space-variant complexity, and overlapping regions) to be
represented in a random-access texture.
Ray Tracing Gems
This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.

What you'll learn:
- The latest ray tracing techniques for developing real-time applications in multiple domains
- Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR)
- How to implement high-performance graphics for interactive visualizations, games, simulations, and more

Who this book is for:
- Developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing
- Students looking to learn about best practices in these areas
- Enthusiasts who want to understand and experiment with their new GPU.
Hardware accelerated computer graphics algorithms
The advent of shaders in the latest generations of graphics hardware, which has made consumer-level graphics hardware partially programmable, makes this an ideal time to investigate new graphical techniques and algorithms, and to improve upon existing ones.
This work looks at areas of current interest within the graphics community: texture filtering, bump mapping, and depth-of-field simulation. These areas have enjoyed much attention over the history of computer graphics, but they provide a great deal of scope for further investigation in the light of recent hardware advances.
A new hardware implementation of a texture filtering technique, aimed at consumer-level hardware, is presented. This novel technique utilises Fourier-space image filtering to reduce aliasing. Investigation shows that the technique provides reduced levels of aliasing along with levels of detail comparable to currently popular techniques. This adds to the community's knowledge by expanding the range of available techniques, and by increasing the number of techniques that offer both real-time performance and the potential for easy integration with current consumer-level graphics hardware.
Bump mapping is a long-standing and well understood technique. Variations and extensions of it have been popular in real-time 3D computer graphics for many years. A new hardware implementation of a technique termed Super Bump Mapping (SBM) is introduced. Expanding on the work of Cant and Langensiepen [1], the SBM technique adopts the novel approach of using normal maps which supply multiple vectors per texel. This allows the retention of much more detail and overcomes some of the aliasing deficiencies of standard bump mapping caused by the standard single vector approach and the non-linearity of the bump mapping process.
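The nonlinearity mentioned above can be made concrete with a small numerical illustration (function names are mine, not the paper's; Lambertian shading is assumed): shading each stored normal and averaging the results is not the same as shading one averaged normal.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert(normal, light):
    """Lambertian diffuse term: max(N dot L, 0)."""
    return max(sum(a * b for a, b in zip(normal, light)), 0.0)

def shade_multi(normals, light):
    """Super-bump-style: shade each stored normal, then average."""
    return sum(lambert(n, light) for n in normals) / len(normals)

def shade_single(normals, light):
    """Standard bump mapping: average the normals first, shade once."""
    avg = normalize(tuple(sum(c) / len(normals) for c in zip(*normals)))
    return lambert(avg, light)
```

With two opposed tilted normals in one texel and the light aligned with one of them, the two results differ, which is exactly the detail a single averaged vector cannot retain.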
A novel depth-of-field algorithm is proposed, which is an extension of the author's previous work [2][3][4]. The technique is aimed at consumer-level hardware and attempts to raise the bar for realism by providing support for the 'see-through' effect. This effect is a vital factor in the realistic appearance of simulated depth of field, and has been overlooked in real-time computer graphics due to the complexity of an accurate calculation. The implementation of this new algorithm on current consumer-level hardware is investigated, and it is concluded that while current hardware is not yet capable enough, future iterations will provide the necessary functional and performance increases.
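Part of what makes accurate depth of field expensive is that the blur radius varies per pixel with depth. Under the standard thin-lens model (a background sketch, not code from the thesis) the circle-of-confusion diameter is:

```python
def circle_of_confusion(obj_dist, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter on the sensor for an object
    at obj_dist, with the lens (focal length focal_len, aperture diameter
    `aperture`) focused at focus_dist. All distances in the same units.

    c = A * f * |S2 - S1| / (S2 * (S1 - f))
    """
    return (aperture * focal_len * abs(obj_dist - focus_dist)
            / (obj_dist * (focus_dist - focal_len)))
```

Objects at the focus distance get zero blur, and blur grows as objects move away from it, which is why sharp background detail can show through a defocused foreground edge.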
Decoupled Sampling for Real-Time Graphics Pipelines
We propose decoupled sampling, an approach that decouples shading from visibility sampling in order to enable motion blur and depth-of-field at reduced cost. More generally, it enables extensions of modern real-time graphics pipelines that provide controllable shading rates to trade off quality for performance. It can be thought of as a generalization of GPU-style multisample antialiasing (MSAA) to support unpredictable shading rates, with arbitrary mappings from visibility to shading samples as introduced by motion blur, depth-of-field, and adaptive shading. It is inspired by the Reyes architecture in offline rendering, but targets real-time pipelines by driving shading from visibility samples as in GPUs, and removes the need for micropolygon dicing or rasterization. Decoupled sampling works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. We present extensions of two modern GPU pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion blur and depth-of-field, as well as variable and adaptive shading rates.
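The core mechanism, a many-to-one map from visibility to shading samples plus a memoization buffer, fits in a few lines (a schematic sketch; the paper's pipelines are hardware architectures, and the quantization below stands in for its arbitrary mappings):

```python
def render_pixel(visibility_samples, shade, shading_rate=1):
    """Decoupled-sampling sketch: many visibility samples map to fewer
    shading samples, and shading results are memoized and reused.

    visibility_samples: list of (x, y) sample positions.
    shade: function (sx, sy) -> color, the expensive shader.
    shading_rate: size of a shading-grid cell, in visibility-sample units.
    """
    cache = {}      # the memoization buffer
    colors = []
    for (x, y) in visibility_samples:
        # Many-to-one hash: quantize the visibility position to a shading sample.
        key = (int(x // shading_rate), int(y // shading_rate))
        if key not in cache:
            cache[key] = shade(*key)   # shade once per shading sample
        colors.append(cache[key])      # reuse across visibility samples
    return colors, len(cache)
```

Raising `shading_rate` coarsens the shading grid, which is the controllable quality-for-performance trade-off the abstract describes.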
Decoupled Sampling for Graphics Pipelines
We propose a generalized approach to decoupling shading from visibility sampling in graphics pipelines, which we call decoupled sampling. Decoupled sampling enables stochastic supersampling of motion and defocus blur at reduced shading cost, as well as controllable or adaptive shading rates which trade off shading quality for performance. It can be thought of as a generalization of multisample antialiasing (MSAA) to support complex and dynamic mappings from visibility to shading samples, as introduced by motion and defocus blur and adaptive shading. It works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. Decoupled sampling is inspired by the Reyes rendering architecture, but like traditional graphics pipelines, it shades fragments rather than micropolygon vertices, decoupling shading from the geometry sampling rate. Also unlike Reyes, decoupled sampling only shades fragments after precise computation of visibility, reducing overshading.
We present extensions of two modern graphics pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications of decoupled sampling and blur, and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion and defocus blur, as well as variable and adaptive shading rates.
Improving Filtering for Computer Graphics
When drawing images onto a computer screen, the information in the scene is typically
more detailed than can be displayed. Most objects, however, will not be close to the
camera, so details have to be filtered out, or anti-aliased, when the objects are drawn on
the screen. I describe new methods for filtering images and shapes with high fidelity while
using computational resources as efficiently as possible.
Vector graphics are everywhere, from drawing 3D polygons to 2D text and maps for
navigation software. Because of its numerous applications, having a fast, high-quality
rasterizer is important. I developed a method for analytically rasterizing shapes using
wavelets. This approach allows me to produce accurate 2D rasterizations of images and
3D voxelizations of objects, which is the first step in 3D printing. I later improved my
method to handle more filters. The resulting algorithm creates higher-quality images than
commercial software such as Adobe Acrobat and is several times faster than the most
highly optimized commercial products.
The quality of texture filtering also has a dramatic impact on the quality of a rendered
image. Textures are images that are applied to 3D surfaces, which typically cannot be
mapped to the 2D space of an image without introducing distortions. For situations in
which it is impossible to change the rendering pipeline, I developed a method for precomputing
image filters over 3D surfaces. If I can also change the pipeline, I show that it
is possible to improve the quality of texture sampling significantly in real-time rendering
while using the same memory bandwidth as traditional methods.
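For context, the traditional baseline referred to here is mipmapping with trilinear filtering, which in one dimension reduces to a linear interpolation between two prefiltered levels (a baseline sketch under my own naming, not the thesis's improved sampler):

```python
def build_mips(texels):
    """Box-filter mip chain for a 1D texture of power-of-two length."""
    levels = [list(texels)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([(prev[2*i] + prev[2*i + 1]) / 2.0
                       for i in range(len(prev) // 2)])
    return levels

def sample_level(level, u):
    """Linear filtering within one mip level; u in [0, 1)."""
    n = len(level)
    x = u * n - 0.5
    i = int(x // 1)
    f = x - i
    a = level[max(0, min(n - 1, i))]       # clamp addressing at the edges
    b = level[max(0, min(n - 1, i + 1))]
    return a * (1 - f) + b * f

def trilinear(levels, u, lod):
    """Lerp between the two mip levels bracketing the level of detail."""
    lo = max(0, min(len(levels) - 1, int(lod)))
    hi = min(len(levels) - 1, lo + 1)
    f = min(max(lod - lo, 0.0), 1.0)
    return sample_level(levels[lo], u) * (1 - f) + sample_level(levels[hi], u) * f
```

The box filter and linear reconstruction are exactly the approximations whose quality such work seeks to improve at the same bandwidth.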
Modelling and Visualisation of the Optical Properties of Cloth
Cloth and garment visualisations are widely used in fashion and interior design, in the entertainment, automotive, and nautical industries, and are indispensable elements of visual communication. Modern appearance models attempt to offer a complete solution for the visualisation of complex cloth properties. In the review part of the chapter, advanced methods that enable visualisation at micron resolution, methods used in the three-dimensional (3D) visualisation workflow, and methods used for research purposes are presented. Within the review, methods offering a comprehensive approach, and experiments on explicit cloth attributes that present specific optical phenomena, are analysed. The review of appearance models includes surface and image-based models, and volumetric and explicit models. Each group is presented with a representative research group, along with the applications and limitations of the methods. In the final part of the chapter, the visualisation of cloth specularity and porosity with an uneven surface is studied. The study and visualisation were performed using image data obtained with photography. The acquisition of structure information on a large scale enables the recording of structure irregularities that are very common in historical textiles and laces, and also in artistic and experimental pieces of cloth. The contribution ends with the presentation of cloth visualised with the use of specular and alpha maps, the result of the image-processing workflow.
Festschrift zum 60. Geburtstag von Wolfgang Strasser
This Festschrift is dedicated to Prof. Dr.-Ing. Dr.-Ing. E.h. Wolfgang Straßer on the occasion of his 60th birthday. A number of computer graphics researchers, all of whom come from the "Tübingen school", have contributed essays to this volume, in some cases together with their students.
The contributions range from object reconstruction from image features, through physical simulation, to rendering and visualisation, and from theoretically oriented essays to practical present and future applications. This thematic variety vividly illustrates the breadth and diversity of the science of computer graphics as it is practised at the Straßer chair in Tübingen.
The mere fact that ten computer graphics professors at universities and universities of applied sciences come from Tübingen shows Professor Straßer's formative influence on the computer graphics landscape in Germany. That several physicists and mathematicians, won over to this field in Tübingen, are among them is above all due to his dedication and his charisma.
Beyond the high regard for Professor Straßer's scientific achievements, his personality certainly played a decisive part in the authors' spontaneous willingness to contribute to this Festschrift. With extraordinary personal commitment he supports students, doctoral candidates, and habilitation candidates, arranges research contacts through his rich international network, and thus creates exceptionally good conditions for independent scientific work.
With their contributions, the authors wish to bring Wolfgang Straßer joy, and they join their thanks with the hope of continuing to share in his professionally and personally rich and enriching work.
Image synthesis based on a model of human vision
Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading.
However, psychophysical experiments have revealed that viewers attend only to certain informative regions within a presented image. Furthermore, it has been shown that these visually important regions contain low-level visual feature differences that attract the attention of the viewer.
This thesis will present a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions by their visual importance. Efficiency gains are therefore reaped, without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal. Firstly, the design of an appropriate region-based model of visual importance, and secondly, the modification of progressive rendering techniques to effect an importance-based rendering approach.
A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures.
A modified approach to progressive ray-tracing is also presented. This new approach uses the visual importance model to guide the progressive refinement of an image. In addition, this concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains reaped from using this method of progressive rendering.
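The refinement strategy can be caricatured as allocating a fixed ray budget across image regions in proportion to their computed importance (an illustrative sketch; the function name and the allocation rule are mine, not the thesis's model):

```python
def allocate_samples(importances, budget):
    """Split a ray budget across image regions in proportion to their
    visual-importance scores, giving every region at least one sample."""
    total = sum(importances)
    # Provisional proportional allocation, minimum one sample per region.
    alloc = [max(1, int(budget * w / total)) for w in importances]
    # Hand out samples left over from rounding, most important regions first.
    order = sorted(range(len(importances)),
                   key=lambda i: importances[i], reverse=True)
    i = 0
    while sum(alloc) < budget:
        alloc[order[i % len(order)]] += 1
        i += 1
    return alloc
```

Progressive refinement then traces each region's share first at coarse resolution and refines high-importance regions earlier, which is where the efficiency gains come from.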
This visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency as long as the overall visual impression of the scene is maintained. Different aspects of the approach should also find applications in image compression, image retrieval, progressive data transmission, and active robotic vision.