An Overview of Rendering from Volume Data --- including Surface and Volume Rendering
Volume rendering is a term often used ambiguously in science. One meaning often quoted is `to render any three dimensional volume data set'; however, this categorisation also contains `surface rendering'. Surface rendering is a technique for visualising a geometric representation of a surface extracted from a three dimensional volume data set. A more precise definition of volume rendering incorporates only the direct visualisation of volumes, without the use of intermediate surface geometry representations. Hence we state: `Volume rendering is the direct visualisation of any three dimensional volume data set, without the use of an intermediate geometric representation for isosurfaces'; `Surface rendering is the visualisation of a surface, from a geometric approximation of an isosurface, within a volume data set'; where an isosurface is a surface formed by connecting data points of equal value or density within a volume. This paper is an overview of both surface rendering and volume rendering techniques. Surface rendering mainly consists of contouring lines over data points and triangulating between contours. Volume rendering methods consist of ray casting techniques in which a ray is cast from the viewing plane into the object and the transparency, opacity and colour are calculated for each cell; rays are typically cast until an opaque object is `hit' or the ray exits the volume.
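The ray-casting process described in this abstract can be made concrete with a short example. The following Python/NumPy snippet is a minimal sketch of front-to-back compositing along a single ray, not the paper's specific algorithm; the function names, the user-supplied `transfer_fn`, the nearest-neighbour sampling and the 0.99 early-termination threshold are all assumptions chosen for brevity.

```python
import numpy as np

def ray_cast(volume, transfer_fn, origin, direction, step=1.0, max_steps=512):
    """Minimal front-to-back ray caster over a regular scalar volume.

    `volume` is a 3-D NumPy array of scalar samples; `transfer_fn` maps a
    scalar value to (r, g, b, alpha). Illustrative only: nearest-neighbour
    sampling, no gradients or shading.
    """
    color = np.zeros(3)
    alpha = 0.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)

    for _ in range(max_steps):
        idx = np.round(pos).astype(int)
        # Stop when the ray exits the volume.
        if np.any(idx < 0) or np.any(idx >= volume.shape):
            break
        r, g, b, a = transfer_fn(volume[tuple(idx)])
        # Front-to-back "over" compositing of the cell's colour and opacity.
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        # Early ray termination once the accumulated opacity is nearly opaque.
        if alpha > 0.99:
            break
        pos += step * d
    return color, alpha
```

Repeating this loop for one ray per pixel of the viewing plane yields the final image; the early-termination test is what makes the "cast until an opaque object is hit" behaviour cheap in practice.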
A survey of real-time crowd rendering
In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting performance and realism of crowds such as lighting, shadowing, clothing and variability. Finally we provide an exhaustive comparison of the most relevant approaches in the field.
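To make the runtime LoD selection and culling ideas concrete, the following Python sketch builds a per-frame draw list. It is a generic illustration rather than any surveyed system: the `LOD_THRESHOLDS` distances, the point-versus-plane visibility test (real systems cull bounding volumes, not points), and all function names are assumptions.

```python
import numpy as np

# Hypothetical LoD thresholds: below each distance, use the matching
# representation (full polygonal mesh, simplified mesh, image-based impostor).
LOD_THRESHOLDS = [(10.0, "full_mesh"), (40.0, "simplified_mesh"), (float("inf"), "impostor")]

def select_lod(character_pos, camera_pos):
    """Pick a level of detail from camera distance (one of many possible criteria)."""
    dist = np.linalg.norm(np.asarray(character_pos) - np.asarray(camera_pos))
    for max_dist, lod in LOD_THRESHOLDS:
        if dist < max_dist:
            return lod

def visible(character_pos, frustum_planes):
    """Trivial frustum-culling test: point against plane half-spaces (ax+by+cz+d >= 0)."""
    p = np.append(np.asarray(character_pos, dtype=float), 1.0)
    return all(np.dot(plane, p) >= 0.0 for plane in frustum_planes)

def build_draw_list(characters, camera_pos, frustum_planes):
    """Cull off-screen characters and bucket the rest by selected LoD."""
    buckets = {}
    for c in characters:
        if not visible(c["pos"], frustum_planes):
            continue
        buckets.setdefault(select_lod(c["pos"], camera_pos), []).append(c)
    return buckets
```

Bucketing by LoD is also what makes hardware instancing effective, since all characters in a bucket can share the same geometry in a single draw call.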
Adaptive User Perspective Rendering for Handheld Augmented Reality
Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user's head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering.
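As an illustration of the deferral idea described above (not the authors' actual pipeline), the sketch below uses OpenCV's pyramidal Lucas-Kanade tracker on the front-camera stream to estimate user motion cheaply, and only invokes the expensive face/head tracker when that motion exceeds a threshold; `MOTION_THRESHOLD` and the function names are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical threshold: median front-camera flow (pixels per frame) below which
# the user's head is assumed roughly static and the costly face tracker is skipped.
MOTION_THRESHOLD = 2.0

def estimate_user_motion(prev_gray, cur_gray, prev_pts):
    """Track sparse features with pyramidal Lucas-Kanade; return median displacement."""
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, prev_pts, None)
    mask = status.flatten() == 1
    good_prev = prev_pts[mask].reshape(-1, 2)
    good_cur = cur_pts[mask].reshape(-1, 2)
    if len(good_cur) == 0:
        return None
    return float(np.median(np.linalg.norm(good_cur - good_prev, axis=1)))

def head_pose_needed(prev_gray, cur_gray):
    """Run full face tracking only when the cheap flow estimate suggests real head motion."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                       qualityLevel=0.01, minDistance=7)
    if prev_pts is None:
        return True  # no trackable features; fall back to face tracking
    motion = estimate_user_motion(prev_gray, cur_gray, prev_pts)
    return motion is None or motion > MOTION_THRESHOLD
```

The optical flow step costs far less per frame than a face detector, which is the source of the savings on computationally limited devices.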
Scalable Interactive Volume Rendering Using Off-the-shelf Components
This paper describes an application of a second generation implementation of the Sepia architecture (Sepia-2) to interactive volumetric visualization of large rectilinear scalar fields. By employing pipelined associative blending operators in a sort-last configuration, a demonstration system with 8 rendering computers sustains 24 to 28 frames per second while interactively rendering large data volumes (1024x256x256 voxels, and 512x512x512 voxels). We believe interactive performance at these frame rates and data sizes is unprecedented. We also believe these results can be extended to other types of structured and unstructured grids and a variety of GL rendering techniques including surface rendering and shadow mapping. We show how to extend our single-stage crossbar demonstration system to multi-stage networks in order to support much larger data sizes and higher image resolutions. This requires solving a dynamic mapping problem for a class of blending operators that includes Porter-Duff compositing operators.
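The associative blending at the heart of such a sort-last pipeline can be illustrated with the Porter-Duff `over` operator. The following Python/NumPy sketch composites per-node partial images in visibility order; it shows only the compositing arithmetic for premultiplied RGBA images, not the Sepia hardware or its network mapping, and the function names are assumptions.

```python
import numpy as np

def over(front, back):
    """Porter-Duff 'over' for premultiplied RGBA images (H x W x 4 float arrays)."""
    a_front = front[..., 3:4]
    return front + (1.0 - a_front) * back

def composite_sort_last(partial_images):
    """Blend per-node partial renderings given in front-to-back (visibility) order.

    In a sort-last pipeline each node renders its slab of the volume to a full
    RGBA image; the compositing network then reduces those images with an
    associative operator such as 'over', so the reduction can be pipelined.
    """
    result = partial_images[0]
    for img in partial_images[1:]:
        result = over(result, img)
    return result
```

Because `over` is associative (though not commutative), the reduction can be arranged as a pipeline or tree across compositing stages as long as visibility order is preserved.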
Sketchy rendering for information visualization
We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These primitives allow higher-level graphical features such as bar charts, line charts, treemaps and node-link diagrams to be drawn in a sketchy style with a specified degree of sketchiness. The framework is designed to be easily integrated into existing visualization implementations with minimal programming modification or design effort. We show examples of use for statistical graphics, conveying spatial imprecision and for enhancing aesthetic and narrative qualities of visualization. We evaluate user perception of sketchiness of areal features through a series of stimulus-response tests in order to assess users' ability to place sketchiness on a ratio scale, and to estimate area. Results suggest relative area judgment is compromised by sketchy rendering and that its influence is dependent on the shape being rendered. They show that degree of sketchiness may be judged on an ordinal scale but that its judgement varies strongly between individuals. We evaluate higher-level impacts of sketchiness through user testing of scenarios that encourage user engagement with data visualization and willingness to critique visualization design. Results suggest that where a visualization is clearly sketchy, engagement may be increased and that attitudes to participating in visualization annotation are more positive. The results of our work have implications for effective information visualization design that go beyond the traditional role of sketching as a tool for prototyping or its use for an indication of general uncertainty.
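Although the paper's renderer targets the Processing environment, the core idea of a sketchy line primitive can be illustrated in a few lines of Python/NumPy. The sketch below is a generic approximation, not the authors' algorithm: it draws each line as a small number of jittered polylines, with the jitter amplitude scaled by a `sketchiness` parameter; all names and the 1% amplitude constant are assumptions.

```python
import numpy as np

def sketchy_line(p0, p1, sketchiness=1.0, strokes=2, segments=8, rng=None):
    """Approximate a hand-drawn line as a few jittered polylines.

    Returns a list of polylines (arrays of 2-D points). `sketchiness` scales
    the random perpendicular offset applied to interior points, roughly
    mimicking the wobble of a hand-drawn stroke.
    """
    rng = rng or np.random.default_rng()
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    direction = p1 - p0
    length = np.linalg.norm(direction)
    normal = np.array([-direction[1], direction[0]]) / max(length, 1e-9)

    polylines = []
    for _ in range(strokes):
        t = np.linspace(0.0, 1.0, segments + 1)
        points = p0 + np.outer(t, direction)
        # Perturb interior points perpendicular to the line; endpoints stay put
        # so the stroke still connects where the caller expects it to.
        jitter = rng.normal(0.0, 0.01 * length * sketchiness, size=segments - 1)
        points[1:-1] += np.outer(jitter, normal)
        polylines.append(points)
    return polylines
```

Higher-level marks (bars, polygons, ellipses) can then be assembled from this perturbed primitive, which is how a single sketchiness parameter propagates to whole charts.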
Selective rendering for efficient ray traced stereoscopic images
Depth-related visual effects are a key feature of many virtual environments. In stereo-based systems, the depth effect can be produced by delivering frames of disparate image pairs, while in monocular environments, the viewer has to extract this depth information from a single image by examining details such as perspective and shadows. This paper investigates, via a number of psychophysical experiments, whether we can reduce computational effort and still achieve perceptually high-quality rendering for stereo imagery. We examined selectively rendering the image pairs by exploiting the fusing capability and depth perception underlying human stereo vision. In ray-tracing-based global illumination systems, a higher image resolution introduces more computation to the rendering process since many more rays need to be traced. We first investigated whether we could utilise the human binocular fusing ability and significantly reduce the resolution of one of the image pairs and yet retain a high perceptual quality under stereo viewing conditions. Secondly, we evaluated subjects' performance on a specific visual task that required accurate depth perception. We found that subjects required far fewer rendered depth cues in the stereo viewing environment to perform the task well. Avoiding rendering these detailed cues saved significant computational time. In fact, it was possible to achieve better task performance in the stereo viewing condition at a combined rendering time for the image pairs less than that required for the single monocular image. The outcome of this study suggests that we can produce more efficient stereo images for depth-related visual tasks by selective rendering and exploiting inherent features of human stereo vision.
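A minimal sketch of the resolution-reduction idea, assuming a hypothetical `render_fn` standing in for an expensive ray tracer: the left view is rendered at full resolution while the right view is rendered at a fraction of the resolution and then upscaled, so the number of traced primary rays for that eye drops roughly quadratically with the reduction factor.

```python
import numpy as np

def render_stereo_pair(render_fn, width, height, reduction=4):
    """Render the left eye at full resolution and the right eye at reduced
    resolution, then upscale the right image by nearest-neighbour repetition.

    `render_fn(eye, w, h)` is a placeholder for an expensive ray tracer
    returning an (h, w, 3) array. Assumes width and height are divisible by
    `reduction`; the per-eye cost falls roughly by reduction**2.
    """
    left = render_fn("left", width, height)
    small = render_fn("right", width // reduction, height // reduction)
    right = np.repeat(np.repeat(small, reduction, axis=0), reduction, axis=1)
    return left, right
```

Under stereo viewing, binocular fusion can mask the reduced detail of the coarser eye, which is the perceptual effect the experiments above measure.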
