Three architectures for volume rendering
Volume rendering is a key technique in scientific visualization that lends itself to significant exploitable parallelism. The high computational demands of real-time volume rendering and continued technological advances in the area of VLSI give impetus to the development of special-purpose volume rendering architectures. This paper presents and characterizes three recently developed volume rendering engines that are based on the ray-casting method. A taxonomy of the algorithmic variants of ray-casting and details of each ray-casting architecture are discussed. The paper then compares the machine features and provides an outlook on future developments in the area of volume rendering hardware.
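As a concrete illustration of the ray-casting method the surveyed engines share, the following is a minimal sketch of front-to-back compositing along a single ray. The transfer function and sample values are hypothetical and not taken from any of the three architectures; hardware implementations pipeline this loop per ray.

```python
def composite_ray(samples, transfer):
    """Front-to-back alpha compositing along one ray.

    samples:  scalar values sampled along the ray, front to back
    transfer: maps a scalar sample to (color, opacity) in [0, 1]
    Returns the accumulated (color, opacity) for the ray.
    """
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)
        # standard front-to-back compositing recurrences
        color += (1.0 - alpha) * c * a
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:  # early ray termination, a classic optimization
            break
    return color, alpha

# illustrative transfer function: the scalar doubles as a grayscale
# color, with a fixed per-sample opacity of 0.5
tf = lambda s: (s, 0.5)
print(composite_ray([1.0, 1.0, 1.0], tf))  # → (0.875, 0.875)
```

The early-termination test is what makes front-to-back order attractive for hardware: once a ray is nearly opaque, the remaining samples need not be fetched.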
Logarithmic perspective shadow maps
The shadow map algorithm is a popular approach for generating shadows for real-time applications. Shadow maps are flexible and easy to implement, but they are prone to aliasing artifacts. To reduce aliasing artifacts we introduce logarithmic perspective shadow maps (LogPSMs). LogPSMs are based on a novel shadow map parameterization that consists of a perspective projection and a logarithmic transformation. They can be used for both point and directional light sources to produce hard shadows. To establish the benefits of LogPSMs, we perform an in-depth analysis of shadow map aliasing error and the error characteristics of existing algorithms. Using this analysis we compute a parameterization that produces near-optimal perspective aliasing error. This parameterization has high arithmetical complexity, which makes it less practical than existing methods. We show, however, that over all light positions, the simpler LogPSM parameterization produces the same maximum error as the near-optimal parameterization. We also show that compared with competing algorithms, LogPSMs produce significantly less aliasing error. Equivalently, for the same error as competing algorithms, LogPSMs require significantly less storage and bandwidth. We demonstrate the difference in shadow quality achieved with LogPSMs on several models of varying complexity. LogPSMs are rendered using logarithmic rasterization. We show how current GPU architectures can be modified incrementally to perform logarithmic rasterization at current GPU fill rates. Specifically, we modify the rasterizer to support rendering to a nonuniform grid with the same watertight rasterization properties as current rasterizers. We also describe a novel depth compression scheme to handle the nonlinear primitives produced by logarithmic rasterization. Our proposed architecture enhancements align with current trends of decreasing cost for on-chip computation relative to off-chip bandwidth and storage. For only a modest increase in computation, logarithmic rasterization can greatly reduce shadow map bandwidth and storage costs.
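The logarithmic half of the parameterization can be sketched in isolation. The toy function below (names and near/far values are illustrative, and the full LogPSM warp also includes a perspective component) maps a distance to a shadow-map coordinate so that resolution is distributed proportionally to 1/v, matching the 1/v falloff of perspective aliasing error:

```python
import math

def log_param(v, n, f):
    """Map a view-space distance v in [n, f] to a shadow-map
    coordinate in [0, 1] using a logarithmic warp.  Equal intervals
    in the shadow map correspond to equal *ratios* of distance, so
    texel density falls off as 1/v away from the viewer."""
    return math.log(v / n) / math.log(f / n)

n, f = 1.0, 1000.0          # illustrative near/far distances
print(log_param(n, n, f))    # 0.0 at the near plane
print(log_param(f, n, f))    # 1.0 at the far plane
# the geometric mean of n and f lands exactly at the map's midpoint
print(log_param(math.sqrt(n * f), n, f))
```

Note how half the shadow-map resolution is spent on the range [1, ~31.6], which is where a perspective viewer needs it most; a uniform parameterization would spend under 4% of its texels there.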
Virtual Reality Methods for Research in the Geosciences
In the presented work, I evaluate whether and how Virtual Reality (VR) technologies can be used to support researchers working in the geosciences by providing immersive, collaborative visualization systems as well as virtual tools for data analysis. Technical challenges encountered in the development of these systems are identified and solutions are provided.
To enable geologists to explore large digital terrain models (DTMs) in an immersive, explorative fashion within a VR environment, a suitable terrain rendering algorithm is required. For realistic perception of planetary curvature at large viewer altitudes, spherical rendering of the surface is necessary. Furthermore, rendering must sustain interactive frame rates of about 30 frames per second to avoid sensory confusion of the user. At the same time, the data structures used for visualization should also be suitable for efficiently computing spatial properties such as height profiles or volumes in order to implement virtual analysis tools. To address these requirements, I have developed a novel terrain rendering algorithm based on tiled quadtree hierarchies using the HEALPix parametrization of a sphere. For evaluation purposes, the system is applied to a 500 GiB dataset representing the surface of Mars.
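To illustrate how a tiled quadtree can be addressed on a HEALPix sphere, here is a hedged sketch of NESTED-style tile numbering: the sphere is covered by 12 base pixels, and each subdivision level appends one quadrant choice, i.e. two bits, to the parent's index. Function names are hypothetical and the real system's bit conventions may differ:

```python
def tile_index(base_face, child_path):
    """NESTED-style tile index in a HEALPix quadtree.

    base_face:  one of the 12 HEALPix base pixels (0..11)
    child_path: sequence of quadrant choices (0..3), one per
                subdivision level, from coarse to fine
    At level k there are 12 * 4**k tiles; each subdivision appends
    two bits to the parent's index, as in HEALPix NESTED numbering.
    """
    assert 0 <= base_face < 12
    idx = base_face
    for child in child_path:
        assert 0 <= child < 4
        idx = idx * 4 + child
    return idx

def parent(idx):
    """Recover the parent tile by dropping the last two bits."""
    return idx // 4

print(tile_index(5, [3, 1]))  # face 5, quadrant 3, then quadrant 1 → 93
```

The appeal of this scheme for terrain rendering is that parent/child relationships reduce to bit shifts, so coarse tiles for distant terrain and fine tiles near the viewer can be resolved without any lookup tables.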
Considering the current development of inexpensive remote surveillance equipment such as quadcopters, it seems inevitable that these devices will play a major role in future disaster management applications. Virtual reality installations in disaster management headquarters which provide an immersive visualization of near-live, three-dimensional situational data could then be a valuable asset for rapid, collaborative decision making. Most terrain visualization algorithms, however, require a computationally expensive pre-processing step to construct a terrain database.
To address this problem, I present an on-the-fly pre-processing system for cartographic data. The system consists of a frontend for rendering and interaction as well as a distributed processing backend executing on a small cluster which produces tiled data in the format required by the frontend on demand. The backend employs a CUDA based algorithm on graphics cards to perform efficient conversion from cartographic standard projections to the HEALPix-based grid used by the frontend.
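The per-pixel work such a backend performs can be sketched on the CPU. The toy function below does a nearest-neighbour lookup from an equirectangular source map; the actual system converts between cartographic standard projections and the HEALPix grid in CUDA, and all names here are illustrative:

```python
import math

def sample_equirect(image, width, height, lon, lat):
    """Nearest-neighbour lookup into an equirectangular map.

    image:     row-major list of pixel values (height rows of width)
    lon, lat:  radians, lon in [-pi, pi), lat in [-pi/2, pi/2]
    This is a serial stand-in for the per-output-pixel work of a
    CUDA reprojection kernel: each output cell computes its own
    (lon, lat) and fetches the source pixel independently of all
    others, which is why the conversion parallelizes so well.
    """
    x = int((lon + math.pi) / (2.0 * math.pi) * width) % width
    y = min(height - 1, int((math.pi / 2.0 - lat) / math.pi * height))
    return image[y * width + x]

# a tiny 4x2 "map" whose pixel values encode their own index
img = list(range(8))
print(sample_equirect(img, 4, 2, 0.0, 0.0))  # → 6
```

Because every output pixel is independent, a GPU version simply assigns one thread per output cell of the target tile and performs this lookup in parallel.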
Measurement of spatial properties is an important step in quantifying geological phenomena. When performing these tasks in a VR environment, a suitable input device and abstraction for the interaction (a “virtual tool”) must be provided. This tool should enable the user to precisely select the location of the measurement even under a perspective projection. Furthermore, the measurement process should be accurate to the resolution of the data available and should not have a large impact on the frame rate in order to not violate interactivity requirements.
I have implemented virtual tools based on the HEALPix data structure for measurement of height profiles as well as volumes. For interaction, a ray-based picking metaphor was employed, using a virtual selection ray extending from the user’s hand holding a VR interaction device. To provide maximum accuracy, the algorithms access the quad-tree terrain database at the highest available resolution level while at the same time maintaining interactivity in rendering.
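A minimal sketch of ray-based picking against a heightfield, assuming a simple fixed-step ray-marching scheme; the thesis implementation instead queries the HEALPix quadtree at its finest resolution level, and all names and step sizes here are illustrative:

```python
def pick_terrain(origin, direction, height_at, t_max, dt=0.01):
    """Ray-based picking against a heightfield.

    origin, direction: 3D tuples (x, y, z), z is up
    height_at(x, y):   terrain height lookup; any callable works
    Marches along the ray and returns the first parameter t at
    which the ray drops below the terrain, or None if it misses.
    """
    t = 0.0
    while t <= t_max:
        x = origin[0] + t * direction[0]
        y = origin[1] + t * direction[1]
        z = origin[2] + t * direction[2]
        if z <= height_at(x, y):
            return t
        t += dt
    return None

# flat terrain at height 0; a ray descending from z = 1 at 45 degrees
t_hit = pick_terrain((0.0, 0.0, 1.0), (1.0, 0.0, -1.0), lambda x, y: 0.0, 5.0)
```

A production version would replace the fixed step with a hierarchical descent through the quadtree, so that accuracy matches the finest data without marching at the finest step size everywhere.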
Geological faults are cracks in the earth’s crust along which a differential movement of rock volumes can be observed. Quantifying the direction and magnitude of such translations is an essential requirement in understanding earth’s geological history. For this purpose, geologists traditionally use maps in top-down projection which are cut (e.g. using image editing software) along the suspected fault trace. The two resulting pieces of the map are then translated in parallel against each other until surface features which have been cut by the fault motion come back into alignment. The amount of translation applied is then used as a hypothesis for the magnitude of the fault action. In the scope of this work it is shown, however, that performing this study in a top-down perspective can lead to the acceptance of faulty reconstructions, since the three-dimensional structure of topography is not considered.
To address this problem, I present a novel terrain deformation algorithm which allows the user to trace a fault line directly within a 3D terrain visualization system and interactively deform the terrain model while inspecting the resulting reconstruction from arbitrary perspectives. I demonstrate that the application of 3D visualization allows for a more informed interpretation of fault reconstruction hypotheses. The algorithm is implemented on graphics cards and performs real-time geometric deformation of the terrain model, guaranteeing interactivity with respect to all parameters.
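The core of such a fault-reconstruction deformation, splitting terrain samples along a fault trace and translating one side along it, can be sketched as follows. This is a serial stand-in for the GPU implementation, with a straight fault line for simplicity; names and sign conventions are illustrative:

```python
def deform(points, fault_a, fault_b, slip):
    """Translate all points on one side of a fault trace along it.

    points:           list of (x, y) terrain sample positions
    fault_a, fault_b: two points defining the fault line
    slip:             signed displacement along the fault direction
    Points left of the directed line a->b are moved by `slip` along
    the fault; the rest stay fixed.  A GPU version applies the same
    per-vertex test and offset in a vertex shader or CUDA kernel.
    """
    ax, ay = fault_a
    dx, dy = fault_b[0] - ax, fault_b[1] - ay
    length = (dx * dx + dy * dy) ** 0.5
    ux, uy = dx / length, dy / length   # unit vector along the fault
    out = []
    for x, y in points:
        # 2D cross product: positive means the point lies left of a->b
        side = dx * (y - ay) - dy * (x - ax)
        if side > 0.0:
            out.append((x + slip * ux, y + slip * uy))
        else:
            out.append((x, y))
    return out

# fault along the x-axis; the point above it slides by 2 units
print(deform([(1.0, 1.0), (1.0, -1.0)], (0.0, 0.0), (10.0, 0.0), 2.0))
# → [(3.0, 1.0), (1.0, -1.0)]
```

Because every vertex is tested and displaced independently, the slip magnitude can be bound to an interaction device and re-evaluated per frame, which is what makes the interactive exploration of reconstruction hypotheses feasible.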
Paleoceanography is the study of the prehistoric evolution of the ocean. One of the key data sources used in this research is coring experiments, which provide point samples of layered sediment depositions on the ocean floor. The samples obtained in these experiments document the time-varying sediment concentrations within the ocean water at the point of measurement. The task of recovering the ocean flow patterns from these deposition records is, however, a challenging inverse numerical problem.
To support domain scientists working on this problem, I have developed a VR visualization tool to aid in the verification of model parameters by providing simultaneous visualization of the experimental coring data and the resulting predicted flow field obtained from numerical simulation. Earth is visualized as a globe in the VR environment, with coring data presented using a billboard rendering technique while the time-variant flow field is indicated using line integral convolution (LIC). To study individual sediment transport pathways and their correlation with the depositional record, interactive particle injection and real-time advection are supported.
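The streamline integration underlying both the injected tracer particles and LIC can be sketched with a forward-Euler integrator. This is a deliberately simplified stand-in: the actual tool samples a simulated, time-varying ocean flow field rather than an analytic one, and names here are illustrative:

```python
def advect(pos, velocity, dt, steps):
    """Forward-Euler advection of a particle through a 2D flow field.

    pos:      initial (x, y)
    velocity: callable (x, y) -> (vx, vy)
    Returns the particle trajectory as a list of positions.  LIC uses
    the same integration to trace the short streamline over which it
    convolves a noise texture.
    """
    x, y = pos
    path = [(x, y)]
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x += dt * vx
        y += dt * vy
        path.append((x, y))
    return path

# uniform eastward current of 1 unit/s, integrated for 10 steps of 0.1 s
traj = advect((0.0, 0.0), lambda x, y: (1.0, 0.0), 0.1, 10)
```

Real-time advection simply performs one such step per particle per frame; higher-order integrators (e.g. RK4) trade a few extra field samples for markedly better accuracy in curved flows.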
Development and Application of Computer Graphics Techniques for the Visualization of Large Geo-Related Data-Sets
The goal of this work was to develop and improve algorithms that allow large geographic and other geo-related data sets to be visualized using computer graphics techniques. One focus was the development of new camera-adaptive data structures for digital elevation models and raster images.
The thesis first defines a novel multiresolution model for height fields. This model requires very little additional memory and is able to guarantee interactive adaptation rates.
Furthermore, approaches for quickly determining the visible and occluded parts of a computer graphics scene are discussed, in order to accelerate navigation through large and extended scenes such as city models or buildings.
Subsequently, several problems related to texture mapping are examined; for example, a new viewer-dependent data structure for texture data and a new approach to texture filtering are presented.
Most of these algorithms and techniques were integrated into an interactive terrain visualization system, which goes by the project name 'FlyAway' and is described in the final chapter of the thesis.
Holoscopic 3D imaging and display technology: camera/processing/display
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Holoscopic 3D imaging, also known as integral imaging, was first proposed by Lippmann in 1908. It has become an attractive technique for creating a full-colour 3D scene that exists in space. It uses a single camera aperture to record the spatial information of a real scene, together with a regularly spaced microlens array that mimics the fly's-eye principle, creating a physical duplicate of the light field: a true 3D imaging technique.
While stereoscopic and multiview 3D imaging systems, which mimic human binocular vision, are widely available on the commercial market, holoscopic 3D imaging technology is still in the research phase. The aim of this research is to investigate the spatial resolution of holoscopic 3D imaging and display technology, covering the holoscopic 3D camera, processing, and display.
A smart microlens array architecture is proposed that doubles the horizontal spatial resolution of the holoscopic 3D camera by trading horizontal against vertical resolution. In particular, it overcomes the unbalanced pixel aspect ratio of unidirectional holoscopic 3D images. In addition, omnidirectional holoscopic 3D computer graphics rendering techniques are proposed that simplify rendering complexity and facilitate holoscopic 3D content generation.
A holoscopic 3D image stitching algorithm is proposed that widens the overall viewing angle of the holoscopic 3D camera aperture, and pre-processing filters for holoscopic 3D images are proposed for spatial data alignment and 3D image data processing. In addition, a dynamic hyperlinker tool is developed that makes interactive holoscopic 3D video content searchable and browsable.
Novel pixel mapping techniques are proposed that improve spatial resolution and visual definition in space. For instance, 4D-DSPM raises the horizontal density from 44 to 176 3D pixels per inch, achieving a spatial resolution of 1365 × 384 3D pixels, whereas the traditional spatial resolution is 341 × 1536 3D pixels. In addition, distributed pixel mapping is proposed that improves the quality of the holoscopic 3D scene in space by creating RGB-colour-channel elemental images.