Utilizing image guided surgery for user interaction in medical augmented reality
The graphical overlay of additional medical information over the patient during a surgical procedure has long been considered one of the most promising applications of augmented reality. While many experimental systems for augmented reality in medicine have reached an advanced state and can deliver high-quality augmented video streams, they usually depend heavily on specialized dedicated hardware. Such dedicated system components, which were originally designed for engineering applications or VR research, are often ill-suited for use in clinical practice. We describe a novel medical augmented reality application that is based almost exclusively on existing, commercially available, and certified medical equipment. In our system, a so-called image guided surgery device is used for tracking a webcam, which delivers the digital video stream of the physical scene that is augmented with the virtual information.
In this paper, we show how the capability of the image guided surgery system for tracking surgical instruments can be harnessed for user interaction. Our method enables the user to define points and freely drawn shapes in 3-d and provides selectable menu items, which can be located in immediate proximity to the patient. This eliminates the need for conventional touchscreen- or mouse-based user interaction, without requiring additional hardware such as dedicated tracking systems or specialized 3-d input devices. Thus, the surgeon can directly interact with the system without the help of additional personnel. We demonstrate our new input method with an application for creating operation plan sketches directly on the patient in an augmented view.
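The interaction principle described above can be illustrated with a minimal proximity test between the tracked instrument tip and virtual menu anchors placed near the patient. This is a hypothetical sketch, not the paper's implementation; the function name, the item labels, and the 5 mm activation radius are all assumptions:

```python
import math

def select_menu_item(tip, items, radius=5.0):
    """Return the label of the closest menu item whose anchor lies
    within `radius` (mm) of the tracked instrument tip, or None."""
    best_label, best_dist = None, radius
    for label, center in items.items():
        d = math.dist(tip, center)  # Euclidean distance in tracker space
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Hypothetical menu anchors in patient coordinates (mm).
items = {"draw": (0.0, 0.0, 0.0), "erase": (50.0, 0.0, 0.0)}
print(select_menu_item((1.0, 2.0, 0.0), items))  # "draw"
```

The same distance test, applied continuously while the instrument tip moves, also suffices to record freely drawn 3-d shapes as point sequences.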
Real-time cartoon-like stylization of AR video streams on the GPU
The ultimate goal of many applications of augmented reality is to immerse the user into the augmented scene, which is enriched with virtual models. In order to achieve this immersion, it is necessary to create the visual impression that the graphical objects are a natural part of the user’s environment. Producing this effect with conventional computer graphics algorithms is a complex task. Various rendering artifacts in the three-dimensional graphics create a noticeable visual discrepancy between the real background image and virtual objects.
We have recently proposed a novel approach to generating an augmented video stream. With this new method, the output images are a non-photorealistic reproduction of the augmented environment. Special stylization methods are applied to both the background camera image and the virtual objects. This way the visual realism of both the graphical foreground and the real background image is reduced, so that they are less distinguishable from each other.
Here, we present a new method for the cartoon-like stylization of augmented reality images, which uses a novel post-processing filter for cartoon-like color segmentation and high-contrast silhouettes. In order to make fast post-processing of rendered images possible, the programmability of modern graphics hardware is exploited. We describe an implementation of the algorithm using the OpenGL Shading Language. The system is capable of generating a stylized augmented video stream of high visual quality at real-time frame rates. As an example application, we demonstrate the visualization of dinosaur bone datasets in stylized augmented reality.
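The two ingredients of the filter, flat color bands and gradient-thresholded silhouettes, can be sketched on the CPU with NumPy. The actual system runs this as a GLSL post-process; the level count and edge threshold here are assumptions for illustration only:

```python
import numpy as np

def cartoonize(rgb, levels=4, edge_thresh=0.2):
    """Cartoon-like stylization: quantize colors into a few flat bands,
    then overdraw black silhouettes where the luminance gradient is high."""
    img = rgb.astype(np.float32) / 255.0
    # Color segmentation: collapse each channel into `levels` flat bands.
    quant = np.floor(img * levels) / levels
    # Silhouettes: threshold the luminance gradient magnitude.
    lum = img.mean(axis=2)
    gy, gx = np.gradient(lum)
    edges = np.hypot(gx, gy) > edge_thresh
    quant[edges] = 0.0  # high-contrast black outlines
    return (quant * 255).astype(np.uint8)

frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
out = cartoonize(frame)
```

On the GPU the same two passes map naturally onto fragment shaders, which is what makes real-time frame rates feasible.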
Handling photographic imperfections and aliasing in augmented reality
In video see-through augmented reality, virtual objects are overlaid over images delivered by a digital video camera. One particular problem of this image mixing process is the fact that the visual appearance of the computer-generated graphics differs strongly from the real background image. In typical augmented reality systems, standard real-time rendering techniques are used for displaying virtual objects. These fast, but relatively simplistic methods create an artificial, almost "plastic-like" look for the graphical elements.
In this paper, methods for incorporating two particular camera image effects in virtual overlays are described. The first effect is camera image noise, which is contained in the data delivered by the CCD chip used for capturing the real scene. The second effect is motion blur, which is caused by the temporal integration of color intensities on the CCD chip during fast movements of the camera or observed objects, resulting in a blurred camera image. Graphical objects rendered with standard methods neither contain image noise nor motion blur. This is one of the factors which makes the virtual objects stand out from the camera image and contributes to the perceptual difference between real and virtual scene elements.
Here, approaches for mimicking both camera image noise and motion blur in the graphical representation of virtual objects are proposed. An algorithm for generating a realistic imitation of image noise based on a camera calibration step is described. A rendering method which produces motion blur according to the current camera movement is presented. As a by-product of the described rendering pipeline, it becomes possible to perform a smooth blending between virtual objects and the camera image at their boundary. An implementation of the new rendering methods for virtual objects is described, which utilizes the programmability of modern graphics processing units (GPUs) and is capable of delivering real-time frame rates.
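Both effects can be mimicked in a few lines, as the following simplified NumPy sketch shows. The paper's versions run on the GPU, derive the noise strength from a camera calibration step, and blur along the measured camera motion; the fixed `sigma` and the horizontal blur direction here are assumptions:

```python
import numpy as np

def add_sensor_noise(render, sigma=3.0, rng=None):
    """Add zero-mean Gaussian noise to a rendered overlay, imitating the
    noise the CCD chip contributes to the real camera image."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = render.astype(np.float32) + rng.normal(0.0, sigma, render.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def motion_blur(render, shift_px=4):
    """Approximate motion blur by averaging copies of the rendering shifted
    along the motion direction (here fixed to horizontal)."""
    acc = np.zeros(render.shape, np.float32)
    for s in range(shift_px):
        acc += np.roll(render.astype(np.float32), s, axis=1)
    return (acc / shift_px).astype(np.uint8)
```

Applying both filters to the virtual overlay, but not to the camera image, narrows exactly the perceptual gap the abstract describes.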
A pointillism style for the non-photorealistic display of augmented reality scenes
The ultimate goal of augmented reality is to provide the user with a view of the surroundings enriched by virtual objects. Practically all augmented reality systems rely on standard real-time rendering methods for generating the images of virtual scene elements. Although such conventional computer graphics algorithms are fast, they often fail to produce sufficiently realistic renderings. The use of simple lighting and shading methods, as well as the lack of knowledge about actual lighting conditions in the real surroundings, cause virtual objects to appear artificial.
We have recently proposed a novel approach for generating augmented reality images. Our method is based on the idea of applying stylization techniques for reducing the visual realism of both the camera image and the virtual graphical objects. Special non-photorealistic image filters are applied to the camera video stream. The virtual scene elements are rendered using non-photorealistic rendering methods. Since both the camera image and the virtual objects are stylized in a corresponding way, they appear very similar. As a result, graphical objects can become indistinguishable from the real surroundings.
Here, we present a new method for the stylization of augmented reality images. This approach generates a painterly "brush stroke" rendering. The resulting stylized augmented reality video frames look similar to paintings created in the "pointillism" style. We describe the implementation of the camera image filter and the non-photorealistic renderer for virtual objects. These components have been newly designed or adapted for this purpose. They are fast enough for generating augmented reality images in real time and are customizable. The results obtained using our approach are very promising and show that it improves immersion in augmented reality.
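A minimal CPU sketch of the pointillist idea, assuming a square "brush stroke" and a plain canvas color (the paper's filter is a real-time renderer with proper stroke shapes; dot count, radius, and canvas value here are illustrative assumptions):

```python
import numpy as np

def pointillize(img, n_dots=2000, dot_radius=2, canvas=230, seed=0):
    """Cover a canvas with randomly placed dots that take their color
    from the underlying image, leaving canvas color between strokes."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    out = np.full_like(img, canvas)
    ys = rng.integers(0, h, n_dots)
    xs = rng.integers(0, w, n_dots)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - dot_radius), min(h, y + dot_radius + 1)
        x0, x1 = max(0, x - dot_radius), min(w, x + dot_radius + 1)
        out[y0:y1, x0:x1] = img[y, x]  # square "brush stroke"
    return out
```

Because both the camera frame and the rendered virtual objects pass through the same stroke placement, the two become hard to tell apart, which is the core of the approach.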
Tighter bounding volumes for better occlusion culling performance
Bounding volumes are used in computer graphics to approximate the actual geometric shape of an object in a scene. The main intention is to reduce the costs associated with visibility or interference tests. The bounding volumes most commonly used have been axis-aligned bounding boxes and bounding spheres. In this paper, we propose the use of discrete orientation polytopes (k-DOPs) as bounding volumes for the specific use of visibility culling. Occlusion tests are computed more accurately using k-DOPs, and, most importantly, they are also computed more efficiently. We illustrate this point through a series of experiments using a wide range of data models under varying viewing conditions. Although no bounding volume works best in every situation, k-DOPs are often the best, and they also work very well in those cases where they are not; they therefore provide good results without requiring an analysis of the application and of different bounding volumes.
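A k-DOP stores the min/max extents of an object's vertices along k/2 fixed directions; the first three directions alone reproduce an axis-aligned bounding box, and extra diagonal directions tighten the fit. The following sketch, with an assumed 12-DOP direction set, shows construction and the conservative interval-overlap test:

```python
import numpy as np

# Six fixed directions: each contributes a min and a max slab (12-DOP).
DIRS = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],   # AABB axes (a 6-DOP on their own)
    [1, 1, 0], [1, 0, 1], [0, 1, 1],   # diagonal slabs that tighten the fit
])

def build_kdop(verts):
    """verts: (n, 3) array; returns per-direction (min, max) extents."""
    proj = verts @ DIRS.T              # project every vertex on every axis
    return proj.min(axis=0), proj.max(axis=0)

def kdops_overlap(a, b):
    """Two k-DOPs are disjoint as soon as their intervals fail to overlap
    along any single direction (a conservative, very cheap test)."""
    (amin, amax), (bmin, bmax) = a, b
    return bool(np.all(amax >= bmin) and np.all(bmax >= amin))
```

Because all objects share one global direction set, the per-direction intervals compare directly, which is what keeps k-DOP tests nearly as cheap as AABB tests while rejecting more occluded geometry.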
Reality Tooning: Fast Non-Photorealism for Augmented Video Streams (poster)
Recently, we have proposed a novel approach to generating augmented video streams. The output images are a non-photorealistic reproduction of the augmented environment. Special stylization methods are applied to both the background camera image and the virtual objects. This way, the graphical foreground and the real background images are rendered in a similar style, so that they are less distinguishable from each other. Here, we present a new algorithm for the cartoon-like stylization of augmented reality images, which uses a novel post-processing filter for cartoon-like color segmentation and high-contrast silhouettes. In order to make fast post-processing of rendered images possible, the programmability of modern graphics hardware is exploited. The system is capable of generating a stylized augmented video stream at real-time frame rates.
SignatureSpace: a multidimensional, exploratory approach for the analysis of volume data
The analysis of volumetric data is a crucial part of the visualization pipeline, since it determines the features in a volume dataset and, hence, also its rendering parameters. Unfortunately, volume analysis can also be a very tedious and difficult challenge. To cope with this challenge, this paper describes a novel information-visualization-driven, exploratory approach that allows users to perform an analysis in a comprehensive fashion. From the original data volume, a variety of auxiliary data volumes, the signature volumes, are computed, which are based on intensity, gradients, and various other statistical metrics. Each of these signature volumes (or signatures for short) is then unified into a multi-dimensional signature space to create a comprehensive scope for the analysis. A mosaic of visualization techniques, ranging from parallel coordinates to colormaps and opacity modulation, is available to provide insight into the structure and feature distribution of the volume dataset, and thus enables the specification of complex multi-dimensional transfer functions and segmentations.
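The construction of such a signature space can be sketched as follows; the three example signatures (raw intensity, gradient magnitude, squared deviation from the mean) are illustrative stand-ins for the paper's richer set of statistical metrics:

```python
import numpy as np

def signature_space(vol):
    """Stack per-voxel signatures into rows of a multi-dimensional space:
    every voxel becomes one point with one coordinate per signature."""
    vol = vol.astype(np.float32)
    gz, gy, gx = np.gradient(vol)              # derivatives along axes 0,1,2
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)  # gradient-magnitude signature
    deviation = (vol - vol.mean()) ** 2        # crude statistical signature
    sigs = np.stack([vol, grad_mag, deviation], axis=-1)
    return sigs.reshape(-1, sigs.shape[-1])    # one row per voxel
```

Each row of the result can then feed directly into a parallel-coordinates view or a multi-dimensional transfer-function editor.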
Large Model Visualization: Techniques and Applications
The size of datasets in scientific computing is rapidly increasing. This increase is caused by a boost of processing power in the past years, which in turn was invested in an increase of the accuracy and the size of the models. A similar trend enabled a significant improvement of medical scanners; more than 1000 slices at a resolution of 512x512 can be generated by modern scanners in daily practice. Even in computer-aided engineering, typical models easily contain several million polygons. Unfortunately, the data complexity is growing faster than the rendering performance of modern computer systems. This is not only due to the slower-growing performance of the graphics subsystems, but in particular because of the significantly slower-growing memory bandwidth for the transfer of geometry and image data from main memory to the graphics accelerator.

Large model visualization addresses this growing divide between data complexity and rendering performance. Most methods focus on the reduction of the geometric or pixel complexity, which also reduces the memory bandwidth requirements.

In this dissertation, we discuss new approaches from three different research areas. All approaches target the reduction of the processing complexity to achieve an interactive visualization of large datasets. In the second part, we introduce applications of the presented approaches. Specifically, we introduce the new VIVENDI system for interactive virtual endoscopy and other applications
from mechanical engineering, scientific computing, and architecture.
Visual Analysis of Microarray Data from Bioinformatics Applications
We present a new application designed for the visual exploration of microarray data. It is based on an extension and adaptation of parallel coordinates to support the visual exploration of large and high-dimensional datasets. In particular, we investigate the visual analysis of gene-expression data as generated by microarray experiments. We combine refined visual exploration with statistical methods into a visual analytics approach, which has proved particularly successful in this application domain. We demonstrate its usefulness on several multidimensional gene-expression datasets from different bioinformatics applications.
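The core data transformation behind a parallel-coordinates view is a per-dimension rescaling, so that every sample (e.g. one gene-expression profile) becomes a polyline across the axes. A minimal sketch, assuming min-max normalization (other scalings are equally possible):

```python
import numpy as np

def parallel_coords_polylines(data):
    """Rescale every dimension of (samples x dims) data to [0, 1];
    row i then gives the vertical axis positions of polyline i."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard constant columns
    return (data - lo) / span
```

Statistical methods such as clustering can then operate on the same normalized rows, which is what makes the combined visual-analytics workflow coherent.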
Efficient multiple occlusion queries for scene graph systems
Image space occlusion culling is a useful approach to reducing the rendering load of large polygonal models. Like most large-model techniques, it trades overhead costs against the rendering costs of the potentially occluded geometry. Meanwhile, modern graphics hardware supports occlusion culling. Unfortunately, these hardware extensions incur fill-rate and latency costs.
In this paper, we propose a new technique for scene graph traversal optimized for the efficient use of occlusion queries. Our approach uses several Occupancy Maps to organize the scene graph traversal. During traversal, hierarchical occlusion culling, view-frustum culling, and rendering are performed.
The occlusion information is efficiently determined by asynchronous multiple occlusion queries using hardware-supported query functionality. To avoid redundant results, we arrange these multiple occlusion queries according to the information in several Occupancy Maps. Our technique is conservative and benefits from a partial depth order of the geometry.
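The role of an Occupancy Map can be illustrated without real GPU queries: a coarse screen-space grid records which regions are already covered, and a node is only worth an occlusion query if its projected footprint touches an uncovered cell. This is a hypothetical CPU simulation of that idea; the class names, the 16x16 grid, and the assumption that every issued query reports "visible" are illustrative:

```python
import numpy as np

class OccupancyMap:
    """Coarse screen-space grid of boolean 'already covered' cells."""
    def __init__(self, w=16, h=16):
        self.cells = np.zeros((h, w), dtype=bool)

    def maybe_visible(self, x0, y0, x1, y1):
        # A query is only worthwhile if some cell is still uncovered.
        return not self.cells[y0:y1, x0:x1].all()

    def mark_covered(self, x0, y0, x1, y1):
        self.cells[y0:y1, x0:x1] = True

def traverse(nodes, omap):
    """nodes: front-to-back list of (name, screen footprint) tuples.
    Skips nodes whose footprint is fully covered, avoiding redundant
    queries; a real system would issue asynchronous hardware queries."""
    rendered = []
    for name, rect in nodes:
        if omap.maybe_visible(*rect):   # would issue the hardware query here
            rendered.append(name)
            omap.mark_covered(*rect)    # assume the query reported visible
    return rendered
```

The front-to-back ordering is what lets earlier nodes cover later ones, which is why the technique benefits from a partial depth order of the geometry.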