
    Real-time soft shadows using a single light sample

    We present a real-time rendering algorithm that generates soft shadows of dynamic scenes using a single light sample. As a depth-map algorithm, it can handle arbitrary shadowed surfaces; the shadow-casting surfaces, however, should satisfy a few geometric properties to prevent artifacts. Our algorithm is based on a bivariate attenuation function whose value modulates the intensity of the shadow-casting light. The first argument specifies the distance of the occluding point to the shadowed point; the second argument measures how deep the shadowed point lies inside the shadow. The attenuation function can be implemented using dependent texture accesses, and the complete algorithm can be accelerated by today's graphics hardware. We outline the implementation and discuss details of artifact prevention and filtering.
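
    The abstract does not give the concrete form of the attenuation function, so the following is a minimal sketch, assuming a simple penumbra model in which the penumbra widens with the occluder-to-receiver distance; the parameter names (umbra_softness, falloff) are hypothetical.

        import numpy as np

        def soft_shadow_attenuation(occluder_distance, shadow_depth,
                                    umbra_softness=0.5, falloff=2.0):
            # Penumbra width grows with the distance between the occluding
            # point and the shadowed point (first argument of the function).
            penumbra = umbra_softness * occluder_distance
            # Normalized penetration into the shadow (second argument),
            # clamped to [0, 1]: 0 at the shadow boundary, 1 in the umbra.
            t = np.clip(shadow_depth / np.maximum(penumbra, 1e-6), 0.0, 1.0)
            # Light intensity factor: 1 = fully lit, 0 = fully shadowed.
            return 1.0 - t ** falloff

    In a GPU implementation such a function would be precomputed over a grid of (distance, depth) pairs and stored in a 2D texture, so that a single dependent texture access yields the attenuation per fragment.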

    Interaction and locomotion techniques for the exploration of massive 3D point clouds in vr environments

    Emerging virtual reality (VR) technology allows immersively exploring digital 3D content on standard consumer hardware. Using in-situ or remote sensing technology, such content can be automatically derived from real-world sites. External-memory algorithms allow for the non-immersive exploration of the resulting 3D point clouds on a diverse set of devices with vastly different rendering capabilities. VR environments raise additional challenges for those algorithms, as they are highly sensitive to the visual artifacts typical of point cloud depictions (i.e., overdraw and underdraw) while simultaneously requiring higher frame rates (around 90 fps instead of 30–60 fps). We present a rendering system for the immersive exploration and inspection of massive 3D point clouds on state-of-the-art VR devices. Based on a multi-pass rendering pipeline, we combine point-based and image-based rendering techniques to simultaneously improve rendering performance and visual quality. A set of interaction and locomotion techniques allows users to inspect a 3D point cloud in detail, for example by measuring distances and areas or by scaling and rotating visualized data sets. All rendering, interaction, and locomotion techniques can be selected and configured dynamically, allowing the rendering system to be adapted to different use cases. Tests on data sets with up to 2.6 billion points show the feasibility and scalability of our approach.
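
    As an illustration of the external-memory aspect, here is a hedged sketch of a budget-driven traversal as commonly used for out-of-core point cloud rendering; the node interface (screen_size, num_points, children, visible) and the greedy strategy are assumptions for illustration, not the paper's actual pipeline.

        import heapq

        def select_nodes(root, camera, point_budget):
            # Visit spatial-hierarchy nodes in order of decreasing projected
            # size and stop once the per-frame point budget (tuned so the
            # renderer sustains ~90 fps) is exhausted.
            heap = [(-root.screen_size(camera), id(root), root)]
            selected, used = [], 0
            while heap:
                _, _, node = heapq.heappop(heap)
                if used + node.num_points > point_budget:
                    continue  # node too large for the remaining budget
                selected.append(node)
                used += node.num_points
                for child in node.children:
                    if child.visible(camera):
                        heapq.heappush(
                            heap, (-child.screen_size(camera), id(child), child))
            return selected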

    Multiscale Visual Comparison of Execution Traces

    Analyzing feature implementation by visual exploration of architecturally-embedded call-graphs

    Maintenance, reengineering, and refactoring of large and complex software systems are commonly based on modifications and enhancements related to features. Before developers can modify feature functionality, they have to locate the relevant code components and understand the components' interaction. In this paper, we present a prototype tool for analyzing feature implementation of large C/C++ software systems by visually exploring dynamically extracted call relations between code components. The component interaction can be analyzed on various abstraction levels, ranging from function interaction up to the interaction of the system with shared libraries of the operating system. The user visually explores the component interaction within a multiview visualization system consisting of several textual views and a graphical 3D landscape view. During exploration, the 3D landscape view supports the user, first, in deciding early whether a call relation is essential for understanding the feature and, second, in finding starting points for fine-grained, top-down feature analysis.
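
    A minimal sketch of how dynamically recorded call relations might be lifted to a coarser abstraction level; the pair-based trace format and the level_of mapping are assumptions for illustration, not the tool's actual data model.

        from collections import Counter

        def aggregate_calls(calls, level_of):
            # calls: iterable of (caller_function, callee_function) pairs
            # level_of: maps a function to its enclosing component at the
            # chosen abstraction level (e.g., file, module, shared library).
            edges = Counter()
            for caller, callee in calls:
                src, dst = level_of(caller), level_of(callee)
                if src != dst:  # keep only inter-component interaction
                    edges[(src, dst)] += 1
            return edges  # edge weight = call frequency for the landscape view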

    Large-Scale Evaluation of Topic Models and Dimensionality Reduction Methods for 2D Text Spatialization

    Topic models are a class of unsupervised learning algorithms for detecting the semantic structure within a text corpus. Together with a subsequent dimensionality reduction algorithm, topic models can be used to derive spatializations of text corpora as two-dimensional scatter plots, reflecting semantic similarity between the documents and supporting corpus analysis. Although the choice of the topic model, the dimensionality reduction, and their underlying hyperparameters significantly impacts the resulting layout, it is unknown which particular combinations result in high-quality layouts with respect to accuracy and perception metrics. To investigate the effectiveness of topic models and dimensionality reduction methods for the spatialization of corpora as two-dimensional scatter plots (or as a basis for landscape-type visualizations), we present a large-scale, benchmark-based computational evaluation. Our evaluation consists of (1) a set of corpora, (2) a set of layout algorithms that are combinations of topic models and dimensionality reductions, and (3) quality metrics for quantifying the resulting layouts. The corpora are given as document-term matrices, and each document is assigned to a thematic class. The chosen metrics quantify the preservation of local and global properties and the perceptual effectiveness of the two-dimensional scatter plots. By evaluating the benchmark on a computing cluster, we derived a multivariate dataset with over 45 000 individual layouts and corresponding quality metrics. Based on the results, we propose guidelines for the effective design of text spatializations that are based on topic models and dimensionality reductions. As a main result, we show that interpretable topic models are beneficial for capturing the structure of text corpora. We furthermore recommend the use of t-SNE as the subsequent dimensionality reduction.
    Comment: To be published at the IEEE VIS 2023 conference.
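
    One combination from the evaluated design space can be sketched in a few lines; the concrete models and parameters below are illustrative (scikit-learn's LDA as an interpretable topic model, followed by t-SNE, the reduction the authors recommend), not the benchmark's exact configuration.

        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.manifold import TSNE

        def spatialize(doc_term_matrix, n_topics=20, seed=0):
            # Topic model: documents -> topic-weight vectors.
            lda = LatentDirichletAllocation(n_components=n_topics,
                                            random_state=seed)
            topic_weights = lda.fit_transform(doc_term_matrix)
            # Dimensionality reduction: topic weights -> 2D scatter plot,
            # one point per document.
            return TSNE(n_components=2, random_state=seed).fit_transform(topic_weights)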

    COMBINED VISUAL EXPLORATION OF 2D GROUND RADAR AND 3D POINT CLOUD DATA FOR ROAD ENVIRONMENTS

    Ground-penetrating 2D radar scans are captured in road environments to examine pavement condition and below-ground variations such as lowerings and developing potholes. 3D point clouds captured above ground provide a precise digital representation of the road’s surface and the surrounding environment. If both data sources are captured for the same area, a combined visualization is a valuable tool for infrastructure maintenance tasks. This paper presents visualization techniques developed for the combined visual exploration of the data captured in road environments. The main challenges are positioning the ground radar data within the 3D environment and reducing occlusion between the individual data sets. By projecting the measured ground radar data onto the precise trajectory of the scan, it can be displayed within the context of the 3D point cloud representation of the road environment. We show that customizable overlay, filtering, and cropping techniques enable insightful data exploration. A 3D renderer combines both data sources. To enable inspection of areas of interest, ground radar data can be elevated above ground level for better visibility. An interactive lens approach makes it possible to visualize data sources that are currently occluded by others. The visualization techniques prove to be a valuable tool for inspecting ground-layer anomalies and were evaluated on a real-world data set. The combination of 2D ground radar scans with 3D point cloud data improves data interpretation by providing context information (e.g., about manholes in the street) that can be accessed directly during evaluation.
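
    The projection of the radargram onto the scan trajectory can be sketched as follows; the array layout (one radargram column per trajectory point, constant depth per sample, z as the up axis) is an assumption for illustration.

        import numpy as np

        def project_radar_to_3d(trajectory, radargram, depth_per_sample):
            # trajectory: (n, 3) above-ground positions along the scan path
            # radargram: (m, n) amplitudes; column i belongs to trajectory[i],
            # row j lies j * depth_per_sample meters below the surface.
            m, n = radargram.shape
            depths = np.arange(m) * depth_per_sample
            positions = trajectory[None, :, :].repeat(m, axis=0)  # (m, n, 3)
            positions[:, :, 2] -= depths[:, None]  # shift samples downward
            return positions.reshape(-1, 3), radargram.reshape(-1)

    Elevating the radar data above ground level for inspection, as described in the abstract, then amounts to adding a positive offset to the z coordinate instead.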