
    Parallel graphics and visualization

    Computer Graphics and Visualization are two fields that continue to evolve at a fast pace, always addressing new application areas and achieving better and faster results. The volume of data processed by such applications keeps growing, and the illumination and light transport models used to generate pictorial representations of this data keep getting more sophisticated. Richer illumination and light transport models allow the generation of richer images that convey more information about the phenomena or virtual worlds represented by the data and are more realistic and engaging to the user. The combination of large data sets, rich illumination models and large, sophisticated displays results in huge workloads that cannot be processed sequentially while maintaining acceptable response times. Parallel processing is thus an obvious approach to such problems, creating the field of Parallel Graphics and Visualization. The Eurographics Symposium on Parallel Graphics and Visualization (EGPGV) gathers researchers from all over the world to foster research on theoretical and applied issues critical to parallel and distributed computing and its application to all aspects of computer graphics, virtual reality, and scientific and engineering visualization. This special issue is a collection of five papers selected from those presented at the 7th EGPGV, which took place in Lugano, Switzerland, in May 2007. The research presented at this symposium has evolved over the years, often reflecting the evolution of the underlying systems' architectures. While papers presented at the first few events focused on Single Instruction Multiple Data and massively parallel multiprocessing systems, in recent years the focus has been mainly on Symmetric Multiprocessing machines and PC clusters, often also including the utilization of multiple Graphics Processing Units. The 2007 event witnessed the first papers addressing multicore processors, thus following the general trend of computer systems' architecture.

    The paper by Wald, Ize and Parker discusses acceleration structures for interactive ray tracing of dynamic scenes. They propose the use of Bounding Volume Hierarchies (BVHs), which for deformable scenes can be rapidly updated by adjusting the bounding primitives while maintaining the hierarchy. To avoid a significant performance penalty due to a large mismatch between the scene geometry and the tree topology, the BVH is rebuilt asynchronously and concurrently with rendering. According to the authors, interactive ray tracers are expected to run on highly parallel multicore architectures in the near future; all reported results were therefore obtained on an 8-processor, dual-core system totalling 16 cores.

    Gribble, Brownlee and Parker propose two algorithms targeting highly parallel multicore architectures that enable interactive navigation and exploration of large particle data sets with global illumination effects. Rendering samples are lazily evaluated using Monte Carlo path tracing, while visualization occurs asynchronously through Dynamic Luminance Textures that cache the renderer's results. The combined use of particle-based simulation methods and global illumination enables the effective communication of subtle changes in the three-dimensional structure of the data. All results were also obtained on a 16-core architecture.

    The paper by Thomaszewski, Pabst and Blochinger analyzes parallel techniques for physically based simulation, in particular the time integration and collision handling phases. The former is addressed using the conjugate gradient algorithm and static problem decomposition, while the latter exhibits a dynamic structure and thus requires fully dynamic task decomposition. Their results were obtained using three different quad-core systems.

    Hong and Shen derive an efficient parallel algorithm for symmetry computation in volume data represented by regular grids. Sequential detection of symmetric features in volumetric data sets has a prohibitive cost, thus requiring efficient parallel algorithms and powerful parallel systems. The authors obtained the reported results on a 64-node PC cluster with InfiniBand, each node being a dual-processor, single-core Opteron.

    Bettio, Gobbetti, Marton and Pintore describe a scalable multiresolution rendering system targeting massive triangle meshes and driving light field displays of different sizes. The larger light field display (1.6 × 0.9 m²) is based on a special arrangement of projectors and a holographic screen. It allows multiple freely moving viewers to see the scene from their respective points of view and enjoy continuous horizontal parallax without any specialized viewing devices. To drive this 35-Mbeam display they use a scalable parallel renderer, resorting to out-of-core and level-of-detail techniques and running on a 15-node PC cluster.
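    The BVH refitting idea underlying the Wald, Ize and Parker paper is simple to state: when only the vertices move, the tree topology can be kept and only the bounding volumes need to be tightened. Below is a minimal sketch of that refit step, using assumed node and bounding-box types rather than the authors' actual data structures.

# Minimal sketch of BVH refitting for a deformable scene (assumed layout,
# not the authors' implementation): leaves reference triangles whose vertices
# move each frame; interior nodes are refitted bottom-up without changing
# the tree topology.

class AABB:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi          # 3-tuples (x, y, z)

    @staticmethod
    def of_points(points):
        xs, ys, zs = zip(*points)
        return AABB((min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs)))

    @staticmethod
    def union(a, b):
        lo = tuple(min(x, y) for x, y in zip(a.lo, b.lo))
        hi = tuple(max(x, y) for x, y in zip(a.hi, b.hi))
        return AABB(lo, hi)

class BVHNode:
    def __init__(self, left=None, right=None, tri_ids=None):
        self.left, self.right = left, right    # children (interior node)
        self.tri_ids = tri_ids or []           # triangle indices (leaf node)
        self.box = None

def refit(node, triangles):
    """Update bounding boxes after vertices moved; the topology is untouched."""
    if node.tri_ids:                           # leaf: bound its triangles
        pts = [v for t in node.tri_ids for v in triangles[t]]
        node.box = AABB.of_points(pts)
    else:                                      # interior: union of children
        refit(node.left, triangles)
        refit(node.right, triangles)
        node.box = AABB.union(node.left.box, node.right.box)
    return node.box

    The asynchronous rebuild described in the paper complements this step by periodically reconstructing the whole tree, concurrently with rendering, when refitting alone has degraded its quality too much.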

    Segue: Overviewing Evolution Patterns of Egocentric Networks by Interactive Construction of Spatial Layouts

    Getting the overall picture of how a large number of ego-networks evolve is a common yet challenging task. Existing techniques often require analysts to inspect the evolution patterns of ego-networks one after another. In this study, we explore an approach that allows analysts to interactively create spatial layouts in which each dot is a dynamic ego-network. These spatial layouts provide overviews of the evolution of ego-networks, thereby revealing global patterns such as trends, clusters and outliers. To let analysts interactively construct interpretable spatial layouts, we propose a data transformation pipeline with which analysts can adjust the spatial layouts and convert dynamic ego-networks into event sequences to aid interpretation of the spatial positions. Based on this transformation pipeline, we developed Segue, a visual analysis system that supports thorough exploration of the evolution patterns of ego-networks. Through two usage scenarios, we demonstrate how analysts can gain insights into the overall evolution patterns of a large collection of ego-networks by interactively creating different spatial layouts. Comment: Published at the IEEE Conference on Visual Analytics Science and Technology (IEEE VAST 2018).
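    A minimal sketch of the core idea of placing each dynamic ego-network as a single dot in an overview layout: summarize every network as a feature vector of per-timestep measures and project the collection to 2D. The choice of features and the use of scikit-learn's MDS are assumptions for illustration; Segue's pipeline is interactive and converts networks into event sequences rather than fixed features.

# Hypothetical sketch: summarize each dynamic ego-network as a feature
# vector of per-timestep measures, then project all networks to 2D so that
# each dot in the layout is one ego-network.
import numpy as np
from sklearn.manifold import MDS   # assumed projection; Segue's layouts are user-constructed

def features(snapshots):
    """snapshots: list of edge lists, one per timestep, for one ego-network."""
    sizes = [len({u for e in es for u in e}) for es in snapshots]   # nodes per timestep
    growth = [b - a for a, b in zip(sizes, sizes[1:])]              # per-step change
    return [np.mean(sizes), np.max(sizes), np.mean(growth or [0]), np.std(sizes)]

def layout(ego_networks):
    """ego_networks: dict name -> list of per-timestep edge lists."""
    names = list(ego_networks)
    X = np.array([features(ego_networks[n]) for n in names])
    pos = MDS(n_components=2, random_state=0).fit_transform(X)
    return dict(zip(names, pos))   # name -> (x, y) position in the overview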

    Inviwo -- A Visualization System with Usage Abstraction Levels

    The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for instance by providing a data flow network editor. Unfortunately, these abstractions result in several issues that need to be circumvented through an abstraction-centered system design. Often, a high level of abstraction hides low-level details, which makes it difficult to directly access the underlying computing platform, even though such access would be important to achieve optimal performance. Therefore, we propose a layer structure developed for modern and sustainable visualization systems that allows developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with various levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been realized, as an example, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise during the realization of such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, as well as layer-independent development supported by cross-layer documentation and debugging capabilities.
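    The data flow network editor mentioned above is the highest usage abstraction level such systems expose. The toy sketch below illustrates that abstraction only: processors with declared inputs, connected into a network and evaluated in dependency order. The names and the pull-based evaluation are illustrative assumptions, not Inviwo's C++ API.

# Toy data-flow network, illustrating the kind of abstraction a network
# editor exposes: processors declare inputs, the network evaluates them
# in dependency order (pull-based).

class Processor:
    def __init__(self, name, inputs, fn):
        self.name, self.inputs, self.fn = name, inputs, fn

class Network:
    def __init__(self):
        self.processors = {}     # name -> Processor

    def add(self, name, inputs, fn):
        self.processors[name] = Processor(name, inputs, fn)

    def evaluate(self, name, cache=None):
        """Compute upstream processors first, memoizing intermediate results."""
        cache = {} if cache is None else cache
        if name not in cache:
            p = self.processors[name]
            args = [self.evaluate(i, cache) for i in p.inputs]
            cache[name] = p.fn(*args)
        return cache[name]

# Example pipeline: source -> filter -> renderer
net = Network()
net.add("source",   [],         lambda: list(range(10)))
net.add("filter",   ["source"], lambda xs: [x for x in xs if x % 2 == 0])
net.add("renderer", ["filter"], lambda xs: "image(" + ",".join(map(str, xs)) + ")")
print(net.evaluate("renderer"))   # image(0,2,4,6,8)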

    Data Portraits and Intermediary Topics: Encouraging Exploration of Politically Diverse Profiles

    In micro-blogging platforms, people connect and interact with others. However, due to cognitive biases, they tend to interact with like-minded people and read agreeable information only. Many efforts to make people connect with those who think differently have not worked well. In this paper, we hypothesize, first, that previous approaches have not worked because they have been direct: they have tried to explicitly connect people with those holding opposing views on sensitive issues. Second, we hypothesize that neither recommendation nor presentation of information by itself is enough to encourage behavioral change. We propose a platform that mixes a recommender algorithm and a visualization-based user interface to explore recommendations. It recommends politically diverse profiles in terms of latent-topic distance, and displays those recommendations in a visual representation of each user's personal content. We performed an "in the wild" evaluation of this platform and found that people explored more recommendations when using a biased algorithm instead of ours. In line with our hypothesis, we also found that the mixture of our recommender algorithm and our user interface allowed politically interested users to exhibit an unbiased exploration of the recommended profiles. Finally, our results contribute insights on two aspects: first, which individual differences are important when designing platforms aimed at behavioral change; and second, which algorithms and user interfaces should be mixed to help users avoid the cognitive mechanisms that lead to biased behavior. Comment: 12 pages, 7 figures. To be presented at ACM Intelligent User Interfaces 201
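    A minimal sketch of the intermediary-distance idea described above, under assumed representations: each user is a latent-topic distribution (for example from a topic model), and recommended profiles are drawn from a middle band of topic distances rather than from the nearest neighbors. The distance measure and the band selection are illustrative, not the paper's algorithm.

# Minimal sketch of "intermediary" recommendation: given latent-topic
# distributions per user, recommend profiles that are neither the closest
# nor the farthest in topic space. Jensen-Shannon distance and the
# middle-band selection are assumptions for illustration.
import numpy as np
from scipy.spatial.distance import jensenshannon

def recommend_intermediary(user_topics, candidates, k=5):
    """user_topics: 1-D topic distribution; candidates: dict name -> distribution."""
    dists = {name: jensenshannon(user_topics, dist)
             for name, dist in candidates.items()}
    ranked = sorted(dists, key=dists.get)       # nearest ... farthest
    mid = len(ranked) // 2
    start = max(0, mid - k // 2)
    return ranked[start:start + k]              # a band around the middle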

    Development of a handheld fiber-optic probe-based Raman imaging instrumentation: Raman Chemlighter

    Raman systems based on handheld fiber-optic probes offer advantages in terms of smaller size and easier access to the measurement site, which is favorable for biomedical and clinical applications in complex environments. However, such probes share several drawbacks: (1) the fixed working distance requires the user to hold the probe at a specific distance to acquire a strong Raman signal; (2) single-point measurement prevents mapping or scanning procedures; and (3) there is no real-time data processing and no straightforward co-registration method to link the Raman information with the respective measurement position. This thesis proposes and experimentally demonstrates several approaches to overcome these drawbacks. A handheld fiber-optic Raman probe with an autofocus unit addresses the limitations of fixed-focus lenses by using a liquid lens as the objective, allowing dynamic adjustment of the probe's focal length. Computer-vision-based positional tracking of the laser spot in brightfield images co-registers the Raman spectroscopic measurements with their spatial locations, enabling fast recording of a Raman image from a large tissue sample. The visualization of the Raman image has been extended to augmented and mixed reality and combined with a 3D reconstruction method and projector-based visualization to offer an intuitive and easily understandable presentation of the Raman image. All these advances are substantial and highly beneficial for further driving the clinical translation of Raman spectroscopy as potential image-guided instrumentation.
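    The co-registration step pairs each Raman acquisition with the position of the laser spot detected in a brightfield frame. Below is a minimal sketch, assuming the spot can be located as the near-saturated blob in the image; the thesis uses a more robust computer-vision tracking pipeline.

# Minimal sketch of co-registering Raman acquisitions with position: locate
# the laser spot as the brightest region of the brightfield frame and store
# (row, col, spectrum). The thresholding heuristic is an assumption.
import numpy as np

def laser_spot_position(frame):
    """frame: 2-D grayscale image as a NumPy array; returns (row, col) of the spot."""
    thresh = frame.max() * 0.95                 # near-saturated pixels only
    ys, xs = np.nonzero(frame >= thresh)
    return float(ys.mean()), float(xs.mean())   # centroid of the bright blob

def acquire_raman_image(frames, spectra):
    """Pair each brightfield frame with the spectrum acquired at that moment."""
    return [(*laser_spot_position(f), s) for f, s in zip(frames, spectra)]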

    Large High Resolution Displays for Co-Located Collaborative Intelligence Analysis

    Large, high-resolution vertical displays have the potential to increase the accuracy of collaborative sensemaking, given correctly designed visual analytics tools. From an exploratory user study using a fictional intelligence analysis task, we investigated how users interact with the display to construct spatial schemas and externalize information, as well as how they establish shared and private territories. We investigated the spatial strategies of users partitioned by the type of tool used (document- or entity-centric). We classified the types of territorial behavior exhibited in terms of how the users interacted with the display (integrated or independent workspaces). Next, we examined how territorial behavior impacted the common ground between the pairs of users. Finally, we recommend design guidelines for building co-located collaborative visual analytics tools specifically for use on large, high-resolution vertical displays.

    Analyzing library collections with starfield visualizations

    This paper presents a qualitative and formative study of the uses of a starfield-based visualization interface for the analysis of library collections. The evaluation process produced feedback that suggests ways to significantly improve starfield interfaces and the interaction process, improving their learnability and usability. The study also gave us a clear indication of additional potential uses of starfield visualizations that can be exploited through further functionality and interface development. We report on the resulting implications for the design and use of starfield visualizations, which will impact their graphical interface features, their use for managing data quality, and their potential for various forms of visual data mining. Although the current implementation and analysis focus on the collection of a physical library, the most important contributions of our work will be in digital libraries, in which the volume, complexity and dynamism of collections are increasing dramatically and tools are needed for visualization and analysis.
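    At its core, a starfield display is a zoomable, filterable scatterplot in which every collection item is one dot. The hypothetical sketch below draws such a static view for library records, using assumed attributes (publication year versus circulation count) and an assumed record schema rather than the ones examined in the study.

# Hypothetical sketch of a static starfield view of a library collection:
# every record is a dot, positioned by two attributes.
import matplotlib.pyplot as plt

def starfield(records):
    """records: list of dicts with 'year' and 'checkouts' keys (assumed schema)."""
    years = [r["year"] for r in records]
    counts = [r["checkouts"] for r in records]
    plt.scatter(years, counts, s=4, alpha=0.5)
    plt.xlabel("Publication year")
    plt.ylabel("Circulation count")
    plt.title("Starfield view of the collection")
    plt.show()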

    Obvious: a meta-toolkit to encapsulate information visualization toolkits. One toolkit to bind them all

    This article describes "Obvious": a meta-toolkit that abstracts and encapsulates information visualization toolkits implemented in the Java language. It aims to unify their use and to postpone the choice of which concrete toolkit(s) to use until later in the development of visual analytics applications. We also report on the lessons we have learned when wrapping popular toolkits with Obvious, namely Prefuse, the InfoVis Toolkit, parts of Improvise, JUNG, and other data management libraries. We show several examples of the use of Obvious and of how the different toolkits can be combined, for instance by sharing their data models. We also show how Weka and RapidMiner, two popular machine-learning toolkits, have been wrapped with Obvious and can be used directly with all the other wrapped toolkits. We expect Obvious to start a co-evolution process: Obvious is meant to evolve as more components of information visualization systems become consensual, and it is designed to help information visualization systems adhere to best practices, providing a higher level of interoperability and leveraging the domain of visual analytics.
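    Obvious itself is a Java meta-toolkit; the sketch below only illustrates the underlying adapter idea in Python: components are written against a shared table abstraction, and each concrete toolkit is wrapped to that abstraction so data can be exchanged between backends. All class and method names are illustrative, not Obvious's API.

# Illustrative adapter pattern: a shared table abstraction plus one wrapper
# per concrete backend, so that components written against the abstraction
# work regardless of which toolkit holds the data.

class ObviousTable:
    """Shared, minimal tabular interface (illustrative)."""
    def row_count(self): raise NotImplementedError
    def get(self, row, column): raise NotImplementedError

class ListOfDictsTable(ObviousTable):
    """Wrapper for a plain Python structure, standing in for a toolkit's table."""
    def __init__(self, rows):
        self.rows = rows
    def row_count(self):
        return len(self.rows)
    def get(self, row, column):
        return self.rows[row][column]

def summarize(table: ObviousTable, column):
    """A component that works with any wrapped backend."""
    values = [table.get(i, column) for i in range(table.row_count())]
    return min(values), max(values)

table = ListOfDictsTable([{"degree": 3}, {"degree": 7}, {"degree": 5}])
print(summarize(table, "degree"))   # (3, 7)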