
    Understanding Next-Generation VR: Classifying Commodity Clusters for Immersive Virtual Reality

    Commodity clusters offer the ability to deliver higher-performance computer graphics at lower prices than traditional graphics supercomputers. Immersive virtual reality systems demand notoriously high computational resources to deliver adequate real-time graphics, which has led to the emergence of commodity clusters for immersive virtual reality. Such clusters provide the required graphics performance by combining the power of several computers to meet the demands of real-time, interactive, immersive computer graphics. However, commodity cluster-based virtual reality is still in the early stages of development; the field is currently ad hoc in nature and lacks order. There is no accepted means for comparing approaches, and implementers are left with instinct or trial and error when selecting one. This paper provides a classification system that facilitates understanding not only of the nature of different clustering systems but also of the interrelations between them. The system is built on a new model for generalized computer graphics applications, based on the flow of data through a sequence of operations over the entire context of the application. Prior models and classification systems have been too narrow in context and application, whereas the system described here provides a unified means of comparing works within the field.
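    The dataflow model mentioned in this abstract can be pictured as data passing through a sequence of operations, each running somewhere in the cluster. The toy sketch below is our own illustration under assumed stage names and node assignments, not the paper's actual taxonomy; it only shows the kind of information such a classification would compare, namely where the data stream crosses node boundaries.

```python
# Toy illustration only: a graphics application modelled as data flowing
# through a sequence of stages, each assigned to a cluster node.
# Stage names and the split points are assumptions for illustration,
# not the classification proposed in the paper.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str   # e.g. "application", "geometry", "rasterization"
    node: str   # cluster node that executes this stage

def node_crossings(pipeline):
    """List the stage boundaries at which data must move between nodes."""
    return [f"{a.name} -> {b.name}"
            for a, b in zip(pipeline, pipeline[1:])
            if a.node != b.node]

# Hypothetical layout: application logic on a head node, geometry and
# rasterization on a render node, final image shown on a display wall.
pipeline = [
    Stage("application", "head"),
    Stage("geometry", "render-0"),
    Stage("rasterization", "render-0"),
    Stage("display", "wall"),
]
print(node_crossings(pipeline))  # ['application -> geometry', 'rasterization -> display']
```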

    Cosmic cookery : making a stereoscopic 3D animated movie.

    This paper describes our experience making a short stereoscopic movie visualizing the development of structure in the universe during the 13.7 billion years from the Big Bang to the present day. Aimed at a general audience for the Royal Society's 2005 Summer Science Exhibition, the movie illustrates how the latest cosmological theories based on dark matter and dark energy are capable of producing structures as complex as spiral galaxies, and it allows the viewer to directly compare observations from the real universe with theoretical results. 3D is an inherent feature of the cosmology data sets, and stereoscopic visualization provides a natural way to present the images to the viewer, in addition to allowing researchers to visualize these vast, complex data sets. The presentation of the movie used passive, linearly polarized projection onto a 2 m wide screen, but it was also required to play back on a Sharp RD3D display and in anaglyph projection at venues without dedicated stereoscopic display equipment. Additionally, lenticular prints were made from key images in the movie. We discuss the following technical challenges of the stereoscopic production process: 1) controlling the depth presentation, 2) editing the stereoscopic sequences, and 3) generating compressed movies in display-specific formats. We conclude that the generation of high-quality stereoscopic movie content using desktop tools and equipment is feasible. It requires careful quality control and manual intervention, but we believe these overheads are worthwhile when presenting inherently 3D data, as the result is significantly increased impact and better understanding of complex 3D scenes.
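    Of the presentation formats mentioned above, anaglyph is the simplest to reproduce with desktop tools. The sketch below is a generic example with placeholder file names, not an asset or tool from the movie described; it combines a left/right stereo pair into a red-cyan anaglyph using NumPy and Pillow.

```python
# Minimal red-cyan anaglyph from a stereo pair (illustrative sketch only;
# the file names are placeholders).
import numpy as np
from PIL import Image

left = np.asarray(Image.open("left_eye.png").convert("RGB"))
right = np.asarray(Image.open("right_eye.png").convert("RGB"))

anaglyph = np.zeros_like(left)
anaglyph[..., 0] = left[..., 0]      # red channel taken from the left eye
anaglyph[..., 1:] = right[..., 1:]   # green and blue taken from the right eye

Image.fromarray(anaglyph).save("anaglyph.png")
```

    Viewed through red-cyan glasses, each eye then sees predominantly its own image, which is why this format works on ordinary displays without dedicated stereoscopic equipment.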

    Parallel Rendering and Large Data Visualization

    We are living in the big data age: an ever increasing amount of data is being produced through data acquisition and computer simulations. While large-scale analysis and simulations have received significant attention for cloud and high-performance computing, software to efficiently visualise large data sets is struggling to keep up. Visualization has proven to be an efficient tool for understanding data; in particular, visual analysis is a powerful way to gain intuitive insight into the spatial structure and relations of 3D data sets. Large-scale visualization setups are becoming ever more affordable, and high-resolution tiled display walls are within reach even for small institutions. Virtual reality has arrived in the consumer space, making it accessible to a large audience. This thesis addresses these developments by advancing the field of parallel rendering. We formalise the design of system software for large data visualization through parallel rendering, provide a reference implementation of a parallel rendering framework, introduce novel algorithms to accelerate the rendering of large amounts of data, and validate this research and development with new applications for large data visualization. Applications built using our framework enable domain scientists and large data engineers to better extract meaning from their data, making it feasible to explore more data and enabling the use of high-fidelity visualization installations to see more detail of the data. Comment: PhD thesis.
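    Parallel rendering systems of the kind this thesis addresses typically distribute the data across nodes and then combine the partial images. The sketch below shows sort-last depth compositing, one common approach; it is a generic NumPy illustration with random placeholder buffers, not the decomposition modes or API of the thesis's own framework.

```python
# Sort-last compositing sketch: each node renders its share of the data into
# a colour buffer and a depth buffer; the final image keeps, per pixel, the
# fragment closest to the camera. Buffers here are random placeholders.
import numpy as np

def composite(colors, depths):
    """colors: (n_nodes, H, W, 3), depths: (n_nodes, H, W) -> (H, W, 3)."""
    nearest = np.argmin(depths, axis=0)              # winning node per pixel
    h, w = nearest.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return colors[nearest, rows, cols]

# Two hypothetical render nodes producing partial images of a 4x4 viewport.
rng = np.random.default_rng(0)
colors = rng.random((2, 4, 4, 3))
depths = rng.random((2, 4, 4))
image = composite(colors, depths)
print(image.shape)  # (4, 4, 3)
```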

    Mobile graphics: SIGGRAPH Asia 2017 course

    Peer reviewed. Postprint (published version).

    Constructive 3D Visualization techniques on Mobile platform- Empirical Analysis

    3D visualization on mobile devices follows two approaches: local and remote. Advances in mobile hardware make it possible to handle and visualize some complex data locally, but managing large real-world entities locally on mobile devices remains challenging. Remote visualization, in which the data comes from a server, therefore plays a vital role for 3D visualization on the mobile platform. The remote approach comprises various techniques, and a critical analysis of these techniques is the focus of this paper, with particular attention to network aspects.
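    In the remote approach, the mobile client typically ships its view parameters to a server and receives rendered frames in return. The sketch below uses a hypothetical endpoint and parameter names, not any specific system analysed in the paper; it only illustrates that round trip and the network dependency it introduces.

```python
# Remote 3D visualization round trip, sketched with an assumed endpoint and
# assumed parameter names: the client sends camera state, the server renders
# the scene and returns an encoded image.
import requests

RENDER_URL = "http://example.com/render"   # placeholder URL, not a real service

def fetch_frame(camera_position, camera_target, width=640, height=360):
    """Ask the remote renderer for one frame of the current view."""
    payload = {
        "position": camera_position,   # e.g. [x, y, z]
        "target": camera_target,
        "width": width,
        "height": height,
    }
    response = requests.post(RENDER_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.content            # e.g. JPEG bytes to decode and display

# Usage (requires a running server at RENDER_URL):
# frame = fetch_frame([0.0, 1.5, 4.0], [0.0, 0.0, 0.0])
```

    Every camera change costs one such request, which is why latency and bandwidth are the decisive network aspects for the remote approach.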

    The Comparison of three 3D graphics raster processors and the design of another

    There are a number of 3D graphics accelerator architectures on the market today. One of the largest issues in the design of a 3D accelerator is affordability for the home user while still delivering good performance. Three such architectures were analyzed: the Heresy architecture defined by Chiueh [2], the Talisman architecture defined by Torborg [7], and the Tayra architecture's specification by White [9]. Portions of these three architectures were used to create a new architecture that takes advantage of as many of their features as possible. The advantages of chunking are analyzed, along with those of a single-cycle z-buffering algorithm. It was found that Fast Phong Shading is not suitable for implementation in this pipeline, and that the clipping algorithm should be eliminated in favor of a scissoring algorithm.
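    The single-cycle z-buffering mentioned above is a hardware optimisation of the standard per-fragment depth test. The software sketch below is a generic reference model of that test, not the proposed pipeline; it shows the compare-and-update that the hardware aims to complete in one cycle.

```python
# Reference model of the per-fragment depth test performed by z-buffering
# hardware; the architectures above aim to do this compare-and-update in a
# single cycle. Generic sketch, not the proposed design.
import numpy as np

H, W = 480, 640
depth_buffer = np.full((H, W), np.inf)           # initialised to the far plane
color_buffer = np.zeros((H, W, 3), dtype=np.uint8)

def write_fragment(x, y, z, color):
    """Keep the fragment only if it is nearer than what is already stored."""
    if z < depth_buffer[y, x]:
        depth_buffer[y, x] = z
        color_buffer[y, x] = color

write_fragment(100, 50, 0.25, (255, 0, 0))   # accepted: nearer than the far plane
write_fragment(100, 50, 0.60, (0, 255, 0))   # rejected: behind the stored depth
```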