153 research outputs found

    Volume visualization of time-varying data using parallel, multiresolution and adaptive-resolution techniques

    This paper presents a parallel rendering approach that allows high-quality visualization of large time-varying volume datasets. Multiresolution and adaptive-resolution techniques are also incorporated to improve the efficiency of the rendering. Three basic steps are needed to implement this kind of application. First, the task is divided through decomposition of the data; the decomposition can be temporal, spatial, or a mix of both. After the data has been divided, each portion is rendered by a separate processor to create sub-images or frames. Finally, these sub-images or frames are assembled into a final image or animation. Several experiments were performed to show that this approach indeed saves time when a reasonable number of processors is used. We also conclude that the optimal number of processors depends on the size of the dataset used.
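    The three steps above (decompose, render in parallel, assemble) can be sketched in a few lines. This is a minimal illustration, not the paper's renderer: it assumes a maximum-intensity projection along the view axis, whose compositing step is an order-independent per-pixel max.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def render_brick(brick):
    # Maximum-intensity projection along the view (z) axis:
    # each worker turns its sub-volume into a partial 2D image.
    return brick.max(axis=0)

def parallel_mip(volume, n_workers=4):
    # Step 1: spatial decomposition of the volume into bricks along z.
    bricks = np.array_split(volume, n_workers, axis=0)
    # Step 2: render each brick on a separate worker.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(render_brick, bricks))
    # Step 3: composite the partial images into the final image
    # (MIP compositing is a per-pixel max, so ordering does not matter).
    return np.maximum.reduce(partials)

volume = np.random.rand(64, 32, 32)
image = parallel_mip(volume)
```

With a temporal decomposition instead, each worker would render whole frames of the animation and step 3 would simply concatenate them.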

    Design of a Scenario-Based Immersive Experience Room

    Klopfenstein, Cuno Lorenz

    Scalable Real-Time Rendering for Extremely Complex 3D Environments Using Multiple GPUs

    In 3D visualization, real-time rendering of high-quality meshes in complex 3D environments is still one of the major challenges in computer graphics. New data acquisition techniques like 3D modeling and scanning have drastically increased the requirement for more complex models and the demand for higher display resolutions in recent years. Most existing acceleration techniques using a single GPU for rendering suffer from the limited GPU memory budget, time-consuming sequential execution, and the finite display resolution. Recently, people have started building commodity workstations with multiple GPUs and multiple displays. As a result, more GPU memory is available across a distributed cluster of GPUs, more computational power is provided through the combination of multiple GPUs, and a higher display resolution can be achieved by connecting each GPU to a display monitor (resulting in a tiled large-display configuration). However, using a multi-GPU workstation may not always give the desired rendering performance due to imbalanced rendering workloads among GPUs and overheads caused by inter-GPU communication. In this dissertation, I contribute a multi-GPU multi-display parallel rendering approach for complex 3D environments. The approach has the capability to support high-performance and high-quality rendering of static and dynamic 3D environments. A novel parallel load-balancing algorithm is developed based on a screen-partitioning strategy to dynamically balance the number of vertices and triangles rendered by each GPU. The overhead of inter-GPU communication is minimized by transferring only a small number of image pixels rather than chunks of 3D primitives with a novel frame exchanging algorithm. State-of-the-art parallel mesh simplification and GPU out-of-core techniques are integrated into the multi-GPU multi-display system to accelerate the rendering process.
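    The screen-partitioning idea can be illustrated with a toy rebalancing step. The sketch below is a deliberate simplification, not the dissertation's algorithm (the function name and the per-column cost model are hypothetical): it splits the screen into vertical strips so each GPU receives a roughly equal share of the projected primitive cost.

```python
import numpy as np

def balance_strips(column_cost, n_gpus):
    """Split screen columns into n_gpus contiguous vertical strips so each
    strip carries roughly the same rendering cost (e.g. triangle count)."""
    cum = np.cumsum(column_cost, dtype=float)
    total = cum[-1]
    # Place strip boundaries where the cumulative cost crosses k/n_gpus of
    # the total, mirroring a dynamic screen-partition rebalance per frame.
    targets = total * np.arange(1, n_gpus) / n_gpus
    cuts = np.searchsorted(cum, targets)
    return np.concatenate(([0], cuts, [len(column_cost)]))

# A skewed workload: most triangles project onto the left of the screen.
cost = np.array([90, 80, 10, 5, 5, 5, 3, 2])
bounds = balance_strips(cost, 2)
# boundaries [0, 1, 8]: GPU 0 renders cost 90, GPU 1 renders cost 110
```

Recomputing the boundaries each frame from the previous frame's per-strip primitive counts is one plausible way such a balancer adapts to a moving camera.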

    Implementation of a sort-last volume rendering using 3D textures

    This thesis extends the Aura graphics library (GPL-licensed, developed by the Vrije Universiteit Amsterdam) with the components needed to perform distributed volume rendering following the sort-last paradigm and using 3D textures. The program was tested on a cluster of 9 PCs.

    A Survey of Software Frameworks for Cluster-Based Large High-Resolution Displays


    Adaptive remote visualization system with optimized network performance for large scale scientific data

    This dissertation discusses algorithmic and implementation aspects of an automatically configurable remote visualization system, which optimally decomposes and adaptively maps the visualization pipeline to a wide-area network. The first node typically serves as a data server that generates or stores raw data sets, and a remote client resides on the last node, equipped with a display device ranging from a personal desktop to a powerwall. Intermediate nodes can be located anywhere on the network and often include workstations, clusters, or custom rendering engines. We employ a regression model-based network daemon to estimate the effective bandwidth and minimal delay of a transport path using active traffic measurement. Data processing time is predicted for various visualization algorithms using block partitioning and statistical techniques. Based on the link measurements, node characteristics, and module properties, we strategically organize visualization pipeline modules such as filtering, geometry generation, rendering, and display into groups, and dynamically assign them to appropriate network nodes to achieve minimal total delay for post-processing or maximal frame rate for streaming applications. We propose polynomial-time algorithms using the dynamic programming method to compute the optimal solutions for the problems of pipeline decomposition and network mapping under different constraints. A parallel remote visualization system, which comprises a logical group of autonomous nodes that cooperate to enable sharing, selection, and aggregation of various types of resources distributed over a network, is implemented and deployed at geographically distributed nodes for experimental testing. Our system is capable of handling a complete spectrum of remote visualization tasks, including post-processing, computational steering, and wireless sensor network monitoring.
Visualization functionalities such as isosurface extraction, ray casting, streamlines, and line integral convolution (LIC) are supported in our system. The proposed decomposition and mapping scheme is generic and can be applied to other network-oriented computing applications whose computing components form a linear arrangement.
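    The pipeline-mapping optimization lends itself to a compact dynamic-programming sketch. The model below is a simplification of the dissertation's formulation, with illustrative names and costs: modules must be assigned to the ordered chain of network nodes monotonically, total delay is compute time plus transfer time over every link the data crosses, and the recurrence minimizes over the previous module's placement.

```python
def min_total_delay(compute, out_size, bandwidth):
    """compute[i][j]: time of module i on node j (both in pipeline order).
    out_size[i]: data volume leaving module i.
    bandwidth[j]: effective bandwidth of the link from node j to node j+1.
    Assumption of this sketch: module 0 (the data source) sits on node 0."""
    n_mod, n_node = len(compute), len(compute[0])
    INF = float("inf")
    # dp[j]: minimal total delay with the latest module placed on node j.
    dp = [compute[0][0]] + [INF] * (n_node - 1)
    for i in range(1, n_mod):
        ndp = [INF] * n_node
        for j in range(n_node):          # candidate node for module i
            for jp in range(j + 1):      # node of module i-1 (monotone map)
                if dp[jp] == INF:
                    continue
                # The data output of module i-1 crosses links jp .. j-1.
                xfer = sum(out_size[i - 1] / bandwidth[l] for l in range(jp, j))
                ndp[j] = min(ndp[j], dp[jp] + xfer + compute[i][j])
        dp = ndp
    return min(dp)

# Three modules (filter, render, display) over three chained nodes.
compute = [[1, 9, 9], [5, 1, 9], [9, 9, 1]]
out_size = [10, 2]
bandwidth = [1, 2]
best = min_total_delay(compute, out_size, bandwidth)  # -> 10.0
```

Here the optimum keeps the first two modules on node 0 (sending the cheap 2-unit rendered output downstream) rather than shipping the bulky 10-unit filtered data, which is exactly the trade-off the decomposition scheme exploits.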

    Computer Generation of Integral Images using Interpolative Shading Techniques

    Research to produce artificial 3D images that duplicate human stereovision has been ongoing for hundreds of years. What has taken millions of years to evolve in humans is proving elusive even for present-day technological advancements. The difficulties are compounded when real-time generation is contemplated. The problem is one of depth. When perceiving the world around us, it has been shown that the sense of depth is the result of many different factors. These can be described as monocular and binocular. Monocular depth cues include overlapping or occlusion, shading and shadows, texture, etc. Another monocular cue (and binocular to some extent) is accommodation, where the focal length of the crystalline lens is adjusted to view an image. The important binocular cues are convergence and parallax. Convergence allows the observer to judge distance by the difference in angle between the viewing axes of the left and right eyes when both are focussing on a point. Parallax relates to the fact that each eye sees a slightly shifted view of the image. If a system can be produced that requires the observer to use all of these cues, as when viewing the real world, then the transition to and from viewing a 3D display will be seamless. However, for many of the 3D imaging techniques towards which current work is primarily directed, this is not the case, and this raises a serious issue of viewer comfort. Researchers worldwide, in university and industry, are pursuing their approaches to the development of 3D systems, and physiological disturbances that can cause nausea in some observers will not be acceptable. The ideal 3D system would require, as a minimum, accurate depth reproduction, multiviewer capability, and all-round seamless viewing. The absence of any need to wear stereoscopic or polarising glasses would be ideal, and lack of viewer fatigue is essential.
Finally, whatever the use of the system, be it CAD, medical, scientific visualisation, remote inspection, etc. on the one hand, or consumer markets such as 3D video games and 3DTV on the other, the system has to be relatively inexpensive. Integral photography is a ‘real camera’ system that attempts to comply with this ideal; it was invented in 1908 but, for technological reasons, was not then capable of being a useful autostereoscopic system. However, more recently, along with advances in technology, it has become a more attractive proposition for those interested in developing a suitable system for 3DTV. The fast computer generation of integral images is the subject of this thesis, the adjective ‘fast’ being used to distinguish it from the much slower technique of ray tracing integral images. These two techniques are the standard in monoscopic computer graphics, whereby ray tracing generates photo-realistic images and the fast forward geometric approach, which uses interpolative shading techniques, is the method used for real-time generation. Before this work began it was not known whether it was possible to create volumetric integral images using a fast approach similar to that employed by standard computer graphics, but it soon became apparent that it would be successful and hence a valuable contribution in this area. Presented herein is a full description of the development of two derived methods for producing rendered integral image animations using interpolative shading. The main body of the work is the development of code to put these methods into practice, along with many observations and discoveries that the author made during this task. This work was supported by the Defence Evaluation and Research Agency (DERA), a contract (LAIRD) under the European Link/EPSRC photonics initiative, and DTI/EPSRC sponsorship within the PROMETHEUS project.

    Modeling, design and optimization of computer-generated holograms with binary phases

    The computer-generated hologram (CGH) has been demonstrated to play an important role, since its invention by Lohmann in the 1960s, in many applications such as wavefront engineering, structured illumination, and optical display. In this thesis, the modeling, design, and optimization of CGHs with binary phases are studied. We considered a practical projection imaging system with certain working specifications, e.g. a working distance of 40 cm, a depth of field of 10 cm, and a diffraction angle of 53 degrees for a 632 nm working wavelength, and then designed and optimized a binary-phase hologram by direct binary search for this imaging system. The hologram was fabricated by E-beam lithography. To achieve the required diffraction angle, we discussed the optical architecture of the holographic projection system. The designed CGH and the holographic projection system were validated experimentally by optical reconstruction.
Since the pixels eventually cluster to form polygonal apertures in the hologram, which can be seen clearly during the process of direct binary search, we proposed a new approach to directly design polygonal apertures based on a triangular layout in CGHs with a large number of pixels. The diffraction of an aperture was calculated by the analytical Abbe transform. The reconstructed image can be expressed as a coherent addition of diffraction patterns from all the straight edges of different orientations and lengths. A two-step optimization was developed, comprising a genetic algorithm with local search for encoding the binary phases of the apertures, followed by a direct search over the floating covertices of the elementary triangular apertures. We further proposed a quadrilateral aperture layout, which provides more degrees of freedom and can form more diverse polygonal apertures in holograms. A parallel genetic algorithm with local search was adopted to assign binary phases in the first step, and direct search was then used to optimize the locations of the covertices of the quadrilateral apertures in the second step. Three different schemes for the two-step algorithm were discussed to provide flexible ways to balance optimization performance against time cost.
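    Direct binary search, the optimizer used for the pixelated hologram, is simple to sketch: flip one pixel's phase at a time and keep the flip only if the reconstruction error drops. The fragment below is an illustrative toy under stated assumptions (far-field reconstruction via a plain FFT, a hypothetical square target pattern), not the fabricated design.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
target = np.zeros((N, N))
target[6:10, 6:10] = 1.0                      # desired far-field intensity

def recon_error(phase):
    # Far-field reconstruction of a binary-phase (+1/-1) hologram via FFT.
    intensity = np.abs(np.fft.fft2(phase) / N) ** 2
    return float(np.sum((intensity - target) ** 2))

phase = rng.choice([-1.0, 1.0], size=(N, N))  # random binary-phase start
err0 = err = recon_error(phase)
for _ in range(3):                            # a few full sweeps over the pixels
    for idx in np.ndindex(N, N):
        phase[idx] *= -1                      # trial flip: phase 0 <-> pi
        trial = recon_error(phase)
        if trial < err:
            err = trial                       # keep an improving flip
        else:
            phase[idx] *= -1                  # otherwise revert it
```

Because only improving flips are kept, the error is monotonically non-increasing, and it is this greedy pixel-by-pixel process that makes the clustering into polygonal apertures visible.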

    Toolkit support for interactive projected displays

    Interactive projected displays are an emerging class of computer interface with the potential to transform interactions with surfaces in physical environments. They distinguish themselves from other visual output technologies, for instance LCD screens, by overlaying content onto the physical world. They can appear, disappear, and reconfigure themselves to suit a range of application scenarios, physical settings, and user needs. These properties have attracted significant academic research interest, yet the surrounding technical challenges and the lack of application developer tools limit adoption to those with advanced technical skills. These barriers prevent people with different expertise from engaging, iteratively evaluating deployments, and thus building a strong community understanding of the technology in context. We argue that creating and deploying interactive projected displays should take hours, not weeks. This thesis addresses these difficulties through the construction of a toolkit that effectively facilitates user innovation with interactive projected displays. The toolkit’s design is informed by a review of related work and a series of in-depth research probes that study different application scenarios. These findings result in toolkit requirements that are then integrated into a cohesive design and implementation. This implementation is evaluated to determine its strengths, limitations, and effectiveness at facilitating the development of applied interactive projected displays. The toolkit is released to support users in the real world and its adoption studied. The findings describe a range of real application scenarios and case studies, and increase academic understanding of applied interactive projected display toolkits. By significantly lowering the complexity, time, and skills required to develop and deploy interactive projected displays, a diverse community of over 2,000 individual users have applied the toolkit to their own projects.
Widespread adoption beyond the computer-science academic community will continue to stimulate an exciting new wave of interactive projected display applications that transfer computing functionality into physical spaces.

    Visualization challenges in distributed heterogeneous computing environments

    Large-scale computing environments are important for many aspects of modern life. They drive scientific research in biology and physics, facilitate industrial rapid prototyping, and provide information relevant to everyday life such as weather forecasts. Their computational power grows steadily to provide faster response times and to satisfy the demand for higher complexity in simulation models as well as more details and higher resolutions in visualizations. For some years now, the prevailing trend for these large systems has been the utilization of additional processors, like graphics processing units. These heterogeneous systems, which employ more than one kind of processor, are becoming increasingly widespread since they provide many benefits, like higher performance or increased energy efficiency. At the same time, they are more challenging and complex to use because the various processing units differ in their architecture and programming model. This heterogeneity is often addressed by abstraction, but existing approaches often entail restrictions or are not universally applicable. As these systems also grow in size and complexity, they become more prone to errors and failures. Therefore, developers and users become more interested in resilience besides traditional aspects, like performance and usability. While fault tolerance is well researched in general, it is mostly dismissed in distributed visualization or not adapted to its special requirements. Finally, analysis and tuning of these systems and their software are required to assess their status and to improve their performance. The available tools and methods to capture and evaluate the necessary information are often isolated from the context or not designed for interactive use cases. These problems are amplified in heterogeneous computing environments, since more data is available and required for the analysis.
Additionally, real-time feedback is required in distributed visualization to correlate user interactions with performance characteristics and to decide on the validity and correctness of the data and its visualization. This thesis presents contributions to all of these aspects. Two approaches to abstraction are explored for general-purpose computing on graphics processing units and visualization in heterogeneous computing environments. The first approach hides details of different processing units and allows using them in a unified manner. The second approach employs per-pixel linked lists as a generic framework for compositing and simplifying order-independent transparency in distributed visualization. Traditional methods for fault tolerance in high-performance computing systems are discussed in the context of distributed visualization. On this basis, strategies for fault-tolerant distributed visualization are derived and organized in a taxonomy. Example implementations of these strategies, their trade-offs, and the resulting implications are discussed. For analysis, local graph exploration and tuning of volume visualization are evaluated. Challenges in dense graphs, like visual clutter, ambiguity, and the inclusion of additional attributes, are tackled in node-link diagrams using a lens metaphor as well as supplementary views. An exploratory approach to performance analysis and tuning of parallel volume visualization on a large, high-resolution display is evaluated. This thesis takes a broader look at the issues of distributed visualization on large displays and heterogeneous computing environments for the first time. While the presented approaches all solve individual challenges and are successfully employed in this context, their joint utility forms a solid basis for future research in this young field.
In its entirety, this thesis presents building blocks for robust distributed visualization on current and future heterogeneous visualization environments.
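    The per-pixel linked-list approach mentioned above can be illustrated by its resolve pass. The sketch below is a CPU-side analogue under stated assumptions (the function is hypothetical, not the thesis code): fragments arrive per pixel in arbitrary order, as they would be appended to a GPU linked list, and are then sorted by depth and blended front-to-back.

```python
def composite_pixel(fragments, background=(0.0, 0.0, 0.0)):
    """fragments: unordered list of (depth, (r, g, b), alpha) for one pixel,
    as collected in a per-pixel linked list during rasterization."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    # Resolve pass: sort by depth, then blend front-to-back.
    for depth, rgb, alpha in sorted(fragments, key=lambda f: f[0]):
        for c in range(3):
            color[c] += transmittance * alpha * rgb[c]
        transmittance *= 1.0 - alpha
    # Whatever light is still transmitted comes from the background.
    for c in range(3):
        color[c] += transmittance * background[c]
    return tuple(color)

# Two overlapping translucent fragments arriving out of depth order:
frags = [(2.0, (0.0, 0.0, 1.0), 0.5),   # blue, farther
         (1.0, (1.0, 0.0, 0.0), 0.5)]   # red, nearer
result = composite_pixel(frags)  # -> (0.5, 0.0, 0.25)
```

For distributed compositing, exchanging such compact per-pixel fragment lists (or the resolved pixels) is what keeps inter-node traffic far below shipping the 3D primitives themselves.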
    • 
