4,313 research outputs found

    Visualization challenges in distributed heterogeneous computing environments

    Large-scale computing environments are important for many aspects of modern life. They drive scientific research in biology and physics, facilitate industrial rapid prototyping, and provide information relevant to everyday life, such as weather forecasts. Their computational power grows steadily to provide faster response times and to satisfy the demand for more complex simulation models as well as more detail and higher resolutions in visualizations. For some years now, the prevailing trend for these large systems has been the use of additional processors, such as graphics processing units. These heterogeneous systems, which employ more than one kind of processor, are becoming increasingly widespread since they provide many benefits, such as higher performance and increased energy efficiency. At the same time, they are more challenging and complex to use because the various processing units differ in their architecture and programming model. This heterogeneity is often addressed by abstraction, but existing approaches frequently entail restrictions or are not universally applicable. As these systems also grow in size and complexity, they become more prone to errors and failures. Developers and users are therefore becoming more interested in resilience in addition to traditional aspects such as performance and usability. While fault tolerance is well researched in general, it is largely neglected in distributed visualization or not adapted to its special requirements. Finally, analysis and tuning of these systems and their software are required to assess their status and to improve their performance. The available tools and methods to capture and evaluate the necessary information are often isolated from their context or not designed for interactive use cases. These problems are amplified in heterogeneous computing environments, since more data is available and required for the analysis. Additionally, real-time feedback is required in distributed visualization to correlate user interactions with performance characteristics and to decide on the validity and correctness of the data and its visualization. This thesis presents contributions to all of these aspects. Two approaches to abstraction are explored for general-purpose computing on graphics processing units and for visualization in heterogeneous computing environments. The first approach hides details of the different processing units and allows them to be used in a unified manner. The second approach employs per-pixel linked lists as a generic framework for compositing, simplifying order-independent transparency for distributed visualization. Traditional methods for fault tolerance in high-performance computing systems are discussed in the context of distributed visualization. On this basis, strategies for fault-tolerant distributed visualization are derived and organized in a taxonomy. Example implementations of these strategies, their trade-offs, and the resulting implications are discussed. For analysis, local graph exploration and the tuning of volume visualization are evaluated. Challenges in dense graphs, such as visual clutter, ambiguity, and the inclusion of additional attributes, are tackled in node-link diagrams using a lens metaphor as well as supplementary views. An exploratory approach for performance analysis and tuning of parallel volume visualization on a large, high-resolution display is evaluated. This thesis is the first to take a broader look at the issues of distributed visualization on large displays and in heterogeneous computing environments.
While the presented approaches each solve individual challenges and are successfully employed in this context, together they form a solid basis for future research in this young field. In its entirety, this thesis presents building blocks for robust distributed visualization on current and future heterogeneous visualization environments.
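    As a loose illustration of the fault-tolerance theme only (the concrete strategies and their taxonomy are developed in the thesis itself), the following Python sketch shows one conceivable pattern for a compositor that does not stall when a render node fails: partial results are gathered with a timeout, and whatever arrived is composited. All names, the timeout value, and the skip-on-failure policy are assumptions of this sketch.

```python
# Hedged illustration, not the thesis implementation: gather partial images
# from render nodes with a timeout, so a failed node degrades the frame
# instead of stalling it.
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def render_node(node_id):
    # Stand-in for a remote render call; node 2 "fails" by hanging.
    if node_id == 2:
        time.sleep(2.0)
    return f"partial image from node {node_id}"

def gather_frame(node_ids, timeout=0.5):
    partials, missing = [], []
    with ThreadPoolExecutor(max_workers=len(node_ids)) as pool:
        futures = {pool.submit(render_node, n): n for n in node_ids}
        for future, node in futures.items():
            try:
                partials.append(future.result(timeout=timeout))
            except TimeoutError:
                missing.append(node)  # skip the unresponsive node this frame
    return partials, missing

print(gather_frame([0, 1, 2, 3]))
```

A real system would additionally have to mark or re-request the missing image region, which is where the trade-offs between different strategies arise.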

    Sketchy rendering for information visualization

    We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These primitives allow higher-level graphical features such as bar charts, line charts, treemaps and node-link diagrams to be drawn in a sketchy style with a specified degree of sketchiness. The framework is designed to be easily integrated into existing visualization implementations with minimal programming modification or design effort. We show examples of use for statistical graphics, for conveying spatial imprecision, and for enhancing the aesthetic and narrative qualities of visualization. We evaluate user perception of sketchiness of areal features through a series of stimulus-response tests in order to assess users’ ability to place sketchiness on a ratio scale, and to estimate area. Results suggest that relative area judgment is compromised by sketchy rendering and that its influence depends on the shape being rendered. They show that degree of sketchiness may be judged on an ordinal scale but that this judgment varies strongly between individuals. We evaluate higher-level impacts of sketchiness through user testing of scenarios that encourage user engagement with data visualization and willingness to critique visualization design. Results suggest that where a visualization is clearly sketchy, engagement may be increased and that attitudes to participating in visualization annotation are more positive. The results of our work have implications for effective information visualization design that go beyond the traditional role of sketching as a tool for prototyping or its use for an indication of general uncertainty
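    The core idea of a sketchy drawing primitive can be conveyed with a small, self-contained sketch that is not the paper's Processing renderer: a straight line is replaced by a jittered polyline whose wobble amplitude is controlled by a sketchiness parameter. The function name, the jitter model and the 2% amplitude factor are choices made purely for this illustration.

```python
# Illustrative sketch (not the paper's renderer): approximating a "sketchy"
# line by jittering points perpendicular to the ideal line, with the jitter
# amplitude controlled by a sketchiness parameter.
import math
import random

def sketchy_line(x0, y0, x1, y1, sketchiness=1.0, segments=10, seed=None):
    rng = random.Random(seed)
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return [(x0, y0)]
    # Unit normal of the ideal line; jitter is applied along this direction.
    nx, ny = -dy / length, dx / length
    points = []
    for i in range(segments + 1):
        t = i / segments
        # Keep the endpoints fixed so the line still connects its targets.
        amplitude = sketchiness * 0.02 * length * math.sin(math.pi * t)
        offset = rng.uniform(-1.0, 1.0) * amplitude
        points.append((x0 + t * dx + offset * nx, y0 + t * dy + offset * ny))
    return points

# A higher sketchiness value produces a wobblier polyline between the same endpoints.
print(sketchy_line(0, 0, 100, 0, sketchiness=2.0, seed=42))
```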

    Visualization and inspection of the geometry of particle packings

    The aim of this dissertation is to find efficient techniques for visualizing and inspecting the geometry of particle packings. Simulations of such packings are used, e.g., in materials science to predict properties of granular materials. To better understand and supervise the behavior of these simulations, not only the particles themselves but also special regions formed by the particles, which can show the progress of the simulation and the spatial distribution of hot spots, should be visualized. This should be possible at a frame rate that allows interaction, even for large-scale packings with millions of particles. Moreover, since the simulation is conducted on the GPU, the visualization techniques should make full use of the data in GPU memory. To improve the performance of granular materials like concrete, considerable attention has been paid to the particle size distribution, which is the main determinant of the space filling rate and therefore affects two of the most important properties of the concrete: its structural robustness and its durability. Given the particle size distribution, the space filling rate can be determined by computer simulations, which in practice are often superior to analytical approaches due to the irregularities of particles and the wide range of their size distribution. One of the widely adopted simulation methods is collective rearrangement, in which particles are first placed at random positions inside a container; overlaps between particles are then resolved by pushing overlapping particles away from each other to fill empty space in the container. By cleverly adjusting the size of the container during the simulation, the collective rearrangement method can produce a fairly dense particle packing in the end. However, it is very hard to fine-tune or debug the whole simulation process without an interactive visualization tool. Starting from the well-established rasterization-based method for rendering spheres, this dissertation first provides new fast and pixel-accurate methods to visualize the overlaps and free spaces between spherical particles inside a container. The rasterization-based techniques perform well for small-scale particle packings of up to roughly one million spheres, but deteriorate for larger packings due to their linear runtime and memory requirements, which are hard to estimate correctly in advance.
To address this problem, new methods based on ray tracing are provided along with two new kinds of bounding volume hierarchies (BVHs) to accelerate the ray tracing process --- the first can reuse the existing simulation data structure and the second is more memory efficient. Both BVHs utilize the idea of a loose octree and are the first of their kind to consider the size of primitives for interactive ray tracing with frequently updated acceleration structures. Moreover, the visualization techniques provided in this dissertation can also be adapted to calculate properties such as the volumes of specific regions. All these visualization techniques are then extended to non-spherical particles, where a non-spherical particle is approximated by a rigid system of spheres so that the existing sphere-based simulation can be reused. To this end, a new GPU-based method is presented to efficiently fill a non-spherical particle with polydisperse, possibly overlapping spheres, so that a particle can be filled with fewer spheres without sacrificing the space filling rate. This eases both simulation and visualization. Based on the approaches presented in this dissertation, more sophisticated algorithms can be developed to visualize large-scale non-spherical particle mixtures more efficiently. Furthermore, the hardware ray tracing of more recent graphics cards could be exploited in the future instead of the software ray tracing employed in this dissertation. The new techniques can also serve as a basis for interactively visualizing other particle-based simulations in which special regions such as free spaces or overlaps between particles are of interest
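    As a toy illustration of the collective rearrangement idea described above (and nothing more: the dissertation's simulation runs on the GPU with dedicated acceleration structures), the following naive Python sketch resolves overlaps by pushing overlapping sphere pairs apart along the line connecting their centers. The relaxation factor and array layout are assumptions of this sketch.

```python
# Naive O(n^2) sketch of one collective-rearrangement step: overlapping
# spheres are pushed apart along the line connecting their centers.
import numpy as np

def resolve_overlaps_once(centers, radii, relaxation=0.5):
    """centers: (n, 3) array, radii: (n,) array.  Returns displaced centers."""
    centers = centers.copy()
    n = len(radii)
    for i in range(n):
        for j in range(i + 1, n):
            d = centers[j] - centers[i]
            dist = np.linalg.norm(d)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0.0 and dist > 1e-12:
                # Move each sphere half of the (relaxed) overlap apart.
                shift = relaxation * 0.5 * overlap * (d / dist)
                centers[i] -= shift
                centers[j] += shift
    return centers

# Two unit spheres placed too close together get separated slightly.
c = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
r = np.array([1.0, 1.0])
print(resolve_overlaps_once(c, r))
```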

    MeasureIt-ARCH: A Tool for Facilitating Architectural Design in the Open Source Software Blender

    This thesis discusses the design and synthesis of MeasureIt-ARCH, a GNU GPL licensed software add-on developed by the author to add functionality to the Open Source 3D modeling software Blender that facilitates the creation of architectural drawings. MeasureIt-ARCH adds to Blender simple tools to dimension and annotate 3D models, as well as basic support for the definition and drawing of line work. These tools for the creation of dimensions, annotations and line work are designed to be used in tandem with Blender's existing modelling and rendering tool set. While the drawings that MeasureIt-ARCH produces are fundamentally conventional, as are the majority of the techniques it employs to create them, MeasureIt-ARCH does provide two simple and relatively novel methods in its drawing systems. MeasureIt-ARCH provides a new method for the placement of dimension elements in 3D space that draws on the dimension's three-dimensional context and surrounding geometry in order to determine a placement that optimizes legibility. This dimension placement method does not depend on a 2D work plane, a convention that is common in industry-standard Computer Aided Design software. MeasureIt-ARCH also implements a new approach for drawing silhouette lines that operates by transforming the silhouetted model's geometry in 4D 'Clip Space'. The hope of this work is that MeasureIt-ARCH might be a small step towards creating an Open Source design pipeline for architects: a step towards creating architectural drawings that can be shared, read, and modified by anyone, within a platform that is itself free to be changed and improved. The creation of MeasureIt-ARCH is motivated by two goals. First, the work aims to create a basic functioning Open Source platform for the creation of architectural drawings within Blender that is publicly and freely available for use. Second, MeasureIt-ARCH's development served as an opportunity to engage in an interdisciplinary act of craft, providing the author an opportunity to explore the act of digital tool making and gain a basic competency in this intersection between Architecture and Computer Science. To achieve these goals, MeasureIt-ARCH's development draws on references from the history of line drawing and dimensioning within Architecture and Computer Science. On the architectural side, we make use of the history of architectural drawing and dimensioning conventions as described by Mario Carpo, Alberto PĂ©rez GĂłmez and others, as well as more contemporary frameworks for the classification of architectural software, such as Mark Bew and Mervyn Richard's BIM Levels framework, in order to help determine the scope of MeasureIt-ARCH's feature set. When crafting MeasureIt-ARCH, precedent works from the field of Computer Science that implement methods for producing line drawings from 3D models helped inform the author's approach to line drawing. In particular, this work draws on the overview of line drawing methods by Pierre BĂ©nard and Aaron Hertzmann, Arthur Appel's method for line drawing using 'Quantitative Invisibility', and the techniques employed in the Freestyle line drawing system created by Grabli et al., among others, to help inform MeasureIt-ARCH's simple drawing tools. Beyond discussing MeasureIt-ARCH's development and its motivations, this thesis also provides three small speculative discussions about the implications that an Open Source design tool might have on the architectural profession. 
We investigate MeasureIt-ARCH's use for small-scale architectural projects in a practical setting, using its tool set to produce conceptual design and renovation drawings for cottages at the Lodge at Pine Cove. We provide a demonstration of how MeasureIt-ARCH and Blender can integrate with external systems and other Blender add-ons to produce a proof-of-concept, dynamic data visualization of the Noosphere installation at the Futurium center in Berlin by the Living Architecture Systems Group. Finally, we discuss the tool's potential to facilitate greater engagement with the Open Source Architecture (OSArc) movement by illustrating a case study of the work done by Alastair Parvin and Clayton Prest on the WikiHouse project, and by highlighting the challenges that face OSArc projects as they try to produce Open Source Architecture without Open Source design software
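    The clip-space silhouette technique mentioned above can be read in more than one way; the sketch below shows one plausible, simplified interpretation (scaling the clip-space x/y coordinates slightly so the transformed copy extends just past the original projection and can be drawn as an outline) and is explicitly not MeasureIt-ARCH's actual implementation. The matrix layout, the 2% expansion factor and the function name are assumptions of this illustration.

```python
# Hypothetical sketch of transforming geometry in 4D clip space to produce an
# enlarged copy for outline drawing; not MeasureIt-ARCH's real shader code.
import numpy as np

def expand_in_clip_space(vertices, mvp, expand=1.02):
    """vertices: (n, 3) object-space positions, mvp: 4x4 model-view-projection.
    Returns clip-space positions whose x/y are scaled by `expand`."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # (n, 4)
    clip = homo @ mvp.T                                        # to clip space
    # Scaling clip-space x/y scales the projected (x/w, y/w) coordinates by
    # the same factor; z and w stay untouched so depth behaves as before.
    clip[:, :2] *= expand
    return clip

# With an identity MVP the effect is a uniform 2% enlargement in x/y.
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(expand_in_clip_space(tri, np.eye(4)))
```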

    Data Compression in the Petascale Astronomy Era: a GERLUMPH case study

    As the volume of data grows, astronomers are increasingly faced with choices on what data to keep -- and what to throw away. Recent work evaluating JPEG2000 (ISO/IEC 15444) as a future data format standard in astronomy has shown promising results on observational data. However, there is still a need to evaluate its potential on other types of astronomical data, such as data from numerical simulations. GERLUMPH (the GPU-Enabled High Resolution cosmological MicroLensing parameter survey) represents an example of a data-intensive project in theoretical astrophysics. In the next phase of processing, the ~27 terabyte GERLUMPH dataset is set to grow by a factor of 100 -- well beyond the current storage capabilities of the supercomputing facility on which it resides. In order to minimise bandwidth usage, file transfer time, and storage space, this work evaluates several data compression techniques. Specifically, we investigate off-the-shelf and custom lossless compression algorithms as well as the lossy JPEG2000 compression format. Results of lossless compression algorithms on GERLUMPH data products show small compression ratios (1.35:1 to 4.69:1), varying with the nature of the input data. Our results suggest that JPEG2000 could be suitable for other numerical datasets stored as gridded or volumetric data. When approaching lossy data compression, one should keep in mind the intended purposes of the data to be compressed and evaluate the effect of the loss on future analysis. In our case study, lossy compression and a high compression ratio do not significantly compromise the intended use of the data for constraining quasar source profiles from cosmological microlensing. (Comment: 15 pages, 9 figures, 5 tables. Published in the special issue of Astronomy & Computing on the future of astronomical data formats)
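    To make the kind of lossless comparison reported above concrete, here is a small Python sketch that compresses a synthetic gridded array with standard-library codecs and reports the resulting compression ratios. The synthetic data merely stands in for a GERLUMPH magnification map; the JPEG2000 part of the study is omitted because it needs an external codec, and nothing here reproduces the paper's actual numbers.

```python
# Rough sketch of a lossless compression comparison on gridded data:
# compress the raw bytes of a synthetic array and report the ratios.
import bz2
import lzma
import zlib

import numpy as np

rng = np.random.default_rng(0)
# Smooth-ish synthetic grid: structured data compresses better than pure noise.
grid = np.cumsum(rng.normal(size=(512, 512)), axis=1).astype(np.float32)
raw = grid.tobytes()

for name, compress in [("zlib", zlib.compress),
                       ("bz2", bz2.compress),
                       ("lzma", lzma.compress)]:
    ratio = len(raw) / len(compress(raw))
    print(f"{name}: {ratio:.2f}:1")
```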

    Nonrigid reconstruction of 3D breast surfaces with a low-cost RGBD camera for surgical planning and aesthetic evaluation

    Get PDF
    Accounting for 26% of all new cancer cases worldwide, breast cancer remains the most common form of cancer in women. Although early breast cancer has a favourable long-term prognosis, roughly a third of patients suffer from a suboptimal aesthetic outcome despite breast conserving cancer treatment. Clinical-quality 3D modelling of the breast surface therefore assumes an increasingly important role in advancing treatment planning, prediction and evaluation of breast cosmesis. Yet, existing 3D torso scanners are expensive and either infrastructure-heavy or subject to motion artefacts. In this paper we employ a single consumer-grade RGBD camera with an ICP-based registration approach to jointly align all points from a sequence of depth images non-rigidly. Subtle body deformation due to postural sway and respiration is successfully mitigated leading to a higher geometric accuracy through regularised locally affine transformations. We present results from 6 clinical cases where our method compares well with the gold standard and outperforms a previous approach. We show that our method produces better reconstructions qualitatively by visual assessment and quantitatively by consistently obtaining lower landmark error scores and yielding more accurate breast volume estimates
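    For readers unfamiliar with ICP-based registration, the sketch below shows only the basic rigid building block (nearest-neighbour correspondences followed by a best-fit rigid transform via SVD). It is deliberately simplified: the paper's method is non-rigid, jointly aligns a whole depth-image sequence and regularises locally affine transformations, none of which is reproduced here; all names are chosen for this illustration.

```python
# Simplified rigid ICP step: match points to nearest neighbours, then fit a
# rigid transform with the Kabsch/SVD method.  Not the paper's non-rigid method.
import numpy as np

def icp_rigid_step(source, target):
    """source: (n, 3), target: (m, 3) point arrays; returns transformed source."""
    # Nearest neighbour in target for every source point (brute force).
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
    matched = target[d2.argmin(axis=1)]

    # Best rigid transform aligning source to its matches.
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return source @ R.T + t

# A translated copy of a point cloud is pulled (approximately) back onto the original.
pts = np.random.default_rng(1).normal(size=(200, 3))
print(np.abs(icp_rigid_step(pts + [0.1, 0.0, 0.0], pts) - pts).max())
```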

    Using per-pixel linked lists for transparency effects in remote-rendering

    Modern graphics cards are highly versatile because they allow the programmer to load custom code and execute it on them. This capability can be used to construct a structure called a per-pixel linked list, which contains all fragments composing the scene. However, with the need to render more and more complex geometry, even the most powerful hardware quickly reaches its limits. To overcome this problem, the geometry is rendered on multiple systems instead of one, and the partial results are finally composited into a single image. This is called remote rendering and works well for opaque scenes. The goal of this thesis is to tackle the remote rendering of transparent objects using per-pixel linked lists. Since rendering such objects requires a step called blending, standard approaches are incapable of displaying them correctly. Three different methods are presented, compared, and analyzed with respect to their usability and performance. First, limiting the number of depth layers is discussed. Second, identifying regions of visual change is used to reduce the amount of data to be sent. Finally, a way of reusing previously sent fragments for the current frame is studied in detail.
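    The first of the three methods (limiting the number of depth layers) can be illustrated with a small Python sketch: only the fragments closest to the viewer are kept in a pixel's list before transmission, and the remaining layers are blended front to back. The data layout and function names are assumptions of this sketch, not the thesis's implementation.

```python
# Illustrative sketch: limit the depth layers kept in a per-pixel fragment
# list before transmission, then blend the remaining fragments front to back.

def limit_depth_layers(fragments, max_layers):
    """fragments: list of (depth, (r, g, b, a)) for one pixel.
    Keeps only the max_layers fragments closest to the viewer."""
    return sorted(fragments)[:max_layers]

def blend_front_to_back(fragments):
    color, transmittance = [0.0, 0.0, 0.0], 1.0
    for _, (r, g, b, a) in fragments:
        color = [c + transmittance * a * v for c, v in zip(color, (r, g, b))]
        transmittance *= (1.0 - a)
    return tuple(color), 1.0 - transmittance

pixel = [(0.9, (0.0, 1.0, 0.0, 0.3)),   # far, green
         (0.2, (1.0, 0.0, 0.0, 0.5)),   # near, red
         (0.5, (0.0, 0.0, 1.0, 0.4))]   # middle, blue
# Transmitting only two layers drops the farthest fragment, whose contribution
# is attenuated most by the fragments in front of it.
print(blend_front_to_back(limit_depth_layers(pixel, 2)))
```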