12 research outputs found

    Removing non visible objects in scenes of clusters of particles

The aim of this work is to partially solve the problem of visualizing the clusters of particles that result from numerical methods in engineering. Two methods were developed for removing particles that are not visible from a given camera viewpoint. They differ in the type of information available: particle models with contour information (a surface mesh) and particle models without contour information (mesh-free methods). In both cases the results are good: a large number of particles that do not influence the final image are removed, making it possible to interact with the results of the numerical methods.
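To make the mesh-free case concrete, below is a minimal sketch of one way such culling can work when no contour information is available: particles are binned into a screen-space grid and only a thin front shell per cell is kept. The Particle type, grid parameters and epsilon are illustrative assumptions, not the paper's actual method.

```cpp
// Minimal sketch of viewpoint-based particle culling for mesh-less data,
// assuming projection along -z onto a screen-space grid. Particles lying
// well behind the frontmost particle of their cell are discarded; epsilon
// keeps a thin visible shell. All names here are illustrative assumptions.
#include <algorithm>
#include <cfloat>
#include <vector>

struct Particle { float x, y, z; };

std::vector<Particle> cullHiddenParticles(const std::vector<Particle>& in,
                                          int gridW, int gridH,
                                          float minX, float minY,
                                          float cellSize, float epsilon) {
    auto cellOf = [&](const Particle& p) {
        int cx = static_cast<int>((p.x - minX) / cellSize);
        int cy = static_cast<int>((p.y - minY) / cellSize);
        cx = std::max(0, std::min(gridW - 1, cx));
        cy = std::max(0, std::min(gridH - 1, cy));
        return cy * gridW + cx;
    };
    // Pass 1: record the frontmost (largest z) depth per screen cell.
    std::vector<float> frontDepth(gridW * gridH, -FLT_MAX);
    for (const Particle& p : in)
        frontDepth[cellOf(p)] = std::max(frontDepth[cellOf(p)], p.z);
    // Pass 2: keep only particles within epsilon of their cell's front shell.
    std::vector<Particle> visible;
    for (const Particle& p : in)
        if (p.z >= frontDepth[cellOf(p)] - epsilon)
            visible.push_back(p);
    return visible;
}
```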

    Evaluation of optimisation techniques for multiscopic rendering

A thesis submitted to the University of Bedfordshire in fulfilment of the requirements for the degree of Master of Science by Research. This project evaluates different performance optimisation techniques applied to stereoscopic and multiscopic rendering for interactive applications. The artefact features a robust plug-in package for the Unity game engine. The thesis provides background information for the performance optimisations, outlines all the findings, evaluates the optimisations and provides suggestions for future work. The Scrum development methodology is used to develop the artefact, and a quantitative research methodology is used to evaluate the findings by measuring performance. The project concludes that each performance optimisation has specific use cases in which it benefits performance. Foveated rendering provides the greatest performance increase for both stereoscopic and multiscopic rendering, but is also more computationally demanding to deploy because it requires an eye-tracking solution. Dynamic resolution is very beneficial when overall frame-rate smoothness is needed and frame drops are present. Depth optimisation is beneficial for vast open environments but can decrease performance if used inappropriately.
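As a concrete illustration of one of the evaluated techniques, the sketch below shows a simple dynamic-resolution controller that lowers the render-target scale when the frame time exceeds its budget and raises it when there is headroom. The class, thresholds and step size are assumptions for illustration; the thesis's Unity plug-in is not reproduced here.

```cpp
// Hedged sketch of a dynamic-resolution controller of the kind the thesis
// evaluates. Names and constants are illustrative, not from the thesis.
#include <algorithm>

class ResolutionController {
public:
    explicit ResolutionController(float targetFrameMs) : targetMs_(targetFrameMs) {}

    // Call once per frame with the last frame's time in milliseconds;
    // returns a scale in [0.5, 1.0] to apply to the render-target dimensions.
    float update(float lastFrameMs) {
        const float step = 0.05f;
        if (lastFrameMs > targetMs_ * 1.05f)       scale_ -= step;  // over budget
        else if (lastFrameMs < targetMs_ * 0.85f)  scale_ += step;  // headroom
        scale_ = std::clamp(scale_, 0.5f, 1.0f);
        return scale_;
    }

private:
    float targetMs_;
    float scale_ = 1.0f;
};
```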

Conservative From-Point Visibility

Visibility determination has been an important part of computer graphics research for several decades. The first studies of visibility produced hidden-line removal algorithms, and later hidden-surface removal algorithms. Today's visibility determination concentrates mainly on conservative, object-level techniques. Conservative methods are used to accelerate rendering when an exact visibility determination algorithm is also present; the Z-buffer is a typical exact algorithm and is implemented in practically every modern graphics chip. This thesis concentrates on a subset of conservative visibility determination techniques, sometimes called from-point visibility algorithms, which estimate the set of objects visible from the current viewpoint. They are typically used in real-time graphics applications such as games and virtual environments. The focus is on view frustum culling, which discards objects outside the viewable volume, and occlusion culling, which identifies objects that are not visible because they are behind other objects. The spatial data structures behind efficient implementations of both are also reviewed, including the maintenance of dynamic scenes and the exploitation of spatial and temporal coherence.
Contents: 1. Introduction; 2. Visibility Problem; 3. Scene Organization (bounding volume hierarchies and scene graphs, spatial data structures, regular grids, quadtrees and octrees, kd-trees, BSP-trees, exploiting spatial and temporal coherence, dynamic scenes); 4. View Frustum Culling (view frustum construction, view frustum test, hierarchical view frustum culling, optimizations); 5. Occlusion Culling (fundamental concepts, occluder selection, hardware occlusion queries, object-space methods, image-space methods); 6. Conclusion; References.
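To illustrate the view frustum test the outline refers to, here is a minimal sketch of the standard positive-vertex test of an axis-aligned bounding box against the six frustum planes. The Plane and AABB types are illustrative; a real engine would extract the planes from the view-projection matrix.

```cpp
// Minimal sketch of a plane-based view frustum test: an axis-aligned box is
// culled if it lies entirely in the negative half-space of any frustum plane
// (the "positive vertex" test). Types are illustrative assumptions.
#include <array>

struct Plane { float nx, ny, nz, d; };          // nx*x + ny*y + nz*z + d = 0
struct AABB  { float minX, minY, minZ, maxX, maxY, maxZ; };

bool isOutsideFrustum(const AABB& box, const std::array<Plane, 6>& frustum) {
    for (const Plane& pl : frustum) {
        // Positive vertex: the box corner farthest along the plane normal.
        float px = (pl.nx >= 0.0f) ? box.maxX : box.minX;
        float py = (pl.ny >= 0.0f) ? box.maxY : box.minY;
        float pz = (pl.nz >= 0.0f) ? box.maxZ : box.minZ;
        // If even the farthest corner is behind this plane, the box is outside.
        if (pl.nx * px + pl.ny * py + pl.nz * pz + pl.d < 0.0f)
            return true;  // conservatively culled
    }
    return false;  // intersecting or inside: must be rendered
}
```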

    Fast and Accurate Visibility Preprocessing

Visibility culling is a means of accelerating the graphical rendering of geometric models. Invisible objects are efficiently culled to prevent their submission to the standard graphics pipeline. It is advantageous to preprocess scenes in order to determine invisible objects from all possible camera views. This information is typically saved to disk and may then be reused until the model geometry changes; such preprocessing algorithms are therefore used for scenes that are primarily static. Currently, the standard approach to visibility preprocessing is to use a form of approximate solution known as conservative culling. Such algorithms over-estimate the set of visible polygons, a compromise that has been considered necessary in order to perform visibility preprocessing quickly, since these algorithms attempt to satisfy the goals of both rapid preprocessing and rapid run-time rendering. We observe, however, that there is a need for algorithms with superior preprocessing performance, as well as for algorithms that are more accurate, and that for most applications these features are not required simultaneously. In this thesis we present two novel visibility preprocessing algorithms, each strongly biased toward one of these requirements. The first algorithm has the advantage of performance: it executes quickly by exploiting graphics hardware, is output-sensitive (to what is visible), and has a logarithmic dependency on the size of the camera-space partition. These advantages come at the cost of image error, and we present a heuristic-guided adaptive sampling methodology that minimises this error. We further show how this algorithm may be parallelised, and present a natural extension to five dimensions for accelerating generalised ray shooting. The second algorithm has the advantage of accuracy: no over-estimation is performed, nor are any sacrifices made in image quality. The cost is primarily that of time; despite the relatively long computation, the algorithm is still tractable and on average scales slightly superlinearly with the input size. It is also output-sensitive, and is the first known tractable exact solution to the general 3D from-region visibility problem. In order to solve the exact from-region visibility problem, we first had to solve a more general form of the standard stabbing problem; an efficient solution to this problem is presented independently.
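The abstract does not spell out the hardware-accelerated first algorithm; the sketch below only illustrates the general sampling idea behind such preprocessing, with an assumed renderAndReadVisibleIds callback standing in for a hardware item-buffer pass.

```cpp
// Hedged sketch of sampling-based from-region visibility preprocessing: the
// PVS of a view cell is approximated as the union of the object ids visible
// from sampled camera positions inside the cell. `renderAndReadVisibleIds`
// is an assumed callback, not an API from the thesis itself.
#include <functional>
#include <set>
#include <vector>

struct Vec3 { float x, y, z; };

std::set<int> computeCellPVS(
    const std::vector<Vec3>& samplePositions,
    const std::function<std::vector<int>(const Vec3&)>& renderAndReadVisibleIds) {
    std::set<int> pvs;  // potentially visible set for the whole view cell
    for (const Vec3& eye : samplePositions) {
        // Each sample renders object ids into an item buffer and reads back
        // which ids appear. The union over samples under-estimates exact
        // visibility, which is why the thesis adds error-guided adaptive
        // sampling to keep the resulting image error small.
        for (int id : renderAndReadVisibleIds(eye))
            pvs.insert(id);
    }
    return pvs;
}
```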

Pre- and post-processing algorithms for point-based numerical methods, particle methods and mesh-free methods

In computational mechanics, scientific visualization provides researchers and engineers with tools for studying numerical data, and scientific visualization techniques form the basis of each of these tools. This thesis deals with the changes that conventional scientific visualization techniques require in order to visualize the results of particle-based and mesh-free methods, taking into account the large amount of data these methods produce and the presence or absence of contour information. In addition, a visualization technique is developed for representing micro-cracks and discontinuities, which are the starting points of chains of structural failures. A mesh generation method is selected for the facilities it provides and adapted to generate point clouds that represent volumes and surfaces. For each proposed technique we study the advantages of the data structures used and show its contributions to computer graphics and to data analysis.

    Hierarchical impostors for the flocking algorithm in three dimensional space

The availability of powerful and affordable 3D PC graphics boards has made it possible to render rich immersive environments at interactive speeds. The scene update rate and the appropriate behaviour of objects within the world are central to this immersive feeling. This thesis is concerned with the behaviour computations involved in the flocking algorithm, which has been used extensively to emulate the flocking behaviour of creatures found in nature. The main contribution of this thesis is a new method for hierarchically combining portions of the flock into groups to reduce the cost of the behavioural computation, allowing far larger flocks to be updated in real time.
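The sketch below illustrates the general idea of hierarchical grouping in a flocking update: distant groups contribute only their centroid to a boid's cohesion steering, reducing the neighbour scan from per-boid to per-group. The types and the near/far rule are assumptions, not the thesis's exact scheme.

```cpp
// Illustrative sketch of hierarchical flocking: far groups of boids are
// collapsed to a single centroid "impostor" when computing cohesion, so the
// scan costs O(groups) for distant flockmates instead of O(boids).
#include <cmath>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct Boid { Vec3 pos, vel; };

struct Group {
    std::vector<Boid> members;
    Vec3 centroid;            // recomputed once per frame from the members
};

Vec3 cohesionSteer(const Boid& self, const std::vector<Group>& groups,
                   float nearRadius) {
    Vec3 sum; int count = 0;
    for (const Group& g : groups) {
        float dx = g.centroid.x - self.pos.x;
        float dy = g.centroid.y - self.pos.y;
        float dz = g.centroid.z - self.pos.z;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (dist > nearRadius) {
            // Far group: one centroid sample stands in for all its members.
            sum.x += g.centroid.x; sum.y += g.centroid.y; sum.z += g.centroid.z;
            ++count;
        } else {
            // Near group: fall back to exact per-boid cohesion.
            for (const Boid& b : g.members) {
                sum.x += b.pos.x; sum.y += b.pos.y; sum.z += b.pos.z;
                ++count;
            }
        }
    }
    if (count == 0) return {};
    return { sum.x / count - self.pos.x,
             sum.y / count - self.pos.y,
             sum.z / count - self.pos.z };
}
```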

    Street Surfaces and Boundaries from Depth Image Sequences Using Probabilistic Models

This thesis presents an approach for the detection and reconstruction of street surfaces and boundaries from depth image sequences. Active driver assistance systems, which monitor and interpret the environment through vehicle-mounted sensors in order to support the driver, are a current research focus of the automotive industry. An essential task of these systems is modeling the vehicle's static environment. This comprises determining the vertical slope and curvature characteristics of the street surface as well as robustly detecting obstacles and, thus, the drivable free-space. In this regard, obstacles of low height, e.g. curbs, are of special interest since they often form the first geometric delimiter of the free-space. Depth images acquired from stereo camera systems are becoming more important in this context due to the high data rate and affordable price of the sensor. However, recent approaches to object detection are often limited to objects that are distinctive in height, such as cars and guardrails, or explicitly address particular object classes, and they are usually based on extremely restrictive assumptions, such as planar street surfaces, in order to cope with the high measurement noise. The main contribution of this thesis is the development, analysis and evaluation of an approach which detects the free-space in the immediate maneuvering area in front of the vehicle and explicitly models the free-space boundary by means of a spline curve. The approach considers in particular obstacles of low height (higher than 10 cm) without being limited to particular object classes. Furthermore, it copes with various slope and curvature characteristics of the observed street surface and reconstructs this surface by means of a flexible spline model. In order to obtain robust results despite the flexibility of the model and the high measurement noise, the approach employs probabilistic models both for preprocessing the depth map data and for detecting the drivable free-space. An elevation model is computed from the depth map, considering the paths of the optical rays and the uncertainty of the depth measurements. Based on this elevation model, an iterative two-step procedure determines the drivable free-space by means of a Markov Random Field and estimates the spline parameters of the free-space boundary curve and the street surface. Outliers in the elevation data are explicitly modeled. The performance of the overall approach and the influence of its key components are systematically evaluated in experiments on synthetic and real-world test scenarios. The results demonstrate the ability of the approach to accurately model the boundary of the drivable free-space as well as the street surface, even in complex scenarios with multiple obstacles or strong curvature of the street surface. The experiments also reveal the limitations of the approach, which are discussed in detail.
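As a rough illustration of the elevation-model step, the sketch below fuses height measurements into a ground-plane grid with inverse-variance weighting, so noisier (typically farther) measurements count less. The Measurement fields and fusion rule are assumptions; the ray-path handling, MRF and spline stages of the thesis are not shown.

```cpp
// Loose sketch of an elevation model with uncertainty-weighted fusion: each
// depth measurement is projected into a ground-plane cell and combined with
// the running estimate by inverse-variance weighting. Names are illustrative.
#include <vector>

struct Measurement { float groundX, groundZ, height, variance; };

struct ElevationGrid {
    int w, h; float cell;
    std::vector<float> height;   // fused height estimate per cell
    std::vector<float> weight;   // accumulated inverse-variance weight

    ElevationGrid(int w_, int h_, float cell_)
        : w(w_), h(h_), cell(cell_), height(w_ * h_, 0.f), weight(w_ * h_, 0.f) {}

    void fuse(const Measurement& m) {
        int cx = static_cast<int>(m.groundX / cell);
        int cz = static_cast<int>(m.groundZ / cell);
        if (cx < 0 || cx >= w || cz < 0 || cz >= h) return;
        int i = cz * w + cx;
        float wNew = 1.0f / m.variance;          // inverse-variance weight
        height[i] = (height[i] * weight[i] + m.height * wNew)
                    / (weight[i] + wNew);
        weight[i] += wNew;
    }
};
```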

    Spatial CPU-GPU data structures for interactive rendering of large particle data

In this work, I investigate the interactive visualization of arbitrarily large particle data sets which fit into system memory but not into GPU memory. With conventional rendering techniques, interactivity is drastically reduced when rendering tens or hundreds of millions of objects, while graphics memory capacity limits the size of data sets that can be placed on the GPU for rendering. To circumvent these obstacles, a progressive rendering approach is employed which gradually streams all particle data to the GPU and renders it, without reducing or otherwise altering the particle data. The particles are rendered according to a visibility sorting derived from occlusion relations between different parts of the data set, so the rendering order of scene contents is guided by their importance for the rendered image. I analyze and compare possible implementation choices for rendering particles as opaque spheres in OpenGL, which forms the basis of the particle rendering application developed within this work. The application uses a multi-threaded architecture in which data preprocessing on a CPU thread and the rendering algorithm on a GPU thread ensure that the user can interact with the application at any time. In particular, the user can explore the particle data interactively, because the latency from a user input to seeing its effect is kept minimal by favoring user inputs over completeness of the rendered image at all stages of rendering. At the same time, the user receives immediate feedback about interactions, because all currently visible particles are re-projected into the next rendered image. The re-projection is realized with an on-GPU particle cache of visible particles that is built during particle streaming and rendering, and is drawn upon user interaction using the most recent camera configuration. The combination of these techniques allows interactive exploration of particle data sets with up to 1.5 billion particles on a commodity computer.
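The control flow described above can be sketched as follows; all types and the three render hooks are stand-ins for the actual OpenGL code, and the chunking and cache policy are simplified assumptions.

```cpp
// Control-flow sketch of progressive particle streaming: chunks of the
// visibility-sorted data are streamed and rendered until the data is
// exhausted or the user interacts; on interaction, the cache of already-seen
// particles is redrawn immediately under the new camera (latency over
// completeness), then streaming restarts front-to-back.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Particle { float x, y, z, radius; };
struct Camera   { /* view and projection state */ };

struct RenderHooks {
    void (*drawChunk)(const Particle*, std::size_t, const Camera&);
    void (*drawCache)(const std::vector<Particle>&, const Camera&);
    bool (*pollUserInput)(Camera&);   // true if the camera changed
};

void progressiveRender(const std::vector<Particle>& sorted, Camera cam,
                       std::size_t chunkSize, const RenderHooks& hooks) {
    std::vector<Particle> cache;                  // on-GPU cache stand-in
    std::size_t next = 0;
    while (next < sorted.size()) {
        if (hooks.pollUserInput(cam)) {
            hooks.drawCache(cache, cam);          // immediate re-projection
            next = 0;                             // restart streaming
            cache.clear();                        // rebuilt while streaming
            continue;
        }
        std::size_t n = std::min(chunkSize, sorted.size() - next);
        hooks.drawChunk(&sorted[next], n, cam);   // progressive refinement
        cache.insert(cache.end(), sorted.begin() + next,
                     sorted.begin() + next + n);
        next += n;
    }
}
```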

    Dynamic scene occlusion culling in architectural scenes based on dynamic bounding volume

Visibility algorithms have recently regained attention because of the ever-increasing size of polygon datasets and the growing number of dynamic objects in scenes. Handling dynamic objects makes large polygon datasets impossible to display in real time with conventional approaches, so occlusion culling techniques are required for output-sensitive rendering. Most scenes are displayed with static objects, and only a few visualizations use dynamic objects. The aim of the research carried out in this thesis is to handle dynamic objects efficiently at a faster display frame rate. The algorithm is implemented using portal occlusion culling and a kd-tree, which is suitable for indoor and architectural scenes. An occlusion culling technique was developed for handling dynamic objects in static scenes using dynamic bounding volumes. Dynamic objects are wrapped in bounding volumes, which are then inserted into the spatial hierarchical data structure as volumes, to avoid updating the structure for every dynamic object at each frame. A dynamic bounding volume is created for each occluded dynamic object from the physical constraints of that object and is assigned a validity period; these bounding volumes and validity periods are then inserted into the kd-tree. A dynamic object is ignored until its bounding volume becomes visible or its validity period expires. After numerous tests and analyses, dynamic bounding volume culling shows better performance than portal culling, especially when there are many low-speed dynamic objects in the scene. Dynamic bounding volume culling proves efficient in avoiding large numbers of position calculations for dynamic objects and thus improves the rendering speed.
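A minimal sketch of the dynamic-bounding-volume idea follows: the volume is grown by the object's maximum speed times the validity period, so the object can be safely ignored while the volume stays occluded and valid. Field names and the expansion rule are plausible assumptions, not the thesis's exact formulation.

```cpp
// Sketch of a dynamic bounding volume with a validity period: an occluded
// moving object is wrapped in a volume large enough to contain it for the
// whole period, and is skipped until the volume becomes visible or expires.
// The expansion rule (maxSpeed * validity) is an illustrative assumption.
struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

struct DynamicBoundingVolume {
    AABB   bounds;        // object bounds grown by maxSpeed * validity
    double expiresAt;     // end of the validity period (seconds)
};

DynamicBoundingVolume makeDBV(const AABB& objectBounds, float maxSpeed,
                              double now, double validitySeconds) {
    float grow = maxSpeed * static_cast<float>(validitySeconds);
    return {
        { objectBounds.minX - grow, objectBounds.minY - grow,
          objectBounds.minZ - grow, objectBounds.maxX + grow,
          objectBounds.maxY + grow, objectBounds.maxZ + grow },
        now + validitySeconds
    };
}

// The object can be ignored this frame only while its DBV is still valid
// and the (conservatively grown) volume itself remains occluded.
bool canSkipObject(const DynamicBoundingVolume& dbv, double now,
                   bool dbvIsOccluded) {
    return now < dbv.expiresAt && dbvIsOccluded;
}
```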

    Dynamic Scene Occlusion Culling using a Regular Grid

We present an output-sensitive occlusion culling algorithm for densely occluded dynamic scenes in which both the viewpoint and the objects move arbitrarily. Our method works on a regular grid that represents a volumetric discretization of space and uses the opaque regions of the scene as virtual occluders. We introduce new techniques for efficient voxel traversal, object discretization and occlusion computation that strengthen the benefits of using regular grids in dynamic scenes. The method also exploits temporal coherence and realizes occluder fusion in object space. For each frame, the algorithm computes a conservative set of visible objects that greatly accelerates the visualization of complex dynamic scenes. We discuss the results of 2D and 3D implementations.
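To illustrate the grid-as-occluder idea, the toy 2D sketch below marches the line of sight through the regular grid and reports occlusion when an opaque cell lies in between. It uses simple point sampling, which can misclassify near cell boundaries; the paper's method instead relies on exact voxel traversal, occluder fusion and temporal coherence.

```cpp
// Toy 2D sketch of using opaque grid cells as virtual occluders: a target
// cell counts as occluded if an opaque cell lies on the sampled line of
// sight from the viewpoint. Point sampling is an approximation here; the
// actual algorithm traverses voxels exactly.
#include <cmath>
#include <cstdlib>
#include <vector>

struct Grid2D {
    int w, h;
    std::vector<bool> opaque;   // true where scene geometry fills the cell
    bool isOpaque(int x, int y) const { return opaque[y * w + x]; }
};

bool isCellOccluded(const Grid2D& g, int eyeX, int eyeY, int tgtX, int tgtY) {
    int steps = std::abs(tgtX - eyeX) + std::abs(tgtY - eyeY);
    for (int i = 1; i < steps; ++i) {
        float t = static_cast<float>(i) / steps;
        int x = static_cast<int>(std::lround(eyeX + t * (tgtX - eyeX)));
        int y = static_cast<int>(std::lround(eyeY + t * (tgtY - eyeY)));
        if (g.isOpaque(x, y)) return true;   // virtual occluder blocks the view
    }
    return false;   // not provably occluded: render the cell's objects
}
```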