
    Simulating 3D Radiation Transport, a modern approach to discretisation and an exploration of probabilistic methods

    Light, or electromagnetic radiation in general, is a profound and invaluable resource for investigating our physical world. For centuries it was the only source of information about the Universe beyond our planet, and it remains the main one. With high-resolution spectroscopic imaging, we can identify numerous atoms and molecules and trace their physical and chemical environments in unprecedented detail. Furthermore, radiation plays an essential role in several physical and chemical processes, ranging from radiation pressure, heating, and cooling to chemical photo-ionisation and photo-dissociation reactions. As a result, almost all astrophysical simulations require a radiative transfer model. Unfortunately, accurate radiative transfer is computationally very expensive. In this thesis, we therefore aim to improve the performance of radiative transfer solvers, with a particular emphasis on line radiative transfer. First, we review the classical work on accelerated lambda iteration and convergence acceleration, and we propose a simple but effective improvement to the ubiquitously used Ng-acceleration scheme. Next, we present the radiative transfer library Magritte: a formal solver with a ray-tracer that can handle structured and unstructured meshes as well as smoothed-particle data. To mitigate the computational cost, it is optimised to efficiently exploit multi-node and multi-core parallelism as well as GPU offloading. Furthermore, we demonstrate a heuristic algorithm that can reduce typical radiative transfer input models by an order of magnitude without significant loss of accuracy. This strongly suggests the existence of more efficient representations for radiative transfer models. To investigate this, we present a probabilistic numerical method for radiative transfer that naturally allows for uncertainty quantification, providing a mathematical framework to study the trade-off between computational speed and accuracy. Although we cannot yet construct optimal representations for radiative transfer problems, we point out several ways in which this method can lead to more rigorous optimisation.
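    As background to the Ng-acceleration discussion above, the sketch below shows the classical (unweighted) Ng extrapolation step in NumPy: the new source-function estimate is a linear combination of the last three iterates, with coefficients obtained from a small least-squares fit over the last four. This is textbook material, not the improved scheme proposed in the thesis, and the function name and interface are purely illustrative.

        import numpy as np

        def ng_accelerate(s0, s1, s2, s3):
            """Classical Ng acceleration step.

            s0 is the most recent source-function iterate; s1, s2, s3 are the
            three preceding ones (1-D arrays of equal length). Returns an
            extrapolated estimate of the converged solution.
            """
            q1 = s0 - 2.0 * s1 + s2      # second difference of the iterates
            q2 = s0 - s1 - s2 + s3
            q3 = s0 - s1

            a1, b1, c1 = q1 @ q1, q1 @ q2, q1 @ q3
            a2, b2, c2 = q2 @ q1, q2 @ q2, q2 @ q3

            det = a1 * b2 - a2 * b1
            if abs(det) < 1e-300:        # iterates (nearly) linearly dependent
                return s0.copy()

            a = (c1 * b2 - c2 * b1) / det
            b = (c2 * a1 - c1 * a2) / det
            return (1.0 - a - b) * s0 + a * s1 + b * s2

    In a lambda-iteration loop this step is typically applied every few iterations, after which the stored history is reset.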

    Gap Processing for Adaptive Maximal Poisson-Disk Sampling

    In this paper, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and to update them when disks are inserted, deleted, moved, or have their radius changed. We build on the concepts of the regular triangulation and the power diagram. Third, we show how our analysis contributes to the state of the art in surface remeshing.
    Comment: 16 pages. ACM Transactions on Graphics, 201
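    For intuition, a hedged sketch of the simplest approach to the same sampling problem is given below: brute-force dart throwing with a spatially varying radius function, using the common conflict rule |p - q| < max(r_p, r_q). Unlike the paper's gap analysis with regular triangulations and power diagrams, dart throwing neither guarantees maximality nor scales well; the function and parameter names are illustrative only.

        import math
        import random

        def dart_throw_variable_radii(radius_of, n_trials=100_000):
            """Naive dart throwing for Poisson-disk sampling with varying radii
            in the unit square. radius_of(x, y) gives the desired disk radius
            at a point; returns a list of accepted (x, y, r) samples."""
            samples = []
            for _ in range(n_trials):
                x, y = random.random(), random.random()
                r = radius_of(x, y)
                if all(math.hypot(x - sx, y - sy) >= max(r, sr)
                       for sx, sy, sr in samples):   # brute-force conflict check
                    samples.append((x, y, r))
            return samples

        # Example: disks shrink towards the right edge of the domain.
        pts = dart_throw_variable_radii(lambda x, y: 0.02 + 0.03 * (1.0 - x))

    Detecting and filling the gaps that such rejection sampling leaves behind is exactly what the paper's geometric analysis addresses.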

    Light Transport Simulation on Specialized Hardware (Lichttransportsimulation auf Spezialhardware)

    It cannot be denied that developments in computer hardware and in computer algorithms strongly influence each other, with new instructions added to help with video processing, encryption, and many other tasks. At the same time, the current cap on single-threaded performance and the wide availability of multi-threaded processors have increased the focus on parallel algorithms. Both influences are extremely prominent in computer graphics, where the gaming and movie industries always strive for the best possible performance on current, as well as future, hardware. In this thesis we examine hardware-algorithm synergies in the context of ray tracing and Monte-Carlo algorithms. First, we focus on the most basic element of all such algorithms, the casting of rays through a scene, and propose a dedicated hardware unit to accelerate this common operation. Then, we examine existing and novel implementations of many Monte-Carlo rendering algorithms on massively parallel hardware, as full hardware utilization is essential for peak performance. Lastly, we present an algorithm for tackling complex interreflections of glossy materials, which is designed to utilize both powerful processing units present in almost all current computers: the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU). Together, these three pieces show that it is important to consider hardware-algorithm mapping at all levels of abstraction: instruction, processor, and machine.
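    As an illustration of the kind of low-level primitive such a ray-casting unit accelerates, here is a generic slab test for ray/axis-aligned-bounding-box intersection in Python; it is standard textbook code, not the hardware design described in the thesis, and the names are illustrative.

        def ray_aabb_intersect(orig, direction, box_min, box_max, eps=1e-12):
            """Slab test: return the entry distance at which a ray hits an
            axis-aligned box, or None if it misses."""
            t_near, t_far = 0.0, float("inf")
            for axis in range(3):
                d = direction[axis]
                if abs(d) < eps:
                    # Ray parallel to this slab: miss if the origin lies outside it.
                    if not (box_min[axis] <= orig[axis] <= box_max[axis]):
                        return None
                    continue
                t0 = (box_min[axis] - orig[axis]) / d
                t1 = (box_max[axis] - orig[axis]) / d
                if t0 > t1:
                    t0, t1 = t1, t0
                t_near = max(t_near, t0)
                t_far = min(t_far, t1)
                if t_near > t_far:
                    return None
            return t_near

    Traversing a bounding volume hierarchy amounts to running this test, plus ray-triangle tests at the leaves, millions of times per frame, which is why it is an attractive target for dedicated hardware.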

    Methods for fast construction of bounding volume hierarchies

    Department of Computer Graphics and Interaction

    Many-Light Real-Time Global Illumination using Sparse Voxel Octree

    Global illumination (GI) rendering simulates the propagation of light through a 3D volume and its interaction with surfaces, dramatically increasing the fidelity of computer-generated images. While off-line GI algorithms such as ray tracing and radiosity can generate physically accurate images, their rendering speeds are too slow for real-time applications. The many-light method is one of several emerging real-time global illumination algorithms. However, it requires many shadow maps to be generated for Virtual Point Light (VPL) visibility tests, which reduces its efficiency. Prior solutions restrict either the number or the accuracy of shadow map updates, which may lower the accuracy of indirect illumination or prevent the rendering of fully dynamic scenes. In this thesis, we propose a hybrid real-time GI algorithm that replaces the shadow-map generation step of the many-light algorithm with an efficient Sparse Voxel Octree (SVO) ray marching algorithm for visibility tests. Our technique achieves high rendering fidelity at about 50 FPS, is highly scalable, and can support thousands of VPLs generated on the fly. A survey of current real-time GI techniques as well as details of our implementation using OpenGL and Shader Model 5 are also presented.
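    To make the visibility-test substitution concrete, the sketch below marches a ray through a dense boolean occupancy grid to decide whether a Virtual Point Light can see a shading point. It is a deliberately simplified stand-in, assuming a dense NumPy grid and a fixed step size, whereas the thesis traverses a sparse voxel octree on the GPU; all names are illustrative.

        import numpy as np

        def vpl_visible(grid, p, q, step=0.5):
            """Approximate visibility between points p and q (in voxel
            coordinates) by fixed-step ray marching through a boolean
            occupancy grid (True = solid voxel)."""
            p, q = np.asarray(p, float), np.asarray(q, float)
            d = q - p
            dist = float(np.linalg.norm(d))
            if dist == 0.0:
                return True
            d /= dist
            t = step
            while t < dist - step:                 # skip the endpoint voxels
                voxel = np.floor(p + t * d).astype(int)
                inside = (voxel >= 0).all() and (voxel < np.array(grid.shape)).all()
                if inside and grid[tuple(voxel)]:
                    return False                   # an occupied voxel blocks the VPL
                t += step
            return True

    In a many-light renderer, each shading point would then accumulate only the contributions of VPLs for which this test succeeds, instead of looking up per-VPL shadow maps.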

    Visibility computation through image generalization

    This dissertation introduces the image generalization paradigm for computing visibility. The paradigm is based on the observation that an image is a powerful tool for computing visibility. An image can be rendered efficiently with the support of graphics hardware, and each of the millions of pixels in the image reports a visible geometric primitive. However, the visibility solution computed by a conventional image is far from complete. A conventional image has a uniform sampling rate, which can miss visible geometric primitives with a small screen footprint. A conventional image can only find geometric primitives to which there is a direct line of sight from the center of projection (i.e. the eye) of the image; therefore, a conventional image cannot compute the set of geometric primitives that become visible as the viewpoint translates, or as time changes in a dynamic dataset. Finally, like any sample-based representation, a conventional image can only confirm that a geometric primitive is visible, but it cannot confirm that a geometric primitive is hidden, as that would require an infinite number of samples to confirm that the primitive is hidden at all of its points.
    The image generalization paradigm overcomes the visibility computation limitations of conventional images. The paradigm has three elements. (1) Sampling pattern generalization entails adding sampling locations to the image plane where needed to find visible geometric primitives with a small footprint. (2) Visibility sample generalization entails replacing the conventional scalar visibility sample with a higher-dimensional sample that records all geometric primitives visible at a sampling location as the viewpoint translates or as time changes in a dynamic dataset; the higher-dimensional visibility sample is computed exactly, by solving visibility event equations, and not through sampling. Another form of visibility sample generalization is to enhance a sample with its trajectory as the geometric primitive it samples moves in a dynamic dataset. (3) Ray geometry generalization redefines a camera ray as the set of 3D points that project to a given image location; this generalization supports rays that are not straight lines, and enables the design of cameras with non-linear rays that circumvent occluders to gather samples not visible from a reference viewpoint.
    The image generalization paradigm has been used to develop visibility algorithms for a variety of datasets, visibility parameter domains, and performance-accuracy trade-off requirements. These include an aggressive from-point visibility algorithm that guarantees finding all geometric primitives with a visible fragment, no matter how small the primitive's image footprint; an efficient and robust exact from-point visibility algorithm that iterates between a sample-based and a continuous visibility analysis of the image plane to quickly converge to the exact solution; a from-rectangle visibility algorithm that uses 2D visibility samples to compute a visible set that is exact under viewpoint translation; a flexible pinhole camera that enables local modulations of the sampling rate over the image plane according to an input importance map; an animated depth image that stores not only color and depth per pixel but also a compact representation of pixel sample trajectories; and a curved ray camera that seamlessly integrates multiple viewpoints into a multiperspective image without the viewpoint-transition distortion artifacts of prior methods.
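    One of the ingredients above, modulating the sampling rate over the image plane with an importance map, can be illustrated with a short, hedged sketch: pixel positions are drawn with probability proportional to a 2D weight map. This only illustrates importance-driven sample placement, not the dissertation's flexible pinhole camera construction; names and the jittering choice are assumptions.

        import numpy as np

        def importance_sample_pixels(importance, n_samples, rng=None):
            """Draw image-plane sampling locations with density proportional
            to a 2-D map of non-negative weights. Returns an (n_samples, 2)
            array of (row, col) positions jittered within the chosen cells."""
            rng = rng or np.random.default_rng()
            h, w = importance.shape
            p = importance.ravel().astype(float)
            p /= p.sum()
            cells = rng.choice(h * w, size=n_samples, p=p)   # pick cells by weight
            rows, cols = np.divmod(cells, w)
            return np.column_stack([rows, cols]) + rng.random((n_samples, 2))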

    An unstable truth: How massive stars get their mass

    The pressure exerted by massive stars' radiation fields is an important mechanism regulating their formation. Detailed simulation of massive star formation therefore requires an accurate treatment of radiation. However, all published simulations have either used a diffusion approximation of limited validity, have only been able to simulate a single star fixed in space (thereby suppressing potentially important instabilities), or did not provide adequate resolution at locations where instabilities may develop. To remedy this, we have developed a new, highly accurate radiation algorithm that properly treats the absorption of the direct radiation field from stars and the re-emission and processing by interstellar dust. We use our new tool to perform 3D radiation-hydrodynamic simulations of the collapse of massive pre-stellar cores with laminar and turbulent initial conditions, properly resolving the regions where we expect instabilities to grow. We find that mass is channelled to the stellar system via gravitational and Rayleigh-Taylor (RT) instabilities, in agreement with previous results using stars capable of moving, but in disagreement with methods where the star is held fixed or with simulations that do not adequately resolve the development of RT instabilities. For laminar initial conditions, proper treatment of the direct radiation field produces a later onset of instability, but does not suppress it entirely provided the edges of radiation-dominated bubbles are adequately resolved. Instabilities arise immediately for turbulent pre-stellar cores because the initial turbulence seeds them. Our results suggest that RT features should be present around accreting massive stars throughout their formation.
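    For background on why radiation pressure can regulate accretion at all (standard textbook material, not a result of this paper), the relevant scale is the Eddington luminosity, at which the radiative force on material of opacity $\kappa$ balances gravity:

        L_{\mathrm{Edd}} = \frac{4\pi G M c}{\kappa}
        \;\approx\; 3\times 10^{4}\,\Big(\frac{M}{M_\odot}\Big)\, L_\odot
        \quad\text{for electron-scattering opacity.}

    Because dust opacities in star-forming gas can exceed the electron-scattering value by orders of magnitude, the effective limit for dusty accretion flows is far lower, which is why instabilities that let gas slip past the radiation field are central to the argument above.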

    Simulating the Formation of Massive Protostars: I. Radiative Feedback and Accretion Disks

    We present radiation hydrodynamic simulations of collapsing protostellar cores with initial masses of 30, 100, and 200 M⊙. We follow their gravitational collapse and the formation of a massive protostar and protostellar accretion disk. We employ a new hybrid radiative feedback method that blends ray-tracing techniques with flux-limited diffusion for a more accurate treatment of the temperature and the radiative force. In each case, the disk that forms becomes Toomre-unstable and develops spiral arms. This occurs between 0.35 and 0.55 free-fall times and is accompanied by an increase in the accretion rate by a factor of 2-10. Although the disk becomes unstable, no other stars are formed. In our 100 and 200 M⊙ simulations, the star becomes highly super-Eddington and begins to drive bipolar outflow cavities that expand outwards. These radiatively driven bubbles appear stable and appear to channel gas back onto the protostellar accretion disk. Accretion proceeds strongly through the disk. After 81.4 kyr of evolution, our 30 M⊙ simulation shows a star with a mass of 5.48 M⊙ and a disk of mass 3.3 M⊙, while our 100 M⊙ simulation forms a 28.8 M⊙ star with a 15.8 M⊙ disk over the course of 41.6 kyr, and our 200 M⊙ simulation forms a 43.7 M⊙ star with an 18 M⊙ disk in 21.9 kyr. In the absence of magnetic fields or other forms of feedback, the masses of the stars in our simulations do not appear to be limited by their own luminosities.
    Comment: 24 pages, 14 figures. Accepted to The Astrophysical Journal
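    The Toomre instability referenced above is the standard criterion for the gravitational stability of a thin rotating disk (textbook background, not specific to this paper): spiral arms and clumps grow where

        Q \;=\; \frac{c_s\,\kappa}{\pi G \Sigma} \;\lesssim\; 1,

    with $c_s$ the sound speed, $\kappa$ the epicyclic frequency, and $\Sigma$ the disk surface density; the spiral arms and accretion bursts reported above are the characteristic signature of a disk hovering near this threshold.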