
    Robust object-based algorithms for direct shadow simulation

    Direct shadow algorithms generate shadows by simulating the direct lighting interaction in a virtual environment. The main challenge of accurate direct shadow computation is its computational cost. In this dissertation, we develop a new robust object-based shadow framework that provides realistic shadows at interactive frame rates on dynamic scenes. Our contributions include new robust object-based soft shadow algorithms and efficient interactive implementations. We start by formalizing the direct shadow problem. Within the general context of light transport, we first define what robust direct shadows are. We then study existing interactive direct shadow techniques and show that real-time direct shadow simulation remains an open problem: even the so-called physically plausible soft shadow algorithms still rely on approximations. Nevertheless, we show that, despite their geometric constraints, object-based approaches seem well suited when targeting accurate solutions. Starting from this analysis, we investigate the existing object-based shadow framework and discuss its robustness issues. We propose a new technique that drastically improves the resulting shadow quality by extending this framework with a penumbra blending stage, and we present a practical implementation of this approach. From the obtained results, we show that, despite desirable properties, inherent theoretical and implementation limitations reduce the overall quality and performance of the proposed algorithm. We then present a new object-based soft shadow algorithm. It merges the efficiency of real-time object-based shadows with the accuracy of their offline generalization. The proposed algorithm relies on a new local evaluation of the number of occluders between two points (i.e., the depth complexity). We describe how we use this algorithm to sample the depth complexity between any visible receiver and the light source. From this information, we compute shadows by either modulating the direct lighting or numerically solving the direct illumination, with an accuracy depending on the light sampling strategy. We then propose an extension of our algorithm in order to handle shadows cast by semi-opaque occluders. We finally present an efficient implementation of this framework that demonstrates that object-based shadows can be efficiently used on complex dynamic environments. In real-time rendering, it is common to represent highly detailed objects with few triangles and transmittance textures that encode their binary opacity. Object-based techniques do not handle such perforated triangles: by their nature, they can only evaluate the shadows cast by models whose shape is explicitly defined by geometric primitives. We describe a new robust object-based algorithm that addresses this main limitation. We show that this method can be efficiently combined with object-based frameworks in order to evaluate approximate shadows or simulate the direct illumination for both common meshes and perforated triangles.
The proposed implementation shows that such a combination provides a robust and efficient direct lighting framework, well suited to many domains, ranging from quality-sensitive to performance-critical applications.
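As a rough illustration of the depth-complexity idea summarized above, the sketch below counts the occluders crossed between a receiver point and each sample of an area light, and modulates the direct lighting by the fraction of unoccluded samples. The names and the brute-force intersection loop are illustrative assumptions, not the dissertation's actual GPU implementation.

```python
import numpy as np

# Hedged sketch: depth complexity = number of occluders crossed between a
# receiver point and a light sample; the soft shadow factor is the fraction
# of light samples whose depth complexity is zero. Illustrative only.

def ray_triangle_t(orig, direc, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore intersection; returns the parametric t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direc, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    tvec = orig - v0
    u = np.dot(tvec, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(tvec, e1)
    v = np.dot(direc, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    return np.dot(e2, q) * inv

def depth_complexity(receiver, light_point, triangles):
    """Count occluding triangles strictly between the two points."""
    d = light_point - receiver
    count = 0
    for v0, v1, v2 in triangles:
        t = ray_triangle_t(receiver, d, v0, v1, v2)
        if t is not None and 0.0 < t < 1.0:
            count += 1
    return count

def soft_shadow_factor(receiver, light_samples, triangles):
    """Fraction of visible light samples, used to modulate direct lighting."""
    visible = sum(1 for s in light_samples
                  if depth_complexity(receiver, s, triangles) == 0)
    return visible / len(light_samples)
```

A numerical integration of the direct illumination would weight each visible light sample by its emitted radiance and geometric term instead of the simple visible-sample fraction used here.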

    Ray Tracing Gems

    This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware. What you'll learn: the latest ray tracing techniques for developing real-time applications in multiple domains; guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR); and how to implement high-performance graphics for interactive visualizations, games, simulations, and more. Who this book is for: developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing; students looking to learn about best practices in these areas; and enthusiasts who want to understand and experiment with their new GPU.

    Efficient shadow map filtering

    Shadows provide the human visual system with important cues to sense spatial relationships in the environment we live in. As such, they are an indispensable part of realistic computer-generated imagery. Unfortunately, visibility determination is computationally expensive. Image-based simplifications to the problem such as Shadow Maps perform well with increased scene complexity but produce artifacts both in the spatial and temporal domain because they lack efficient filtering support. This dissertation presents novel real-time shadow algorithms to enable efficient filtering of Shadow Maps in order to increase the image quality and overall coherence characteristics. This is achieved by expressing the shadow test as a sum of products in which the two parameters of the shadow test are separated from each other. Ordinary Shadow Maps are then subject to a transformation into new so-called basis images which, as opposed to Shadow Maps, can be linearly filtered. The convolved basis images are equivalent to a pre-filtered shadow test and are used to reconstruct anti-aliased as well as physically plausible all-frequency shadows.
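The sum-of-products formulation mentioned above can be sketched as follows; the expansion and notation are a hedged illustration in the spirit of convolution-style shadow map filtering, not necessarily the exact basis used in the thesis.

```latex
% Hedged illustration of a separable shadow-test expansion; the notation is
% not taken verbatim from the thesis.
\[
  s(d, z) \;=\;
  \begin{cases}
    1 & \text{if } d \le z \\
    0 & \text{otherwise}
  \end{cases}
  \;\approx\; \sum_{i=1}^{N} a_i(d)\, B_i(z),
\]
% where d is the receiver depth and z the shadow-map depth. Because d and z
% are separated, filtering with a kernel w acts only on the basis images:
\[
  (w * s)(\mathbf{p}) \;\approx\; \sum_{i=1}^{N} a_i(d)\,\bigl(w * B_i\bigr)(\mathbf{p}),
\]
% so convolving the basis images B_i(z(p)) is equivalent to a pre-filtered
% shadow test, which yields anti-aliased and smoothly varying shadow edges.
```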

    Three--dimensional medical imaging: Algorithms and computer systems

    This paper presents an introduction to the field of three-dimensional medical imaging. It presents medical imaging terms and concepts, summarizes the basic operations performed in three-dimensional medical imaging, and describes sample algorithms for accomplishing these operations. The paper contains a synopsis of the architectures and algorithms used in eight machines to render three-dimensional medical images, with particular emphasis paid to their distinctive contributions. It compares the performance of the machines along several dimensions, including image resolution, elapsed time to form an image, imaging algorithms used in the machine, and the degree of parallelism used in the architecture. The paper concludes with general trends for future developments in this field and references on three-dimensional medical imaging.

    Efficient shadow algorithms on graphics hardware

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2005. Includes bibliographical references (p. 85-92). Shadows are important to computer graphics because they add realism and help the viewer identify spatial relationships. Shadows are also useful story-telling devices. For instance, artists carefully choose the shape, softness, and placement of shadows to establish mood or character. Many shadow generation techniques developed over the years have been used successfully in offline movie production. It is still challenging, however, to compute high-quality shadows in real-time for dynamic scenes. This thesis presents two efficient shadow algorithms. Although these algorithms are designed to run in real-time on graphics hardware, they are also well-suited to offline rendering systems. First, we describe a hybrid algorithm for rendering hard shadows accurately and efficiently. Our method combines the strengths of two existing techniques, shadow maps and shadow volumes. We first use a shadow map to identify the pixels in the image that lie near shadow discontinuities. Then, we perform the shadow-volume computation only at these pixels to ensure accurate shadow edges. This approach simultaneously avoids the edge aliasing artifacts of standard shadow maps and avoids the high fillrate consumption of standard shadow volumes. The algorithm relies on a hardware mechanism that we call a computation mask for rapidly rejecting non-silhouette pixels during rasterization. Second, we present a method for the real-time rendering of soft shadows. Our approach builds on the shadow map algorithm by attaching geometric primitives that we call smoothies to the objects' silhouettes. The smoothies give rise to fake shadows that appear qualitatively like soft shadows, without the cost of densely sampling an area light source. In particular, the softness of the shadow edges depends on the ratio of distances between the light source, the blockers, and the receivers. The soft shadow edges hide objectionable aliasing artifacts that are noticeable with ordinary shadow maps. Our algorithm computes shadows efficiently in image space and maps well to programmable graphics hardware. By Eric Chan, S.M.
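A minimal image-space sketch of the hybrid idea described above, assuming a shadow map result plus some exact per-pixel test standing in for the shadow-volume pass; the function names and the 3x3 discontinuity detection are illustrative, not the thesis's actual hardware computation mask.

```python
import numpy as np

# Hedged sketch: use the cheap shadow-map result everywhere, find pixels near
# shadow discontinuities, and run the expensive exact test only there.

def shadow_map_lookup(shadow_map, light_depth):
    """Binary per-pixel shadow test: 1 where the receiver is lit."""
    return (light_depth <= shadow_map).astype(np.uint8)

def discontinuity_mask(lit):
    """Mark pixels whose 3x3 neighbourhood mixes lit and shadowed results;
    these are the silhouette pixels that need the accurate test."""
    mask = np.zeros(lit.shape, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            shifted = np.roll(np.roll(lit, dy, axis=0), dx, axis=1)
            mask |= shifted != lit
    return mask

def hybrid_shadows(lit, mask, exact_test):
    """Keep the shadow-map result except at discontinuities, where
    exact_test(y, x) -> 0/1 stands in for the shadow-volume computation."""
    result = lit.copy()
    for y, x in zip(*np.nonzero(mask)):
        result[y, x] = exact_test(y, x)
    return result
```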

    Point based graphics rendering with unified scalability solutions.

    Standard real-time 3D graphics rendering algorithms use brute force polygon rendering, with complexity linear in the number of polygons and little regard for limiting processing to data that contributes to the image. Modern hardware can now render smaller scenes to pixel levels of detail, relaxing surface connectivity requirements. Sub-linear scalability optimizations are typically self-contained, requiring specific data structures, without shared functions and data. A new point based rendering algorithm 'Canopy' is investigated that combines multiple typically sub-linear scalability solutions, using a small core of data structures. Specifically, locale management, hierarchical view volume culling, backface culling, occlusion culling, level of detail and depth ordering are addressed. To demonstrate versatility further, shadows and collision detection are examined. Polygon models are voxelized with interpolated attributes to provide points. A scene tree is constructed, based on a BSP tree of points, with compressed attributes. The scene tree is embedded in a compressed, partitioned, procedurally based scene graph architecture that mimics conventional systems with groups, instancing, inlines and basic read on demand rendering from backing store. Hierarchical scene tree refinement constructs an image tree image space equivalent, with object space scene node points projected, forming image node equivalents. An image graph of image nodes is maintained, describing image and object space occlusion relationships, hierarchically refined with front to back ordering to a specified threshold whilst occlusion culling with occluder fusion. Visible nodes at medium levels of detail are refined further to rasterization scales. Occlusion culling defines a set of visible nodes that can support caching for temporal coherence. Occlusion culling is approximate, possibly not suiting critical applications. Qualities and performance are tested against standard rendering. Although the algorithm has an O(f) upper bound in the scene size f, it is shown in practice to scale sub-linearly. Scenes that would conventionally contain several hundred billion polygons are rendered at interactive frame rates with minimal graphics hardware support.
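The following sketch illustrates the general shape of such a hierarchical refinement: traverse a point hierarchy front to back, cull nodes outside a crude view volume, and stop refining once a node projects to roughly a pixel. The data structures and the simplistic camera model are assumptions for illustration, not Canopy's actual scene tree or image graph.

```python
import math
from dataclasses import dataclass, field
from typing import List

# Hedged sketch of hierarchical view-volume culling with a level-of-detail
# cut-off over a point hierarchy. Illustrative names and camera model only.

@dataclass
class SceneNode:
    center: tuple                       # (x, y, z) representative point
    radius: float                       # bounding-sphere radius
    children: List["SceneNode"] = field(default_factory=list)

def refine(node, eye, forward, fov_cos, focal_px, out, pixel_threshold=1.0):
    """Append the nodes to render to 'out'. 'forward' must be normalized."""
    v = tuple(c - e for c, e in zip(node.center, eye))
    dist = math.sqrt(sum(x * x for x in v)) or 1e-6
    # crude view-volume test: node centre inside a cone around 'forward',
    # unless its bounding sphere already contains the eye
    if sum(a * b for a, b in zip(v, forward)) / dist < fov_cos and dist > node.radius:
        return                                           # culled
    size_px = focal_px * node.radius / dist              # projected size estimate
    if size_px <= pixel_threshold or not node.children:
        out.append(node)                                 # render this level of detail
        return
    for child in sorted(node.children,                   # front-to-back ordering
                        key=lambda c: sum((a - b) ** 2 for a, b in zip(c.center, eye))):
        refine(child, eye, forward, fov_cos, focal_px, out, pixel_threshold)
```

A full implementation would additionally maintain the image graph for occlusion culling with occluder fusion, which this sketch omits.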

    Advanced methods for relightable scene representations in image space

    The realistic reproduction of the visual appearance of real-world objects requires accurate computer graphics models that describe the optical interaction of a scene with its surroundings. Data-driven approaches that model the scene globally as a reflectance field function in eight parameters deliver high quality and work for most material combinations, but are costly to acquire and store. Image-space relighting, which constrains the application to creating photos with a virtual, fixed camera under freely chosen illumination, requires only a 4D data structure to provide full fidelity. This thesis contributes to image-space relighting on four accounts: (1) We investigate the acquisition of 4D reflectance fields in the context of sampling and propose a practical setup for pre-filtering of reflectance data during recording, and apply it in an adaptive sampling scheme. (2) We introduce a feature-driven image synthesis algorithm for the interpolation of coarsely sampled reflectance data in software to achieve highly realistic images. (3) We propose an implicit reflectance data representation, which uses a Bayesian approach to relight complex scenes from the example of much simpler reference objects. (4) Finally, we construct novel, passive devices out of optical components that render reflectance field data in real-time, shaping the incident illumination into the desired image.
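At its core, image-space relighting with a 4D reflectance field is linear in the illumination: a new photo under arbitrary lighting is a weighted sum of basis images, one per basis light. The sketch below shows this principle; the variable names are illustrative, not the thesis's notation.

```python
import numpy as np

# Hedged sketch of linear image-space relighting: basis_images holds one
# photograph of the scene per basis light, and a novel illumination is
# expressed as weights over those basis lights.

def relight(basis_images, light_weights):
    """basis_images: (n_lights, height, width, 3) array, one image per basis
    light; light_weights: length-n_lights intensities of the novel lighting.
    Returns the relit image as sum_i w_i * R_i(x)."""
    basis = np.asarray(basis_images, dtype=np.float64)
    w = np.asarray(light_weights, dtype=np.float64)
    return np.tensordot(w, basis, axes=(0, 0))

# toy usage: two basis lights, a 2x2 image, novel lighting = 0.3*L0 + 0.7*L1
R = np.random.rand(2, 2, 2, 3)
image = relight(R, [0.3, 0.7])
```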

    Perceptually-motivated, interactive rendering and editing of global illumination

    This thesis proposes several new perceptually-motivated techniques to synthesize, edit and enhance the depiction of three-dimensional virtual scenes. Finding algorithms that fit the perceptually economic middle ground between artistic depiction and full physical simulation is the challenge taken up in this work. First, we present three interactive global illumination rendering approaches that are inspired by perception to efficiently depict important light transport. These methods have in common that they compute global illumination in large and fully dynamic scenes, allowing for light, geometry, and material changes at interactive or real-time rates. Further, this thesis proposes a tool to edit reflections that allows physical laws to be bent to match artistic goals by exploiting perception. Finally, this work contributes a post-processing operator that depicts high-contrast scenes in the same way artists do, by simulating them as "seen" through a dynamic virtual human eye in real-time.

    Photovoltaic potential in building façades

    Doctoral thesis, Sistemas Sustentáveis de Energia, Universidade de Lisboa, Faculdade de Ciências, 2018. Consistent reductions in the costs of photovoltaic (PV) systems have prompted interest in applications with less-than-optimum inclinations and orientations. That is the case of building façades, with plenty of free area for the deployment of solar systems. Lower sun heights benefit vertical façades, whereas rooftops are favoured when the sun is near the zenith; therefore the PV potential in urban environments can increase twofold when the contribution from building façades is added to that of the rooftops. This complementarity between façades and rooftops helps to better match electricity demand and supply. This thesis focuses on: i) the modelling of façade PV potential; ii) the optimization of façade PV yields; and iii) underlining the overall role that building façades will play in future solar cities. Digital surface and solar radiation modelling methodologies were reviewed. Special focus is given to the 3D LiDAR-based model SOL and the CAD/plugin models DIVA and LadyBug. Model SOL was validated against measurements from the BIPV system in the façade of the Solar XXI building (Lisbon), and used to evaluate façade PV potential in different urban sites in Lisbon and Geneva. The plugins DIVA and LadyBug helped assess the potential for PV glare from façade-integrated photovoltaics in distinct urban blocks. Technologies for PV integration in façades were also reviewed. Alternative façade designs, including louvers, geometric forms and balconies, were explored and optimized for the maximization of annual solar irradiation using DIVA. Partial shading impacts on rooftops and façades were addressed through SOL simulations, and the interconnections between PV modules were optimized using a custom Multi-Objective Genetic Algorithm. The contribution of PV façades to the solar potential of two dissimilar neighbourhoods in Lisbon was quantified using SOL, considering local electricity consumption. Cost-efficient rooftop/façade PV mixes are proposed based on combined payback times. Impacts of larger-scale PV deployment on the spare capacity of power distribution transformers were studied through LadyBug and SolarAnalyst simulations. A new empirical solar factor was proposed to account for PV potential in future upgrade interventions. The combined effect of aggregating building demand, photovoltaic generation and storage on the self-consumption of PV and net load variance was analysed using irradiation results from DIVA, metered distribution transformer loads and custom optimization algorithms. SOL is shown to be an accurate LiDAR-based model (nMBE ranging from around 7% to 51%, nMAE from 20% to 58% and nRMSE from 29% to 81%), its isotropic diffuse radiation algorithm being its current main limitation. In addition, building surface material properties should be considered when handling façades, both for irradiance simulation and for PV glare evaluation. The latter appears to be negligible in comparison to glare from the typical glazed or mirrored skins used in high-rises. Irradiation levels on the more sunlit façades reach about 50-60% of the rooftop levels. Latitude biases the potential towards the vertical surfaces, which can be enhanced when the proportion of diffuse radiation is high. Façade PV potential can be increased by about 30% if horizontal folded louvers become a more common design, and by another 6 to 24% if the interconnection of PV modules is optimized.
In 2030, a mix of PV systems featuring around 40% façade and 60% rooftop occupation is shown to correspond to a combined financial payback time of 10 years, if conventional module efficiencies reach 20%. This will trigger large-scale PV deployment that might overwhelm current grid assets and lead to electricity grid instability. This challenge can be resolved if the placement of PV modules is optimized to increase self-sufficiency while keeping net load variance low. Aggregated storage within solar communities might help resolve the conflicting interests between prosumers and the grid, although the former can achieve self-sufficiency levels above 50% with storage capacities as small as 0.25 kWh/kWpv. Business models ought to adapt in order to create conditions for both parties to share the added value of peak power reduction due to optimized solar façades.
Fundação para a Ciência e a Tecnologia (FCT), SFRH/BD/52363/201
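The validation figures quoted above use normalized error metrics; the sketch below shows one common way to compute them, normalizing by the mean of the measured values. The thesis may use a different normalization convention, so this is an assumption for illustration.

```python
import numpy as np

# Hedged sketch of the normalized error metrics (nMBE, nMAE, nRMSE) used to
# compare simulated against measured façade irradiance.

def nmbe(measured, simulated):
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return np.mean(s - m) / np.mean(m)          # normalized mean bias error

def nmae(measured, simulated):
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return np.mean(np.abs(s - m)) / np.mean(m)  # normalized mean absolute error

def nrmse(measured, simulated):
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return np.sqrt(np.mean((s - m) ** 2)) / np.mean(m)  # normalized RMSE

# toy usage with hourly façade irradiance values (Wh/m^2)
measured  = [120, 340, 560, 410, 90]
simulated = [100, 360, 600, 380, 110]
print(nmbe(measured, simulated), nmae(measured, simulated), nrmse(measured, simulated))
```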