
    Artistic Path Space Editing of Physically Based Light Transport

    The generation of realistic images is an important goal of computer graphics, with applications in the feature film industry, architecture, and medicine, among others. Physically based rendering, which has recently gained broad acceptance across these applications, relies on the numerical simulation of light transport along propagation paths prescribed by geometric optics, a model that suffices to achieve photorealism for common scenes. Overall, the computer-assisted creation of images and animations with well-designed and theoretically well-founded shading has thus become much simpler. In practice, however, attention to details such as the structure of the output device remains important, and the subproblem of efficient physically based rendering in participating media, for example, is still far from being considered solved. Furthermore, rendering should be seen as part of a wider context: the effective communication of ideas and information. Whether it concerns the form and function of a building, the medical visualization of a computed tomography scan, or the mood of a film sequence, messages in the form of digital images are omnipresent today. Unfortunately, the spread of the simulation-oriented methodology of physically based rendering has generally led to a loss of the intuitive, fine-grained, and local artistic control over the final image content that earlier, less strict paradigms provided. The contributions of this dissertation cover different aspects of image synthesis: first, fundamental subpixel rendering as well as efficient rendering methods for participating media; at the core of the work, however, are approaches to an effective visual understanding of light propagation that enable local artistic intervention while producing globally consistent and plausible results. The key idea is to perform the visualization and editing of light directly in "path space", the space of all possible light paths. This stands in contrast to state-of-the-art methods, which either operate in image space or are tailored to specific, isolated lighting effects such as perfect specular reflections, shadows, or caustics. Evaluation of the presented methods has shown that they can solve real, existing image generation problems in film production.
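
    For context, "path space" here refers to the space of all light transport paths in the standard path integral formulation. A minimal sketch in the common Veach-style notation (which is an assumption; the thesis may use different symbols) is:

        I_j = \int_{\Omega} f_j(\bar{x}) \, d\mu(\bar{x}),
        \qquad \bar{x} = (\mathbf{x}_0, \mathbf{x}_1, \ldots, \mathbf{x}_k),

    where I_j is the value of pixel j, \Omega is the space of paths of all lengths, f_j is the measurement contribution function (emission, BSDFs, geometry terms, and the pixel filter evaluated along the path), and \mu is the corresponding area-product measure. Visualizing and editing "in path space" then means operating on f_j or on the sampled paths \bar{x} themselves, rather than on finished pixel values.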

    Huber Loss Reconstruction in Gradient-Domain Path Tracing

    The focus of this thesis is to improve aspects related to the computational synthesis of photo-realistic images. Physically accurate images are generated by simulating the transport of light between an observer and the light sources in a virtual environment. Path tracing is an algorithm that uses Monte Carlo methods to solve light transport simulation problems, generating images by sampling light paths through the virtual scene. In this thesis we focus on the recently introduced gradient-domain path tracing algorithm. In addition to estimating the ordinary primal image, gradient-domain light transport algorithms also sample the horizontal and vertical gradients and solve a screened Poisson problem to reconstruct the final image. Using the L2 loss for reconstruction produces an unbiased final image, but the results can often be visually unpleasing due to its sensitivity to extreme-value outliers in the sampled primal and gradient images. The L1 loss can be used to suppress this sensitivity at the cost of introducing bias. We investigate the use of the Huber loss function in the reconstruction step of the gradient-domain path tracing algorithm. We show that using the Huber loss function for the gradients in the Poisson solver, with a good choice of cut-off parameter, can reduce sensitivity to outliers and consequently yield lower relative mean squared error than L1 or L2 when compared to ground-truth images. The main contribution of this thesis is a predictive multiplicative model for the cut-off parameter. The model takes as input pixel statistics, which can be computed online during sampling, and predicts reconstruction parameters that on average outperform reconstruction using L1 and L2.
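
    For reference, the Huber loss behaves quadratically (like L2) for residuals below a cut-off parameter and linearly (like L1) beyond it, which is what gives it the robustness-versus-bias trade-off described above. A minimal NumPy sketch of the loss and the corresponding iteratively-reweighted-least-squares weight (function and variable names are ours, not the thesis's code):

        import numpy as np

        def huber_loss(residual, cutoff):
            # Quadratic (L2-like) inside the cut-off, linear (L1-like) outside.
            r = np.abs(residual)
            return np.where(r <= cutoff, 0.5 * r**2, cutoff * (r - 0.5 * cutoff))

        def huber_weight(residual, cutoff):
            # IRLS weight implied by the Huber loss: residuals beyond the
            # cut-off are down-weighted by cutoff/|r| instead of trusted fully.
            r = np.abs(residual)
            return np.where(r <= cutoff, 1.0, cutoff / np.maximum(r, 1e-12))

    In a screened Poisson reconstruction, such weights would typically be applied to the gradient residuals inside an iteratively reweighted least-squares loop; the thesis's contribution is the model that predicts a suitable cut-off from per-pixel sampling statistics.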

    Analyse de l'espace des chemins pour la composition des ombres et lumières

    The production of 3D animated motion pictures now relies on physically realistic rendering techniques, which simulate light propagation within each scene. In this context, 3D artists must use lighting effects to support the staging, advance the film's narrative, and convey its emotional content to viewers. However, the equations that model the behavior of light leave little room for artistic expression. In addition, editing illumination by trial and error is tedious because of the long render times that physically realistic rendering requires. To remedy these problems, most animation studios resort to compositing, where artists rework a frame by combining multiple layers exported during rendering. These layers can contain geometric information about the scene, or isolate a particular lighting effect. The advantage of compositing is that interactions take place in real time and rely on conventional image-space operations. Our main contribution is the definition of a new type of layer for compositing, the shadow layer. A shadow layer contains the amount of energy lost in the scene due to the occlusion of light rays by a chosen object. Compared to existing tools, our approach offers several advantages for artistic editing. First, its physical meaning is straightforward: when a shadow layer is added to the original image, any shadow created by the chosen object disappears. In comparison, a traditional shadow matte represents the fraction of occluded rays at each pixel, grayscale information that can only serve as an approximation to guide compositing operations. Second, shadow layers are compatible with global illumination: they record energy lost from secondary light sources, i.e., light scattered at least once in the scene, whereas current methods only consider primary sources. Finally, we demonstrate that illumination is overestimated in three different renderers when an artist disables an object's shadows; our definition corrects this shortcoming. We present a prototype implementation of shadow layers obtained from a few modifications of path tracing, the rendering algorithm of choice in production. It exports the original image and any number of shadow layers associated with different objects in a single rendering pass, at a rendering-time overhead of roughly 15% in scenes with complex geometry and multiple participating media. Optional parameters are also offered to the artist to fine-tune the rendering of shadow layers.
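
    To illustrate the compositing behavior described above, here is a minimal NumPy sketch of how an artist might use such a layer, assuming the renderer exported a beauty image and one shadow layer per tagged object (the array names are illustrative, not the thesis's actual output names):

        import numpy as np

        # Stand-ins for renderer outputs: `beauty` is the original image
        # (shadows included); `shadow_layer` is the energy each pixel lost
        # because the chosen object blocked light (direct or indirect).
        beauty = np.random.rand(4, 4, 3).astype(np.float32)
        shadow_layer = 0.1 * np.random.rand(4, 4, 3).astype(np.float32)

        # Adding the layer back removes the chosen object's shadows entirely.
        no_shadow = beauty + shadow_layer

        # Scaling the layer before adding gives continuous artistic control,
        # e.g. keeping 30% of the shadow's original strength.
        softened = beauty + 0.7 * shadow_layer

    Because the layer stores lost energy rather than an occlusion fraction, these operations remain consistent under global illumination.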

    Photorealistic physically based render engines: a comparative study

    Pérez Roig, F. (2012). Photorealistic physically based render engines: a comparative study. http://hdl.handle.net/10251/14797

    Perception based heterogeneous subsurface scattering for film

    Many real-world materials exhibit complex subsurface scattering of light. This internal light interaction creates the perception of translucency for the human visual system. Translucent materials and the simulation of the subsurface scattering of light have become an expected necessity for generating warmth and realism in computer-generated imagery. The light transport within heterogeneous materials, such as marble, has proved challenging to model and render. The material models currently available to digital artists have been limited to homogeneous subsurface scattering, despite a few publications documenting success at simulating heterogeneous light transport. While these publications successfully simulate this complex phenomenon, their material descriptions have been highly specialized and far from intuitive. By combining the measurable properties of heterogeneous translucent materials with the defining properties of translucency as perceived by the human visual system, a description of heterogeneous translucent materials that is suitable for artist use in a film production pipeline can be achieved. Development of the material description focuses on integration with the film pipeline, ease of use, and a reasonable, perception-based approximation of heterogeneous translucency. Methods of material manipulation are explored to determine which properties should be modifiable by artists while maintaining the perception of heterogeneous translucency.

    Manipulação interativa de cenas fotorealistas usando path tracing

    Rendering pleasing photorealistic images requires both a high-quality renderer and well-crafted scenes. While rendering algorithms and systems have made impressive progress over the last two decades, creating nice scenes still remains highly dependent on the artistic skills of the modeler. As a result, researchers tend to rely on a small number of existing good-looking scenes to test their algorithms. While creating new scenes from scratch is difficult for non-artists, editing existing scenes to achieve new and desired results is a task within the reach of the average graphics user. We present a system that allows users with no special artistic skills to create new scenes by modifying existing ones through a simple user interface. Enabled by modern hardware and software advancements, we render the scenes photorealistically using path tracing and provide instant feedback on the user's modifications. The quality of the images generated by our system is comparable to established offline renderers, such as PBRT, while still maintaining interactive performance. Our system should stimulate the creation of new scene datasets and allow anyone to customize existing scenes with shapes and materials according to their specific needs or desires. The easy customization of scenes and the high-quality renderings produced by our system may also encourage other professionals, such as designers, scenographers, architects, and decorators, to make more intense use of computer-generated imagery in their daily tasks.
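
    The instant feedback mentioned above is what progressive rendering typically provides: the renderer keeps accumulating one-sample-per-pixel passes while the scene is unchanged and restarts accumulation after every edit. A minimal sketch of such a control loop, under the assumption that the system works roughly this way (names and structure are ours, not the paper's code):

        import numpy as np

        def progressive_render(scene, width, height, render_pass, scene_was_edited):
            # `render_pass(scene)` is a stand-in returning one path-traced sample
            # per pixel as an (height, width, 3) array; `scene_was_edited(scene)`
            # reports whether the user changed geometry or materials.
            accum = np.zeros((height, width, 3), dtype=np.float32)
            samples = 0
            while True:
                if scene_was_edited(scene):
                    accum.fill(0.0)        # discard samples from the stale scene
                    samples = 0
                accum += render_pass(scene)
                samples += 1
                yield accum / samples      # current Monte Carlo estimate for display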

    3D Shape Modeling Using High Level Descriptors
