
    Toward Evaluating Lighting Design Interface Paradigms for Novice Users

    Lighting design is a complex and fundamental task in computer cinematography, involving the adjustment of light parameters to define the final appearance of a scene. Many lighting interfaces have been proposed to improve the lighting design workflow. These interfaces fall into three paradigm categories: direct manipulation of light parameters, indirect manipulation of light features (e.g., shadow dragging), and goal-based optimization of lights through painting. To date, no formal evaluation of the relative effectiveness of these methods has been performed. In this paper, we present a first step toward evaluating the three paradigms in the form of a user study with novice users. We focus our evaluation on simple tasks that directly affect lighting features, such as highlights, shadows, and intensity gradients, in scenes with up to two point lights and five objects under direct illumination. We perform quantitative experiments to measure the relative efficiency of the interfaces, together with qualitative input to explore the intuitiveness of the paradigms. Our results indicate that paint-based goal specification is more cumbersome than either direct or indirect manipulation. Furthermore, our investigation suggests improvements not only to the implementation of the paradigms but also to their overall structure, for further exploration.

    Computational rim illumination of dynamic subjects using aerial robots

    Lighting plays a major role in photography. Professional photographers use elaborate installations to light their subjects and achieve sophisticated styles. However, lighting moving subjects performing dynamic tasks presents significant challenges and requires expensive manual intervention. A skilled assistant may be needed to reposition lights as the subject changes pose or moves, and the extra logistics significantly raise cost and time. The latency while the assistant relights the subject, together with the communication required from the photographer to achieve optimal lighting, can mean missing a critical shot. We present a new approach to lighting dynamic subjects in which an aerial robot equipped with a portable light source lights the subject to automatically achieve a desired lighting effect. We focus on rim lighting, a particularly challenging effect to achieve with dynamic subjects, and allow the photographer to specify a required rim width. Our algorithm processes the images from the photographer's camera and provides the necessary motion commands to the aerial robot to achieve the desired rim width in the resulting photographs. With an indoor setup, we demonstrate a control approach that localizes the aerial robot with reference to the subject and tracks the subject to achieve the necessary motion. In addition to indoor experiments, we perform open-loop outdoor experiments in a realistic photo-shoot scenario to understand lighting ergonomics. Our proof-of-concept results demonstrate the utility of robots in computational lighting.
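
    The abstract does not detail the control law; the sketch below only illustrates the kind of image-driven feedback loop it describes, where the rim width measured in the camera image drives how the robot orbits the subject. All function and parameter names, and the simplistic width measure, are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def silhouette_length(mask):
    """Count boundary pixels of a binary subject mask (4-neighbourhood)."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return int((mask & ~interior).sum())

def estimate_rim_width(image, mask, threshold=0.8):
    """Average rim width in pixels: area of the bright band on the subject
    divided by the silhouette length (a simplified, hypothetical measure)."""
    gray = image.mean(axis=2)                       # luminance proxy
    rim_area = int(((gray > threshold) & mask).sum())
    return rim_area / max(silhouette_length(mask), 1)

def rim_control_step(image, mask, target_width, gain=0.05):
    """One proportional-control step: angular offset (radians) for the robot
    to orbit around the subject; positive moves the light further behind."""
    error = target_width - estimate_rim_width(image, mask)
    return gain * error
```

    In a closed loop, each new camera frame would be fed to rim_control_step and the returned offset sent to the robot until the measured width settles at the photographer's target.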

    Analyse de l'espace des chemins pour la composition des ombres et lumières

    The production of 3D animated motion pictures now relies on physically realistic rendering techniques that simulate light propagation within each scene. In this context, 3D artists must leverage lighting effects to support staging, advance the film's narrative, and convey its emotional content to viewers. However, the equations that model the behavior of light leave little room for artistic expression. In addition, editing illumination by trial and error is tedious due to the long render times that physically realistic rendering requires. To remedy these problems, most animation studios resort to compositing, where artists rework a frame by combining multiple layers exported during rendering. These layers can contain geometric information about the scene or isolate a particular lighting effect. The advantage of compositing is that interactions take place in real time and rely on conventional image-space operations. Our main contribution is the definition of a new type of layer for compositing, the shadow layer. A shadow layer contains the amount of energy lost in the scene due to the occlusion of light rays by a given object. Compared to existing tools, our approach presents several advantages for artistic editing. First, its physical meaning is straightforward: when a shadow layer is added to the original image, any shadow created by the chosen object disappears. In comparison, a traditional shadow matte stores the fraction of occluded rays at each pixel, grayscale information that can only serve as an approximation to guide compositing operations. Second, shadow layers are compatible with global illumination: they capture energy lost from secondary light sources, scattered at least once in the scene, whereas current methods only consider primary sources. Finally, we demonstrate an overestimation of illumination in three different renderers when an artist disables the shadows of an object; our definition fixes this shortcoming. We present a prototype implementation of shadow layers obtained from a few modifications of path tracing, the main rendering algorithm in production. It exports the original image and any number of shadow layers associated with different objects in a single rendering pass, with roughly 15% additional render time in scenes containing complex geometry and multiple participating media. Optional parameters are also exposed to the artist to fine-tune the rendering of shadow layers.
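
    As a toy illustration of the compositing identity described above (not the thesis' renderer), the following self-contained sketch computes direct lighting of a ground plane with a point light and a sphere as the tagged occluder, storing the energy the sphere blocks in a separate shadow layer; adding that layer back to the beauty image removes the sphere's shadow. The scene setup and names are invented for the example, and it covers direct illumination only, whereas the thesis extends the idea to global illumination.

```python
import numpy as np

# Toy direct-lighting demo of the shadow-layer idea: a ground plane lit by a
# point light, with a sphere as the tagged occluder. Everything here is
# illustrative; the thesis implements the layer inside a production path tracer.

W = H = 128
light_pos = np.array([0.0, 4.0, 0.0])
light_power = np.array([60.0, 60.0, 55.0])
sphere_c, sphere_r = np.array([0.0, 1.0, 0.0]), 0.6
albedo = np.array([0.7, 0.7, 0.7])

def blocked_by_sphere(p, q):
    """True if the segment from p to q intersects the tagged sphere."""
    d = q - p
    t = np.clip(np.dot(sphere_c - p, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(p + t * d - sphere_c) < sphere_r

beauty = np.zeros((H, W, 3))
shadow_layer = np.zeros((H, W, 3))
for i in range(H):
    for j in range(W):
        # shading point on the ground plane y = 0
        p = np.array([(j / W - 0.5) * 6.0, 0.0, (i / H - 0.5) * 6.0])
        to_light = light_pos - p
        dist2 = np.dot(to_light, to_light)
        cos_term = max(to_light[1] / np.sqrt(dist2), 0.0)  # plane normal is +y
        contrib = albedo / np.pi * light_power * cos_term / dist2
        if blocked_by_sphere(p + np.array([0.0, 1e-4, 0.0]), light_pos):
            shadow_layer[i, j] = contrib   # energy lost to the tagged occluder
        else:
            beauty[i, j] = contrib         # unoccluded direct lighting

# Compositing check: adding the shadow layer removes the sphere's shadow.
unshadowed = beauty + shadow_layer
```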

    Appearance-design interfaces and tools for computer cinematography: Evaluation and application

    We define appearance design as the creation and editing of scene content, such as lighting and surface materials, in computer graphics. The appearance design process takes a significant amount of time relative to other production tasks and poses difficult artistic challenges. Many user interfaces have been proposed to make appearance design faster, easier, and more expressive, but no formal validation of these interfaces had been published prior to our body of work. With a focus on novice users, we present a series of investigations into the strengths and weaknesses of various appearance design user interfaces. In particular, we develop an experimental methodology for the evaluation of representative user interface paradigms in the areas of lighting and material design. We conduct three user studies in which subjects perform design tasks under controlled conditions. In these studies, we gain new insight into the effectiveness of each paradigm for novices, measured by objective performance as well as subjective feedback. We also offer observations on the common workflows and capabilities of novice users in these domains. We use the results of our lighting study to develop a new representation for artistic control of lighting, in which light travels along nonlinear paths.

    Real-time Cinematic Design of Visual Aspects in Computer-generated Images

    The creation of visually pleasing images has always been one of the main goals of computer graphics. Two components are necessary to achieve this goal: artists who design the visual aspects of an image (such as materials or lighting) and sophisticated algorithms that render the image. Traditionally, rendering has been of greater interest to researchers, while the design part has been deemed secondary. This has led to many inefficiencies, as artists, in order to create a stunning image, are often forced to resort to traditional, creativity-stifling pipelines consisting of repeated rendering and parameter tweaking. Our work shifts the attention away from the rendering problem and focuses on design. We propose to combine non-physical editing with real-time feedback and provide artists with efficient ways of designing complex visual aspects such as global illumination or all-frequency shadows. We conform to existing pipelines by inserting our editing components into existing stages, thereby making the editing of visual aspects an inherent part of the design process. Many of the examples shown in this work have, until now, been extremely hard to achieve. The non-physical aspect of our work enables artists to express themselves in more creative ways, not limited by the physical parameters of current renderers. Real-time feedback allows artists to immediately see the effects of applied modifications, and compatibility with existing workflows enables easy integration of our algorithms into production pipelines.

    Image-Based Relighting

    This thesis proposes a method for changing the lighting in some types of images. The method requires only a single input image, either a studio photograph or a synthetic image, consisting of several simple objects placed on a uniformly coloured background. Based on 2D information (contours, shadows, specular areas) extracted from the input image, the method reconstructs a 3D model of the original lighting and 2.5D models of the objects in the image. It then modifies the appearance of shading and shadows to achieve relighting. It can produce visually satisfactory results without a full 3D description of the scene geometry, and requires minimal user assistance. While developing this method, the importance of different cues for understanding 3D geometry, such as contours or shadows, was considered. Constraints such as symmetry that help determine surface shapes were also explored. The method has potential application in improving the appearance of existing photographs. It can also be used in image compositing to achieve consistent lighting.
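
    The abstract gives no implementation details, so the following is only a generic illustration of the kind of shading adjustment relighting involves: given per-pixel normals (here assumed available, e.g., from reconstructed 2.5D models), diffuse shading is rescaled from the original light direction to a new one. The Lambertian assumption and every name below are mine, not the thesis' method.

```python
import numpy as np

def relight_diffuse(image, normals, old_light_dir, new_light_dir, eps=1e-3):
    """Rescale diffuse shading from an old to a new light direction.
    image:   (H, W, 3) linear RGB, assumed mostly diffuse
    normals: (H, W, 3) unit surface normals per pixel
    Returns the relit image; purely illustrative, not the thesis pipeline."""
    old_l = np.asarray(old_light_dir) / np.linalg.norm(old_light_dir)
    new_l = np.asarray(new_light_dir) / np.linalg.norm(new_light_dir)
    shade_old = np.clip(normals @ old_l, eps, None)   # n . l_old
    shade_new = np.clip(normals @ new_l, 0.0, None)   # n . l_new
    return image * (shade_new / shade_old)[..., None]
```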

    Artistic Path Space Editing of Physically Based Light Transport

    The generation of realistic images is an important goal of computer graphics, with applications in the feature film industry, architecture, and medicine, among others. Physically based rendering, which has recently found broad acceptance across applications, relies on the numerical simulation of light transport along propagation paths prescribed by geometric optics, a model sufficient to achieve photorealism for common scenes. Overall, the computer-assisted authoring of images and animations with well-designed, theoretically grounded shading is now greatly simplified. In practice, however, attention to details such as the structure of the output device is also important, and subproblems such as efficient physically based rendering in participating media are still far from being considered solved. Furthermore, rendering must be seen as part of a broader context: the effective communication of ideas and information. Whether it is the form and function of a building, the medical visualization of a CT scan, or the mood of a film sequence, messages in the form of digital images are ubiquitous today. Unfortunately, the spread of the simulation-oriented methodology of physically based rendering has generally led to a loss of the intuitive, fine-grained, and local artistic control over the final image content that was available in previous, less strict paradigms. The contributions of this dissertation cover different aspects of rendering: first, fundamental subpixel rendering as well as efficient rendering methods for participating media. At the heart of the work, however, are approaches for the effective visual understanding of light propagation that enable local artistic influence while achieving consistent and plausible results at a global level. The core idea is to perform visualization and editing of light directly in the "path space" that encompasses all possible light paths. This is in contrast to state-of-the-art techniques that either operate in image space or are tailored to specific, isolated lighting effects such as perfect mirror reflections, shadows, or caustics. Evaluation of the presented techniques has shown that they can solve real-world image-generation problems in film production.
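
    As a self-contained toy illustration of editing in path space rather than image space (not the dissertation's system), the sketch below tags each sampled light path with its vertex labels and scales the contribution of paths matching a user predicate, here brightening caustic-like paths only, while all other paths are left untouched. The path records are mock data invented for the example.

```python
from dataclasses import dataclass

@dataclass
class PathSample:
    labels: str          # e.g. "LSDE": light, specular, diffuse, eye
    contribution: float  # radiance carried to the pixel

def edit_paths(paths, predicate, scale):
    """Return the pixel value with matching paths scaled by `scale`."""
    return sum(p.contribution * (scale if predicate(p) else 1.0) for p in paths)

# Caustic-like paths in Heckbert notation: light, specular bounce(s), then a
# diffuse surface seen by the eye.
is_caustic = lambda p: "LS" in p.labels and p.labels.endswith("DE")

pixel_paths = [PathSample("LDE", 0.40),   # direct diffuse
               PathSample("LSDE", 0.15),  # caustic through a specular bounce
               PathSample("LDDE", 0.10)]  # one diffuse interreflection

print(edit_paths(pixel_paths, is_caustic, scale=2.0))  # brighten caustics only
```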

    Contrôle artistique du rendu en synthèse d'images

    Nowadays, computer-generated imagery (CGI) is standard in digital content creation, particularly in film production. In this setting, a visual artist models a virtual scene, which is then rendered by a specialized piece of software, the rendering engine, to produce the resulting image, the rendering. Both the scene description and the rendering algorithm rely on physical models in order to ensure a realistic result. However, the artist does not seek only realism but also an artistic goal specific to the work at hand. This artistic goal may conflict with the physical realism of the rendering, just as painting techniques such as chiaroscuro do not correspond to physical reality but serve a precise artistic purpose. In this thesis, we first define the problem of artistic control of rendering in computer graphics and establish three criteria for evaluating rendering-editing methods. We propose a taxonomy characterising the different paradigms for controlling rendering. In our work, we focus on behavioural approaches, which satisfy all the evaluation criteria we retained; these methods act on the behaviour of light transport. Following this approach, we propose a theoretical formalism for editing rendering through teleportation of light transport, which we integrate into the radiometric equations underlying rendering algorithms. We then propose a practical implementation, called RayPortals, in which light transport is controlled using a pair of surfaces, an input and an output, that define the teleportation of light transport in 3D space. We integrate RayPortals into several common rendering engines and analyse the results, both new ones and comparisons with previous work. Finally, we propose a preliminary analysis of the structure of path space and give an overview of future work.
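
    The abstract only describes the portal pair at a high level; the following self-contained sketch shows one plausible way such a teleportation could be expressed, mapping a ray's position and direction from the input surface's local frame to the output surface's frame during traversal. The frame construction and all names are assumptions for illustration, not the thesis' formalism.

```python
import numpy as np

def make_frame(origin, normal):
    """Orthonormal frame (columns: tangent, bitangent, normal) at origin."""
    n = normal / np.linalg.norm(normal)
    t = np.cross(n, [0.0, 1.0, 0.0])
    if np.linalg.norm(t) < 1e-6:
        t = np.cross(n, [1.0, 0.0, 0.0])
    t /= np.linalg.norm(t)
    return origin, np.stack([t, np.cross(n, t), n], axis=1)

def teleport(ray_o, ray_d, portal_in, portal_out):
    """Re-emit a ray hitting the input portal from the output portal:
    express position and direction in the input frame, then map them
    back to world space through the output frame."""
    (o_in, f_in), (o_out, f_out) = portal_in, portal_out
    local_o = f_in.T @ (ray_o - o_in)        # position in input frame
    local_d = f_in.T @ ray_d                 # direction in input frame
    return o_out + f_out @ local_o, f_out @ local_d

portal_in = make_frame(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
portal_out = make_frame(np.array([5.0, 0.0, 2.0]), np.array([1.0, 0.0, 0.0]))
print(teleport(np.array([0.1, 0.2, 0.0]), np.array([0.0, 0.0, -1.0]),
               portal_in, portal_out))
```

    Inside a path tracer, such a mapping would be applied whenever a traced ray intersects the input surface, so that subsequent transport continues from the output surface.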

    Audioptimization: goal-based acoustic design

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Architecture, 1999. Includes bibliographical references (leaves 114-120). By Michael Christopher Monks.
    Acoustic design is a difficult problem, because the human perception of sound depends on such things as decibel level, direction of propagation, and attenuation over time, none of which are tangible or visible. The advent of computer simulation and visualization techniques for acoustic design and analysis has yielded a variety of approaches for modeling acoustic performance. However, current computer-aided design and simulation tools suffer from two major drawbacks. First, obtaining the desired acoustic effects may require a long, tedious sequence of modeling and/or simulation steps. Second, current techniques for modeling the propagation of sound in an environment are prohibitively slow and do not support interactive design. This thesis presents a new approach to computer-aided acoustic design. It is based on the inverse problem of determining material and geometric settings for an environment from a description of the desired performance. The user interactively indicates a range of acceptable material and geometric modifications for an auditorium or similar space, and specifies acoustic goals in space and time by choosing target values for a set of acoustic measures. Given this set of goals and constraints, the system performs an optimization of surface material and geometric parameters using a combination of simulated annealing and steepest descent techniques. Visualization tools extract and present the simulated sound field for points sampled in space and time. The user manipulates the visualizations to create an intuitive expression of acoustic design goals. We achieve interactive rates for surface material modifications by preprocessing the geometric component of the simulation, and accelerate geometric modifications to the auditorium by trading accuracy for speed through a number of interactive controls. We describe an interactive system that allows flexible input and display of the solution, and report results for several performance spaces.
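
    The abstract names simulated annealing combined with steepest descent as the optimizer; the toy sketch below illustrates only the annealing half, perturbing surface absorption coefficients to match target acoustic measures. The cost function is a stand-in (a real system would run the room-acoustics simulation there), and all names are placeholders rather than the thesis' code.

```python
import math
import random

def cost(absorptions, targets):
    """Mock mismatch between simulated and target acoustic measures (e.g., a
    reverberation time per surface band); a real system would run the
    acoustic simulation here instead of the toy formula."""
    simulated = [2.0 / (0.1 + a) for a in absorptions]   # toy stand-in
    return sum((s - t) ** 2 for s, t in zip(simulated, targets))

def anneal(targets, steps=5000, t0=1.0, cooling=0.999):
    """Simulated annealing over per-surface absorption coefficients."""
    n = len(targets)
    params = [random.uniform(0.05, 0.9) for _ in range(n)]
    best, best_cost, temp = list(params), cost(params, targets), t0
    for _ in range(steps):
        candidate = list(params)
        i = random.randrange(n)
        candidate[i] = min(0.95, max(0.01, candidate[i] + random.gauss(0, 0.05)))
        delta = cost(candidate, targets) - cost(params, targets)
        # Accept improvements always, worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            params = candidate
        if cost(params, targets) < best_cost:
            best, best_cost = list(params), cost(params, targets)
        temp *= cooling
    return best, best_cost

print(anneal(targets=[2.5, 3.0, 4.0, 3.5]))
```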