    The delta radiance field

    The wide availability of mobile devices capable of computing high-fidelity graphics in real time has sparked renewed interest in the research and development of Augmented Reality applications. Within the large spectrum of mixed real and virtual elements, one specific area is dedicated to producing realistic augmentations, with the aim of presenting virtual copies of real existing objects or soon-to-be-produced products. Surprisingly, the current state of this area leaves much to be desired: augmented objects are often presented without any reconstructed lighting whatsoever and therefore give the impression of being glued onto a camera image rather than augmenting reality. In light of the advances made in the movie industry, which has handled mixed realities from one extreme to the other, it is legitimate to ask why those advances have not fully carried over to Augmented Reality simulations. Generally understood to be real-time applications that reconstruct the spatial relation between real-world elements and virtual objects, Augmented Reality has to deal with several uncertainties, of which unknown illumination and unknown real scene conditions are the most important. Any ad-hoc reconstruction of real-world properties must likewise be incorporated, on the fly, into an algorithm responsible for shading virtual objects and transferring virtual light to real surfaces. The immersiveness of an Augmented Reality simulation depends, next to its realism and accuracy, primarily on its responsiveness: any computation affecting the final image must run in real time. This condition rules out many of the methods used in movie production. The remaining real-time options face three problems: shading virtual surfaces under real natural illumination, relighting real surfaces according to the change in illumination caused by introducing a new object into the scene, and the believable global interaction of real and virtual light. This dissertation presents contributions that address these problems. Current state-of-the-art methods build on Differential Rendering techniques (sketched below) to fuse global illumination algorithms into AR environments. This simple approach has a computationally costly downside, which limits the options for believable light transfer even further. This dissertation explores new shading and relighting algorithms built on a mathematical foundation that replaces Differential Rendering. The result is not only a more efficient competitor to the current state of the art in global illumination relighting, but also advances the field with the ability to simulate effects that have not been demonstrated in contemporary publications until now.
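    For context, the Differential Rendering baseline that the dissertation replaces can be summarised in a few lines. The sketch below is a minimal NumPy rendition of the classic two-pass composite (in the spirit of Debevec's method); the function name, array names, and object mask are illustrative assumptions, not the dissertation's own code.

        import numpy as np

        def differential_rendering(camera_img, render_with, render_without, obj_mask):
            # camera_img     : real camera image (H x W x 3, linear radiance)
            # render_with    : global-illumination render of the modelled local
            #                  scene plus the virtual object
            # render_without : global-illumination render of the local scene alone
            # obj_mask       : H x W, 1.0 where the virtual object covers the pixel
            delta = render_with - render_without   # change the object causes on real surfaces
            mask = obj_mask[..., None]             # broadcast over colour channels
            # Object pixels come straight from the synthetic render; real pixels
            # receive the camera image plus the radiance change the object induces
            # (shadows, colour bleeding).
            return mask * render_with + (1.0 - mask) * (camera_img + delta)

    The computational downside mentioned above is visible here: every frame requires two full global-illumination solutions, render_with and render_without, which is precisely the overhead a replacement formulation can avoid.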

    Direct Animation Interfaces: An Interaction Approach to Computer Animation

    Creativity tools for digital media have largely been democratised, offering a range from beginner to expert tools. Yet computer animation, the art of instilling life into believable characters and fantastic worlds, is still a highly sophisticated process restricted to the sphere of expert users. This is largely due to the methods employed: in keyframe animation, dynamics are specified indirectly through abstract descriptions, while performance animation suffers from inflexibility due to a high technological overhead. The opposite trend in human-computer interaction, towards interfaces that are more direct, intuitive, and natural to use, has so far hardly touched the animation world: decades of interaction research have scarcely been linked to the research and development of animation techniques. The hypothesis of this work is that an interaction approach to computer animation can inform the design and development of novel animation techniques. Three goals are formulated to demonstrate the validity of this thesis. First, computer animation methods and interfaces must be embedded in an interaction context. Second, the insights this brings for designing next-generation animation tools must be examined and formalised. Third, the practical consequences for the development of motion creation and editing tools must be demonstrated with prototypes that are more direct, efficient, easy to learn, and flexible to use. The foundation of the procedure is a conceptual framework comprising a comprehensive discussion of the state of the art, a design space of interfaces for time-based visual media, and a taxonomy of mappings between user and medium space-time. Based on this, an interaction-centred analysis of computer animation culminates in the concept of direct animation interfaces and guidelines for their design. These guidelines are tested in two point designs for direct input devices. The design, implementation, and testing of a surface-based performance animation tool take a systems approach, addressing interaction design issues as well as challenges in extending current software architectures to support novel forms of animation control. The second prototype, a performance timing technique, shows how concepts from video browsing can be applied to motion editing for more direct and efficient animation timing; a minimal sketch of this retiming idea follows below.
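    As a rough illustration of the performance timing idea, the hypothetical sketch below retimes an animation channel from a recorded scrub gesture, in the spirit of video-browsing interaction; the function and its parameters are assumptions for illustration, not the thesis prototype.

        import numpy as np

        def retime(key_times, key_values, scrub_samples, duration, rate=24.0):
            # key_times, key_values : original keyframes of one animation channel
            # scrub_samples         : (performance_time, source_time) pairs recorded
            #                         while the artist scrubbed through the motion
            # duration              : length of the retimed clip, in seconds
            perf_t, src_t = np.asarray(scrub_samples, dtype=float).T
            new_times = np.arange(0.0, duration, 1.0 / rate)
            # Performance time -> original animation time (a monotone scrub is assumed) ...
            mapped = np.interp(new_times, perf_t, src_t)
            # ... then original time -> channel value, sampled at regular frame times.
            return np.interp(mapped, key_times, key_values)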

    A Framework for the Semantics-aware Modelling of Objects

    The evolution of 3D visual content calls for innovative methods for modelling shapes based on their intended usage, function, and role in a complex scenario. Although various attempts have been made in this direction, shape modelling still focuses mainly on geometry. However, 3D models have a structure, given by the arrangement of salient parts, and shape and structure are deeply related to semantics and functionality. Changing geometry without semantic clues may invalidate such functionalities or the meaning of objects or their parts. We approach the problem by considering semantics as the formalised knowledge related to a category of objects; the geometry can vary provided that the semantics is preserved. We represent the semantics and the variable geometry of a class of shapes through the parametric template: an annotated 3D model whose geometry can be deformed provided that certain semantic constraints remain satisfied. In this work, we design and develop a framework for the semantics-aware modelling of shapes, offering the user a single application environment where the whole workflow of defining the parametric template and applying semantics-aware deformations can take place. In particular, the system provides tools for the selection and annotation of geometry based on formalised contextual knowledge; shape analysis methods to derive new knowledge implicitly encoded in the geometry and possibly enrich the given semantics; a set of constraints that the user can apply to salient parts; and a deformation operation that takes the semantic constraints into account and provides an optimal solution. The framework is modular, so new tools can be added continuously. While producing some innovative results in specific areas, the goal of this work is the development of a comprehensive framework combining state-of-the-art techniques and new algorithms, enabling the user to conceptualise their knowledge and model geometric shapes. The original contributions concern the formalisation of the concept of annotation, with attached properties, and of the relations between significant parts of objects; a new technique for guaranteeing the persistence of annotations after significant changes in the shape's resolution; the exploitation of shape descriptors for the extraction of quantitative information and the assessment of shape variability within a class; and the extension of popular cage-based deformation techniques to include constraints on the allowed displacement of vertices (sketched below). In this thesis, we report the design and development of the framework as well as results in two application scenarios, namely product design and archaeological reconstruction.
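    The last contribution, cage-based deformation with displacement constraints, can be sketched as follows. This is a minimal illustration under assumptions: it uses precomputed generalised barycentric coordinates and enforces a per-vertex displacement bound by a simple projection, whereas the framework described above solves for an optimal constrained deformation.

        import numpy as np

        def constrained_cage_deform(weights, cage_rest, cage_new, max_disp):
            # weights   : (n_verts, n_cage) generalised barycentric coordinates
            #             (e.g. mean value coordinates); each row sums to 1
            # cage_rest : (n_cage, 3) cage vertices at rest
            # cage_new  : (n_cage, 3) cage vertices after the user's edit
            # max_disp  : (n_verts,) displacement bound per mesh vertex, assumed to
            #             come from constraints attached to annotated salient parts
            rest = weights @ cage_rest        # the coordinates reproduce the rest shape
            deformed = weights @ cage_new     # standard cage-based deformation
            disp = deformed - rest
            norm = np.linalg.norm(disp, axis=1)
            # Scale back any displacement that exceeds its bound; a feasibility
            # projection, not the optimisation the thesis actually solves.
            scale = np.minimum(1.0, max_disp / np.maximum(norm, 1e-12))
            return rest + disp * scale[:, None]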

    Path-Space Analysis for Compositing Shadows and Lights

    The production of 3D animated motion pictures now relies on physically realistic rendering techniques that simulate light propagation within each scene. In this context, 3D artists must use lighting effects to support staging, advance the film's narrative, and convey its emotional content to viewers. However, the equations that model the behaviour of light leave little room for artistic expression. In addition, editing illumination by trial and error is tedious due to the long render times that physically realistic rendering requires. To remedy these problems, most animation studios resort to compositing, where artists rework a frame by combining multiple layers exported during rendering. These layers can contain geometric information about the scene, or isolate a particular lighting effect. The advantage of compositing is that interaction takes place in real time, based on conventional image-space operations. Our main contribution is the definition of a new type of layer for compositing, the shadow layer. A shadow layer contains the amount of energy lost in the scene due to the occlusion of light rays by a chosen object. Compared to existing tools, our approach presents several advantages for artistic editing. First, its physical meaning is straightforward: when the shadow layer is added to the original image, every shadow created by the chosen object disappears. In comparison, a traditional shadow matte represents the fraction of occluded rays at each pixel, a grayscale quantity that can only serve as an approximation to guide compositing operations. Second, shadow layers are compatible with global illumination: they record the energy lost from secondary light sources, i.e. light reflected at least once in the scene, whereas current methods consider only primary sources. Finally, we demonstrate that three different renderers overestimate illumination when an artist disables the shadows of an object; our definition fixes this shortcoming. We present a prototype implementation of shadow layers obtained from a few modifications of path tracing, the rendering algorithm of choice in production. It exports the original image and an arbitrary number of shadow layers associated with different objects in a single rendering pass, with a time overhead on the order of 15% in scenes with complex geometry and multiple participating media. Optional parameters are also offered to the artist to fine-tune the rendering of shadow layers.
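    To make the shadow-layer definition concrete, the following is a minimal sketch of how the next-event-estimation (direct lighting) step of a path tracer could feed such a layer. It is an illustration under assumed interfaces, not the paper's implementation: scene, light, hit, and pixel stand in for a renderer's own types, and only the branching logic matters here.

        def direct_light(hit, light, scene, tagged_obj, pixel):
            # Sample the light: a direction, distance, emitted radiance, and pdf.
            ls = light.sample(hit.position)
            contrib = hit.throughput * hit.bsdf_eval(ls.direction) * ls.radiance / ls.pdf
            # Cast the shadow ray and ask which object, if any, blocks it.
            blocker = scene.first_hit(hit.position, ls.direction, ls.distance)
            if blocker is None:
                pixel.image += contrib          # light reaches the surface: beauty image
            elif blocker is tagged_obj:
                pixel.shadow_layer += contrib   # energy lost to the chosen object
            # Blocked by any other object: the energy is lost for good and
            # contributes to neither buffer.

    Because hit.throughput carries the accumulated path weight, running this test at every path vertex also records losses of indirect (secondary) illumination, which is the global-illumination property claimed above. A compositor can then remove the object's shadows with image + shadow_layer, or soften them with image + k * shadow_layer for some k between 0 and 1.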