191 research outputs found

    Data-driven shape analysis and processing

    Get PDF
    Data-driven methods serve an increasingly important role in discovering geometric, structural, and semantic relationships between shapes. In contrast to traditional approaches that process shapes in isolation from each other, data-driven methods aggregate information from collections of 3D models to improve the analysis, modeling, and editing of shapes. Through a review of the literature, we provide an overview of the main concepts and components of these methods and discuss their application to classification, segmentation, matching, reconstruction, modeling, exploration, and scene analysis and synthesis. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.

    Analyse de l'espace des chemins pour la composition des ombres et lumières

    Get PDF
    The production of 3D animated motion pictures now relies on physically realistic rendering techniques that simulate light propagation within each scene. In this context, 3D artists must leverage lighting effects to support staging, advance the film's narrative, and convey its emotional content to viewers. However, the equations that model the behavior of light leave little room for artistic expression. In addition, editing illumination by trial and error is tedious because of the long render times that physically realistic rendering requires. To remedy these problems, most animation studios resort to compositing, where artists rework a frame by combining multiple layers exported during rendering. These layers can contain geometric information about the scene, or isolate a particular lighting effect. The advantage of compositing is that interactions take place in real time and are based on conventional image-space operations. Our main contribution is the definition of a new type of layer for compositing, the shadow layer. A shadow layer contains the amount of energy lost in the scene due to the occlusion of light rays by a chosen object. Compared to existing tools, our approach presents several advantages for artistic editing. First, its physical meaning is straightforward: when a shadow layer is added to the original image, any shadow created by the chosen object disappears. In comparison, a traditional shadow matte stores the fraction of occluded rays at each pixel, grayscale information that can only serve as an approximation to guide compositing operations. Second, shadow layers are compatible with global illumination: they record the energy lost from secondary light sources, i.e., light scattered at least once in the scene, whereas current methods consider only primary sources. Finally, we demonstrate that three different renderers overestimate illumination when an artist disables the shadows cast by an object; our definition fixes this shortcoming. We present a prototype implementation of shadow layers obtained from a few modifications of path tracing, the rendering algorithm of choice in production. It exports the original image and any number of shadow layers associated with different objects in a single rendering pass, with a time overhead on the order of 15% in scenes containing complex geometry and multiple participating media. Optional parameters are also exposed to the artist to fine-tune the rendering of shadow layers.
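
    As a sanity check on the idea, here is a runnable Python toy that exercises only the routing and compositing logic implied above: energy blocked by the chosen object is accumulated into its shadow layer rather than discarded, so adding the layer back removes exactly that object's shadow. The function name, the stub objects, and the hard-coded contribution values are illustrative assumptions; in a real path tracer the contribution would come from light sampling and BSDF evaluation inside the integrator.

        import numpy as np

        def route_direct_lighting(contrib, blocker, tagged, beauty, shadow_layer, px):
            """Route one light-sample contribution: unoccluded energy goes to the
            beauty image, energy blocked by the tagged object goes to its shadow
            layer, and energy blocked by any other object goes to neither buffer."""
            if blocker is None:
                beauty[px] += contrib
            elif blocker is tagged:
                shadow_layer[px] += contrib

        beauty = np.zeros((2, 2))
        shadow = np.zeros((2, 2))
        sphere, floor = object(), object()           # stand-ins for scene objects

        route_direct_lighting(0.8, None,   sphere, beauty, shadow, (0, 0))  # lit pixel
        route_direct_lighting(0.8, sphere, sphere, beauty, shadow, (0, 1))  # shadowed by the sphere
        route_direct_lighting(0.8, floor,  sphere, beauty, shadow, (1, 0))  # shadowed by another object

        print(beauty + shadow)   # adding the layer back removes only the sphere's shadow

    Scaling the layer before the addition (e.g. beauty + 0.5 * shadow) attenuates the chosen object's shadow instead of removing it, which is the kind of per-layer artistic control compositing is meant to offer.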

    Physically Based Rendering of Synthetic Objects in Real Environments

    Full text link

    An aesthetics of touch: investigating the language of design relating to form

    Get PDF
    How well can designers communicate qualities of touch? This paper presents evidence that they have some capability to do so, much of which appears to have been learned, but that at present they make limited use of such language. Interviews with graduate designer-makers suggest that they are aware of and value the importance of touch and materiality in their work, but lack a vocabulary for touch that matches the detail of their explanations of other aspects, such as their intent or selection of materials. We believe that more attention should be paid to the verbal dialogue that happens in the design process, particularly as other researchers show that even making-based learning has a strong verbal element. However, verbal language alone does not appear to be adequate for a comprehensive language of touch. Graduate designer-makers' descriptive practices embedded non-verbal manipulation within verbal accounts. We thus argue that haptic vocabularies do not simply describe material qualities, but rather are situated competences that physically demonstrate the presence of haptic qualities. Such competences are more important than verbal vocabularies in isolation. Design support for developing and extending haptic competences must take this wide range of considerations into account to comprehensively improve designers' capabilities.

    Surface Deformation Potentials on Meshes for Computer Graphics and Visualization

    Get PDF
    Shape deformation models have been used in computer graphics primarily to describe the dynamics of physical deformations such as cloth draping, collisions of elastic bodies, fracture, or the animation of hair. Their application to problems not directly related to a physical process is far less frequent. In this thesis we apply deformations to three problems in computer graphics that do not correspond to physical deformations. To this end, we generalize the physical model by modifying the energy potential. Originally, the energy potential amounts to the physical work needed to deform a body from its rest state into a given configuration, and it relates material strain to the internal restoring forces that act to restore the original shape. For each of the three problems considered, this potential is adapted to reflect an application-specific notion of shape. Under the influence of further constraints, our generalized deformation results in shapes that balance the preservation of certain shape properties against application-specific objectives, analogous to physical equilibrium states. The applications discussed in this thesis are surface parameterization, interactive shape editing, and the automatic design of panorama maps. For surface parameterization, we interpret parameterizations over a planar domain as deformations from a flat initial configuration onto a given surface. In this setting, we review existing parameterization methods by analyzing the properties of their potential functions and derive potentials that account for the distortion of geometric properties such as area and angles. Interactive shape editing allows an untrained user to modify complex surfaces by simply grabbing and moving parts of interest; a deformation model interactively extrapolates the transformation from those parts to the rest of the surface. This thesis proposes a differential shape representation for triangle meshes that leads to a potential which can be optimized interactively with a simple, tailored algorithm. Although the potential is not physically accurate, it results in intuitive deformation behavior and can be parameterized to account for different material properties. Panorama maps, traditionally painted by artists such as Heinrich Berann, are blends between landscape illustrations and geographic maps that convey geographic survey knowledge of places like ski resorts or national parks. While panorama maps are not drawn to scale, the depicted landscape remains recognizable, and the observer can easily recover the details necessary for self-location and orientation. At the same time, important features such as trails or ski slopes remain unoccluded and clearly visible. This thesis proposes the first automatic panorama-map generation method. Its basis is again a surface deformation that establishes the necessary compromise between shape preservation and feature visibility.
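
    For readers who want to see what a quadratic deformation potential of this general kind looks like in code, the sketch below performs generic Laplacian editing with soft handle constraints using the uniform graph Laplacian; this is a textbook formulation, not the differential representation or the tailored solver proposed in the thesis.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def deform(vertices, edges, handle_ids, handle_targets, weight=100.0):
            """vertices: (n, 3) rest positions; edges: (m, 2) vertex-index pairs;
            handle_ids / handle_targets: constrained vertices and their new positions."""
            vertices = np.asarray(vertices, dtype=float)
            edges = np.asarray(edges)
            handle_ids = np.asarray(handle_ids)
            n = len(vertices)
            i, j = edges[:, 0], edges[:, 1]
            # Uniform graph Laplacian L = D - A built from the edge list.
            A = sp.coo_matrix((np.ones(2 * len(edges)),
                               (np.concatenate([i, j]), np.concatenate([j, i]))),
                              shape=(n, n)).tocsr()
            L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A
            delta = L @ vertices              # differential coordinates of the rest shape
            # Soft positional constraints: weighted identity rows at the handle vertices.
            C = sp.coo_matrix((np.full(len(handle_ids), weight),
                               (np.arange(len(handle_ids)), handle_ids)),
                              shape=(len(handle_ids), n)).tocsr()
            A_sys = sp.vstack([L, C]).tocsr()
            b = np.vstack([delta, weight * np.asarray(handle_targets, dtype=float)])
            # Least-squares minimum of ||L x - delta||^2 + weight^2 ||x[handles] - targets||^2,
            # solved independently for the x, y, and z coordinates.
            return np.column_stack([spla.lsqr(A_sys, b[:, k])[0] for k in range(3)])

    In practice one would usually prefer cotangent rather than uniform weights and prefactor the system so that dragging a handle only triggers a back-substitution; the thesis's potential and optimization algorithm differ from this generic baseline.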

    Programming tools for intelligent systems

    Full text link
    Programming tools are computer programs that help humans program computers. Tools come in all shapes and forms, from editors and compilers to debuggers and profilers. Each of these tools facilitates a core task in the programming workflow that consumes cognitive resources when performed manually. In this thesis, we explore several tools that facilitate the process of building intelligent systems and reduce the cognitive effort required to design, develop, test, and deploy intelligent software systems. First, we introduce an integrated development environment (IDE) for programming Robot Operating System (ROS) applications, called Hatchery (Chapter 2). Second, we describe Kotlin∇, a language and type system for differentiable programming, an emerging paradigm in machine learning (Chapter 3). Third, we propose a new algorithm for automatically testing differentiable programs, drawing inspiration from techniques in adversarial and metamorphic testing (Chapter 4), and demonstrate its empirical efficiency in the regression setting. Fourth, we explore a container infrastructure based on Docker, which enables reproducible deployment of ROS applications on the Duckietown platform (Chapter 5). Finally, we reflect on the current state of programming tools for these applications and speculate on what intelligent systems programming might look like in the future (Chapter 6).
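
    As a concrete, if much simpler, example of the kind of property an automatic test for differentiable programs can assert, the sketch below compares a hand-written gradient against a central finite difference at randomly sampled inputs and reports disagreements; it is a standard gradient check, not the algorithm of Chapter 4, and the tolerance and sampling scheme are arbitrary choices.

        import numpy as np

        def gradient_check(f, grad_f, dim, trials=100, h=1e-5, tol=1e-4, seed=0):
            """Sample random inputs and report those where the analytic gradient
            `grad_f` disagrees with a central finite difference of `f`."""
            rng = np.random.default_rng(seed)
            failures = []
            for _ in range(trials):
                x = rng.standard_normal(dim)
                g = np.asarray(grad_f(x), dtype=float)
                fd = np.empty(dim)
                for k in range(dim):
                    e = np.zeros(dim)
                    e[k] = h
                    fd[k] = (f(x + e) - f(x - e)) / (2 * h)
                if np.max(np.abs(g - fd)) > tol * (1.0 + np.max(np.abs(fd))):
                    failures.append(x)
            return failures

        # A deliberately wrong gradient (sin instead of cos) is caught immediately.
        f = lambda x: np.sum(np.sin(x))
        bad_grad = lambda x: np.sin(x)
        print(len(gradient_check(f, bad_grad, dim=3, trials=10)), "failing inputs found")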

    Multimodale Bildgebung der myokardialen Heilung nach einem Herzinfarkt im Mausmodell im Kontext kardialer Stammzelltherapien

    Get PDF
    The overarching goal of this work was to develop and evaluate methods in the mouse model for measuring, in vivo, the behavior of intramyocardially transplanted cells, their mechanisms of action, and their effect on left-ventricular remodeling. The fate of the transplanted cells and the mechanisms of action, namely cellular inflammation and angiogenesis, were investigated primarily with positron emission tomography (PET). For establishing imaging of LV remodeling, cardiac magnetic resonance imaging (MRI) played the central role.

    The People Inside

    Get PDF
    Our collection begins with an example of computer vision that cuts through time and bureaucratic opacity to help us meet real people from the past. Buried in thousands of files in the National Archives of Australia is evidence of the exclusionary “White Australia” policies of the nineteenth and twentieth centuries, which were intended to limit and discourage immigration by non-Europeans. Tim Sherratt and Kate Bagnall decided to see what would happen if they used a form of face-detection software made ubiquitous by modern surveillance systems and applied it to a security system of a century ago. What we get is a new way to see the government documents, not as a source of statistics but, Sherratt and Bagnall argue, as powerful evidence of the people affected by racism.
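
    For the curious, detecting faces in a scanned record with off-the-shelf tooling takes only a few lines; the sketch below uses OpenCV's stock Haar cascade with a placeholder file name, and is not Sherratt and Bagnall's actual pipeline.

        import cv2

        # OpenCV ships a pretrained frontal-face Haar cascade with the package.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        image = cv2.imread("certificate_scan.jpg")   # placeholder path to a scanned record
        if image is None:
            raise SystemExit("provide a scanned document image to run this sketch")

        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        for (x, y, w, h) in faces:                   # crop each detected portrait photograph
            cv2.imwrite(f"face_{x}_{y}.jpg", image[y:y + h, x:x + w])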

    Perceptually inspired image estimation and enhancement

    Get PDF
    Ph.D. thesis by Yuanzhen Li, Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009. Includes bibliographical references (p. 137-144). In this thesis, we present three image estimation and enhancement algorithms inspired by human vision. In the first part of the thesis, we propose an algorithm for mapping one image to another based on the statistics of a training set. Many vision problems can be cast as image mapping problems, such as estimating reflectance from luminance, estimating shape from shading, and separating signal from noise. Such problems are typically under-constrained, and yet humans are remarkably good at solving them. Classic computational theories about the ability of the human visual system to solve such under-constrained problems attribute this feat to the use of intuitive regularities of the world, e.g., that surfaces tend to be piecewise constant. In recent years, there has been considerable interest in deriving more sophisticated statistical constraints from natural images, but because of the high-dimensional nature of images, representing and utilizing the learned models remains a challenge. Our techniques produce models that are very easy to store and to query. We show these techniques to be effective for a number of applications: removing noise from images, estimating a sharp image from a blurry one, decomposing an image into reflectance and illumination, and interpreting lightness illusions. In the second part of the thesis, we present an algorithm for compressing the dynamic range of an image while retaining important visual detail. The human visual system confronts a serious challenge with dynamic range, in that the physical world has an extremely high dynamic range while neurons have low dynamic ranges. The human visual system performs dynamic range compression by applying automatic gain control, in both the retina and the visual cortex. Taking inspiration from that, we designed techniques that involve multi-scale subband transforms and smooth gain control on subband coefficients, and that resemble the contrast gain control mechanism in the visual cortex. We show our techniques to be successful in producing dynamic-range-compressed images without compromising the visibility of detail or introducing artifacts. We also show that the techniques can be adapted for the related problem of "companding", in which a high dynamic range image is converted to a low dynamic range image, saved using fewer bits, and later expanded back to high dynamic range with minimal loss of visual quality. In the third part of the thesis, we propose a technique that enables a user to easily localize image and video editing by drawing a small number of rough scribbles. Image segmentation, usually treated as an unsupervised clustering problem, is extremely difficult to solve. With a minimal degree of user supervision, however, we are able to generate selection masks of good quality. Our technique learns a classifier using the user-scribbled pixels as training examples and uses the classifier to classify the rest of the pixels into distinct classes. It then uses the classification results as per-pixel data terms, combines them with a smoothness term that respects color discontinuities, and generates better results than state-of-the-art algorithms for interactive segmentation.
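
    As a rough, self-contained illustration of the subband idea from the second part, the sketch below builds a simple Gaussian-difference decomposition of a log-luminance image and applies a spatially smoothed, compressive gain to each subband before recombining; the gain curve, its parameters, and the choice of pyramid are guesses made for illustration rather than the transforms actually used in the thesis.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def smooth_gain(band, sigma, delta=0.1, gamma=0.6):
            """Spatially smoothed, compressive gain: strong subband activity is
            attenuated, while low-amplitude detail passes nearly unchanged."""
            activity = gaussian_filter(np.abs(band), sigma)
            return ((activity + delta) / delta) ** (gamma - 1.0)

        def compress_range(log_lum, levels=5, sigma=2.0):
            """Decompose a log-luminance image into band-pass subbands plus a
            low-pass base, apply gain control per subband, and recombine."""
            bands, low = [], log_lum
            for _ in range(levels):
                blurred = gaussian_filter(low, sigma)
                bands.append(low - blurred)            # band-pass (detail) subband
                low = blurred
            out = 0.5 * low                            # compress the base band most
            for band in reversed(bands):
                out = out + band * smooth_gain(band, sigma)
            return out

        # Typical use: lum -> np.exp(compress_range(np.log(lum + 1e-6))) before display.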