80 research outputs found

    An Approach to the Procedural Generation of Worn Metal Surfaces

    Motivated by the phenomenon that wear and tear tend to occur near the sharp corners of a surface, this thesis presents a method for procedurally generating photorealistic metal surfaces based on evaluating curvature values. The thesis describes the development of eight metal shaders that replace the manual texture painting typically used in production. The approach is demonstrated by applying these metal shaders to a robotic dog model from a short film combining live action and CG elements. Frames from a short animation of the robotic dog are presented, along with a discussion of the strengths and weaknesses of the methodology
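
A minimal sketch of the core idea (not the thesis's actual shaders): drive a wear mask from a per-point curvature value and blend a worn metal colour over a base colour near sharp corners. The colours, threshold, and smoothstep falloff are all illustrative assumptions.

```python
# Illustrative curvature-driven wear blend; all constants are invented.

def smoothstep(edge0, edge1, x):
    """Clamped Hermite interpolation, as in shading languages."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def shade(curvature, base=(0.55, 0.56, 0.58), worn=(0.85, 0.83, 0.80),
          threshold=0.2, width=0.3):
    """Blend base and worn colours; higher curvature means more wear."""
    w = smoothstep(threshold, threshold + width, curvature)
    return tuple(b * (1 - w) + c * w for b, c in zip(base, worn))

print(shade(0.0))  # flat region keeps the base colour
print(shade(1.0))  # sharp corner shows the fully worn colour
```

In a real shader the curvature input would come from a precomputed curvature map or a mesh-analysis pass; here it is just a scalar argument.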

    Fluid Morphing for 2D Animations

    Creation of professional animations is expensive and time-consuming, especially for independent game developers. It is therefore rewarding to find a method that can programmatically increase the frame count of any two-dimensional raster animation. Experimenting with a fluid simulator gave the authors the insight that elements of fluid dynamics can be used to achieve visually pleasant, smooth transitions between frames. As a result, fluid image morphing was developed, allowing animators to produce significantly more frames than they would with classic methods. The authors believe this discovery could reintroduce hand-drawn animations to modern computer games

    Synthetic Data in Quantitative Scanning Probe Microscopy

    Synthetic data are of increasing importance in nanometrology. They can be used to develop data-processing methods, analyse uncertainties, and estimate various measurement artefacts. In this paper we review the methods used to generate synthetic data and their applications in scanning probe microscopy, focusing on principles, performance, and applicability. We illustrate the benefits of synthetic data on tasks related to the development of better scanning approaches and to estimating the reliability of data-processing methods. We demonstrate how synthetic data can be used to analyse systematic errors common to scanning probe microscopy, whether related to the measurement principle or to typical data-processing paths
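
One classic systematic error of this kind is tip convolution, and it illustrates how synthetic data help: apply a known distortion to a synthetic profile and compare against the ground truth. Below is a hedged sketch, not the paper's method; the 1-D profile, the tip shape, and the grey-scale dilation model are all invented for illustration.

```python
# Simulate tip convolution on a synthetic 1-D height profile.
# Grey-scale dilation models the tip riding over the surface.

def dilate(surface, tip):
    """Measured height at x is the highest contact point of the
    tip apex as the tip is translated along the true surface."""
    half = len(tip) // 2
    n = len(surface)
    measured = []
    for i in range(n):
        best = float("-inf")
        for j, t in enumerate(tip):
            k = i + j - half
            if 0 <= k < n:
                best = max(best, surface[k] + t)
        measured.append(best)
    return measured

true = [0, 0, 0, 5, 0, 0, 0, 5, 5, 0, 0]   # two synthetic features
tip = [-2, -1, 0, -1, -2]                   # blunt symmetric tip profile
meas = dilate(true, tip)
print(meas)  # features appear wider than in the true profile
```

Because the ground truth is known exactly, the broadening introduced by the tip can be quantified point by point, which is precisely the kind of analysis synthetic data enable.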

    Stereological techniques for synthesizing solid textures from images of aggregate materials

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2005. Includes bibliographical references (leaves 121-130). When creating photorealistic digital scenes, textures are commonly used to depict complex variation in surface appearance. For materials that have spatial variation in three dimensions, such as wood or marble, solid textures offer a natural representation. Unlike 2D textures, which can easily be captured with a photograph, a 3D material volume can be difficult to obtain. This thesis addresses the challenge of extrapolating tileable 3D solid textures from images of aggregate materials such as concrete, asphalt, terrazzo, or granite. The approach introduced here is inspired by and builds on prior work in stereology, the study of 3D properties of a material based on 2D observations. Unlike ad hoc methods for texture synthesis, this approach has rigorous mathematical foundations that allow for reliable, accurate material synthesis with well-defined assumptions. The algorithm is also driven by psychophysical constraints to ensure that slices through the synthesized volume have a perceptually similar appearance to the input image. The texture synthesis algorithm uses a variety of techniques to independently solve for the shape, distribution, and color of the embedded particles, as well as the residual noise. To approximate particle shape, I consider four methods, including two algorithms of my own contribution. I compare these methods under a variety of input conditions using automated, perceptually motivated metrics as well as a carefully controlled psychophysical experiment. In addition to assessing the relative performance of the four algorithms, I also evaluate the reliability of the automated metrics in predicting the results of the user study. To solve for the particle distribution, I apply traditional stereological methods.
    I first illustrate this approach for aggregate materials of spherical particles and then extend the technique to apply to particles of arbitrary shapes. The particle shape and distribution are used in conjunction to create an explicit 3D material volume using simulated annealing. Particle colors are assigned using a stochastic method, and high-frequency noise is replicated with the assistance of existing algorithms. The data representation is suitable for high-fidelity rendering and physical simulation. I demonstrate the effectiveness of the approach with side-by-side comparisons of real materials and their synthetic counterparts derived from the application of these techniques. By Robert Carl Jagnow. Ph.D.
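
The stereological starting point for spherical particles is Wicksell's corpuscle problem: a planar slice through a sphere of radius R at distance d from its centre shows a circle of radius sqrt(R^2 - d^2), so observed 2-D profile radii systematically underestimate the true 3-D radius. A quick Monte Carlo illustration (not the thesis's estimator; the uniform slice-distance model is the standard textbook assumption):

```python
# Slice unit spheres at uniformly random distances from the centre
# and observe the resulting 2-D profile radii.
import random

random.seed(0)
R = 1.0
profiles = []
for _ in range(100_000):
    d = random.uniform(0.0, R)             # centre-to-plane distance
    profiles.append((R * R - d * d) ** 0.5)  # circle radius in the slice

mean_profile = sum(profiles) / len(profiles)
print(mean_profile)  # close to pi/4, well below the true radius of 1.0
```

Inverting this bias, i.e. recovering the 3-D size distribution from the observed 2-D one, is what the traditional stereological methods mentioned above provide.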

    Reconstruction and rendering of time-varying natural phenomena

    While computer performance increases and computer-generated images become ever more realistic, the need to model computer graphics content grows stronger. Achieving photorealism requires detailed scenes that must often be modeled with a significant amount of manual labour. Interdisciplinary research combining the fields of computer graphics, computer vision, and scientific computing has led to the development of (semi-)automatic modeling tools that free the user from labour-intensive modeling tasks. Modeling animated content is especially challenging: realistic motion is necessary to convince the audience of computer games, movies with mixed-reality content, and augmented reality applications. The goal of this thesis is to investigate automated modeling techniques for time-varying natural phenomena. The results of the presented methods are animated, three-dimensional computer models of fire, smoke, and fluid flows

    Investigations on the visual perception of shape and material properties of three-dimensional transparent objects

    This work investigates the principles and mechanisms underlying visual material and shape perception. While previous work on this topic has concentrated primarily on the perception of opaque objects, this work focuses on the perception of transparent objects. Due to substantial differences in the physical light transport in the transparent case, there are differences not only in the information contained in the light reaching the eye, but also in the regularities in the retinal image related to the properties of such objects. With regard to material perception, theoretical and computational analyses show that an image regularity, namely optical background distortions, which was initially considered in the literature to be a promising cue for the perception of the refractive properties of transparent materials, does not in fact seem suitable for this purpose. In line with these analyses, the results of several empirical tests indicate that background distortions are not actually used by the visual system for this purpose. On the other hand, corresponding theoretical and computational analyses of shape perception show that several image regularities, not only background distortions but also chromaticity and intensity changes due to absorption and mirror images due to specular reflections, can, under certain circumstances, provide information about the shape of transparent objects. Thorough empirical tests show that these regularities do contribute positively to shape perception in certain situations. However, the results also show that shape perception of unknown objects is substantially worse in the transparent case than in the opaque case, especially if the objects are solid

    Revealing the Invisible: On the Extraction of Latent Information from Generalized Image Data

    The desire to reveal the invisible in order to explain the world around us has been a source of impetus for technological and scientific progress throughout human history. Many of the phenomena that directly affect us cannot be sufficiently explained from observations made with our primary senses alone, often because their originating cause is too small, too far away, or otherwise obstructed. To put it another way: it is invisible to us. Without careful observation and experimentation, our models of the world remain inaccurate, and research must be conducted to improve our understanding of even the most basic effects. In this thesis, we present our solutions to three challenging problems in visual computing, where a surprising amount of information is hidden in generalized image data and cannot easily be extracted by human observation or existing methods. We extract the latent information using non-linear and discrete optimization methods based on physically motivated models and computer graphics methodology, such as ray tracing, real-time transient rendering, and image-based rendering

    Terrainosaurus: realistic terrain synthesis using genetic algorithms

    Synthetically generated terrain models are useful across a broad range of applications, including computer-generated art and animation, virtual reality and gaming, and architecture. Existing algorithms for terrain generation suffer from a number of problems, especially being limited in the types of terrain they can produce and being difficult for the user to control. Typical applications of synthetic terrain have several factors in common: first, they require the generation of large regions of believable (though not necessarily physically correct) terrain features; and second, while real-time performance is often needed when visualizing the terrain, this is generally not the case when generating it. In this thesis, I present a new, design-by-example method for synthesizing terrain height fields. In this approach, the user designs the layout of the terrain by sketching out simple regions using a CAD-style interface, and specifies the desired terrain characteristics of each region by providing example height fields displaying those characteristics (these height fields will typically come from real-world GIS data sources). A height field matching the user's design is generated at several levels of detail, using a genetic algorithm to blend together chunks of elevation data from the example height fields in a visually plausible manner. This method has the advantage of producing an unlimited diversity of reasonably realistic results while requiring relatively little user effort and expertise. The guided randomization inherent in the genetic algorithm lets the method discover novel arrangements of features while still approximating user-specified constraints
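
The "guided randomization" above can be sketched as a toy genetic algorithm. This is not the thesis's algorithm: the example chunks, the target statistic (mean elevation), and the mutate-plus-elitism loop are all invented stand-ins for the real chunk-blending fitness function.

```python
# Toy GA: arrange chunks of example elevation data so the assembled
# height field approaches a target mean height.
import random

random.seed(1)
chunks = [[0, 1, 2], [5, 5, 4], [9, 8, 9], [3, 3, 3]]  # example elevation data
TARGET = 4.0                                            # desired mean height
GENES = 4                                               # chunks per individual

def fitness(ind):
    heights = [h for g in ind for h in chunks[g]]
    return -abs(sum(heights) / len(heights) - TARGET)   # closer mean = fitter

def mutate(ind):
    child = list(ind)
    child[random.randrange(GENES)] = random.randrange(len(chunks))
    return child

# Evolve: keep the 10 fittest, refill with mutated copies of survivors.
pop = [[random.randrange(len(chunks)) for _ in range(GENES)]
       for _ in range(20)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

best = max(pop, key=fitness)
print(best, fitness(best))
```

The real system scores arrangements on visual plausibility across levels of detail rather than a single scalar statistic, but the selection/mutation loop has the same shape.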

    Synthetic image generation and the use of virtual environments for image enhancement tasks

    Deep learning networks are often difficult to train when image samples are insufficient, and gathering real-world images tailored to a specific task takes considerable effort. This dissertation explores techniques for synthetic image generation and virtual environments for various image enhancement/correction/restoration tasks, specifically distortion correction, dehazing, shadow removal, and intrinsic image decomposition. First, given various image formation equations, such as those used in distortion correction and dehazing, synthetic image samples can be produced, provided that the equation is well-posed. Second, virtual environments can be used to train image models by simulating real-world effects that are otherwise difficult to gather or replicate, such as haze and shadows. Given synthetic images, one cannot train a network directly on them, as there is a possible gap between the synthetic and real domains. We have devised several techniques for generating synthetic images and formulated domain adaptation methods with which our trained deep-learning networks perform competitively in distortion correction, dehazing, and shadow removal. Additional studies and directions are provided for the intrinsic image decomposition problem and for the exploration of procedural content generation, where a virtual Philippine city was created as an initial prototype. Keywords: image generation, image correction, image dehazing, shadow removal, intrinsic image decomposition, computer graphics, rendering, machine learning, neural networks, domain adaptation, procedural content generation