10 research outputs found

    Procedural aging techniques of synthetic cities and 3D scenarios

    Today we live in an increasingly computerized and demanding world, one in which the video game and film industries constantly need ways to create more realistic graphical environments, faster and with a large degree of variety. Procedural generation techniques emerged to address this need. They were adopted by the computer graphics industry to create natural textures, simulate special effects and generate complex natural models, most notably vegetation. Among these early techniques are fractals, L-systems and Perlin noise, among others. Later, with the need to create ever more complex and realistic environments, the solution was to adapt these already well-known algorithms to more ambitious tasks, such as generating road infrastructure or buildings, making it possible to generate practically an entire world using only procedural generation and a set of rules. Although this evolution is increasingly felt, a growing interest has been noted in a particular topic: the procedural aging of buildings in these graphical worlds. Several authors have proposed new and ever better procedural aging algorithms for buildings, but when approaching this subject they tend to follow a very narrow and specific path, each creating an algorithm capable of reproducing a single aging phenomenon. Having identified this gap in the literature, it was decided to seize the opportunity and present and develop a procedural aging algorithm applied to buildings that is capable of reproducing different aging phenomena while consuming few computational resources, so that it can be applied to a large 3D scenario.
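
    The thesis's actual algorithm is not reproduced here. Purely as a hedged illustration of the kind of noise-driven technique the abstract refers to (Perlin-style noise modulating weathering on a facade texture), the following Python sketch blends a clean wall colour toward a grime colour using a cheap value-noise mask with a gravity bias; every function name and parameter is an assumption for illustration only.

    # Illustrative sketch only: a noise-driven "aging mask" in the spirit of the
    # Perlin-noise-based procedural techniques mentioned in the abstract.
    # Helper names and parameters are assumptions, not the thesis's algorithm.
    import numpy as np

    def value_noise(width, height, cell=16, seed=0):
        """Cheap value noise: random lattice values, bilinearly interpolated."""
        rng = np.random.default_rng(seed)
        gw, gh = width // cell + 2, height // cell + 2
        lattice = rng.random((gh, gw))
        ys, xs = np.mgrid[0:height, 0:width] / cell
        x0, y0 = xs.astype(int), ys.astype(int)
        fx, fy = xs - x0, ys - y0
        fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep
        top = (1 - fx) * lattice[y0, x0] + fx * lattice[y0, x0 + 1]
        bot = (1 - fx) * lattice[y0 + 1, x0] + fx * lattice[y0 + 1, x0 + 1]
        return (1 - fy) * top + fy * bot

    def age_facade(clean_rgb, dirt_rgb=(0.25, 0.2, 0.15), intensity=0.6, seed=0):
        """Blend a clean facade texture toward a 'dirt' colour where the mask is
        high, with extra accumulation near the bottom (a crude gravity bias)."""
        h, w, _ = clean_rgb.shape
        mask = value_noise(w, h, cell=24, seed=seed)
        gravity = np.linspace(0.2, 1.0, h)[:, None]          # more grime lower down
        mask = np.clip(mask * gravity * intensity, 0.0, 1.0)[..., None]
        return (1 - mask) * clean_rgb + mask * np.asarray(dirt_rgb)

    if __name__ == "__main__":
        wall = np.full((256, 256, 3), 0.8)                   # plain light-grey wall
        aged = age_facade(wall, seed=42)
        print(aged.shape, float(aged.min()), float(aged.max()))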

    Lazy Solid Texture Synthesis

    Existing solid texture synthesis algorithms generate a full volume of color content from a set of 2D example images. We introduce a new algorithm with the unique ability to restrict synthesis to a subset of the voxels, while enforcing spatial determinism. This is especially useful when texturing objects, since only a thick layer around the surface needs to be synthesized. A major difficulty lies in reducing the dependency chain of neighborhood matching, so that each voxel only depends on a small number of other voxels. Our key idea is to synthesize a volume from a set of pre-computed 3D candidates, each being a triple of interleaved 2D neighborhoods. We present an efficient algorithm to carefully select in a pre-process only those candidates forming consistent triples. This significantly reduces the search space during subsequent synthesis. The result is a new parallel, spatially deterministic solid texture synthesis algorithm which runs efficiently on the GPU. Our approach generates high resolution solid textures on surfaces within seconds. Memory usage and synthesis time only depend on the output textured surface area. The GPU implementation of our method rapidly synthesizes new textures for the surfaces appearing when interactively breaking or cutting objects.
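
    As a hedged sketch of the two ideas stated above, restricting synthesis to a thin shell of voxels around the surface and making each voxel's result depend only on its own coordinates (spatial determinism), the following Python fragment uses an assumed stand-in surface (a sphere's signed distance field) and a trivial coordinate-hash lookup into a 2D exemplar; the paper's pre-computed candidate triples and neighborhood matching are not reproduced.

    # Illustrative sketch only: restrict work to a shell of voxels near a surface
    # and give each voxel a result that depends only on its own coordinates.
    import numpy as np

    def shell_voxels(n=64, radius=0.35, thickness=3.0):
        """Integer coordinates of voxels within `thickness` voxels of the surface."""
        xs = (np.arange(n) + 0.5) / n - 0.5
        X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
        sdf = np.sqrt(X**2 + Y**2 + Z**2) - radius        # signed distance to sphere
        return np.argwhere(np.abs(sdf) < thickness / n)

    def deterministic_color(voxel, exemplar):
        """Spatially deterministic lookup: colour depends only on the voxel coords."""
        h, w, _ = exemplar.shape
        i, j, k = (int(v) for v in voxel)
        u = (i * 73856093 ^ k * 19349663) % w             # simple coordinate hash
        v = (j * 83492791 ^ k * 2971215073) % h
        return exemplar[v, u]

    if __name__ == "__main__":
        exemplar = np.random.default_rng(0).random((64, 64, 3))   # stand-in 2D example
        voxels = shell_voxels()
        colors = np.array([deterministic_color(v, exemplar) for v in voxels])
        full = 64 ** 3
        print(f"synthesized {len(voxels)} voxels instead of {full} ({len(voxels)/full:.1%})")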

    Digital image processing for prognostic and diagnostic clinical pathology

    When digital imaging and image processing methods are applied to clinical diagnostic and prognostic needs, they can increase human understanding and provide objective measurements. Most current clinical applications are limited to providing subjective information to healthcare professionals rather than objective measures. This thesis details methods and systems that have been developed for both objective and subjective microscopy applications. A system framework is presented that provides a base for the development of microscopy imaging systems. This practical framework is based on currently available hardware and developed with standard software development tools. Image processing methods are applied to counter the optical limitations of the bright field microscope, automating the system and allowing for unsupervised image capture and analysis. Current literature provides evidence that 3D visualisation has provided increased insight and found application in many clinical areas. There have been recent advancements in the use of 3D visualisation for the study of soft tissue structures, but its clinical application within histology remains limited. Methods and applications have been researched and further developed which allow for the 3D reconstruction and visualisation of soft tissue structures from microtomed serial histological sections. A system developed to meet this need is presented, with consideration given to image capture, data registration and 3D visualisation requirements. The developed system has been used to explore and increase 3D insight on clinical samples. Automated objective image quantification of microscope slides offers the prospect of replacing existing objective and subjective methods, increasing accuracy and reducing manual burden. One such existing objective test is DNA Image Ploidy, which seeks to characterise cancer by measuring the DNA content within individual cell nuclei, an accepted but manually burdensome method. The main novelty of the work completed lies in the development of an automated system for DNA Image Ploidy measurement, combining methods for automatic specimen focus, segmentation and parametric extraction with an automated cell type classification system. A consideration for any clinical image processing system is the correct sampling of the tissue under study. While the image capture requirements of objective and subjective systems are similar, there is also an important link to the 3D structure of the tissue: 3D understanding can aid decisions regarding the sampling criteria of objective tests, since although many tests are carried out in 2D, the clinical samples are 3D objects. Cancers such as prostate and breast cancer are known to be multi-focal, with seemingly physically independent areas of disease within a single site. It is not possible to understand the true 3D nature of the samples using 2D microtomed sections in isolation from each other. The 3D systems described in this report provide a platform for exploring the true multi-focal nature of disease in soft tissue structures, allowing the sampling criteria of objective tests such as DNA Image Ploidy to be set correctly. For both the automated DNA Image Ploidy system and the 3D reconstruction and visualisation system, clinical review has been completed to test the increased insight provided. Datasets which have been reconstructed from microtomed serial sections and visualised with the developed 3D system are presented. For the automated DNA Image Ploidy system, the developed system is compared with the existing manual method to assess the quality of data capture, operational speed and correctness of nuclei classification. Conclusions are presented for the work that has been completed, together with a discussion of future areas of research that could extend the areas of study and increase both clinical insight and practical application.
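
    As a hedged illustration of the measurement underlying DNA Image Ploidy as described above, summing the optical density over each segmented nucleus as a proxy for its DNA content, the following Python sketch uses a simple threshold segmentation and scipy.ndimage labelling; the thresholds, reference value and helper names are assumptions, not the thesis's pipeline.

    # Minimal sketch of integrated optical density (IOD) per nucleus; thresholds
    # and the reference population are illustrative assumptions.
    import numpy as np
    from scipy import ndimage

    def integrated_optical_densities(gray, background=0.95, threshold=0.6, min_area=50):
        """gray: 2D float image in [0, 1], dark nuclei on a light background.
        Returns one integrated optical density value per detected nucleus."""
        od = -np.log10(np.clip(gray, 1e-3, 1.0) / background)   # optical density per pixel
        labels, n = ndimage.label(gray < threshold)              # crude nucleus segmentation
        iods = []
        for idx in range(1, n + 1):
            region = labels == idx
            if region.sum() >= min_area:                          # drop debris / tiny blobs
                iods.append(float(od[region].sum()))
        return iods

    def dna_indices(iods, reference_iod):
        """DNA index = nucleus IOD relative to a diploid (2c) reference value."""
        return [iod / reference_iod for iod in iods]

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        img = np.ones((128, 128)) * 0.9 + rng.normal(0, 0.01, (128, 128))
        img[30:50, 30:50] = 0.4                                   # synthetic "nucleus"
        iods = integrated_optical_densities(img)
        print(dna_indices(iods, reference_iod=np.median(iods)))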

    Stereological techniques for synthesizing solid textures from images of aggregate materials

    Ph.D. thesis by Robert Carl Jagnow, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2005. Includes bibliographical references (leaves 121-130). When creating photorealistic digital scenes, textures are commonly used to depict complex variation in surface appearance. For materials that have spatial variation in three dimensions, such as wood or marble, solid textures offer a natural representation. Unlike 2D textures, which can be easily captured with a photograph, it can be difficult to obtain a 3D material volume. This thesis addresses the challenge of extrapolating tileable 3D solid textures from images of aggregate materials, such as concrete, asphalt, terrazzo or granite. The approach introduced here is inspired by and builds on prior work in stereology (the study of 3D properties of a material based on 2D observations). Unlike ad hoc methods for texture synthesis, this approach has rigorous mathematical foundations that allow for reliable, accurate material synthesis with well-defined assumptions. The algorithm is also driven by psychophysical constraints to ensure that slices through the synthesized volume have a perceptually similar appearance to the input image. The texture synthesis algorithm uses a variety of techniques to independently solve for the shape, distribution, and color of the embedded particles, as well as the residual noise. To approximate particle shape, I consider four methods, including two algorithms of my own contribution. I compare these methods under a variety of input conditions using automated, perceptually motivated metrics as well as a carefully controlled psychophysical experiment. In addition to assessing the relative performance of the four algorithms, I also evaluate the reliability of the automated metrics in predicting the results of the user study. To solve for the particle distribution, I apply traditional stereological methods. I first illustrate this approach for aggregate materials of spherical particles and then extend the technique to apply to particles of arbitrary shapes. The particle shape and distribution are used in conjunction to create an explicit 3D material volume using simulated annealing. Particle colors are assigned using a stochastic method, and high-frequency noise is replicated with the assistance of existing algorithms. The data representation is suitable for high-fidelity rendering and physical simulation. I demonstrate the effectiveness of the approach with side-by-side comparisons of real materials and their synthetic counterparts derived from the application of these techniques.
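
    The following self-contained Python example illustrates the basic stereological principle the thesis builds on, for the simplest case of equal-sized spheres: a random planar section through a sphere of radius R produces a circular profile of radius sqrt(R^2 - d^2) with the offset d uniform on [0, R], so the mean profile radius is (pi/4)R and R can be recovered from 2D measurements alone. This is textbook stereology shown for orientation, not the thesis's full synthesis pipeline.

    # Stereology for equal-sized spheres: estimate the 3D radius from 2D profiles.
    import numpy as np

    def simulate_profile_radii(true_radius, n_slices, seed=0):
        """Radii of circular cross-sections from random planes hitting the sphere."""
        d = np.random.default_rng(seed).uniform(0.0, true_radius, n_slices)
        return np.sqrt(true_radius**2 - d**2)

    def estimate_sphere_radius(profile_radii):
        """Invert E[r] = (pi/4) R to estimate the 3D radius from 2D profiles."""
        return 4.0 * np.mean(profile_radii) / np.pi

    if __name__ == "__main__":
        r_profiles = simulate_profile_radii(true_radius=2.0, n_slices=100_000)
        print(f"estimated R = {estimate_sphere_radius(r_profiles):.3f} (true 2.0)")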

    Hardware-supported cloth rendering

    Many computer graphics applications involve rendering humans and their natural surroundings, which inevitably requires displaying textiles. To accurately resemble the appearance of e.g. clothing or furniture, reflection models are needed which are capable of modeling the highly complex reflection effects exhibited by textiles. This thesis focuses on generating realistic, high-quality images of textiles by developing suitable reflection models and introducing algorithms for the illumination computation of cloth surfaces. As efficiency is essential for illumination computation, we additionally place great importance on exploiting graphics hardware to achieve high frame rates. To this end, we present a variety of hardware-accelerated methods to compute the illumination in textile micro geometry. We begin by showing how indirect illumination and shadows can be efficiently accounted for in heightfields, parametric surfaces, and triangle meshes. Using these methods, we can considerably speed up the computation of data structures like tabular bidirectional reflectance distribution functions (BRDFs) and bidirectional texture functions (BTFs), and also efficiently illuminate heightfield geometry and bump maps. Furthermore, we develop two shading models which account for all important reflection properties exhibited by textiles. While the first model is suited for rendering textiles with general micro geometry, the second, based on volumetric textures, is specially tailored for rendering knitwear. To apply the second model, e.g. to the triangle mesh of a garment, we finally introduce a new rendering algorithm for displaying semi-transparent volumetric textures at high interactive rates.
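
    The thesis's two shading models are not reproduced here. As one simple, widely known example of the thread-oriented anisotropic reflection that cloth models must capture, the Python sketch below evaluates the classic Kajiya-Kay term with respect to a fiber tangent; the constants and names are illustrative assumptions.

    # Kajiya-Kay thread shading: anisotropic reflection defined by the fiber tangent.
    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def kajiya_kay(tangent, light_dir, view_dir, kd=0.6, ks=0.3, shininess=32.0):
        """Diffuse/specular terms for a thread with direction `tangent`."""
        t, l, v = (normalize(x) for x in (tangent, light_dir, view_dir))
        tl, tv = np.dot(t, l), np.dot(t, v)
        sin_tl = np.sqrt(max(0.0, 1.0 - tl * tl))       # sine of thread/light angle
        sin_tv = np.sqrt(max(0.0, 1.0 - tv * tv))
        diffuse = kd * sin_tl
        specular = ks * max(0.0, sin_tl * sin_tv - tl * tv) ** shininess
        return diffuse + specular

    if __name__ == "__main__":
        warp = np.array([1.0, 0.0, 0.0])                 # thread running along x
        weft = np.array([0.0, 1.0, 0.0])                 # perpendicular thread
        light = np.array([0.3, 0.4, 1.0])
        view = np.array([0.0, 0.0, 1.0])
        # The two thread directions reflect differently under the same lighting,
        # the anisotropy that generic isotropic reflection models cannot capture.
        print(kajiya_kay(warp, light, view), kajiya_kay(weft, light, view))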

    SVG 3D Graphical Presentation for Web-based Applications

    Due to the rapid developments in the fields of computer graphics and computer hardware, web-based applications are becoming more and more powerful, and the performance gap between web-based and desktop applications continues to narrow. The Internet and the WWW have been widely used for delivering, processing, and publishing 3D data, and there is increasing demand for more and easier access to 3D content on the web. The better the browser experience, the more potential revenue that web-based content can generate for providers and others. The main focus of this thesis is the design, development and implementation of a new generic 3D modelling method based on Scalable Vector Graphics (SVG) for web-based applications. While the model is initialized using classical 3D graphics, the scene model is extended using SVG. A new algorithm to present 3D graphics with SVG is proposed. This includes the definition of a 3D scene in the framework; the integration of 3D objects, cameras, transformations, light models and textures in a 3D scene; and the rendering of 3D objects on the web page, allowing the end-user to interactively manipulate them. A new 3D graphics library for 3D geometric transformation and projection in the SVG GL is designed and developed. A set of primitives in the SVG GL, including the triangle, sphere, cylinder, cone, etc., is designed and developed, as is a set of complex 3D models including extrusion, revolution, Bezier surfaces, and point clouds. New Gouraud shading and Phong shading algorithms for the SVG GL are proposed, designed and developed; these algorithms can be used to generate smooth shading and create highlights on 3D models. New texture mapping algorithms for the SVG GL, oriented toward web-based 3D modelling applications, are proposed, designed and developed for different 3D objects such as the triangle, plane, sphere, cylinder, cone, etc. This constitutes a unique and significant contribution to the disciplines of web-based 3D modelling, as well as to the process of 3D model popularization.
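
    A minimal Python sketch of the core step such an SVG-based pipeline needs, projecting camera-space vertices to 2D and emitting SVG markup, is given below. Flat Lambert shading stands in for the thesis's Gouraud and Phong variants, and the function names are illustrative assumptions rather than the SVG GL API.

    # Project a camera-space triangle and emit an SVG <polygon>; flat shading only.
    import numpy as np

    def project(vertex, focal=2.0, width=400, height=400):
        """Simple perspective projection of a camera-space point onto the SVG canvas."""
        x, y, z = vertex
        sx = width / 2 + focal * x / z * width / 2
        sy = height / 2 - focal * y / z * height / 2     # SVG's y axis points down
        return sx, sy

    def triangle_to_svg(vertices, light_dir=(0.0, 0.0, -1.0), base_rgb=(200, 60, 60)):
        """Return an SVG <polygon> for one camera-space triangle, flat-shaded."""
        a, b, c = (np.asarray(v, float) for v in vertices)
        normal = np.cross(b - a, c - a)
        normal /= np.linalg.norm(normal)
        lam = max(0.0, float(np.dot(normal, -np.asarray(light_dir, float))))
        r, g, bcol = (int(ch * (0.2 + 0.8 * lam)) for ch in base_rgb)
        points = " ".join(f"{px:.1f},{py:.1f}" for px, py in map(project, vertices))
        return f'<polygon points="{points}" fill="rgb({r},{g},{bcol})" />'

    if __name__ == "__main__":
        tri = [(-0.5, -0.5, 3.0), (0.5, -0.5, 3.0), (0.0, 0.6, 2.5)]
        svg = f'<svg xmlns="http://www.w3.org/2000/svg" width="400" height="400">{triangle_to_svg(tri)}</svg>'
        print(svg)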

    Transform domain texture synthesis on surfaces

    In the recent past, application areas such as virtual reality, digital cinema and computer gaming have led to renewed interest in advanced research topics in computer graphics. Although many research challenges in computer graphics have been met through worldwide efforts, many more are yet to be met. Two key challenges remain open research problems: the lack of perfect realism in animated or virtually created objects when represented graphically, and the need to transmit, store and exchange massive amounts of information between remote locations when 3D computer-generated objects are used in remote visualisations. These challenges call for further research focused in the above directions. Although the international research community has proposed many ideas to meet these challenges, they still suffer from excessive complexity, resulting in high processing times and practical inapplicability when bandwidth-constrained transmission media are used or when the storage space or computational power of the display device is limited. In the proposed work we investigate the appropriate use of geometric representations of 3D structure (e.g. Bezier surfaces, NURBS, polygons) and multi-resolution, progressive representation of texture on such surfaces. This joint approach to texture synthesis has not been considered before and has significant potential for resolving current challenges in the virtual reality, digital cinema and computer gaming industries. The main focus of the novel approaches proposed in this thesis is photo-realistic texture synthesis on surfaces. We provide experimental results and detailed analysis to show that the proposed algorithms allow fast, progressive building of texture on arbitrarily shaped 3D surfaces. In particular, we investigate the above ideas in association with the Bezier patch representation of 3D objects, an approach which has not been considered so far by any published research effort worldwide, yet offers flexibility of utmost practical importance. Further, we discuss the novel application domains that could be served by including additional functionality within the proposed algorithms.
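
    As a hedged sketch of the geometric side the abstract relies on, the following Python fragment evaluates a bicubic Bezier patch at a parameter pair (u, v) using standard Bernstein polynomials; this is the surface parameterisation over which a progressively refined texture could be laid out, not the transform-domain synthesis itself, and the helper names are assumptions.

    # Evaluate a bicubic Bezier patch from a 4x4 grid of 3D control points.
    import numpy as np

    def bernstein3(t):
        """The four cubic Bernstein basis values at parameter t."""
        s = 1.0 - t
        return np.array([s**3, 3 * s**2 * t, 3 * s * t**2, t**3])

    def bezier_patch_point(control_points, u, v):
        """control_points: 4x4x3 array of 3D control points; returns the surface point."""
        bu, bv = bernstein3(u), bernstein3(v)
        return np.einsum("i,ijk,j->k", bu, control_points, bv)

    if __name__ == "__main__":
        # A gently curved patch: a 4x4 grid in x/y with a bump in z.
        grid = np.zeros((4, 4, 3))
        grid[..., 0], grid[..., 1] = np.meshgrid(np.linspace(0, 3, 4),
                                                 np.linspace(0, 3, 4), indexing="ij")
        grid[1:3, 1:3, 2] = 1.0
        print(bezier_patch_point(grid, 0.5, 0.5))   # centre of the patch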

    A survey of 3D texturing

    Texturing is indispensable for realistic rendering since it adds surface details that are usually too complex to model directly. Conventional 2D texture mapping remains the most common approach to texturing, in particular for real-time applications. However, this approach has some major inherent drawbacks: the distortion and discontinuity of textures, as well as the lack of "third" dimension information (geometric effects like roughcast cannot be rendered). 3D texturing was introduced to computer graphics to resolve these problems. There are two types of 3D texturing: solid texturing, which consists of defining color variations throughout the entire 3D space instead of the 2D one, and geometric texturing, which consists of adding "real" third-dimension information to surfaces in the form of "real" apparent geometry. This paper presents a detailed survey of 3D texturing. Main principles, advantages, drawbacks and applications are presented. The crucial problem of 3D texture synthesis is studied, with particular attention to analytical methods as well as physically based models that can provide interesting solutions to this problem.
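
    As a hedged illustration of the solid-texturing idea discussed above, colour defined as a function of 3D position so that any surface point can be textured without a 2D parameterisation, the short Python sketch below evaluates a marble-like value from a turbulence sum; the noise construction and constants are arbitrary choices, not taken from the survey.

    # A solid texture is simply a function of a 3D point; no UV mapping is needed.
    import math

    def hash_noise(x, y, z):
        """Cheap repeatable pseudo-noise in [0, 1) from integer lattice coordinates."""
        n = (int(x) * 73856093) ^ (int(y) * 19349663) ^ (int(z) * 83492791)
        return ((n * 2654435761) & 0xFFFFFFFF) / 2**32

    def turbulence(p, octaves=4):
        """Sum of noise at several frequencies, a standard Perlin-style device."""
        total, freq = 0.0, 1.0
        for _ in range(octaves):
            total += hash_noise(p[0] * freq, p[1] * freq, p[2] * freq) / freq
            freq *= 2.0
        return total

    def marble(p):
        """Marble-like solid value at any 3D point p, usable directly on a surface."""
        return 0.5 * (1.0 + math.sin(4.0 * p[0] + 2.0 * turbulence(p)))

    if __name__ == "__main__":
        # Sample the same solid texture at two points of an arbitrary surface.
        print(marble((0.3, 1.2, -0.7)), marble((5.5, 0.1, 2.0)))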