
    Terrainosaurus: realistic terrain synthesis using genetic algorithms

    Synthetically generated terrain models are useful across a broad range of applications, including computer-generated art and animation, virtual reality and gaming, and architecture. Existing algorithms for terrain generation suffer from a number of problems, especially that they are limited in the types of terrain they can produce and are difficult for the user to control. Typical applications of synthetic terrain have several factors in common: first, they require the generation of large regions of believable (though not necessarily physically correct) terrain features; and second, while real-time performance is often needed when visualizing the terrain, it is generally not needed when generating it. In this thesis, I present a new design-by-example method for synthesizing terrain height fields. In this approach, the user designs the layout of the terrain by sketching out simple regions using a CAD-style interface, and specifies the desired terrain characteristics of each region by providing example height fields displaying those characteristics (these height fields will typically come from real-world GIS data sources). A height field matching the user's design is generated at several levels of detail, using a genetic algorithm to blend together chunks of elevation data from the example height fields in a visually plausible manner. This method has the advantage of producing an unlimited diversity of reasonably realistic results while requiring relatively little user effort and expertise. The guided randomization inherent in the genetic algorithm allows the algorithm to come up with novel arrangements of features while still approximating user-specified constraints.
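
    To make the blending step concrete, the following minimal sketch evolves a grid of patch references drawn from example height fields, scoring individuals by seam discontinuity. The genome layout, the seam-based fitness, and every parameter value are illustrative assumptions; the abstract does not specify the thesis's actual operators or fitness terms.

```python
import numpy as np

rng = np.random.default_rng(0)
PATCH, GRID = 16, 8                                     # assumed sizes
examples = [rng.random((128, 128)) for _ in range(3)]   # stand-ins for GIS height fields

def random_gene():
    """A gene references one PATCH x PATCH chunk of one example height field."""
    src = int(rng.integers(len(examples)))
    h, w = examples[src].shape
    return (src, int(rng.integers(h - PATCH)), int(rng.integers(w - PATCH)))

def render(ind):
    """Assemble an individual's GRID x GRID genes into a single height field."""
    out = np.zeros((GRID * PATCH, GRID * PATCH))
    for i in range(GRID):
        for j in range(GRID):
            s, y, x = ind[i][j]
            out[i * PATCH:(i + 1) * PATCH, j * PATCH:(j + 1) * PATCH] = \
                examples[s][y:y + PATCH, x:x + PATCH]
    return out

def fitness(ind):
    """Visual-plausibility proxy: penalize elevation jumps across patch seams."""
    hf = render(ind)
    seams = 0.0
    for k in range(1, GRID):
        seams += np.abs(hf[k * PATCH] - hf[k * PATCH - 1]).sum()        # row seam
        seams += np.abs(hf[:, k * PATCH] - hf[:, k * PATCH - 1]).sum()  # column seam
    return -seams

def evolve(pop_size=30, generations=50, mut_rate=0.1):
    pop = [[[random_gene() for _ in range(GRID)] for _ in range(GRID)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # elitist selection
        parents = pop[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            ia, ib = rng.choice(len(parents), size=2, replace=False)
            child = [[parents[ia][i][j] if rng.random() < 0.5 else parents[ib][i][j]
                      for j in range(GRID)] for i in range(GRID)]   # uniform crossover
            for i in range(GRID):                # mutation: re-draw genes
                for j in range(GRID):
                    if rng.random() < mut_rate:
                        child[i][j] = random_gene()
            children.append(child)
        pop = parents + children
    return render(max(pop, key=fitness))
```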

    Appearance Preserving Rendering of Out-of-Core Polygon and NURBS Models

    In Computer-Aided Design (CAD), trimmed NURBS surfaces are widely used due to their flexibility. For rendering and simulation, however, piecewise linear representations of these objects are required. A relatively new field in CAD is the analysis of long-term strain tests, after which the object is scanned with a 3D laser scanner for further processing on a PC. In all these areas of CAD, both the number of primitives and their complexity have grown constantly in recent years. This growth far exceeds the increase in processor speed and memory size, creating the need for fast out-of-core algorithms. This thesis describes a processing pipeline that runs from the input data, in the form of triangular or trimmed NURBS models, to the interactive rendering of these models at high visual quality. After discussing the motivation for this work and introducing basic concepts of complex polygonal and NURBS models, the second part of this thesis starts with a review of existing simplification and tessellation algorithms. Additionally, an improved stitching algorithm is presented that generates a consistent model after tessellation of a trimmed NURBS model. Since surfaces need to be modified interactively during the design phase, a novel trimmed NURBS rendering algorithm is presented. This algorithm removes the bottleneck of generating and transmitting a new tessellation to the graphics card after each modification of a surface, by evaluating and trimming the surface on the GPU. To achieve high visual quality, the appearance of a surface can be preserved using texture mapping; therefore, a texture mapping algorithm for trimmed NURBS surfaces is presented. To reduce the memory requirements of the textures, the algorithm is modified to generate compressed normal maps that preserve the shading of the original surface. Since texturing is only possible when a parametric mapping of the surface - requiring additional memory - is available, a new simplification and tessellation error measure is introduced that preserves the appearance of the original surface by controlling the deviation of normal vectors. The preservation of normals, and possibly other surface attributes, enables interactive visualization for quality-control applications (e.g. isophotes and reflection lines). In the last part, out-of-core techniques for processing and rendering gigabyte-sized polygonal and trimmed NURBS models are presented. The modifications necessary to support streaming of simplified geometry from a central server are then discussed, and finally an LOD selection algorithm that supports interactive rendering of hard and soft shadows is described.
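
    The idea of controlling normal deviation during simplification and tessellation can be sketched as follows. This is a minimal illustration, not the thesis's actual error measure: the worst-case comparison of new faces against replaced faces and the 5-degree tolerance are assumptions.

```python
import numpy as np

def unit_normal(a, b, c):
    """Unit normal of the triangle (a, b, c)."""
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def normal_deviation_deg(orig_tris, simplified_tri):
    """Worst angle (degrees) between the simplified face's normal and the
    normals of the original faces it stands in for."""
    ns = unit_normal(*simplified_tri)
    worst = 0.0
    for tri in orig_tris:
        cos_a = np.clip(np.dot(unit_normal(*tri), ns), -1.0, 1.0)
        worst = max(worst, float(np.degrees(np.arccos(cos_a))))
    return worst

def accept_step(orig_tris, new_tris, tol_deg=5.0):
    """Gate a simplification/tessellation step: accept only if every new face
    deviates from the original faces by at most tol_deg (tolerance assumed)."""
    return all(normal_deviation_deg(orig_tris, t) <= tol_deg for t in new_tris)
```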

    Hierarchical modeling of 3D objects with subdivision surfaces

    Subdivision Surfaces (SSs) are a powerful paradigm for modeling 3D (three-dimensional) objects that bridges the two traditional approaches to surface approximation, based on polygonal meshes and on meshes of curved patches, each of which carries its own problems. Subdivision schemes make it possible to define a (piecewise) smooth surface, like those most frequent in practice, as the limit of a recursive refinement process applied to a coarse control mesh, which can be described very compactly. Moreover, the recursion inherent in SSs naturally establishes a pyramidal nesting relation between the successively generated meshes / LODs (Levels of Detail), so SSs lend themselves extraordinarily well to wavelet-based MultiResolution Analysis (MRA) of surfaces, which has immediate and very interesting practical applications, such as hierarchical coding and hierarchical editing of 3D models. We begin by describing the links between the three areas on which our work builds (SSs, automatic LOD extraction, and wavelet-based MRA) to explain how these three pieces of the puzzle of hierarchical modeling of 3D objects with SSs fit together. Wavelet-based MRA consists of decomposing a function into a coarse version of itself plus a set of hierarchically nested additive refinements called "wavelet coefficients". Classical wavelet theory studies classical nD signals: those defined over parametric domains homeomorphic to R^n or (0,1)^n, such as audio (n=1), images (n=2), or video (n=3). On less trivial topologies, such as 2D manifolds (surfaces in 3D space), MRA is not as obvious, but it remains possible if approached from the perspective of SSs. It suffices to start from a coarse mesh approximating the target surface at a low LOD, subdivide it recursively and, while doing so, add in the wavelet coefficients, which are the 3D details needed to obtain finer and finer approximations to the original surface. We then move on to the practical applications that constitute our main original contribution and, in particular, present a hierarchical coding technique for 3D models based on SSs, which operates on the aforementioned 3D details: it expresses them in a local normal frame; organizes them in a face-based hierarchical structure; quantizes them, devoting fewer bits to their less energetic tangential components, and "scalarizes" them; and finally codes them with a technique similar to the SPIHT (Set Partitioning In Hierarchical Trees) scheme of Said and Pearlman. The result is a fully embedded code that, for mostly smooth surfaces, is at least twice as compact as those obtained with previously published progressive 3D mesh coding techniques, in which, moreover, the LODs are not pyramidally nested. Finally, we describe several auxiliary methods we developed, improving previous techniques and creating our own, since a complete solution to modeling 3D objects with SSs requires solving two further problems. The first is the extraction of a base mesh (triangular, in our case) from the original surface, usually given as a fine triangular mesh with arbitrary connectivity. The second is the generation of a recursive remeshing of the original/target mesh with subdivision connectivity, through recursive refinement of the base mesh, thereby computing the 3D details needed to correct the positions that subdivision predicts for the new vertices.
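
    The first two stages of that detail-coding pipeline - rotating each 3D detail into a local normal frame and quantizing the tangential components with fewer bits - can be sketched as below. The frame construction, bit budgets, and quantization range are illustrative assumptions, not the thesis's exact choices.

```python
import numpy as np

def local_frame(normal):
    """Orthonormal frame (t1, t2, n) around a unit surface normal."""
    n = normal / np.linalg.norm(normal)
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, helper)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    return t1, t2, n

def quantize_detail(detail, normal, bits_tangent=8, bits_normal=12, extent=1.0):
    """Rotate a 3D wavelet detail into the local frame and quantize the less
    energetic tangential components with fewer bits (bit budgets assumed).
    The integer symbols would then feed a SPIHT-like embedded coder."""
    t1, t2, n = local_frame(normal)
    coords = np.array([detail @ t1, detail @ t2, detail @ n])
    steps = extent / 2.0 ** np.array([bits_tangent, bits_tangent, bits_normal])
    return np.round(coords / steps).astype(int)
```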

    Regular Hierarchical Surface Models: A conceptual model of scale variation in a GIS and its application to hydrological geomorphometry

    Environmental and geographical process models inevitably involve parameters that vary spatially. One example is hydrological modelling, where parameters derived from the shape of the ground, such as flow direction and flow accumulation, are used to describe the spatial complexity of drainage networks. One way of handling such parameters is with a Digital Elevation Model (DEM); such modelling is the basis of the science of geomorphometry. A frequently ignored but inescapable challenge when modellers work with DEMs is the effect of scale and geometry on the model outputs. Many parameters vary with scale as much as they vary with position. Modelling variability with scale is necessary to simplify and generalise surfaces, and desirable for accurately reconciling model components that are measured at different scales. This thesis develops a surface model that is optimised to represent scale in environmental models. A Regular Hierarchical Surface Model (RHSM) is developed that employs a regular tessellation of space and scale forming a self-similar regular hierarchy, and incorporates Level Of Detail (LOD) ideas from computer graphics. Following convention from systems science, the proposed model is described in its conceptual, mathematical, and computational forms. The RHSM development was informed by a categorisation of Geographical Information Science (GISc) surfaces within a cohesive framework of geometry, structure, interpolation, and data model. Positioning the RHSM within this broader framework made it easier to adapt algorithms designed for other surface models to conform to the new model. The RHSM has an implicit data model that utilises a variation of Middleton and Sivaswamy (2001)'s intrinsically hierarchical Hexagonal Image Processing referencing system, here generalised for rectangular and triangular geometries. The RHSM provides a simple framework for forming a pyramid of coarser values, in a process characterised as a scaling function. In addition, variable-density realisations of the hierarchical representation can be generated by defining an error value and a decision rule to select the coarsest appropriate scale for a given region, satisfying the modeller's intentions. The RHSM is assessed using adaptations of the geomorphometric algorithms flow direction and flow accumulation. The effects of scale and geometry on the anisotropy and accuracy of model results are analysed on dispersive and concentrative cones, and on Light Detection And Ranging (LiDAR) derived surfaces of the urban area of Dunedin, New Zealand. The RHSM modelling process revealed aspects of the algorithms not obvious within a single geometry, such as the influence of node geometry on flow direction results, and a conceptual weakness of flow accumulation algorithms on dispersive surfaces that causes asymmetrical results. In addition, comparison of algorithm behaviour between geometries undermined the hypothesis that the variance of cell cross-section with direction is important for converting cell accumulations to point values. The ability to analyse algorithms for scale and geometry, and to adapt algorithms within a cohesive conceptual framework, offers deeper insight into algorithm behaviour than previously achieved. The deconstruction of algorithms into geometry-neutral forms and the application of scaling functions are important contributions to the understanding of spatial parameters within GISc.
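
    The pyramid-building scaling function and the error-driven scale selection can be sketched as follows, on a square grid for brevity (the thesis generalises across hexagonal, rectangular, and triangular geometries). The mean-of-children aggregation and the global decision rule are stand-ins; the abstract does not specify the thesis's exact functions.

```python
import numpy as np

def build_pyramid(dem, levels):
    """Aggregate a DEM into coarser scales; the mean of each cell's four
    children stands in for the thesis's scaling function."""
    pyramid = [np.asarray(dem, dtype=float)]
    for _ in range(levels):
        f = pyramid[-1]
        h, w = f.shape[0] // 2 * 2, f.shape[1] // 2 * 2   # trim odd edges
        pyramid.append(f[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyramid

def coarsest_acceptable(pyramid, error_tol):
    """Global version of the variable-density decision rule: return the
    coarsest level whose values, pushed back down to the finest grid, stay
    within error_tol of the original data."""
    finest = pyramid[0]
    for lvl in range(len(pyramid) - 1, 0, -1):
        up = np.kron(pyramid[lvl], np.ones((2 ** lvl, 2 ** lvl)))
        region = finest[:up.shape[0], :up.shape[1]]
        if np.max(np.abs(up - region)) <= error_tol:
            return lvl
    return 0
```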

    Automated 3D model generation for urban environments

    In this thesis, we present a fast approach to the automated generation of textured 3D city models offering both high detail at ground level and complete coverage for a bird's-eye view. A ground-based facade model is acquired by driving a vehicle equipped with two 2D laser scanners and a digital camera under normal traffic conditions on public roads. One scanner is mounted horizontally and is used to determine, via scan matching, the approximate component of relative motion along the direction of travel of the acquisition vehicle; the resulting relative motion estimates are concatenated to form an initial path. Assuming that features such as buildings are visible from both the ground-based and the airborne view, this initial path is globally corrected by Monte Carlo Localization techniques using an aerial photograph or a Digital Surface Model as a global map. The second scanner is mounted vertically and is used to capture the 3D shape of the building facades. Applying a series of automated processing steps, a texture-mapped 3D facade model is reconstructed from the vertical laser scans and the camera images. To obtain an airborne model containing the roof and terrain shape complementary to the facade model, a Digital Surface Model is created from airborne laser scans, then triangulated, and finally texture-mapped with aerial imagery. Finally, the facade model and the airborne model are fused into a single model usable for both walk-throughs and fly-throughs. The developed algorithms are evaluated on a large data set acquired in downtown Berkeley, and the results are shown and discussed.
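
    The concatenation of relative motion estimates into an initial path amounts to dead reckoning in the plane, as in the minimal sketch below; the (dx, dy, dtheta) parameterisation in the vehicle frame is an assumption. Drift accumulates in this path, which is why the thesis then corrects it globally with Monte Carlo Localization against an aerial map.

```python
import math

def concatenate_path(relative_motions, start=(0.0, 0.0, 0.0)):
    """Chain per-scan relative motion estimates (dx, dy, dtheta, expressed in
    the vehicle frame) from horizontal scan matching into an initial global
    path of (x, y, heading) poses."""
    x, y, theta = start
    path = [(x, y, theta)]
    for dx, dy, dtheta in relative_motions:
        x += dx * math.cos(theta) - dy * math.sin(theta)
        y += dx * math.sin(theta) + dy * math.cos(theta)
        theta += dtheta
        path.append((x, y, theta))
    return path
```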

    Towards Predictive Rendering in Virtual Reality

    The quest to generate predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time constraints apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to that task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved for truly predictive image generation.
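
    To give a sense of why BTF compression is needed and what a compressed representation looks like at runtime, here is a sketch using per-texel PCA/SVD, a standard BTF compressor used purely as a stand-in; the thesis's own compression scheme, the sample layout, and the rank k=16 are not taken from the abstract.

```python
import numpy as np

def compress_btf(btf, k=16):
    """Factor a BTF matrix sampled as (texels x view-light pairs) into k
    principal components: a shared basis plus per-texel weights."""
    mean = btf.mean(axis=0)
    u, s, vt = np.linalg.svd(btf - mean, full_matrices=False)
    weights = u[:, :k] * s[:k]     # per-texel coefficients
    basis = vt[:k]                 # shared view-light basis
    return mean, weights, basis

def shade(mean, weights, basis, texel, vl_index):
    """Runtime reconstruction of one texel's reflectance for the sampled
    view/light pair nearest the query (angular interpolation omitted)."""
    return mean[vl_index] + weights[texel] @ basis[:, vl_index]
```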

    Hierarchical processing, editing and rendering of acquired geometry

    Digital representations of real-world surfaces can now be obtained automatically using various acquisition devices such as 3D scanners and stereo camera systems. These new fast and accurate data sources increase 3D surface resolution by several orders of magnitude, bringing a higher level of precision to applications that require digital surface models. All major computer graphics applications can benefit from this automatic modeling process, including computer-aided design, physical simulation, virtual reality, medical imaging, architecture, archaeological study, special effects, computer animation, and video games. Unfortunately, the richness of the geometry produced by these devices comes at the price of a large, possibly gigantic, amount of data, which requires new data structures and algorithms that scale to objects of up to a billion samples. This thesis proposes time- and space-efficient solutions for modeling, editing, and rendering such complex surfaces, solving these problems with new algorithms that share four fundamental elements: a systematic hierarchical approach, local dimension reduction, a sampling-reconstruction paradigm, and a point-based foundation that avoids explicit enumeration of topological relations. Concretely, this manuscript proposes several contributions, including: a new hybrid hierarchical space-subdivision structure, the Volume-Surface Tree (VS-Tree), together with new simplification and reconstruction algorithms; a streaming system featuring new algorithms for interactive editing of large objects; an appearance-preserving multiresolution structure for efficient rendering of large point-based surfaces; and a generic kernel for real-time geometry synthesis by refinement. These elements form a pipeline able to process acquired geometry, whether represented by point clouds or by possibly non-manifold meshes. Effective results have been obtained with data from the various application domains mentioned above.
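
    The hybrid character of the VS-Tree - volume (octree-style) nodes that switch to planar surface nodes where the samples become locally surface-like - can be sketched as below. The PCA flatness test, its threshold, and the stopping criteria are assumptions for illustration, not the thesis's exact classification rules.

```python
import numpy as np

def looks_like_surface(points, flatness=0.05):
    """PCA flatness test: if the smallest covariance eigenvalue is much
    smaller than the largest, the samples are roughly planar (threshold assumed)."""
    if len(points) < 8:
        return True
    ev = np.linalg.eigvalsh(np.cov((points - points.mean(axis=0)).T))  # ascending
    return ev[0] <= flatness * ev[2]

def build_vs_tree(points, center, half, depth=0, max_depth=8, leaf_size=64):
    """Hybrid partition: subdivide 'volume' nodes octree-style until a cell's
    samples look like a surface, then emit a planar 'surface' node (which the
    VS-Tree would go on to refine in-plane, quadtree-style)."""
    if len(points) <= leaf_size or depth == max_depth:
        return ('leaf', points)
    if looks_like_surface(points):
        return ('surface', points)
    children = []
    for sign in ((-1, -1, -1), (-1, -1, 1), (-1, 1, -1), (-1, 1, 1),
                 (1, -1, -1), (1, -1, 1), (1, 1, -1), (1, 1, 1)):
        c = center + np.array(sign) * half / 2.0
        mask = np.all(np.abs(points - c) <= half / 2.0, axis=1)
        if mask.any():
            children.append(build_vs_tree(points[mask], c, half / 2.0,
                                          depth + 1, max_depth, leaf_size))
    return ('volume', children)
```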

    Vertex classification for non-uniform geometry reduction.

    Complex models created by isosurface extraction or CAD, and highly accurate 3D models produced by high-resolution scanners, are useful in areas such as medical simulation, Virtual Reality, and entertainment. Models in general often require some manual editing before they can be incorporated into a walkthrough, simulation, computer game, or movie. The visualization challenges of a 3D editing tool may be regarded as similar to those of other applications that include an element of visualization, such as Virtual Reality; however, the rendering and interaction requirements of each of these applications vary according to their purpose. For rendering photo-realistic images in movies, computer farms can render uninterrupted for weeks, whereas a 3D editing tool requires fast access to a model's fine data. In Virtual Reality, rendering acceleration techniques such as level of detail (LoD) can temporarily render parts of a scene with alternative lower-complexity versions in order to meet a frame rate tolerable for the user. These alternative versions can be dynamic increments of complexity, or static models that were uniformly simplified across the model by minimizing some cost function. Scanners typically have a fixed sampling rate for the entire model being scanned, and may therefore generate large amounts of data in areas that are not of much interest or that contribute little to the application at hand. It is therefore desirable to simplify such models non-uniformly. Features such as areas of very high curvature, or borders, can be detected automatically and simplified differently from other areas without any interaction or visualization. However, a problem arises when one wishes to manually select features of interest in the original model to preserve, and to create stand-alone, non-uniformly reduced versions of large models, for example for medical simulation. To inspect and view such models, the memory requirements of LoD representations can be prohibitive and prevent storage of a model in main memory. Furthermore, although asynchronous rendering of a simplified base model ensures a frame rate tolerable to the user while detail is paged in, no guarantee can be made that what the user is selecting is at the original resolution of the model, or at an appropriate LoD, owing to disk lag or the complexity of a particular view selected by the user. This thesis presents an interactive method, in the context of a 3D editing application, for feature selection from any model that fits in main memory. We present a new compression/decompression technique for triangle normals and colours which does not require dedicated hardware, achieves an 87.4% memory reduction with at most 1.3/2.5 degrees of error on triangle normals, and thus allows larger models to fit in main memory and be viewed interactively. To address scale and available hardware resources, we reference a hierarchy of volumes of different sizes. The distances of the volumes at each level of the hierarchy to the intersection point of the line of sight with the model are calculated, and these distances are sorted. At startup, an appropriate level of the tree is automatically chosen by separating the time required for rendering from that required for sorting, and constraining the latter according to the resources available. A clustered navigation skin and a depth-buffer strategy allow for the interactive visualisation of models of any size, ensuring that triangles from the closest volumes are rendered over the navigation skin even when the clustered skin may be closer to the viewer than the original model. We show results with scanned models, CAD models, textured models, and an isosurface. This thesis also addresses numerical issues arising from the optimisation of cost functions in LoD algorithms, and presents a semi-automatic solution for selecting the threshold on the condition number of the matrix to be inverted for optimal placement of the new vertex created by an edge collapse. We show that the units in which a model is expressed may inadvertently affect the condition of these matrices, hence affecting the evaluation of different LoD methods with different solvers. We use the same solver, with an automatically calibrated threshold, to evaluate different uniform geometry reduction techniques. We then present a framework for non-uniform reduction of regular scanned models that can be used in conjunction with a variety of LoD algorithms. The benefits of non-uniform reduction are presented in the context of an animation system. (Abstract shortened by UMI.)
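
    The condition-number issue can be illustrated with a quadric-style edge collapse, where the optimal new vertex solves a small linear system. This is a sketch under assumptions: the 1e7 threshold is a placeholder (the thesis calibrates it semi-automatically per solver), and the unit-box normalisation shows one way to remove the units dependence the thesis points out.

```python
import numpy as np

def normalize_units(vertices):
    """Rescale a model to a unit bounding box so quadric condition numbers
    (and hence the calibrated threshold) do not depend on whether the model
    is expressed in millimetres or metres."""
    v = np.asarray(vertices, dtype=float)
    lo = v.min(axis=0)
    return (v - lo) / (v.max(axis=0) - lo).max()

def place_vertex(A, b, fallback, cond_threshold=1e7):
    """Optimal vertex placement for an edge collapse under a quadric error
    metric: solve A v = -b, but fall back (e.g. to the edge midpoint) when A
    is too ill-conditioned to invert reliably."""
    if np.linalg.cond(A) > cond_threshold:
        return fallback
    return np.linalg.solve(A, -b)
```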

    Perceptually Modulated Level of Detail for Virtual Environments

    This thesis presents a generic and principled solution for optimising the visual complexity of any arbitrary computer-generated virtual environment (VE), with the ultimate goal of reducing the inherent latencies of current virtual reality (VR) technology. Effectively, we wish to remove extraneous detail from an environment which the user cannot perceive, and thus modulate the graphical complexity of a VE with little or no perceptual artifact. The work proceeds by investigating contemporary models and theories of visual perception and then applying these to the field of real-time computer graphics. Subsequently, a technique is devised to assess the perceptual content of a computer-generated image in terms of spatial frequency (cycles per degree, c/deg), and a model of contrast sensitivity is formulated to describe a user's ability to perceive detail under various conditions in terms of this metric. This allows us to base the level of detail (LOD) of each object in a VE on a measure of the degree of spatial detail which the user can perceive at any instant (taking into consideration the size of an object, its angular velocity, and the degree to which it lies in the peripheral field). Additionally, a generic polygon simplification framework is presented to complement the use of perceptually modulated LOD. The efficient implementation of this perceptual model is discussed, and a prototype system is evaluated through a suite of experiments. These include a number of low-level psychophysical studies (to evaluate the accuracy of the model), a task performance study (to evaluate the effects of the model on the user), and an analysis of system performance gain (to evaluate the effects of the model on the system). The results show that, for the test application chosen, the frame rate of the simulation was manifestly improved (by four- to five-fold) with no perceivable drop in image fidelity. As a result, users were able to perform the given wayfinding task more proficiently and rapidly. Finally, conclusions are drawn on the application and utility of perceptually based optimisations, both in reference to this work and in the wider context.
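
    The shape of such a perceptually modulated LOD decision can be sketched as follows. The falloff model and all constants are illustrative stand-ins, not the contrast sensitivity model fitted in the thesis; lod_peak_freq is a hypothetical per-object table of the highest spatial frequency each LOD contributes on screen.

```python
def visible_cutoff(eccentricity_deg, velocity_deg_s):
    """Stand-in for a contrast sensitivity model: the highest perceivable
    spatial frequency (c/deg) falls with retinal eccentricity and angular
    velocity. Constants are illustrative, not fitted values."""
    foveal_static = 60.0   # approximate foveal, static acuity limit
    ecc = 1.0 / (1.0 + 0.24 * eccentricity_deg)
    vel = 1.0 / (1.0 + 0.3 * max(velocity_deg_s - 0.8, 0.0))
    return foveal_static * ecc * vel

def select_lod(lod_peak_freq, eccentricity_deg, velocity_deg_s):
    """Choose the coarsest LOD that still represents every spatial frequency
    the user can currently resolve; lod_peak_freq maps LOD id -> highest
    frequency (c/deg) that version contributes on screen."""
    cutoff = visible_cutoff(eccentricity_deg, velocity_deg_s)
    for lod, peak in sorted(lod_peak_freq.items(), key=lambda kv: kv[1]):
        if peak >= cutoff:
            return lod      # coarsest version covering the visible band
    return max(lod_peak_freq, key=lod_peak_freq.get)   # fall back to finest
```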