197 research outputs found

    High fidelity compression of irregularly sampled height fields

    This paper presents a method to compress irregularly sampled height-fields based on a multi-resolution framework. Unlike many other height-field compression techniques, no resampling is required, so the original height-field data is recovered up to quantization error. The method decomposes the compression task into two complementary phases: an in-plane compression scheme for (x, y) coordinate positions, and a separate multi-resolution z compression step. This decoupling allows subsequent improvements in either phase to be seamlessly integrated, and also allows for independent control of bit-rates in the decoupled dimensions, should this be desired. Results are presented for a number of height-field sample sets quantized to 12 bits for each of x and y, and 10 bits for z. Total lossless encoded data sizes range from 11 to 24 bits per point, with z bit-rates lying in the range 2.9 to 8.1 bits per z coordinate. Lossy z bit-rates (we do not lossily encode x and y) lie in the range 0.7 to 5.9 bits per z coordinate, with a worst-case root-mean-squared (RMS) error of less than 1.7% of the z range. Even with aggressive lossy encoding, at least 40% of the point samples are perfectly reconstructed.
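    The decoupling described above can be pictured with a minimal sketch: (x, y) positions are uniformly quantized in-plane, while the z values get a single average/detail split standing in for the paper's multi-resolution z step. This is a toy under stated assumptions, not the authors' codec; the function names and the one-level split are illustrative only.

    ```python
    # Minimal toy sketch (not the paper's codec): decouple (x, y) quantization
    # from a multi-resolution treatment of z. A single average/detail split
    # stands in for the full multi-resolution z step; names are hypothetical.
    import numpy as np

    def quantize_xy(xy, bits=12):
        """Uniformly quantize in-plane coordinates to the given bit depth."""
        lo, hi = xy.min(axis=0), xy.max(axis=0)
        scale = (2**bits - 1) / np.maximum(hi - lo, 1e-12)
        return np.round((xy - lo) * scale).astype(np.int32), (lo, scale)

    def split_z(z):
        """One level of an average/detail split over a z sequence (even length).

        Averages form a coarse approximation; details are residuals that
        compress well and can be quantized coarsely (or dropped) for lossy z.
        """
        z = z.astype(np.float64)
        return 0.5 * (z[0::2] + z[1::2]), z[0::2] - z[1::2]

    def merge_z(avg, det):
        """Invert split_z exactly, so keeping all details is lossless."""
        z = np.empty(avg.size * 2)
        z[0::2] = avg + 0.5 * det
        z[1::2] = avg - 0.5 * det
        return z

    pts = np.random.default_rng(0).random((1024, 3))
    q_xy, _ = quantize_xy(pts[:, :2], bits=12)
    avg, det = split_z(pts[:, 2])
    assert np.allclose(merge_z(avg, det), pts[:, 2])  # lossless round trip
    ```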

    Spline wavelet image coding and synthesis for a VLSI based difference engine

    The efficiency of an image compression/synthesis system based on a spline multi-resolution analysis (MRA) is investigated. The proposed system uses a quadratic spline wavelet transform combined with minimum mean-squared-error vector quantization to achieve image compression. Image synthesis is accomplished by utilizing the properties of the MRA and the architecture of a custom-designed display processor, the Difference Engine. The latter is ideally suited to rendering images with polynomial intensity profiles, such as those generated by the proposed spline MRA. Based on these properties, an adaptive image synthesis system is developed which enables one to reduce the number of instruction cycles required to reproduce images compressed using the quadratic spline wavelet transform. This adaptive approach is computationally simple and fairly robust. In addition, there is little overhead involved in its implementation.
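    The appeal of polynomial intensity profiles for a difference-engine style processor comes from forward differencing, which evaluates a quadratic with only additions per sample. The sketch below illustrates that general idea; it is not the Difference Engine's instruction set or the thesis's actual synthesis pipeline.

    ```python
    # Illustrative only: forward differencing evaluates a quadratic intensity
    # profile f(x) = a*x^2 + b*x + c along a scanline with two additions per
    # sample, which is why quadratic spline outputs suit such hardware.
    def quadratic_forward_differences(a, b, c, n):
        """Return f(0), ..., f(n-1) for f(x) = a*x^2 + b*x + c."""
        f = c           # f(0)
        d1 = a + b      # first difference f(1) - f(0)
        d2 = 2 * a      # constant second difference
        out = []
        for _ in range(n):
            out.append(f)
            f += d1     # advance the function value
            d1 += d2    # advance the first difference
        return out

    # Check against direct evaluation.
    assert quadratic_forward_differences(2, -3, 5, 6) == [2*x*x - 3*x + 5 for x in range(6)]
    ```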

    A GPU-Based Level of Detail System for the Real-Time Simulation and Rendering of Large-Scale Granular Terrain

    We describe a system that is able to efficiently render large-scale particle-based granular terrains in real time. This is achieved by integrating a particle-based granular terrain simulation with a heightfield-based terrain system, effectively creating a level-of-detail system. By quickly converting areas of terrain from the heightfield-based representation to the particle-based representation around dynamic objects that collide with the terrain, we are able to create the appearance of a large-scale particle-based granular terrain, whilst maintaining real-time frame rates.
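    The level-of-detail conversion can be pictured as replacing heightfield cells near a colliding object with columns of particles, while distant cells stay as a cheap heightfield. The sketch below is a simplified stand-in for the paper's GPU system; the cell size, particle diameter and column-filling rule are assumptions made for illustration.

    ```python
    # Simplified stand-in for the LOD conversion: heightfield cells within a
    # radius of a dynamic object are replaced by stacked particles; cells
    # outside stay as a heightfield. Cell size and particle diameter are
    # illustrative parameters, not values from the paper.
    import math

    def heightfield_to_particles(heights, cell_size, centre, radius, particle_d):
        """heights[i][j] is the terrain height at cell (i, j); return particle
        positions for cells within `radius` of `centre` (x, y)."""
        particles = []
        for i, row in enumerate(heights):
            for j, h in enumerate(row):
                x, y = i * cell_size, j * cell_size
                if math.hypot(x - centre[0], y - centre[1]) > radius:
                    continue  # far from the object: keep as heightfield
                for k in range(max(1, int(h / particle_d))):
                    particles.append((x, y, (k + 0.5) * particle_d))
        return particles

    field = [[1.0, 1.2, 0.8], [1.1, 1.4, 0.9], [0.7, 1.0, 1.3]]
    print(len(heightfield_to_particles(field, 1.0, (1.0, 1.0), 1.5, 0.25)))
    ```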

    Moving Least-Squares Reconstruction of Large Models with GPUs

    Modern laser range scanning campaigns produce extremely large point clouds, and reconstructing a triangulated surface thus requires both out-of-core techniques and significant computational power. We present a GPU-accelerated implementation of the Moving Least Squares (MLS) surface reconstruction technique. While several previous out-of-core methods use a sweep-plane approach, we subdivide the space into cubic regions that are processed independently. This independence allows the algorithm to be parallelized across multiple GPUs, either in a single machine or a cluster. It also allows data sets with billions of point samples to be processed on a standard desktop PC. We show that our implementation is an order of magnitude faster than a CPU-based implementation when using a single GPU, and scales well to 8 GPUs.
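    The key enabler for out-of-core and multi-GPU operation is that samples are binned into cubic regions that can be reconstructed independently. The sketch below shows only this partitioning step, with a placeholder where the per-region MLS evaluation (and GPU kernel) would run; it is not the paper's implementation.

    ```python
    # Sketch of the partitioning step only (the MLS evaluation and GPU kernels
    # are omitted): samples are binned into cubic regions that can be handed
    # to independent workers or GPUs.
    from collections import defaultdict

    def bin_into_cubes(points, cube_size):
        """Group (x, y, z) samples by the cubic region containing them."""
        cubes = defaultdict(list)
        for p in points:
            cubes[tuple(int(c // cube_size) for c in p)].append(p)
        return cubes

    def process_region(key, pts):
        """Placeholder for per-region MLS reconstruction on one GPU/worker."""
        return key, len(pts)

    points = [(0.1, 0.2, 0.3), (0.9, 0.1, 0.2), (2.5, 2.5, 2.5)]
    regions = bin_into_cubes(points, cube_size=1.0)
    print([process_region(k, v) for k, v in regions.items()])
    ```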

    Efficient Procedural Generation of Forests

    Forested landscapes are an important component of many large virtual environments in games and film. In order to reduce modelling time, procedural methods are often used. Unfortunately, procedural tree generation tends to be slow and resource-intensive for large forests. The main contribution of this paper is the development of an efficient procedural generation system for the creation of large forests. Our system uses L-systems, a grammar-based procedural technique, to generate each tree. We algorithmically modify L-system tree grammars to intelligently use an instance cache for tree branches. Our instancing approach not only makes efficient use of memory but also reduces the visual repetition artifacts which can arise due to the granularity of the instances. Instances can represent a range of structures, from a single branch to multiple branches or even an entire tree. Our system improves the speed and memory requirements for forest generation by 3–4 orders of magnitude over naïve methods: we generate over 1 000 000 trees in 4.5 seconds, while using only 350 MB of memory.
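    The instancing idea can be approximated by memoising the expansion of identical sub-branches, so a repeated branch is generated once and then referenced. The toy L-system and the (symbol, depth) cache key below are assumptions for illustration; the paper's grammar rewriting and geometry instancing are more involved.

    ```python
    # Toy illustration of branch instancing: deterministic L-system expansion
    # memoised by (symbol, depth), so an identical sub-branch is expanded once
    # and subsequently pulled from the cache. Rules are illustrative only.
    from functools import lru_cache

    RULES = {"A": "F[+A][-A]", "F": "FF"}

    @lru_cache(maxsize=None)
    def expand(symbol, depth):
        """Expand one symbol to the given depth; cached results act as instances."""
        if depth == 0 or symbol not in RULES:
            return symbol
        return "".join(expand(s, depth - 1) for s in RULES[symbol])

    tree = expand("A", 6)
    print(len(tree), expand.cache_info().hits)  # hits indicate branch reuse
    ```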

    City Sketching

    Procedural methods offer an automated means of generating complex cityscapes, incorporating the placement of park areas and the layout of roads, plots and buildings. Unfortunately, existing interfaces to procedural city systems tend either to focus on a single aspect of city layout (such as the road network), ignoring interaction with other elements (such as building dimensions), or to expect numeric input with little visual feedback short of the completed city, which can take several minutes to generate. In this paper we present an interface to procedural city generation which, through a combination of sketching and gestural input, enables users to specify different land usage (parkland, commercial, residential and industrial) and to control the geometric attributes of roads, plots and buildings. Importantly, the inter-relationship of these elements is pre-visualized so that their impact on the final city layout can be predicted. Once generated, further editing, for instance shaping the city skyline or redrawing individual roads, is supported. In general, City Sketching provides a powerful and intuitive interface for designing complex urban layouts.
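    One way to picture how a sketched region could drive the generator is to rasterise a closed stroke into a land-use grid that a procedural city system then consumes. The sketch below is purely illustrative and assumes a simple polygon stroke, grid size and labels; it is not the paper's sketching or gesture pipeline.

    ```python
    # Purely illustrative: rasterise a sketched closed stroke into a land-use
    # grid that a procedural generator could consume. The even-odd polygon
    # test, grid size and labels are assumptions, not the paper's pipeline.
    def point_in_polygon(x, y, poly):
        """Even-odd rule test for a point against a closed polygon."""
        inside = False
        for i in range(len(poly)):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
            if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
        return inside

    def label_grid(stroke, size, label):
        """Mark grid cells whose centres fall inside the sketched stroke."""
        grid = [["residential"] * size for _ in range(size)]
        for i in range(size):
            for j in range(size):
                if point_in_polygon(i + 0.5, j + 0.5, stroke):
                    grid[i][j] = label
        return grid

    park_stroke = [(1, 1), (5, 1), (5, 5), (1, 5)]  # a sketched square region
    grid = label_grid(park_stroke, size=8, label="park")
    print(sum(row.count("park") for row in grid))   # cells assigned park usage
    ```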