
    Lossless data compression of grid-based digital elevation models: a PNG image format evaluation

    At present, computers, lasers, radars, planes and satellite technologies make possible very fast and accurate topographic data acquisition for the production of maps. However, the problem of managing and manipulating these data efficiently remains. One particular type of map is the elevation map. When stored on a computer, it is often referred to as a Digital Elevation Model (DEM). A DEM is usually a square matrix of elevations. It is like an image, except that it contains a single channel of information (elevation) and can be compressed in a lossy or lossless manner with existing image compression protocols. Compression reduces memory requirements and transmission time over digital links, while maintaining the integrity of the data as required. In this context, this paper investigates the effects of the PNG (Portable Network Graphics) lossless image compression protocol on floating-point elevation values for 16-bit DEMs of dissimilar terrain characteristics. PNG is a robust, universally supported, extensible, lossless, general-purpose and patent-free image format. Tests demonstrate that the compression ratios and decompression run times achieved with the PNG lossless compression protocol can be comparable to, or better than, those of proprietary lossless JPEG variants, other image formats and available lossless compression algorithms.
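    The core of such an evaluation can be reproduced with very little code. The sketch below, assuming a 2-D NumPy array of floating-point elevations, quantizes the elevations to 16 bits and writes them as a 16-bit greyscale PNG using only the standard library's zlib; the chunk layout follows the PNG specification, but the helper names and the linear quantization step are illustrative rather than the paper's exact pipeline.

```python
import struct
import zlib
import numpy as np

def quantize_to_uint16(elev):
    """Linearly map floating-point elevations onto the 16-bit integer range."""
    lo, hi = float(elev.min()), float(elev.max())
    scale = 65535.0 / (hi - lo) if hi > lo else 1.0
    return np.round((elev - lo) * scale).astype(np.uint16), (lo, scale)

def write_png16_gray(path, dem16):
    """Write a 2-D uint16 array as a 16-bit greyscale PNG (colour type 0)."""
    h, w = dem16.shape

    def chunk(tag, payload):
        body = tag + payload
        return (struct.pack(">I", len(payload)) + body
                + struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF))

    # IHDR: width, height, bit depth 16, colour type 0, deflate, filter 0, no interlace
    ihdr = struct.pack(">IIBBBBB", w, h, 16, 0, 0, 0, 0)
    # Each scanline is a filter byte (0 = none) followed by big-endian 16-bit samples
    raw = b"".join(b"\x00" + row.astype(">u2").tobytes() for row in dem16)
    data = (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw, 9)) + chunk(b"IEND", b""))
    with open(path, "wb") as f:
        f.write(data)

# Example: compression ratio = raw 16-bit size / PNG file size
# dem16, _ = quantize_to_uint16(elevations)   # elevations: 2-D float array
# write_png16_gray("dem.png", dem16)
```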

    1994 Science Information Management and Data Compression Workshop

    This document is the proceedings from the 'Science Information Management and Data Compression Workshop,' which was held on September 26-27, 1994, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The Workshop explored promising computational approaches for handling the collection, ingestion, archiving and retrieval of large quantities of data in future Earth and space science missions. It consisted of eleven presentations covering a range of information management and data compression approaches that are being or have been integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such an application. The workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.

    A Lossy JPEG2000-based Data Hiding Method for Scalable 3D Terrain Visualization

    The data needed for 3D terrain visualization consist, essentially, of a texture image and its corresponding digital elevation model (DEM). A blind data-hiding method is proposed for the synchronous unification of these disparate data, whereby the losslessly discrete-wavelet-transformed (DWTed) DEM is embedded in the tier-1-coded, quantized and DWTed Y component of the texture image from the lossy JPEG2000 pipeline. The multiresolution nature of wavelets provides the scalability needed to cater for the diversity of client capacities in terms of computing, memory and network resources in today's network environment. The results have been interesting: for a bitrate as low as 0.012 bit per pixel (bpp), a satisfactory visualization was realized. We compare the obtained results with those of a previous method that interrupts the lossless JPEG2000 codec immediately after the DWT step and embeds the losslessly DWTed DEM in the reversibly DWTed Y component of the texture. The proposed method proved to be more effective in the sense that, for the same bitrate, less quality loss was observed at the respective resolutions.
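    The embedding idea can be illustrated with a deliberately simplified sketch: hide a compressed DEM bit stream in the least-significant bits of already-quantized wavelet coefficients of the texture's Y component. Real JPEG2000 tier-1 coding is far more involved, and the function and variable names below are hypothetical, but the round trip shows what blind extraction (no side information beyond the stego coefficients) means.

```python
import zlib
import numpy as np

def embed_payload(coeffs, payload):
    """Toy blind data hiding: write one payload bit into the least-significant
    bit of each quantized coefficient, after a 4-byte big-endian length header."""
    header = len(payload).to_bytes(4, "big")
    bits = np.unpackbits(np.frombuffer(header + payload, dtype=np.uint8))
    flat = coeffs.astype(np.int32).ravel().copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for this host signal")
    flat[:bits.size] = (flat[:bits.size] & ~1) | bits
    return flat.reshape(coeffs.shape)

def extract_payload(stego):
    """Blind extraction: only the stego coefficients are needed."""
    lsb = (stego.astype(np.int32).ravel() & 1).astype(np.uint8)
    n = int.from_bytes(np.packbits(lsb[:32]).tobytes(), "big")
    return np.packbits(lsb[32:32 + 8 * n]).tobytes()

# Example round trip (names assumed): a coded DEM hidden in quantized texture coefficients
# payload = zlib.compress(dem_int16.tobytes())   # stand-in for the coded DEM
# stego   = embed_payload(y_coeffs, payload)     # y_coeffs: quantized DWT coefficients
# assert zlib.decompress(extract_payload(stego)) == dem_int16.tobytes()
```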

    3D oceanographic data compression using 3D-ODETLAP

    This paper describes a 3D environmental data compression technique for oceanographic datasets. With proper point selection, our method approximates uncompressed marine data using an over-determined system of linear equations based on, but essentially different from, the Laplacian partial differential equation. This approximation is then refined via an error metric, and the two steps alternate until a predefined satisfactory approximation is found. Using several different datasets and metrics, we demonstrate that our method achieves an excellent compression ratio. To further evaluate our method, we compare it with 3D-SPIHT. 3D-ODETLAP averages 20% better compression than 3D-SPIHT on our eight test datasets from the World Ocean Atlas 2005, and provides up to approximately six times better compression on datasets with relatively small variance. Meanwhile, for the same approximate mean error, we demonstrate a significantly smaller maximum error than 3D-SPIHT and provide a feature that keeps the maximum error under a user-defined limit.
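    The over-determined system at the heart of ODETLAP-style reconstruction is easy to write down. The sketch below is a 2-D illustration under assumed names (the paper works in 3-D): every interior grid cell contributes a smoothness equation derived from the discrete Laplacian, every selected point contributes a weighted data equation, and the whole system is solved in the least-squares sense with SciPy.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def odetlap_reconstruct(shape, known_pts, known_vals, r=1.0):
    """Least-squares reconstruction of a grid from a sparse set of selected points.

    Smoothness equations: 4*z[i,j] - z[i-1,j] - z[i+1,j] - z[i,j-1] - z[i,j+1] = 0
    Data equations:       r * z[i,j] = r * value   (r trades accuracy vs. smoothness)
    """
    h, w = shape
    idx = lambda i, j: i * w + j
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    for i in range(1, h - 1):                      # smoothness equations
        for j in range(1, w - 1):
            rows += [eq] * 5
            cols += [idx(i, j), idx(i - 1, j), idx(i + 1, j), idx(i, j - 1), idx(i, j + 1)]
            vals += [4.0, -1.0, -1.0, -1.0, -1.0]
            rhs.append(0.0)
            eq += 1
    for (i, j), v in zip(known_pts, known_vals):   # data equations for selected points
        rows.append(eq); cols.append(idx(i, j)); vals.append(r)
        rhs.append(r * v)
        eq += 1
    A = sp.coo_matrix((vals, (rows, cols)), shape=(eq, h * w)).tocsr()
    z = lsqr(A, np.asarray(rhs))[0]
    return z.reshape(h, w)
```

    The compression loop described in the abstract alternates this reconstruction with an error evaluation until the approximation meets the target; the point-selection strategy is the paper's contribution and is not reproduced here.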

    High fidelity compression of irregularly sampled height fields

    This paper presents a method to compress irregularly sampled height fields based on a multi-resolution framework. Unlike many other height-field compression techniques, no resampling is required, so the original height-field data are recovered (up to quantization error). The method decomposes the compression task into two complementary phases: an in-plane compression scheme for the (x, y) coordinate positions, and a separate multi-resolution z compression step. This decoupling allows subsequent improvements in either phase to be seamlessly integrated, and also allows for independent control of bit rates in the decoupled dimensions, should this be desired. Results are presented for a number of height-field sample sets quantized to 12 bits for each of x and y, and 10 bits for z. Total lossless encoded data sizes range from 11 to 24 bits per point, with z bit rates lying in the range 2.9 to 8.1 bits per z coordinate. Lossy z bit rates (we do not lossily encode x and y) lie in the range 0.7 to 5.9 bits per z coordinate, with a worst-case root-mean-squared (RMS) error of less than 1.7% of the z range. Even with aggressive lossy encoding, at least 40% of the point samples are perfectly reconstructed.
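    The quantization stage that precedes the two coding phases can be sketched as follows; the bit depths (12 bits for x and y, 10 bits for z) come from the abstract, while the uniform scalar quantizer and its function names are an assumed, illustrative choice rather than the paper's exact scheme.

```python
import numpy as np

def quantize(values, bits):
    """Uniform scalar quantization to the given bit depth.
    Returns integer codes plus the (offset, step) needed to dequantize."""
    lo, hi = float(values.min()), float(values.max())
    levels = (1 << bits) - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((values - lo) / step).astype(np.uint32)
    return codes, (lo, step)

def dequantize(codes, params):
    lo, step = params
    return lo + codes.astype(np.float64) * step

# Per the abstract: 12 bits for each of x and y, 10 bits for z
# xq, xp = quantize(x, 12); yq, yp = quantize(y, 12); zq, zp = quantize(z, 10)
# The maximum round-trip error per coordinate is half a quantization step.
```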

    LIDAR data classification and compression

    Airborne Laser Detection and Ranging (LIDAR) data have a wide range of applications in agriculture, archaeology, biology, geology, meteorology, the military, transportation and more. LIDAR acquisition consumes hundreds of gigabytes in a typical day, and the amount of data collected will continue to grow as sensors improve in resolution and functionality. LIDAR data classification and compression are therefore very important for managing, visualizing, analyzing and using this huge amount of data. Among existing LIDAR data classification schemes, supervised learning has been used and can reach up to 96% accuracy; however, some of the features used are not readily available, and training data are not always available in practice. In existing LIDAR data compression schemes, the compressed size can be 5%-23% of the original size, but that can still be on the order of gigabytes, which is impractical for many applications. The objectives of this dissertation are (1) to develop LIDAR classification schemes that can classify airborne LIDAR data more accurately without some of the features or training data that existing work requires, and (2) to explore lossy compression schemes that can compress LIDAR data at a much higher compression rate than is currently available. We first investigate two independent ways to classify LIDAR data depending on the availability of training data: when training data are available, we use supervised machine-learning techniques such as the support vector machine (SVM); when training data are not readily available, we develop an unsupervised classification method that classifies LIDAR data as well as supervised methods. Experimental results show that the accuracy of our classification results is over 99%. We then present two new lossy LIDAR data compression methods and compare their performance. The first is a wavelet-based compression scheme, while the second is geometry based. Our new geometry-based compression is a geometry- and statistics-driven LIDAR point-cloud compression method that combines application knowledge and scene content to enable fast transmission from the sensor platform while preserving the geometric properties of objects within a scene. The new algorithm is based on the idea of compression by classification. It exploits the simplicity of the unique height function, as well as the local spatial coherence and linearity of aerial LIDAR data, and can automatically compress the data to the desired level of detail defined by the user. Either of the two developed classification methods can be used to automatically detect regions that are not locally linear, such as vegetation or trees. In those regions, local statistical descriptors, such as the mean, variance and expectation, are stored to represent the region efficiently and to restore the geometry in the decompression phase. The new geometry-based compression schemes for building and ground data compress efficiently and significantly reduce the file size, while retaining a good fit for scalable "zoom in" requirements. Experimental results show that, compared with existing lossy LIDAR compression work, our proposed approach achieves a bit rate two orders of magnitude lower at the same quality, making it feasible for applications that were not practical before. The proposed highly efficient compression scheme also makes it possible to store the information in a database and query it efficiently. Includes bibliographical references (pages 106-116).
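    For the supervised branch, a per-point SVM classifier of the kind mentioned in the abstract can be set up in a few lines with scikit-learn; the feature set named in the comments (height above ground, return intensity, local height variance) is an illustrative assumption, not the dissertation's exact feature list.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_point_classifier(features, labels):
    """Fit an RBF-kernel SVM on per-point LIDAR features.

    features: (n_points, n_features) array, e.g. height above ground,
              return intensity, local height variance (assumed features)
    labels:   (n_points,) class ids, e.g. ground / building / vegetation
    """
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(features, labels)
    return clf

# predicted = train_point_classifier(train_X, train_y).predict(test_X)
```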

    An Efficient Data-hiding Method Based on Lossless JPEG2000 for a Scalable and Synchronized Visualization of 3D Terrains

    Real-time online 3D visualization of terrain is a memory-intensive process accompanied by considerably large data transfers across the network, so data compression is inevitable. The upcoming JPEG2000 standard is well suited for such network-based transfers since it offers the additional advantage of resolution scalability, resulting in incremental improvement of quality. The 3D visualization process is, essentially, the linking of the texture image with the terrain geometry obtained from the DEM; the data are heterogeneous and normally involve more than one file. This work is concerned with interleaving these files into one jp2 file in a synchronized way so that the file format is conserved for compliance with the JPEG2000 standard. This synchronization is achieved by using a scalable data-hiding method to embed the losslessly wavelet-transformed DEM in the corresponding lossless JPEG2000-coded texture. For the DEM and the texture, the level of transform is the same. With this approach, the 3D visualization is efficient even if only a small fraction of the initial data is transmitted.
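    The "losslessly wavelet-transformed DEM" relies on a reversible integer transform; JPEG2000 uses the 5/3 lifting scheme for this. Below is a one-level 1-D sketch, with periodic boundaries instead of the standard's symmetric extension to keep it short; the 2-D transform is obtained by applying it along rows and then columns.

```python
import numpy as np

def dwt53_forward(x):
    """One level of the reversible (integer) 5/3 lifting DWT on a 1-D signal.
    Sketch only: even-length input, periodic boundary handling
    (a simplification of JPEG2000's symmetric extension)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - (even + np.roll(even, -1)) // 2    # predict step: high-pass
    s = even + (np.roll(d, 1) + d + 2) // 4      # update step: low-pass
    return s, d

def dwt53_inverse(s, d):
    """Exact integer inverse of dwt53_forward."""
    even = s - (np.roll(d, 1) + d + 2) // 4
    odd = d + (even + np.roll(even, -1)) // 2
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

# Lossless round trip on a row of integer elevations
row = np.array([10, 12, 15, 14, 13, 11, 9, 8])
assert np.array_equal(dwt53_inverse(*dwt53_forward(row)), row)
```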

    Guidelines for Best Practice and Quality Checking of Ortho Imagery

    For almost 10 years, JRC's "Guidelines for Best Practice and Quality Control of Ortho Imagery" has served as a reference document for the production of orthoimagery, not only for the purposes of the CAP but also for many medium-to-large-scale photogrammetric applications. The aim is to provide the European Commission and the remote sensing user community with a general framework of the best approaches for quality checking of orthorectified remotely sensed imagery, and of the best practice required to achieve good results. Since the last major revision (2003), the document has been regularly updated to include state-of-the-art technologies. A major revision of the document was initiated last year in order to consolidate the information introduced into the document over the last five years. Following internal discussion and the outcomes of a meeting with an expert panel, it was decided to adopt, as far as possible, a process-based structure instead of the more sensor-based one used before, and to keep the document as generic as possible by focusing on the core aspects of the photogrammetric process. In addition to the structural changes, new information was introduced, mainly concerning image resolution and radiometry, digital airborne sensors, data fusion, mosaicking and data compression. The Guidelines for best practice are used as the basis for our work on the definition of technical specifications for orthoimagery. The scope is to establish a core set of measures to ensure sufficient image quality for the purposes of the CAP, and particularly for the Land Parcel Identification System (LPIS), and also to define the set of metadata necessary for data documentation and overall job tracking.