
    PSVDAG: Compact Voxelized Representation of 3D Scenes Using Pointerless Sparse Voxel Directed Acyclic Graphs

    This paper deals with the representation of the geometry of voxelized three-dimensional scenes using hierarchical data structures. These include pointerless Sparse Voxel Octrees, which store no pointers to child nodes and thus allow a compact binary representation; when needed, the pointers can be reconstructed for rapid traversal. Sparse Voxel Directed Acyclic Graphs added 32-bit pointers to child nodes and the merging of common subtrees, which can be considered a lossless compression: because common subtrees are merged rather than duplicated, no decompression overhead occurs at traversal time. The hierarchical data structure proposed herein, the Pointerless Sparse Voxel Directed Acyclic Graph, combines the benefits of both: like pointerless Sparse Voxel Octrees it avoids storing pointers to child nodes, and like Sparse Voxel Directed Acyclic Graphs it allows the merging of common subtrees, here enabled by the introduction of labels and callers. The proposed data structure supports the quick and easy reconstruction of pointers by introducing the Active Child Node Count, and it also allows optional compression of its nodes' Child Node Masks. This paper presents the proposed data structure and its binary-level encoding in detail, compares the effectiveness of its representation of voxelized three-dimensional scenes (originally represented in OBJ format) with that of the data structures mentioned above, and summarizes statistical data describing the structure's parameters for different scenes stored at multiple resolutions.
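
    The pointer-reconstruction idea lends itself to a short sketch. Below is a minimal Python illustration (names and layout are mine, not the paper's binary encoding): nodes are a flat stream of 8-bit child masks in depth-first order, and pointers are rebuilt from popcounts, which is the role the Active Child Node Count plays. Subtree merging via labels and callers is not modeled here.

```python
def build_masks(voxels, origin, size, out):
    """Serialize the octree over `voxels` (a set of (x, y, z) tuples inside
    an axis-aligned cube at `origin` of side `size`) as DFS child masks."""
    if size == 1:                          # single voxel: implicit leaf
        return
    half = size // 2
    mask, children = 0, []
    for i in range(8):
        o = (origin[0] + (i & 1) * half,
             origin[1] + ((i >> 1) & 1) * half,
             origin[2] + ((i >> 2) & 1) * half)
        sub = {v for v in voxels
               if all(o[k] <= v[k] < o[k] + half for k in range(3))}
        if sub:
            mask |= 1 << i                 # octant i is an active child
            children.append((sub, o))
    out.append(mask)
    for sub, o in children:
        build_masks(sub, o, half, out)

def rebuild_pointers(masks, levels):
    """Recover each internal node's child indices from the mask stream
    alone: a node whose mask has popcount k is followed in DFS order by
    exactly k child subtrees, so a single sweep restores all pointers."""
    pointers, pos = {}, 0

    def walk(depth):
        nonlocal pos
        idx, pos = pos, pos + 1
        if depth + 1 < levels:             # children carry masks of their own
            pointers[idx] = [walk(depth + 1)
                             for _ in range(bin(masks[idx]).count("1"))]
        return idx

    walk(0)
    return pointers

voxels = {(0, 0, 0), (3, 0, 0), (3, 3, 3)}
masks = []
build_masks(voxels, (0, 0, 0), 4, masks)   # a 4^3 grid gives 2 mask levels
print([f"{m:08b}" for m in masks])         # ['10000011', '00000001', ...]
print(rebuild_pointers(masks, levels=2))   # {0: [1, 2, 3]}
```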

    Studies on image compression and image reconstruction

    During this six-month period our work concentrated on three somewhat different areas. We developed a number of error concealment schemes for use in a variety of video coding environments; this work is described in an accompanying (draft) Masters thesis, where we apply these techniques to the MPEG video coding scheme. We felt that the unique frame-ordering approach used in MPEG would challenge any error concealment/error recovery technique. We also continued our work on vector quantization and developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques; a paper describing this work is included. The third area concerns reconstruction more than compression. While there is a variety of efficient lossless image compression schemes, they share a common property: they use past data to encode future data, whether by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode only a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired; even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number requested. A paper describing these results is included.
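
    The "common flaw" is easy to demonstrate. The sketch below is a generic raster-scan DPCM coder (not the paper's adaptive scanning strategy): because each pixel is predicted from its predecessor in scan order, recovering one 16x16 block forces decoding of the block's entire scan-order prefix.

```python
import numpy as np

def dpcm_encode(img):
    """Raster-scan DPCM: resid[i] = flat[i] - flat[i-1] in scan order."""
    flat = img.astype(np.int64).ravel()
    return np.diff(flat, prepend=0)

def decode_prefix(resid, n):
    """Only a scan-order prefix is recoverable; later residuals cannot be
    interpreted without all of the pixels that precede them."""
    return np.cumsum(resid[:n])

img = np.random.randint(0, 256, size=(512, 512))
resid = dpcm_encode(img)
# Reading the 16x16 block at (300, 300) forces decoding of every pixel up
# to the block's last one in scan order -- far more than the 256 requested.
last = (300 + 15) * 512 + (300 + 15)
prefix = decode_prefix(resid, last + 1)
print(f"{last + 1} pixels decoded for a 256-pixel request")
```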

    Tchebichef Moment Based Hilbert Scan for Image Compression

    Image compression is now essential for applications such as transmission and database storage, so we need to compress vast amounts of information while improving both the compression ratio and the quality of the compressed image. To this end, this paper develops a new algorithm that uses the discrete orthogonal Tchebichef moment combined with a Hilbert-curve scan for image compression. The analyzed image is divided into 8×8 sub-blocks, the Tchebichef moment transform is applied to each one, and the transformed 8×8 coefficient sub-block is then reordered along a Hilbert scan into a linear array, at which point Huffman coding is applied. Experimental results show that this algorithm improves coding efficiency, while the quality of the reconstructed image is not significantly decreased.
    Keywords: Huffman Coding, Tchebichef Moment Transforms, Orthogonal Moment Functions, Hilbert, zigzag scan
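
    The reordering step is easy to sketch. Below is a minimal Python version of the Hilbert scan of one 8×8 coefficient block, using the standard d2xy bit-manipulation construction. Computing the Tchebichef moments themselves is outside the scope of this sketch, so `block` simply stands for an already-transformed 8×8 sub-block.

```python
def hilbert_d2xy(n, d):
    """Map distance d along the Hilbert curve to (x, y) on an n x n grid,
    n a power of two (standard bit-manipulation construction)."""
    x = y = 0
    t, s = d, 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                        # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(block):
    """Flatten an 8x8 block into a 64-element list in Hilbert order, keeping
    spatially adjacent (hence correlated) coefficients adjacent in 1-D
    before the Huffman stage."""
    return [block[y][x] for x, y in (hilbert_d2xy(8, d) for d in range(64))]

block = [[8 * r + c for c in range(8)] for r in range(8)]  # toy coefficients
print(hilbert_scan(block)[:8])             # first leg of the curve
```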

    A hybrid predictive technique for lossless image compression

    Compression of images is of great interest in applications where efficiency with respect to data storage or transmission bandwidth is sought. The rapid growth of social media and digital networks has given rise to huge amounts of image data being accessed and exchanged daily, but the larger the image, the longer it takes to transmit and archive: high-quality images require large amounts of transmission bandwidth and storage space. Suitable image compression can help reduce image size and improve transmission speed. Lossless image compression is especially crucial in fields such as remote sensing, healthcare, networking, security, and military applications, where image quality must be maintained to avoid errors during analysis or diagnosis. In this paper, a hybrid predictive lossless image compression algorithm is proposed to address these issues. It combines predictive Differential Pulse Code Modulation (DPCM) with the Integer Wavelet Transform (IWT). Entropy and compression-ratio calculations are used to analyze the performance of the resulting coding. The analysis shows that the best hybrid predictive pipeline is the sequence DPCM-IWT-Huffman, which reduces bit sizes by 36%, 48%, 34% and 13% for the test images Lena, Cameraman, Pepper and Baboon, respectively.
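
    The abstract does not specify which integer wavelet the pipeline uses, so the sketch below stands in the integer Haar (S-) transform for the IWT stage and omits the Huffman coder. It is a minimal 1-D illustration of the DPCM-IWT ordering and of why the chain stays lossless: both stages are integer-to-integer and exactly invertible.

```python
import numpy as np

def dpcm(x):
    """Predictive stage: residual of each sample against its predecessor."""
    return np.diff(x.astype(np.int64), prepend=0)

def idpcm(r):
    return np.cumsum(r)

def iwt_haar(r):
    """One level of the integer Haar (S-) transform via lifting:
    d = odd - even, s = even + (d >> 1). Integer in, integer out."""
    even, odd = r[0::2], r[1::2]
    d = odd - even
    s = even + (d >> 1)                    # >> is a floor division by 2
    return s, d

def iiwt_haar(s, d):
    even = s - (d >> 1)
    odd = d + even
    out = np.empty(even.size + odd.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

row = np.array([52, 55, 61, 66, 70, 61, 64, 73])
s, d = iwt_haar(dpcm(row))                 # coefficients to Huffman-code
assert np.array_equal(idpcm(iiwt_haar(s, d)), row)   # perfect reconstruction
```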

    Context-based Space Filling Curves


    Fast Compressed Segmentation Volumes for Scientific Visualization

    Voxel-based segmentation volumes often store a large number of labels and voxels, and the resulting amount of data can make storage, transfer, and interactive visualization difficult. We present a lossless compression technique which addresses these challenges. It processes individual small bricks of a segmentation volume and compactly encodes the labelled regions and their boundaries by an iterative refinement scheme. The result for each brick is a list of labels and a sequence of operations to reconstruct the brick, which is further compressed using rANS entropy coding. As the relative frequencies of operations are very similar across bricks, the entropy coding can use global frequency tables for an entire data set, which enables efficient and effective parallel (de)compression. Our technique achieves high throughput (up to gigabytes per second for both compression and decompression) and strong compression ratios of about 1% to 3% of the original data set size while being applicable to GPU-based rendering. We evaluate our method on various data sets from different fields and demonstrate GPU-based volume visualization with on-the-fly decompression, level-of-detail rendering (with optional on-demand streaming of detail coefficients to the GPU), and a caching strategy for decompressed bricks for further performance improvement.
    Comment: IEEE Vis 202
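
    A hedged sketch of the entropy-coding stage: a toy rANS coder with an arbitrary-precision state (real implementations renormalize a bounded state) driven by one frequency table shared across all inputs, mirroring the paper's use of global frequency tables across bricks. The three-symbol operation alphabet here is invented for the demo, not the paper's actual operation set.

```python
def build_tables(freqs):
    """freqs: dict symbol -> integer frequency. Returns cumulative starts,
    the total M, and a slot -> symbol lookup table of length M."""
    cum, total = {}, 0
    for sym, f in freqs.items():
        cum[sym], total = total, total + f
    slot_to_sym = [sym for sym, f in freqs.items() for _ in range(f)]
    return cum, total, slot_to_sym

def rans_encode(symbols, freqs, cum, M):
    x = 1
    for s in reversed(symbols):            # rANS encodes back to front
        x = (x // freqs[s]) * M + (x % freqs[s]) + cum[s]
    return x

def rans_decode(x, n, freqs, cum, M, slot_to_sym):
    out = []
    for _ in range(n):
        slot = x % M                       # which symbol owns this slot
        s = slot_to_sym[slot]
        x = freqs[s] * (x // M) + slot - cum[s]
        out.append(s)
    return out

# One global table with skewed operation frequencies, echoing the paper's
# observation that operation statistics are similar across bricks.
freqs = {"keep": 12, "split": 3, "flip": 1}
cum, M, slots = build_tables(freqs)
ops = ["keep", "keep", "split", "keep", "flip", "keep"]
x = rans_encode(ops, freqs, cum, M)
assert rans_decode(x, len(ops), freqs, cum, M, slots) == ops
```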

    Halcyon -- A Pathology Imaging and Feature analysis and Management System

    Halcyon is a new pathology imaging analysis and feature management system based on W3C linked-data open standards, designed to scale to the voluminous production of features from deep-learning feature pipelines. Halcyon can support multiple users through a web-based UX, with access to all user data over a standards-based web API that allows integration with other processes and software systems. Identity management and data security are also provided.
    Comment: 15 pages, 11 figures. arXiv admin note: text overlap with arXiv:2005.0646