18 research outputs found

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression described in the open literature was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented; they include a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term technology for implementing video data compression in high-speed imaging systems. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    The Space and Earth Science Data Compression Workshop

    This document is the proceedings from the Space and Earth Science Data Compression Workshop, held on March 27, 1992, at the Snowbird Conference Center in Snowbird, Utah, in conjunction with the 1992 Data Compression Conference (DCC '92), which took place at the same location on March 24-26, 1992. The workshop explored opportunities for data compression to enhance the collection and analysis of space and Earth science data. It consisted of eleven papers presented in four sessions. These papers describe research that is integrated into, or has the potential of being integrated into, a particular space and/or Earth science data information system. Presenters were encouraged to take into account the scientists' data requirements and the constraints imposed by the data collection, transmission, distribution, and archival system.

    Exclusive-or preprocessing and dictionary coding of continuous-tone images.

    The field of lossless image compression studies the various ways to represent image data in the most compact and efficient manner possible while still allowing the image to be reproduced without any loss. One of the most efficient strategies used in lossless compression is to introduce entropy reduction through decorrelation. This study focuses on using the exclusive-or logic operator in a decorrelation filter as the preprocessing phase of lossless compression of continuous-tone images. The exclusive-or operator is simply and reversibly applied to continuous-tone images to extract the differences between neighboring pixels, and it does not introduce data expansion. Traditional as well as innovative prediction methods are included for the creation of inputs to the exclusive-or based decorrelation filter. The output of the filter is then encoded by a variation of the Lempel-Ziv-Welch dictionary coder. Dictionary coding is selected for the coding phase of the algorithm because it does not require the storage of code tables or probabilities and because it is lower in complexity than other popular options such as Huffman or arithmetic coding. The first modification of the Lempel-Ziv-Welch dictionary coder is that image data can be read in a sequence that is linear, 2-dimensional, or an adaptive combination of both. The second modification is that the coder can include multiple, dynamically chosen dictionaries. Experiments indicate that the exclusive-or based decorrelation filter, when combined with the modified Lempel-Ziv-Welch dictionary coder, provides compression comparable to algorithms that represent the current standard in lossless compression. The proposed algorithm's compression performance is below the Context-Based, Adaptive, Lossless Image Compression (CALIC) algorithm by 23%, below the Low Complexity Lossless Compression for Images (LOCO-I) algorithm by 19%, and below the Portable Network Graphics implementation of the Deflate algorithm by 7%, but above the Zip implementation of the Deflate algorithm by 24%. The proposed algorithm uses the exclusive-or operator in the modeling phase and modified Lempel-Ziv-Welch dictionary coding in the coding phase to form a low-complexity, reversible, and dynamic method of lossless image compression.
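    A minimal sketch of the XOR decorrelation idea described above, assuming a simple left-neighbor predictor (the study itself uses a range of prediction methods); the function names are illustrative, not the author's implementation:

```python
import numpy as np

def xor_decorrelate(image: np.ndarray) -> np.ndarray:
    """XOR each pixel with its predicted value (here: the left neighbor).

    The output has the same shape and bit-depth as the input, so no
    data expansion occurs, and the filter is exactly reversible.
    """
    pred = np.zeros_like(image)
    pred[:, 1:] = image[:, :-1]   # left-neighbor prediction
    return image ^ pred           # residue keeps only the differing bits

def xor_recorrelate(residue: np.ndarray) -> np.ndarray:
    """Invert the filter by re-applying XOR column by column."""
    image = residue.copy()
    for col in range(1, image.shape[1]):
        image[:, col] ^= image[:, col - 1]
    return image

# Round trip on a toy 8-bit continuous-tone patch
img = np.array([[10, 12, 13], [11, 11, 14]], dtype=np.uint8)
assert np.array_equal(xor_recorrelate(xor_decorrelate(img)), img)
```

    Because neighboring pixels in continuous-tone images tend to share high-order bits, the XOR residues cluster around small values, which is what the downstream dictionary coder exploits.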

    An intelligent system for the classification and selection of novel and efficient lossless image compression algorithms

    We are currently living in an era revolutionised by the development of smart phones and digital cameras. Most people use phones and cameras in every aspect of their lives. With this development comes a high level of competition between the technology companies developing these devices, each one trying to enhance its products to meet new market demands. One of the most sought-after criteria of any smart phone or digital camera is the camera's resolution. Digital imaging and its applications are growing rapidly; as a result of this growth, image sizes are increasing, and alongside this increase comes the important challenge of storing these large images and transferring them over networks. With the increase in image size, interest in image compression is increasing as well, to improve storage size and transfer time. In this study, the researcher proposes two new lossless image compression algorithms. Both proposed algorithms focus on decreasing the image size by reducing the image bit-depth through well-defined methods of reducing the correlation between the image intensities.

    The first proposed lossless image compression algorithm is called Column Subtraction Compression (CSC), which aims to decrease the image size without losing any of the image information by using a colour transformation method as a pre-processing phase, followed by the proposed Column Subtraction Compression function to decrease the image size. The proposed algorithm is specially designed for compressing natural images. The CSC algorithm was evaluated for colour images and compared against benchmark schemes obtained from Khan et al. (2017). It achieved the best compression size over the existing methods, enhancing the average storage saving of the BBWCA, JPEG 2000 LS, KMTF-BWCA, HEVC and basic BWCA algorithms by 2.5%, 15.6%, 41.6%, 7.8% and 45.07% respectively. The CSC algorithm's simple implementation positively affects the execution time and makes it one of the fastest algorithms, since it needed less than 0.5 seconds for compressing and decompressing natural images obtained from Khan et al. (2017). The proposed algorithm needs only 19.36 seconds for compressing and decompressing all 10 images from the Kodak image set, while the BWCA, KMTF-BWCA and BBWCA need 398.5s, 429.24s and 475.38s respectively. Nevertheless, the CSC algorithm achieved a lower compression ratio when compressing low-resolution images, since it was designed for high-resolution images. To solve this issue, the researcher proposed the Low-Resolution Column Subtraction Compression (LRCSC) algorithm to enhance the CSC's low compression ratio on low-resolution images.

    The LRCSC algorithm starts by using the CSC algorithm as a pre-processing phase, followed by the Huffman algorithm and Run-Length Encoding (RLE) to decrease the image size as a final compression phase. The LRCSC enhanced the average storage saving of the CSC algorithm for raster map images by achieving a 13.68% better compression size. The LRCSC algorithm decreases the raster map image set size by saving 96% of the original image set size, but did not reach the best results when compared with PNG, GIF, BLiSE and BBWCA, where the storage saving is 97.42%, 98.33%, 98.92% and 98.93% respectively. The LRCSC algorithm enhanced the compression execution time with an acceptable compression ratio. Both of the proposed algorithms are effective with any image type, such as colour or greyscale images. The proposed algorithms save a considerable amount of storage and dramatically decrease the execution time. Finally, to take full advantage of the two newly developed algorithms, a new system is developed that runs both algorithms on the same input image and then suggests the appropriate algorithm to be used for the decompression phase.
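    The abstract does not give the exact arithmetic of the Column Subtraction Compression function; the sketch below shows one plausible reading, in which each column of a channel is replaced by its modulo-256 difference from the previous column, a reversible step that lowers the effective bit-depth of smooth natural images. All names are illustrative and this is an assumption, not the thesis's actual implementation:

```python
import numpy as np

def column_subtract(channel: np.ndarray) -> np.ndarray:
    """Replace each column with its difference from the previous column
    (modulo 256), keeping the first column as-is. Smooth natural images
    yield many near-zero residues."""
    out = channel.copy()
    out[:, 1:] = (channel[:, 1:].astype(np.int16)
                  - channel[:, :-1].astype(np.int16)) % 256
    return out

def column_add(residue: np.ndarray) -> np.ndarray:
    """Invert column subtraction by cumulative addition modulo 256."""
    out = residue.astype(np.int16)
    for col in range(1, out.shape[1]):
        out[:, col] = (out[:, col] + out[:, col - 1]) % 256
    return out.astype(np.uint8)

channel = np.array([[100, 102, 101], [50, 52, 55]], dtype=np.uint8)
assert np.array_equal(column_add(column_subtract(channel)), channel)
```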

    Proceedings of the Scientific Data Compression Workshop

    Continuing advances in space and Earth science require increasing amounts of data to be gathered from spaceborne sensors. NASA expects to launch sensors during the next two decades which will be capable of producing an aggregate of 1500 megabits per second if operated simultaneously. Such high data rates cause stresses in all aspects of end-to-end data systems, and technologies and techniques are needed to relieve them. Potential solutions to the massive data rate problem are: data editing, greater transmission bandwidths, higher-density and faster media, and data compression. Through four subpanels on Science Payload Operations, Multispectral Imaging, Microwave Remote Sensing, and Science Data Management, recommendations were made for research in data compression and scientific data applications to space platforms.

    DESIGN AND IMPLEMENTATION OF AN EFFICIENT IMAGE COMPRESSOR FOR WIRELESS CAPSULE ENDOSCOPY

    The capsule endoscope (CE) is a diagnosis tool for gastrointestinal (GI) diseases. Area and power are the two most important parameters for the components used in a CE, and an efficient image compressor is desired to optimize both. The image compressor should be able to sufficiently compress the captured images to save transmission power, retain reconstruction quality for accurate diagnosis, and consume a small physical area. To meet all of the above-mentioned conditions, we have studied several transform-coding-based lossy compression algorithms in this thesis. The core computational tool of these compressors is the Discrete Cosine Transform (DCT) kernel. The DCT concentrates the distributed energy of an image into a small centralized area and supports greater compression with insignificant quality degradation. The conventional DCT requires complex floating-point multiplication, which is not feasible for wireless capsule endoscopy (WCE) applications because of its high implementation cost, so an integer version of the DCT, known as the iDCT, is used in this work. Several low-complexity iDCTs along with different color space converters (such as YUV, YEF, and YCgCo) were combined to obtain the desired compression level. Finally, a quantization stage is used in the proposed algorithm to achieve further compression. We have analyzed endoscopic images and, based on their properties, proposed three quantization matrix sets for the three color planes. The algorithms are verified at both the software level (using MATLAB) and the hardware level (using Verilog HDL coding). In the end, the performance of all the proposed schemes has been evaluated for optimal operation in the WCE application.
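    The thesis pairs low-complexity integer DCTs with color space converters such as YUV, YEF, and YCgCo. As an illustration of the multiplier-free, integer-only style such a pipeline favors, here is the well-known reversible YCoCg-R lifting transform; this is a sketch of the general technique, and the thesis's own converters may differ:

```python
def rgb_to_ycocg_r(r: int, g: int, b: int) -> tuple[int, int, int]:
    """Forward YCoCg-R lifting transform: integer-only, exactly
    reversible, and free of multiplications."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y: int, co: int, cg: int) -> tuple[int, int, int]:
    """Inverse: undo the lifting steps in reverse order."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

assert ycocg_r_to_rgb(*rgb_to_ycocg_r(200, 120, 40)) == (200, 120, 40)
```

    Lifting-based transforms like this map directly to adders and shifters in hardware, which is why they suit the area and power budget of a capsule endoscope.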

    Digital image compression


    The 1995 Science Information Management and Data Compression Workshop

    This document is the proceedings from the Science Information Management and Data Compression Workshop, which was held on October 26-27, 1995, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The Workshop explored promising computational approaches for handling the collection, ingestion, archival, and retrieval of large quantities of data in future Earth and space science missions. It consisted of fourteen presentations covering a range of information management and data compression approaches that are being, or have been, integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such an application. The Workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.

    Power Efficient Data Compression Hardware for Wearable and Wireless Biomedical Sensing Devices

    This thesis aims to verify a possible benefit that lossless data compression and reduction techniques can bring to a wearable and wireless biomedical device: system power saving. The wireless transceiver is one of the main contributors to the system power of a wireless biomedical sensing device, so reducing the data it transmits, at a minimum hardware cost, can help to save power. The thesis investigates the impact of data compression and reduction on the system power of a wearable and wireless biomedical device and seeks a compression technique that can achieve power saving for such a device. It first examines some widely used lossy and lossless data compression and reduction techniques for biomedical data, especially EEG data. It then introduces a novel lossless biomedical data compression technique designed for this research, called Log2 sub-band encoding, and evaluates its biomedical data compression against an existing 2-stage technique consisting of DPCM followed by Huffman encoding. The next part of the thesis explores the signal classification potential of Log2 sub-band encoding: some of the signal features extracted as a by-product during the encoding process could, with a proper method, be used to detect certain signal events such as epileptic seizures. The final section focuses on the power analysis of the hardware implementations of the two compression techniques, as well as the system power analysis. The results show that Log2 sub-band encoding is comparable, and in some respects superior, to the 2-stage technique in terms of data compression and power performance, and that the system power requirement of an EEG signal recorder with Log2 sub-band encoding implemented is significantly reduced.
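    As an illustration of the 2-stage baseline's first stage, here is a minimal first-order DPCM encoder/decoder; the Huffman stage and the Log2 sub-band encoding itself are not shown, and the names are illustrative:

```python
import numpy as np

def dpcm_encode(samples: np.ndarray) -> np.ndarray:
    """First-order DPCM: keep the first sample, then successive
    differences. Slowly varying biomedical signals such as EEG produce
    small residues that an entropy coder like Huffman packs tightly."""
    residues = np.empty_like(samples)
    residues[0] = samples[0]
    residues[1:] = samples[1:] - samples[:-1]
    return residues

def dpcm_decode(residues: np.ndarray) -> np.ndarray:
    """Invert DPCM with a cumulative sum."""
    return np.cumsum(residues)

# Round trip on a toy EEG-like sample stream
eeg = np.array([512, 515, 514, 510, 511], dtype=np.int32)
assert np.array_equal(dpcm_decode(dpcm_encode(eeg)), eeg)
```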

    Compression of Three-Dimensional Magnetic Resonance Brain Images.

    Losslessly compressing a medical image set with multiple slices is paramount in radiology, since all the information within a medical image set is crucial for both diagnosis and treatment. This dissertation presents a novel and efficient diagnostically lossless compression scheme (the predicted wavelet lossless compression method) for sets of magnetic resonance (MR) brain images, called 3-D MR brain images. The scheme provides 3-D MR brain images with progressive transmission and preliminary diagnosis capabilities. The spatial dependency in 3-D MR brain images is studied with histograms, entropy, correlation, and wavelet decomposition coefficients. This spatial dependency is used to design three kinds of predictors, i.e., intra-, inter-, and intra-and-inter-slice predictors, that exploit the correlation among neighboring pixels. Five integer wavelet transformations are applied to the prediction residues. The results show that intra-slice predictor 3, which uses an x-pixel and a y-pixel for prediction, combined with the 1st-level (2, 2) interpolating integer wavelet with run-length and arithmetic coding, achieves the best compression. An automated threshold-based background-noise removal technique is applied to remove the noise outside the diagnostic region; this preprocessing improves the compression ratio of the proposed technique by approximately 1.61 times. A feature-vector-based approach is used to determine the representative slice with the most discernible brain structures. This representative slice is progressively encoded by a lossless embedded zerotree wavelet method: a rough version of the slice is transmitted at a gradually increasing bit rate so the validity of the whole set can be determined early. The same feature-vector-based approach is also utilized to detect multiple sclerosis (MS) at an early stage. The compression technique, with its progressive and preliminary diagnosis capability, is tested with simulated and real 3-D MR brain image sets. The compression improvement versus the best commonly used lossless compression method (lossless JPEG) is 41.83% for simulated 3-D MR brain image sets and 71.42% for real 3-D MR brain image sets. The accuracy of the preliminary MS diagnosis is 66.67%, based on six studies with an expert radiologist's diagnosis.
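    The abstract does not spell out the predictor equations; the sketch below shows one common form of an intra-slice predictor that uses the left (x) and upper (y) neighbors, averaging them in the interior, with the residues then feeding the integer wavelet stage. The names and the border handling are assumptions for illustration:

```python
import numpy as np

def intra_slice_residues(slice_: np.ndarray) -> np.ndarray:
    """Predict each pixel from its left (x) and upper (y) neighbors and
    return the prediction residues. Borders fall back to the single
    available neighbor; the top-left pixel is kept verbatim."""
    s = slice_.astype(np.int32)
    pred = np.zeros_like(s)
    pred[1:, 0] = s[:-1, 0]                        # first column: upper pixel
    pred[0, 1:] = s[0, :-1]                        # first row: left pixel
    pred[1:, 1:] = (s[:-1, 1:] + s[1:, :-1]) // 2  # interior: average of both
    return s - pred

# Small residues on a toy slice; these would feed the integer wavelet stage
mr = np.array([[10, 12, 11], [9, 13, 12], [8, 10, 14]], dtype=np.uint8)
print(intra_slice_residues(mr))
```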