41 research outputs found

    Compressing computer network measurements using embedded zerotree wavelets

    Monitoring and measuring various metrics of high-data-rate, high-capacity networks produces a vast amount of information over a long period of time. Characteristics such as throughput and delay are derived from packet-level information and can be represented as time-series signals. This paper applies the Embedded Zerotree Wavelet (EZW) algorithm, proposed by Shapiro, to compress computer network delay and throughput measurements while preserving the quality of interesting features and controlling the quality level of the compressed signal. The quality characteristics examined are the mean square error (MSE), the standard deviation, the general visual quality (measured by PSNR), and the scaling behaviour. Experimental results evaluate the behaviour of the algorithm on delay and data-rate signals. Finally, a comparison of compression performance against the lossless tool bzip2 is presented.
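
    The evaluation above rests on standard distortion metrics. Below is a minimal sketch, using hypothetical stand-in signals rather than the paper's measurement data, of how MSE and PSNR compare an original time series against its decompressed version.

```python
# Minimal sketch of the quality metrics named in the abstract: MSE and PSNR
# between an original measurement signal and its reconstruction.
# The signals here are random placeholders, not the paper's data.
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    return float(np.mean((original - reconstructed) ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray) -> float:
    peak = np.max(np.abs(original))  # peak signal value of the time series
    return 10.0 * np.log10(peak ** 2 / mse(original, reconstructed))

delay = np.random.rand(1024)                       # stand-in delay signal
approx = delay + np.random.normal(0, 0.01, 1024)   # stand-in reconstruction
print(f"MSE={mse(delay, approx):.6f}, PSNR={psnr(delay, approx):.2f} dB")
```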

    Custom Lossless Compression and High-Quality Lossy Compression of White Blood Cell Microscopy Images for Display and Machine Learning Applications

    This master's thesis investigates both custom lossless compression and high-quality lossy compression of microscopy images of white blood cells produced by CellaVision's blood analysis systems. A number of different compression strategies have been developed and evaluated, all of which take advantage of the specific color filter array used in the camera sensors of the analysis systems. Lossless compression has been the main focus of this thesis. The lossless compression method, among those developed, that gave the best results is based on a statistical autoregressive model. A model is constructed for each color channel with external information from the other color channels. The differences between the statistical model's predictions and the original image are then Huffman coded. The method achieves an average bit rate of 3.0409 bits per pixel on the test set consisting of 604 images. The proposed lossy method is based on taking the difference between the original image and the image compressed with an ordinary lossy compression method, JPEG 2000. The JPEG 2000 image is saved, as well as the differences at the foreground (i.e. locations with cells), in order to keep the cells identical to those in the original image while allowing loss of information in the less important background. This method achieves a bit rate of 2.4451 bits per pixel, with a peak signal-to-noise ratio (PSNR) of 48.05 dB.
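
    The lossless pipeline described above is predict-then-encode. The sketch below illustrates the idea on a single channel, assuming a simple causal-neighbour average in place of the thesis's autoregressive model with cross-channel information; the residual entropy approximates the achievable bits per pixel.

```python
# Hedged sketch of the lossless idea above: predict each pixel of one colour
# channel from causal neighbours (here an average of the left and upper
# pixels; the thesis uses a statistical autoregressive model with
# cross-channel information), then entropy-code the residuals.
import numpy as np

def predict_residuals(channel: np.ndarray) -> np.ndarray:
    pred = np.zeros_like(channel, dtype=np.int32)
    pred[1:, :] += channel[:-1, :]       # upper neighbour
    pred[:, 1:] += channel[:, :-1]       # left neighbour
    pred[1:, 1:] //= 2                   # average where both neighbours exist
    return channel.astype(np.int32) - pred   # residuals to Huffman-code

channel = np.random.randint(0, 256, (64, 64))   # stand-in colour channel
res = predict_residuals(channel)
# The residual entropy approximates the achievable bits per pixel.
vals, counts = np.unique(res, return_counts=True)
p = counts / counts.sum()
print("residual entropy ≈ %.3f bits/pixel" % -(p * np.log2(p)).sum())
```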

    Compression Efficiency for Combining Different Embedded Image Compression Techniques with Huffman Encoding

    This thesis presents a technique for image compression that combines different embedded wavelet-based image coders with a Huffman encoder for further compression. Among the available algorithms for lossy image compression, Embedded Zerotree Wavelet (EZW), Set Partitioning in Hierarchical Trees (SPIHT), and Modified SPIHT are some of the most important techniques. The EZW algorithm is based on progressive encoding, compressing an image into a bit stream with increasing accuracy; progressive encoding is also called embedded encoding. The EZW encoder was originally designed to operate on 2-D images, but it can also be applied to signals of other dimensions. A main feature of the EZW algorithm is its capability of meeting an exact target bit rate with a corresponding rate-distortion function (RDF). SPIHT is an improved version of EZW and has become the de facto standard among embedded zerotree coders. It is a very efficient image compression algorithm based on the idea of coding groups of wavelet coefficients as zerotrees. Since the order in which the subsets are tested for significance is important in a practical implementation, the significance information is stored in three ordered lists: the list of insignificant sets (LIS), the list of insignificant pixels (LIP), and the list of significant pixels (LSP). The Modified SPIHT algorithm and the preprocessing techniques provide reconstructions of significantly better quality (both subjective and objective) at the decoder, with little additional computational complexity compared to the previous techniques. The proposed method can reduce redundancy to a certain extent. Simulation results show that these hybrid algorithms yield quite promising PSNR values at low bit rates.
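
    All three coders named above are built on the same bit-plane significance test: coefficients are compared against a threshold that halves on each pass, which is what makes the bit stream embedded and lets it be cut at any target rate. A minimal sketch of that mechanism follows; it uses stand-in coefficients and is an illustration only, not a full EZW/SPIHT codec.

```python
# Minimal sketch of the significance test at the heart of EZW/SPIHT:
# coefficients are compared against a threshold that halves each pass,
# so the bit stream can be truncated at any point (embedded coding).
import numpy as np

def significance_passes(coeffs: np.ndarray, n_passes: int = 4):
    threshold = 2 ** int(np.floor(np.log2(np.abs(coeffs).max())))
    for _ in range(n_passes):
        significant = np.abs(coeffs) >= threshold   # would move LIP -> LSP
        yield threshold, int(significant.sum())
        threshold //= 2                             # refine next bit plane

coeffs = np.random.laplace(0, 10, (8, 8))           # stand-in wavelet coeffs
for t, n in significance_passes(coeffs):
    print(f"threshold {t}: {n} significant coefficients")
```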

    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements, and the constraints imposed by the data collection, transmission, distribution and archival systems.

    Compression of Three-Dimensional Magnetic Resonance Brain Images.

    Losslessly compressing a medical image set with multiple slices is paramount in radiology, since all the information within a medical image set is crucial for both diagnosis and treatment. This dissertation presents a novel and efficient diagnostically lossless compression scheme (the predicted wavelet lossless compression method) for sets of magnetic resonance (MR) brain images, which are called 3-D MR brain images. This compression scheme provides 3-D MR brain images with progressive and preliminary diagnosis capabilities. The spatial dependency in 3-D MR brain images is studied with histograms, entropy, correlation, and wavelet decomposition coefficients. This spatial dependency is utilized to design three kinds of predictors, i.e., intra-, inter-, and intra-and-inter-slice predictors, that use the correlation among neighboring pixels. Five integer wavelet transformations are applied to the prediction residues. Results show that intra-slice predictor 3, which uses an x-pixel and a y-pixel for prediction, combined with the 1st-level (2, 2) interpolating integer wavelet and run-length and arithmetic coding, achieves the best compression. An automated, threshold-based background noise removal technique is applied to remove the noise outside the diagnostic region. This preprocessing method improves the compression ratio of the proposed compression technique by approximately 1.61 times. A feature-vector-based approach is used to determine the representative slice with the most discernible brain structures. This representative slice is progressively encoded by a lossless embedded zerotree wavelet method. A rough version of this representative slice is gradually transmitted at an increasing bit rate so the validity of the whole set can be determined early. This feature-vector-based approach is also utilized to detect multiple sclerosis (MS) at an early stage. Our compression technique with the progressive and preliminary diagnosis capability is tested with simulated and real 3-D MR brain image sets. The compression improvement versus the best commonly used lossless compression method (lossless JPEG) is 41.83% for simulated 3-D MR brain image sets and 71.42% for real 3-D MR brain image sets. The accuracy of the preliminary MS diagnosis is 66.67%, based on six studies with an expert radiologist's diagnosis.
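
    As a sketch of the intra-slice prediction step described above: each pixel is predicted from its x-neighbour and y-neighbour, and the residue is what would be handed to the integer wavelet transform and entropy coder. The equal-weight average below is an assumption for illustration, not the dissertation's exact predictor.

```python
# Hedged sketch of an intra-slice predictor: predict each pixel from its
# x-neighbour and y-neighbour, producing residues for the integer wavelet
# transform and entropy coding stages. Predictor weights are assumptions.
import numpy as np

def intra_slice_residue(slice_2d: np.ndarray) -> np.ndarray:
    s = slice_2d.astype(np.int32)
    pred = np.zeros_like(s)
    pred[1:, 1:] = (s[:-1, 1:] + s[1:, :-1]) // 2  # y-pixel and x-pixel
    pred[0, 1:] = s[0, :-1]                        # first row: x only
    pred[1:, 0] = s[:-1, 0]                        # first column: y only
    return s - pred                                # prediction residues

mr_slice = np.random.randint(0, 4096, (128, 128))  # stand-in 12-bit MR slice
res = intra_slice_residue(mr_slice)
print("residue range:", res.min(), res.max())
```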

    Progressively communicating rich telemetry from autonomous underwater vehicles via relays

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2012. As analysis of imagery and environmental data plays a greater role in mission construction and execution, there is an increasing need for autonomous marine vehicles to transmit this data to the surface. Without access to the data acquired by a vehicle, surface operators cannot fully understand the state of the mission. Communicating imagery and high-resolution sensor readings to surface observers remains a significant challenge – as a result, current telemetry from free-roaming autonomous marine vehicles remains limited to ‘heartbeat’ status messages, with minimal scientific data available until after recovery. Increasing the challenge, long-distance communication may require relaying data across multiple acoustic hops between vehicles, yet fixed infrastructure is not always appropriate or possible. In this thesis I present an analysis of the unique considerations facing telemetry systems for free-roaming Autonomous Underwater Vehicles (AUVs) used in exploration. These considerations include high-cost vehicle nodes with persistent storage and significant computation capabilities, combined with human surface operators monitoring each node. I then propose mechanisms for interactive, progressive communication of data across multiple acoustic hops. These mechanisms include wavelet-based embedded coding methods, and a novel image compression scheme based on texture classification and synthesis. The specific characteristics of underwater communication channels, including high latency, intermittent communication, the lack of instantaneous end-to-end connectivity, and a broadcast medium, inform these proposals. Human feedback is incorporated by allowing operators to identify segments of data that warrant higher-quality refinement, ensuring efficient use of limited throughput. I then analyze the performance of these mechanisms relative to current practices. Finally, I present CAPTURE, a telemetry architecture that builds on this analysis. CAPTURE draws on advances in compression and delay-tolerant networking to enable progressive transmission of scientific data, including imagery, across multiple acoustic hops. In concert with a physical layer, CAPTURE provides an end-to-end networking solution for communicating science data from autonomous marine vehicles. Automatically selected imagery, sonar, and time-series sensor data are progressively transmitted across multiple hops to surface operators. Human operators can request arbitrarily high-quality refinement of any resource, up to an error-free reconstruction. The components of this system are then demonstrated through three field trials in diverse environments on SeaBED, OceanServer and Bluefin AUVs, each in different software architectures. Thanks to the National Science Foundation and the National Oceanic and Atmospheric Administration for their funding of my education and this work.
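
    A purely hypothetical sketch (not the CAPTURE implementation, whose API is not given here) of the operator-in-the-loop progressive scheme described above: coarse previews are queued ahead of refinements, and an operator request promotes a resource so its next refinement level is sent first over the constrained acoustic link.

```python
# Hypothetical sketch of operator-driven progressive telemetry: segments are
# queued by priority, coarse versions go out first, and an operator request
# promotes a segment so its refinement is transmitted next.
import heapq

queue = []   # entries: (priority, seq, segment_id, refinement_level)
seq = 0

def enqueue(segment_id: str, level: int, priority: float):
    global seq
    heapq.heappush(queue, (priority, seq, segment_id, level))
    seq += 1

enqueue("image_042", level=0, priority=1.0)   # coarse preview first
enqueue("ctd_series", level=0, priority=2.0)
enqueue("image_042", level=1, priority=5.0)   # finer bit planes later

def operator_request(segment_id: str):
    """Operator flags a segment as interesting: promote its refinements."""
    global queue
    queue = [(0.0 if s == segment_id else p, q, s, l)
             for p, q, s, l in queue]
    heapq.heapify(queue)

operator_request("image_042")
while queue:
    _, _, segment, level = heapq.heappop(queue)
    print(f"transmit {segment} refinement level {level}")
```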

    The Applications of Discrete Wavelet Transform in Image Processing: A Review

    This paper reviews recently published work on applying wavelets to image processing based on multiresolution analysis. The wavelet transform is reviewed in detail, including the wavelet function, the integral wavelet transform, the discrete wavelet transform, the fast wavelet transform, DWT properties, and DWT advantages. After reviewing the basics of wavelet transform theory, various wavelet and multiresolution-analysis applications are reviewed, including image compression, image denoising, image enhancement, and image watermarking. In addition, the concept and theory of the quaternion wavelet transform are presented as a direction for future progress in wavelet transform applications. The aim of this paper is to provide a wide-ranging review of the approaches available for wavelet-based image processing applications. It will be beneficial for scholars implementing effective image processing applications.
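
    As a concrete anchor for the transform this review centres on, here is a minimal single-level 2-D DWT sketch. The use of the PyWavelets (pywt) library and the 'haar' wavelet are assumptions of this example; the review itself is library-agnostic. Zeroing small detail coefficients hints at why the DWT supports compression.

```python
# Minimal sketch of a single-level 2-D discrete wavelet transform using
# PyWavelets (an assumption of this example, not named in the review).
import numpy as np
import pywt

image = np.random.rand(256, 256)              # stand-in grayscale image
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")   # approximation + detail bands
print(cA.shape, cH.shape)                     # each band is 128 x 128

# Discarding small detail coefficients is the basis of wavelet compression:
cH[np.abs(cH) < 0.1] = 0
reconstructed = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print("max error:", np.abs(image - reconstructed).max())
```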

    High ratio wavelet video compression through real-time rate-distortion estimation.

    Thesis (M.Sc.Eng.) - University of Natal, Durban, 2003. The success of the wavelet transform in the compression of still images has prompted an expanding effort to apply this transform to the compression of video. Most existing video compression methods incorporate techniques from still image compression, such techniques being abundant, well defined and successful. This dissertation commences with a thorough review and comparison of wavelet still image compression techniques. Thereafter an examination of wavelet video compression techniques is presented. Currently, the most effective video compression systems are based on the DCT framework, thus a comparison between these and the wavelet techniques is also given. Based on this review, this dissertation then presents a new, low-complexity, wavelet video compression scheme. Noting from a complexity study that the generation of temporally decorrelated residual frames represents a significant computational burden, this scheme uses the simplest such technique: difference frames. In the case of local motion, these difference frames exhibit strong spatial clustering of significant coefficients. A simple spatial syntax is created by splitting the difference frame into tiles. Advantage of the spatial clustering may then be taken by adaptive bit allocation between the tiles. This is the central idea of the method. In order to minimize the total distortion of the frame, the scheme uses the new ρ-domain rate-distortion estimation scheme with global numerical optimization to predict the optimal distribution of bits between tiles. Thereafter each tile is independently wavelet transformed and compressed using the SPIHT technique. Throughout the design process computational efficiency was the design imperative, leading to a real-time, software-only video compression scheme. The scheme is finally compared to both the current video compression standards and the leading wavelet schemes from the literature in terms of computational complexity and visual quality. It is found that for local-motion scenes the proposed algorithm executes approximately an order of magnitude faster than these methods, and produces output of similar quality. The algorithm is found to be suitable for implementation in mobile and embedded devices due to its moderate memory and computational requirements.
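
    The central idea above (difference frame, tiling, adaptive bit allocation) can be sketched compactly. In the sketch below, allocation proportional to tile energy stands in for the dissertation's ρ-domain rate-distortion estimation, and the tile size and bit budget are arbitrary assumptions; each tile's budget would then drive an independent SPIHT pass.

```python
# Hedged sketch of the scheme's central idea: form a difference frame,
# split it into tiles, and allocate the bit budget adaptively across tiles.
# Energy-proportional allocation is a stand-in for rho-domain estimation.
import numpy as np

def allocate_bits(prev, curr, tile=32, budget_bits=20000):
    diff = curr.astype(np.int32) - prev.astype(np.int32)  # difference frame
    h, w = diff.shape
    tiles = [diff[r:r + tile, c:c + tile]
             for r in range(0, h, tile) for c in range(0, w, tile)]
    energy = np.array([float((t.astype(np.int64) ** 2).sum()) for t in tiles])
    if energy.sum() > 0:
        weights = energy / energy.sum()       # more bits where motion is
    else:
        weights = np.full(len(tiles), 1 / len(tiles))
    return (weights * budget_bits).astype(int)  # per-tile budget for SPIHT

prev = np.random.randint(0, 256, (128, 128))
curr = prev.copy()
curr[40:70, 40:70] += 30                        # simulated local motion
print(allocate_bits(prev, curr)[:8])
```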

    Rate scalable image compression in the wavelet domain

    This thesis explores image compression in the wavelet transform domain, considering progressive compression based on bit-plane coding. The first part of the thesis investigates the scalar quantisation technique for multidimensional images such as colour and multispectral images. Embedded coders such as SPIHT and SPECK are known to be very simple and efficient algorithms for compression in the wavelet domain. However, these algorithms require the use of lists to keep track of partitioning processes, and such lists involve a high memory requirement during the encoding process. A listless approach has been proposed for multispectral image compression in order to reduce the working memory required. The earlier listless coders are extended into a three-dimensional coder so that redundancy in the spectral domain can be exploited. The listless implementation requires a fixed memory of 4 bits per pixel to represent the state of each transformed coefficient, and the state is updated during coding based on tests of significance. Spectral redundancies are exploited to improve the performance of the coder by modifying its scanning rules and the initial marker/state. For colour images, this is done by conducting a joint significance test for the chrominance planes; in this way, the similarities between the chrominance planes can be exploited during the coding process. Fixed-memory listless methods that exploit spectral redundancies enable efficient coding while maintaining rate scalability and progressive transmission. The second part of the thesis addresses image compression using directional filters in the wavelet domain. A directional filter is expected to improve the retention of edge and curve information during compression. Current implementations of hybrid wavelet and directional (HWD) filters improve the contour representation of compressed images, but suffer from the pseudo-Gibbs phenomenon in the smooth regions of the images. A different approach to directional filters in the wavelet transform is proposed to remove such artifacts while maintaining the ability to preserve contours and texture. Implementation with grayscale images shows improvements in terms of distortion rates and structural similarity, especially in images with contours. The proposed transform manages to preserve the directional capability without pseudo-Gibbs artifacts and at the same time reduces the complexity of the wavelet transform with directional filters. Further investigation with colour images shows that the transform is able to preserve texture and curves.
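
    A minimal sketch of the listless bookkeeping described above: in place of the LIP/LIS/LSP lists, a fixed per-coefficient state (4 bits in the thesis; a uint8 array here for simplicity) is updated in place after each significance test. The particular state codes are assumptions for illustration.

```python
# Minimal sketch of listless significance bookkeeping: a fixed state array
# replaces the LIP/LIS/LSP lists and is updated in place each pass.
# The state codes below are illustrative assumptions.
import numpy as np

INSIGNIFICANT, NEWLY_SIG, SIGNIFICANT = 0, 1, 2   # example 4-bit state codes

def listless_pass(coeffs: np.ndarray, state: np.ndarray, threshold: float):
    newly = (state == INSIGNIFICANT) & (np.abs(coeffs) >= threshold)
    state[state == NEWLY_SIG] = SIGNIFICANT   # refinement bits from now on
    state[newly] = NEWLY_SIG                  # emit sign + position bits
    return state

coeffs = np.random.laplace(0, 8, (16, 16))    # stand-in wavelet coefficients
state = np.zeros(coeffs.shape, dtype=np.uint8)  # fixed memory budget per pixel
for t in (16.0, 8.0, 4.0):
    state = listless_pass(coeffs, state, t)
    print(f"t={t}: {(state != INSIGNIFICANT).sum()} significant so far")
```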

    Data compression in smart distribution systems via singular value decomposition

    Electrical distribution systems have been experiencing many changes in recent times. Advances in metering system infrastructure and the deployment of a large number of smart meters in the grid will produce a large volume of data that will be required for many different applications. Despite the significant investments taking place in the communications infrastructure, it remains a bottleneck for the implementation of some applications. This paper presents a methodology for lossy data compression in smart distribution systems using the singular value decomposition technique. The proposed method is capable of significantly reducing the volume of data to be transmitted through the communications network while accurately reconstructing the original data. These features are illustrated by results from tests carried out using real data collected from metering devices at many different substations.
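
    A hedged sketch of the SVD-based compression described above: keep only the k largest singular values of a matrix of metering data and reconstruct from the truncated factors. The matrix layout (rows as meters or channels, columns as time samples), the choice of k, and the random stand-in data are assumptions of this example.

```python
# Hedged sketch of lossy compression via truncated SVD: store only the
# leading k singular triplets and reconstruct the measurement matrix.
import numpy as np

def svd_compress(data: np.ndarray, k: int):
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]          # compressed representation

def svd_reconstruct(U, s, Vt) -> np.ndarray:
    return (U * s) @ Vt                        # rank-k approximation

data = np.random.rand(96, 1440)                # stand-in metering matrix
U, s, Vt = svd_compress(data, k=10)
approx = svd_reconstruct(U, s, Vt)
ratio = data.size / (U.size + s.size + Vt.size)  # storage reduction factor
err = np.linalg.norm(data - approx) / np.linalg.norm(data)
print(f"compression ratio ≈ {ratio:.1f}x, relative error {err:.3f}")
```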