
    The optimal sequence compression

    This paper presents an optimal compression scheme for sequences with undefined values. Suppose a boolean sequence V of length N has (N − m) undefined and m defined positions. In the general case the code length cannot be less than m, otherwise at least two distinct sequences would share the same code. We present a coding algorithm that generates codes of length almost m, i.e. almost equal to the lower bound. The paper also presents the decoding circuit. The circuit has low complexity, which depends on the inverse density of defined values D(V) = N/m. The decoding circuit consists of RAM and random logic and performs sequential decoding. The total RAM size is proportional to log(D(V)), and the number of random logic cells is proportional to log log(D(V)) · (log log log(D(V)))². The decoding circuit therefore remains small even for very low density sequences, and the decoder complexity does not depend on the sequence length at all
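    The lower bound and the complexity claims are easiest to read with concrete numbers. Below is a minimal Python sketch, not the paper's algorithm: it assumes the decoder already knows which positions are defined (the abstract does not say how positions are communicated), so a code of exactly m bits suffices, and the helper merely evaluates the quoted asymptotic bounds for illustrative values of N and m.

```python
import math

def encode(seq):
    """seq uses None for undefined positions; the code is the m defined bits."""
    return [b for b in seq if b is not None]

def decode(code, defined_positions, n):
    """Rebuild a length-n sequence; undefined positions may hold anything."""
    out = [0] * n                      # arbitrary filler for undefined bits
    for pos, bit in zip(defined_positions, code):
        out[pos] = bit
    return out

def complexity_estimates(n, m):
    """Evaluate the abstract's asymptotic bounds, up to constant factors."""
    d = n / m                          # inverse density D(V) = N/m
    ram = math.log2(d)                 # RAM size ~ log D(V)
    # cells ~ loglog D(V) * (logloglog D(V))^2
    cells = math.log2(math.log2(d)) * math.log2(math.log2(math.log2(d))) ** 2
    return d, ram, cells

seq = [1, None, None, 0, None, 1, None, None]   # N = 8, m = 3
print(encode(seq))                              # [1, 0, 1] -> 3 bits, the lower bound
print(complexity_estimates(10**6, 10))          # small circuit even at density 1e-5
```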

    Data compression for satellite images

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes
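    The abstract does not spell out double delta coding; a common reading is second-order differencing, where the differences of successive first differences are coded. The sketch below, under that assumption, shows that the transform is exactly invertible; the paper's quantizer and background-skipping rule are not reproduced here.

```python
import numpy as np

def double_delta_encode(x):
    d1 = np.diff(x, prepend=0)   # first differences (d1[0] = x[0])
    d2 = np.diff(d1, prepend=0)  # second differences ("delta of deltas")
    return d2

def double_delta_decode(d2):
    d1 = np.cumsum(d2)           # undo the second differencing
    return np.cumsum(d1)         # undo the first differencing

# Smooth scan lines yield long runs of zeros in d2, which entropy-code well.
row = np.array([10, 12, 14, 16, 16, 16, 20], dtype=np.int64)
assert np.array_equal(double_delta_decode(double_delta_encode(row)), row)
```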

    An innovative two-stage data compression scheme using adaptive block merging technique

    Test data volume has increased enormously owing to the rising on-chip complexity of integrated circuits, which in turn increases test data transportation time and tester memory requirements. Non-correlated test bits also aggravate the test power problem. This paper presents a two-stage block-merging-based test data minimization scheme which reduces the test bits, test time and test power. The test data is partitioned into blocks of fixed size which are compressed using a two-stage encoding technique. In stage one, successive compatible blocks are merged to retain a representative block. In stage two, the retained pattern block is further encoded based on which of ten different subcases holds between the two sub-blocks formed by splitting the retained pattern block into halves. Non-compatible blocks are also split into two sub-blocks and, where possible, encoded using fewer bits. A decompression architecture to retrieve the original test data is presented. Simulation results obtained for different ISCAS'89 benchmark circuits reflect the scheme's effectiveness in achieving better compression
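    As a rough illustration of stage one only, the sketch below merges runs of successive compatible blocks over the alphabet {0, 1, X} (X = don't-care), keeping one representative per run. The paper's exact compatibility rule and the ten stage-two subcases are not detailed in the abstract, so this is an assumed reconstruction, not the paper's encoding.

```python
def compatible(a, b):
    """Blocks are compatible when every position agrees or one side is 'X'."""
    return all(x == y or x == 'X' or y == 'X' for x, y in zip(a, b))

def merge(a, b):
    """Representative block: defined bits win over 'X'."""
    return ''.join(y if x == 'X' else x for x, y in zip(a, b))

def stage_one(blocks):
    """Merge runs of successive compatible blocks; emit (representative, run_len)."""
    out = []
    rep, run = blocks[0], 1
    for blk in blocks[1:]:
        if compatible(rep, blk):
            rep, run = merge(rep, blk), run + 1
        else:
            out.append((rep, run))
            rep, run = blk, 1
    out.append((rep, run))
    return out

blocks = ['1X0X', '1100', '011X', '0011']
print(stage_one(blocks))   # [('1100', 2), ('011X', 1), ('0011', 1)]
```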

    In-Suit Doppler Technology Assessment

    The objective of this program was to perform a technology assessment survey of non-invasive air embolism detection utilizing Doppler ultrasound methodologies. The primary application of this technology will be a continuous monitor for astronauts performing extravehicular activities (EVAs). The technology assessment was to include: (1) development of a full understanding of all relevant background research; and (2) a survey of the medical ultrasound marketplace for expertise, information, and technical capability relevant to this development. Upon completion of the assessment, LSR was to provide an overview of technological approaches and R&D/manufacturing organizations

    Hybrid Region-based Image Compression Scheme for Mammograms and Ultrasound Images

    The need for transmission and archiving of mammograms and ultrasound images has dramatically increased in tele-healthcare applications. Such images require a large amount of storage space, which affects transmission speed, so an effective compression scheme is essential. Compression of these images, in general, faces a great challenge in compromising between a higher compression ratio and the relevant diagnostic information. Of the many compression schemes studied, lossless JPEG-LS and lossy SPIHT are found to be the most efficient ones. JPEG-LS and SPIHT were chosen based on a comprehensive experimental study carried out on a large number of mammograms and ultrasound images of different sizes and textures. The lossless schemes are evaluated based on compression ratio and compression speed. The distortion in image quality introduced by the lossy methods is evaluated based on objective criteria using Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR). It is found that lossless compression can achieve a modest compression ratio of 2:1 to 4:1. Lossy compression schemes can achieve higher compression ratios than lossless ones, but at the price of image quality, which may impede diagnostic conclusions. In this work, a new compression approach called the Hybrid Region-based Image Compression Scheme (HYRICS) has been proposed for mammograms and ultrasound images to achieve higher compression ratios without compromising diagnostic quality. In HYRICS, a modification of JPEG-LS is introduced to encode the arbitrarily shaped disease-affected regions. Shape-adaptive SPIHT is then applied to the remaining non-region-of-interest areas. The results clearly show that this hybrid strategy can yield high compression ratios with perfect reconstruction of diagnostically relevant regions, achieving high-speed transmission and lower storage requirements. For the sample images considered in our experiments, the compression ratio increases approximately ten times, although this increase depends upon the size of the region of interest chosen. It is also found that pre-processing (contrast stretching) of the region of interest improves compression ratios on mammograms but not on ultrasound images
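    A minimal sketch of the region-based split, assuming a binary ROI mask: ROI pixels are kept bit-exact (standing in here for the modified JPEG-LS) while the background is coarsely quantized (standing in for shape-adaptive SPIHT). The actual HYRICS bitstream and wavelet machinery are not reproduced.

```python
import numpy as np

def hybrid_compress(img, roi_mask, bg_levels=16):
    roi_pixels = img[roi_mask]                 # stored losslessly
    step = 256 // bg_levels
    background = (img // step) * step          # heavy uniform quantization
    return roi_pixels, background.astype(img.dtype)

def hybrid_decompress(roi_pixels, background, roi_mask):
    out = background.copy()
    out[roi_mask] = roi_pixels                 # perfect ROI reconstruction
    return out

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
mask = np.zeros_like(img, dtype=bool)
mask[20:40, 20:40] = True                      # hypothetical lesion region
roi, bg = hybrid_compress(img, mask)
rec = hybrid_decompress(roi, bg, mask)
assert np.array_equal(rec[mask], img[mask])    # diagnostic region is lossless
```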

    Optimum Implementation of Compound Compression of a Computer Screen for Real-Time Transmission in Low Network Bandwidth Environments

    Remote working has become increasingly prevalent in recent times. A large part of remote working involves sharing computer screens between servers and clients. The image content presented when sharing computer screens consists of both natural camera-captured image data and computer-generated graphics and text. The attributes of natural camera-captured image data differ greatly from those of computer-generated image data. An image containing a mixture of both is known as a compound image. The research presented in this thesis focuses on the challenge of constructing a compound compression strategy that applies the 'best fit' compression algorithm to the mixed content found in a compound image. The research also involves analysis and classification of the types of data a given compound image may contain. While researching optimal types of compression, consideration is given to the computational overhead of a given algorithm, because the research is being developed for real-time systems such as cloud computing services, where latency has a detrimental impact on end-user experience. Previous and current state-of-the-art video codecs have been researched, along with many of the most recent publications from academia, to design and implement a novel low-complexity compound compression algorithm that will be suitable for real-time transmission. The compound compression algorithm will utilise a mixture of lossless and lossy compression algorithms with parameters that can be used to control its performance. An objective image quality assessment is needed to determine whether the proposed algorithm can produce an acceptable quality image after processing. Traditional metrics such as Peak Signal to Noise Ratio will be used, along with a more modern approach designed specifically for compound images, known as the Structural Similarity Index, to define the quality of the decompressed image. Finally, the compression strategy will be tested on a set of generated compound images. Using open source software, the same images will be compressed with previous and current state-of-the-art video codecs to compare the three main metrics: compression ratio, computational complexity and objective image quality
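    One common heuristic for the block classification step, offered only as an assumed illustration (the thesis's actual classifier is not described in the abstract): computer-generated text and graphics blocks tend to contain few distinct colours, while camera-captured blocks contain many. Blocks flagged synthetic would be routed to a lossless coder and the rest to a lossy one; the 16×16 block size and the threshold of 32 colours are illustrative choices.

```python
import numpy as np

BLOCK = 16
COLOUR_THRESHOLD = 32

def classify_blocks(img):
    """Return a (rows, cols) map: True = synthetic block, False = natural."""
    h, w = img.shape[:2]
    rows, cols = h // BLOCK, w // BLOCK
    synthetic = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            blk = img[r*BLOCK:(r+1)*BLOCK, c*BLOCK:(c+1)*BLOCK]
            colours = np.unique(blk.reshape(-1, blk.shape[-1]), axis=0)
            synthetic[r, c] = len(colours) <= COLOUR_THRESHOLD
    return synthetic

screen = np.zeros((64, 64, 3), dtype=np.uint8)            # flat UI region
screen[:, 32:] = np.random.randint(0, 256, (64, 32, 3))   # noisy "photo" half
print(classify_blocks(screen))   # left-half blocks True, right-half mostly False
```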