    An Efficient Test Vector Compression Technique Based on Block Merging

    In this paper, we present a new test data compression technique based on block merging. The technique capitalizes on the fact that many consecutive blocks of the test data can be merged together. Compression is achieved by storing the merged block and the number of blocks merged. It also takes advantage of cases in which the merged block can be filled with all 0’s or all 1’s. Test data decompression is performed on-chip using simple circuitry that repeats the merged block the required number of times. The decompression circuitry has the advantage of being test-data independent. Experimental results on benchmark circuits demonstrate the effectiveness of the proposed technique compared to previous approaches.
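
    To make the encoding concrete, here is a minimal sketch of the block-merging idea in Python. The block size, the compatibility rule for don't-care ('x') bits, and the (merged block, run length) output format are illustrative assumptions, not the paper's exact hardware encoding.

        # Two blocks merge when every bit pair agrees or one bit is a
        # don't-care; remaining 'x' bits in a merged block may then be
        # filled with all 0's or all 1's, as the abstract notes.
        def compatible(a, b):
            return all(p == q or 'x' in (p, q) for p, q in zip(a, b))

        def merge(a, b):
            return ''.join(q if p == 'x' else p for p, q in zip(a, b))

        def compress(blocks):
            """Encode a block stream as (merged_block, run_length) pairs."""
            out, current, count = [], blocks[0], 1
            for blk in blocks[1:]:
                if compatible(current, blk):
                    current, count = merge(current, blk), count + 1
                else:
                    out.append((current, count))
                    current, count = blk, 1
            out.append((current, count))
            return out

        def decompress(pairs):
            """On-chip decompression simply repeats each merged block."""
            return [blk for blk, n in pairs for _ in range(n)]

        code = compress(['10x1', '1001', 'xx01', '0000', '0x00'])
        print(code)               # [('1001', 3), ('0000', 2)]
        print(decompress(code))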

    Multi-loop quality scalability based on high efficiency video coding

    Scalable video coding performance depends largely on the underlying single-layer coding efficiency. In this paper, quality scalability is evaluated on the basis of the new High Efficiency Video Coding (HEVC) standard under development. To enable the evaluation, a multi-loop codec has been designed using HEVC. Adaptive inter-layer prediction is realized by including the lower layer in the reference list of the enhancement layer. As a result, adaptive scalability at the frame level and at the prediction-unit level is accomplished. Compared to single-layer coding, a 19.4% Bjontegaard Delta bitrate increase is measured over approximately a 30 dB to 40 dB PSNR range. Compared to simulcast, a 20.6% bitrate reduction can be achieved. Under equivalent conditions, the presented technique achieves a 43.8% bitrate reduction over the Coarse Grain Scalability mode of SVC, the scalable extension of H.264/AVC.
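
    The adaptive inter-layer prediction can be pictured as a per-block choice between a temporal reference and the co-located base-layer reconstruction, since the base-layer picture sits in the enhancement layer's reference list. The toy Python sketch below uses an SAD criterion and an 8x8 block purely for illustration; it is not the actual codec logic.

        import numpy as np

        def sad(a, b):
            return np.abs(a.astype(int) - b.astype(int)).sum()

        def choose_reference(block, temporal_ref, base_layer_ref):
            # Pick the cheaper predictor, mimicking a base-layer picture
            # placed in the enhancement layer's reference list.
            if sad(block, temporal_ref) <= sad(block, base_layer_ref):
                return 'temporal'
            return 'inter-layer'

        rng = np.random.default_rng(0)
        cur = rng.integers(0, 256, (8, 8))
        ref_t = cur + rng.integers(-2, 3, (8, 8))    # close temporal match
        ref_b = rng.integers(0, 256, (8, 8))         # coarse base-layer recon
        print(choose_reference(cur, ref_t, ref_b))   # 'temporal' here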

    Handling Massive N-Gram Datasets Efficiently

    This paper deals with two fundamental problems in handling large n-gram language models: indexing, that is, compressing the n-gram strings and associated satellite data without compromising their retrieval speed; and estimation, that is, computing the probability distribution of the strings from a large textual source. Regarding indexing, we describe compressed, exact, and lossless data structures that achieve, at the same time, high space reductions and no time degradation with respect to state-of-the-art solutions and related software packages. In particular, we present a compressed trie data structure in which each word following a context of fixed length k, i.e., its preceding k words, is encoded as an integer whose value is proportional to the number of words that follow that context. Since the number of words following a given context is typically very small in natural languages, we lower the space of representation to compression levels never achieved before. Despite the significant savings in space, our technique introduces a negligible penalty at query time. Regarding estimation, we present a novel algorithm for estimating modified Kneser-Ney language models, which have emerged as the de facto choice for language modeling in both academia and industry thanks to their relatively low perplexity. Estimating such models from large textual sources poses the challenge of devising algorithms that make parsimonious use of the disk. The state-of-the-art algorithm uses three sorting steps in external memory; we show an improved construction that requires only one sorting step by exploiting the properties of the extracted n-gram strings. With an extensive experimental analysis performed on billions of n-grams, we show an average improvement of 4.5x in the total running time over the state-of-the-art approach. Published in ACM Transactions on Information Systems (TOIS), February 2019, Article No. 2.
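
    The trie encoding can be illustrated in a few lines of Python: a word following a context of k preceding words is stored as its rank among that context's observed successors, so the stored integers are bounded by the (typically tiny) successor count rather than by the vocabulary size. The toy corpus and k = 1 are assumptions for illustration, not the paper's data structure.

        from collections import defaultdict

        text = "the cat sat on the mat the cat ran".split()
        k = 1

        # Per context, collect the sorted set of successor words.
        successors = defaultdict(set)
        for i in range(len(text) - k):
            successors[tuple(text[i:i + k])].add(text[i + k])
        rank = {ctx: {w: r for r, w in enumerate(sorted(ws))}
                for ctx, ws in successors.items()}

        # Each (context, word) pair is now encoded by a small rank;
        # decoding inverts the per-context table.
        print(rank[('the',)])    # {'cat': 0, 'mat': 1} -- fits in 1 bit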

    Evolutionary Approaches to Minimizing Network Coding Resources

    We wish to minimize the resources used for network coding while achieving the desired throughput in a multicast scenario. We employ evolutionary approaches, based on a genetic algorithm, that sidestep the computational cost of this NP-hard problem. Our experiments show substantial improvements over the sub-optimal solutions of prior methods. Our new algorithms improve on our previously proposed algorithm in three ways. First, whereas the previous algorithm can be applied only to acyclic networks, our new method also works on networks with cycles. Second, we enrich the set of components used in the genetic algorithm, which improves performance. Third, we develop a novel distributed framework. Combining distributed random network coding with our distributed optimization yields a network coding protocol in which the resources used for coding are optimized in the setup phase by running our evolutionary algorithm at each node of the network. We demonstrate the effectiveness of our approach through simulations on a number of different sets of network topologies. Accepted to the 26th Annual IEEE Conference on Computer Communications (INFOCOM 2007); 9 pages, 6 figures.
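
    A bare-bones version of the genetic search might look like the Python skeleton below. The chromosome is a bit vector selecting which candidate nodes perform coding, and achieves_target_rate is a hypothetical placeholder for the graph-specific feasibility check (that the desired multicast throughput remains attainable); the operators and parameters are illustrative, not the paper's.

        import random

        N_NODES, POP, GENS = 12, 30, 50

        def achieves_target_rate(bits):
            # Hypothetical stand-in for the real feasibility test;
            # here, pretend coding at node 0 is indispensable.
            return bits[0] == 1

        def fitness(bits):
            # Fewer coding nodes is better; infeasibility is penalized.
            return sum(bits) if achieves_target_rate(bits) else N_NODES + 1

        pop = [[random.randint(0, 1) for _ in range(N_NODES)]
               for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=fitness)
            survivors = pop[:POP // 2]
            children = []
            while len(survivors) + len(children) < POP:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, N_NODES)      # one-point crossover
                child = a[:cut] + b[cut:]
                child[random.randrange(N_NODES)] ^= 1   # point mutation
                children.append(child)
            pop = survivors + children
        best = min(pop, key=fitness)
        print(best, fitness(best))   # converges toward coding at node 0 only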

    Low Bit Rate Video Coding

    Very low bit-rate (VLBR) coding broadly encompasses video coding that mandates a temporal frequency of 10 frames per second (fps) or less. Object-based video coding is a very promising option for VLBR coding, though the problems of object identification and segmentation need to be addressed by further research. Pattern-based coding is a simplified object segmentation process that is computationally much less expensive, though a real-time, content-dependent pattern generation approach would certainly improve its acceptance for VLBR coding. In this paper, a pattern-based, very low bit-rate video coding algorithm that focuses on moving regions is presented. The aim is to improve coding performance, giving better subjective and objective quality than conventional coding methods at the same bit rate. Eight patterns are pre-defined to approximate the moving regions in a macroblock. The patterns are then used for motion estimation and compensation to reduce the prediction errors. Furthermore, to increase compression performance, the residual errors of a macroblock are rearranged into a block with no significant increase in high-order DCT coefficients. As a result, both the prediction efficiency and the compression efficiency are improved. Results show that pattern-based coding yields a better compression ratio.
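
    The pattern-matching step can be sketched as follows: the moving region of a macroblock is approximated by whichever pre-defined binary pattern overlaps it best. The quadrant/half shapes, the 8x8 size, and the overlap-minus-mismatch score below are illustrative stand-ins for the paper's eight patterns.

        import numpy as np

        S = 8  # macroblock size, reduced from 16 for brevity

        def quadrant(r, c):
            m = np.zeros((S, S), dtype=bool)
            m[r * S // 2:(r + 1) * S // 2, c * S // 2:(c + 1) * S // 2] = True
            return m

        # Eight illustrative patterns: the four quadrants and four halves.
        patterns = [quadrant(r, c) for r in (0, 1) for c in (0, 1)]
        patterns += [quadrant(0, 0) | quadrant(0, 1),   # top half
                     quadrant(1, 0) | quadrant(1, 1),   # bottom half
                     quadrant(0, 0) | quadrant(1, 0),   # left half
                     quadrant(0, 1) | quadrant(1, 1)]   # right half

        def best_pattern(moving_mask):
            # Score = covered moving pixels minus mismatched pixels.
            def score(p):
                return int(np.sum(p & moving_mask)) - int(np.sum(p ^ moving_mask))
            return max(range(len(patterns)), key=lambda i: score(patterns[i]))

        mask = quadrant(0, 1)          # motion confined to the top-right
        print(best_pattern(mask))      # -> 1, the top-right quadrant pattern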