
    Compiler optimization and ordering effects on VLIW code compression

    Code size has always been an important issue for embedded applications as well as larger systems. Code compression techniques have been devised as a way of battling bloated code; however, the impact of VLIW compiler methods and outputs on these compression schemes has not been thoroughly investigated. This paper describes the application of single- and multiple-instruction dictionary methods for code compression to decrease overall code size for the TI TMS320C6xxx DSP family. The compression scheme is applied to benchmarks taken from the Mediabench benchmark suite built with differing compiler optimization parameters. In the single-instruction encoding scheme, compression ratio was found not to be a useful indicator of the best overall code size: the best results (smallest overall code size) were obtained when the compression scheme was applied to size-optimized code. In the multiple-instruction encoding scheme, changing parallel instruction order was found to only slightly improve compression in unoptimized code, and it does not affect compression when applied to builds already optimized for size.
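    The single-instruction dictionary idea above can be sketched as follows. This is an illustrative model only (the word size, index width, and accounting are assumptions, not the paper's exact encoding for the TMS320C6xxx): unique instruction words go into a dictionary, and the code stream stores a short index per instruction. Note how the total includes the dictionary itself, which is why compression ratio alone need not predict the smallest overall code size.

```python
import math

def dictionary_compress(instructions, word_bits=32):
    """Single-instruction dictionary compression sketch (hypothetical
    parameters). Returns (compressed_bits, original_bits, ratio)."""
    # Unique instruction words, first-seen order preserved.
    dictionary = list(dict.fromkeys(instructions))
    # Each instruction is replaced by an index into the dictionary.
    index_bits = max(1, math.ceil(math.log2(len(dictionary))))
    original_bits = len(instructions) * word_bits
    # Total cost = index stream + the dictionary entries themselves.
    compressed_bits = len(instructions) * index_bits + len(dictionary) * word_bits
    return compressed_bits, original_bits, compressed_bits / original_bits

# Toy instruction stream with repeated words (made-up opcodes).
code = [0x01A2, 0x01A2, 0x0F00, 0x01A2, 0x0F00, 0x0BEE]
comp, orig, ratio = dictionary_compress(code)
# comp = 6*2 + 3*32 = 108 bits vs. orig = 6*32 = 192 bits
```

Size optimization changes the instruction mix (fewer instructions, but possibly more unique words), so the ratio and the absolute compressed size can move in different directions, matching the paper's observation.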

    Learned transform compression with optimized entropy encoding

    We consider the problem of learned transform compression, where we learn both the transform and the probability distribution over the discrete codes. We utilize a soft relaxation of the quantization operation to allow back-propagation of gradients, and employ vector (rather than scalar) quantization of the latent codes. Furthermore, we apply a similar relaxation in the code probability assignments, enabling direct optimization of the code entropy. To the best of our knowledge, this approach is completely novel. We conduct a set of proof-of-concept experiments confirming the potency of our approaches.
    Comment: Neural Compression Workshop @ ICLR 202
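    A common way to realize such a soft relaxation of vector quantization (a generic sketch, not necessarily the paper's exact formulation) is to replace the hard nearest-codeword assignment with a softmax over negative distances, so gradients flow to both the encoder output and the codebook, and the soft assignment probabilities also yield a differentiable estimate of the code entropy:

```python
import numpy as np

def soft_vector_quantize(z, codebook, temperature=1.0):
    """Soft vector quantization sketch: z is (N, D) latents, codebook is
    (K, D). A softmax over negative squared distances replaces the hard
    argmin, so the operation is differentiable end to end."""
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)    # soft assignments
    z_soft = probs @ codebook                    # convex mix of codewords
    # Entropy of the average assignment: a differentiable proxy for the
    # code entropy that can be optimized directly.
    p_bar = probs.mean(axis=0)
    entropy = -(p_bar * np.log2(p_bar + 1e-12)).sum()
    return z_soft, probs, entropy
```

Lowering the temperature sharpens the assignments toward hard quantization, which is how such relaxations are typically annealed during training.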

    Experiments on joint source-channel fractal image coding with unequal error protection

    We propose a joint source-channel coding system for fractal image compression. We allocate the available total bit rate between the source code and a range of error-correcting codes using a Lagrange multiplier optimization technique. The principle of the proposed unequal error protection strategy is to partition the information bits into sensitivity classes and to assign one code from a range of error-correcting codes to each sensitivity class in a nearly optimal way. Experimental results show that joint source-channel coding with fractal image compression is feasible, leads to efficient protection strategies, and outperforms previous works in this field that only covered channel coding with a fixed source rate.
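    The Lagrange-multiplier allocation can be sketched as follows (the class names, rates, and distortion numbers are hypothetical; the paper's actual operating points come from its fractal coder and code family). Each sensitivity class independently picks the error-correcting code minimizing D + λR; sweeping λ then trades total distortion against total rate:

```python
def allocate_protection(classes, lam):
    """Lagrangian unequal-error-protection sketch. `classes` maps a
    sensitivity class name to a list of (rate_bits, distortion) options,
    one per candidate error-correcting code. For each class, pick the
    option minimizing D + lam * R; this decomposes the joint problem
    into independent per-class choices."""
    choice = {}
    for name, options in classes.items():
        choice[name] = min(options, key=lambda rd: rd[1] + lam * rd[0])
    total_rate = sum(r for r, _ in choice.values())
    total_dist = sum(d for _, d in choice.values())
    return choice, total_rate, total_dist

# Hypothetical operating points: stronger codes cost more rate bits
# but leave less expected distortion after channel errors.
classes = {
    "headers": [(1000, 9.0), (1500, 2.0), (2000, 0.5)],
    "details": [(800, 4.0), (1200, 3.0)],
}
choice, R, D = allocate_protection(classes, lam=0.002)
```

In practice λ is adjusted (e.g. by bisection) until the total rate meets the channel budget, which is what makes the per-class assignment "nearly optimal" overall.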

    Joint source-channel iterative receivers using LDPC codes

    Using an asymptotic analysis, the optimization of a receiver using joint source-channel decoding involving LDPC codes is performed for the following iterative systems: (a) optimized joint source-channel receiver, (b) backward compatible iterative receiver, (c) optimal tandem receiver assuming perfect source compression, (d) classical tandem receiver. Optimization and simulation results are provided for different code rates and codeword lengths.

    Low cost architecture for JPEG2000 encoder without code-block memory

    The amount of memory required for code-blocks is one of the most important issues in JPEG2000 encoder chip implementation. This work unifies the output scanning order of the 2D-DWT with the processing order of the EBCOT, allowing the code-block memory to be eliminated completely. We also propose a new architecture for embedded block coding (EBC), code-block switch adaptive embedded block coding (CS-AEBC), which can skip insignificant bit-planes to reduce computation time and save power. In addition, a new dynamic rate-distortion optimization (RDO) approach is proposed to reduce computation time when the EBC performs lossy compression. The total memory required for the proposed JPEG2000 encoder is only 2 KB of internal memory, and the bandwidth required for the external memory is 2.1 B/cycle. (Conference presentation, 23-26 June 2008, Hannover, Germany.)
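    The bit-plane-skipping idea can be illustrated with a minimal sketch (this is the generic significance test used in bit-plane coders, not the CS-AEBC hardware design itself): a code-block's most significant nonzero magnitude bit determines how many of the leading bit-planes carry no information and can be skipped outright.

```python
def significant_bitplanes(block):
    """Number of bit-planes that carry information for a code-block of
    integer wavelet-coefficient magnitudes; planes above the most
    significant set bit are all-zero (generic sketch, hypothetical)."""
    max_mag = max(abs(c) for c in block)
    return max_mag.bit_length()  # 0 for an all-zero block

def planes_to_skip(block, total_planes=16):
    """Leading all-zero bit-planes the coder can skip without coding."""
    return total_planes - significant_bitplanes(block)

# A block whose largest magnitude is 5 needs only 3 bit-planes,
# so 13 of 16 planes can be skipped.
n_sig = significant_bitplanes([3, -5, 0])
```

Skipping these planes saves both coding-pass computation and the associated memory traffic, which is the power argument made in the abstract.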