
    Fast and Efficient Entropy Coding Architectures for Massive Data Compression

    The compression of data is fundamental to alleviating the costs of transmitting and storing the massive datasets employed in myriad fields of our society. Most compression systems employ an entropy coder in their coding pipeline to remove the redundancy of coded symbols. The entropy-coding stage needs to be efficient, to yield high compression ratios, and fast, to process large amounts of data rapidly. Despite their widespread use, entropy coders are commonly assessed only for a particular scenario or coding system. This work provides a general framework to assess and optimize different entropy coders. First, the paper describes three main families of entropy coders, namely those based on variable-to-variable length codes (V2VLC), arithmetic coding (AC), and tabled asymmetric numeral systems (tANS). Then, a low-complexity architecture for the most representative coder(s) of each family is presented; more precisely, a general version of V2VLC; the MQ, the M, and a fixed-length version of AC; and two different implementations of tANS. These coders are evaluated under different coding conditions in terms of compression efficiency and computational throughput. The results obtained suggest that V2VLC and tANS achieve the highest compression ratios for most coding rates and that the AC coder that uses fixed-length codewords attains the highest throughput. The experimental evaluation discloses the advantages and shortcomings of each entropy-coding scheme, providing insights that may help in selecting this stage for forthcoming compression systems.
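
    To make the ANS family concrete, below is a minimal sketch of rANS, the range variant of asymmetric numeral systems; the tabled variant (tANS) evaluated in the paper replaces this arithmetic with table lookups, so treat this as an illustration of the shared principle rather than the architecture the paper benchmarks. The alphabet and frequencies are hypothetical, and a Python bignum stands in for the streaming renormalisation a production coder would use.

    ```python
    # Minimal rANS sketch: the whole message is folded into one integer state.
    # ANS is LIFO, so decoding recovers symbols in reverse encoding order.

    M = 8                                # total frequency count (a power of two)
    freq = {'a': 5, 'b': 2, 'c': 1}      # hypothetical symbol frequencies, sum = M
    cdf = {'a': 0, 'b': 5, 'c': 7}       # cumulative frequencies (interval starts)

    def encode(msg):
        x = 1                            # initial state
        for s in msg:
            # push symbol s onto the state: frequent symbols grow x more slowly
            x = (x // freq[s]) * M + (x % freq[s]) + cdf[s]
        return x

    def decode(x, n):
        out = []
        for _ in range(n):
            slot = x % M                 # which frequency slot does x land in?
            s = next(k for k in freq if cdf[k] <= slot < cdf[k] + freq[k])
            x = freq[s] * (x // M) + slot - cdf[s]   # pop the symbol off the state
            out.append(s)
        return ''.join(reversed(out))    # undo the LIFO order

    msg = 'abacaba'
    assert decode(encode(msg), len(msg)) == msg
    ```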

    25th Annual Computational Neuroscience Meeting: CNS-2016

    Abstracts of the 25th Annual Computational Neuroscience Meeting: CNS-2016, Seogwipo City, Jeju-do, South Korea, 2–7 July 2016.

    A Comprehensive Review of Distributed Coding Algorithms for Visual Sensor Network (VSN)

    Since the invention of the low-cost camera, it has been widely incorporated into sensor nodes in the Wireless Sensor Network (WSN) to form the Visual Sensor Network (VSN). However, the use of cameras brings with it a set of new challenges, because all the sensor nodes are powered by batteries. Hence, energy consumption is one of the most critical issues that has to be taken into consideration. In addition, the use of batteries also limits the resources (memory, processor) that can be incorporated into a sensor node. The lifetime of a VSN decreases quickly as images are transferred to the destination. One solution to the aforementioned problem is to reduce the data transferred in the network by using image compression. In this paper, a comprehensive survey and analysis of distributed coding algorithms that can be used to encode images in a VSN is provided. This includes an overview of these algorithms, together with their advantages and deficiencies when implemented in a VSN. These algorithms are then compared to determine which is most suitable for a VSN.

    Reconciliation for Satellite-Based Quantum Key Distribution

    This thesis reports on reconciliation schemes based on Low-Density Parity-Check (LDPC) codes in Quantum Key Distribution (QKD) protocols. It particularly focuses on the trade-off between the complexity of such reconciliation schemes and the QKD key growth, a trade-off that is critical to QKD system deployments. A key outcome of the thesis is the design of optimised schemes that maximise the QKD key growth based on finite-size keys for a range of QKD protocols. Beyond this design, the other four main contributions of the thesis are summarised as follows. First, I show that standardised short-length LDPC codes can be used for a special Discrete Variable QKD (DV-QKD) protocol and highlight the trade-off between the secret key throughput and the communication latency in space-based implementations. Second, I compare the decoding time and secret key rate performance of typical LDPC-based rate-adaptive and non-adaptive schemes under different channel conditions and show that the design of mother codes for the rate-adaptive schemes is critical but remains an open question. Third, I demonstrate a novel design strategy that minimises the probability of the reconciliation process being the bottleneck of the overall DV-QKD system whilst achieving a target QKD rate (in bits per second) with a target ceiling on the failure probability, using customised LDPC codes. Fourth, in the context of Continuous Variable QKD (CV-QKD), I construct an in-depth optimisation analysis taking both the security and the reconciliation complexity into account. The outcome of the last contribution is a reconciliation scheme delivering the highest secret key rate for a given processor speed, which allows for an optimal solution to CV-QKD reconciliation.
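
    As a rough illustration of the complexity/key-growth trade-off studied here, the sketch below evaluates the textbook asymptotic BB84 secret key fraction r = 1 − (1 + f_EC)·h(Q), where Q is the quantum bit error rate, h the binary entropy, and f_EC ≥ 1 the reconciliation efficiency of the LDPC code (f_EC = 1 is the Shannon limit). This is a hedged stand-in for, not a reproduction of, the finite-size analysis in the thesis; the numbers are illustrative.

    ```python
    import math

    def h(p):
        """Binary entropy in bits."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def bb84_key_fraction(qber, f_ec):
        """Asymptotic secret bits per sifted bit; <= 0 means no key growth."""
        return 1.0 - h(qber) - f_ec * h(qber)

    # Better codes (f_ec closer to 1) buy key rate at the cost of decoding effort.
    for f_ec in (1.0, 1.1, 1.2):
        print(f"f_EC = {f_ec}: r = {bb84_key_fraction(0.03, f_ec):.4f}")
    ```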

    Versatile Error-Control Coding Systems

    The research reported in this thesis is in the field of error-correcting codes, which has evolved as a very important branch of information theory. The main use of error-correcting codes is to increase the reliability of digital data transmitted through a noisy environment. There are sometimes alternative ways of increasing the reliability of data transmission, but coding methods are now competitive in cost and complexity in many cases because of recent advances in technology. The first two chapters of this thesis introduce the subject of error-correcting codes, review some of the published literature in this field and discuss the advantages of various coding techniques. After presenting linear block codes, attention is from then on concentrated on cyclic codes, which are the subject of Chapter 3. The first part of Chapter 3 presents the mathematical background necessary for the study of cyclic codes and examines existing methods of encoding and their practical implementation. In the second part of Chapter 3, various ways of decoding cyclic codes are studied, and from these considerations a general decoder for cyclic codes is devised and presented in Chapter 4. Also, a review of the principal classes of cyclic codes is presented. Chapter 4 describes an experimental system constructed for measuring the performance of cyclic codes, initially corrupted by random errors and then by bursts of errors. Simulated channels are used both for random and burst errors. A computer simulation of the whole system was made in order to verify the accuracy of the experimental results obtained. Chapter 5 presents the various results obtained with the experimental system and by computer simulation, which allow a comparison of the efficiency of various cyclic codes to be made. Finally, Chapter 6 summarises and discusses the main results of the research and suggests interesting points for future investigation in the area. The main objective of this research is to contribute towards the solution of a fairly wide range of problems arising in the design of efficient coding schemes for practical applications, i.e. a study of coding from an engineering point of view.
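
    For readers unfamiliar with the encoding operation the thesis implements in hardware, the following is a minimal sketch of systematic cyclic encoding by polynomial division over GF(2), using the (7,4) code with generator g(x) = x^3 + x + 1 as a worked example (the specific code is chosen for illustration, not taken from the thesis).

    ```python
    def gf2_remainder(dividend, divisor):
        """Remainder of GF(2) polynomial division; polynomials stored as ints."""
        dlen = divisor.bit_length()
        while dividend.bit_length() >= dlen:
            shift = dividend.bit_length() - dlen
            dividend ^= divisor << shift      # GF(2) subtraction is XOR
        return dividend

    def encode_cyclic(msg, gen, n, k):
        """Systematic codeword: message shifted up, parity in the low bits."""
        shifted = msg << (n - k)
        return shifted | gf2_remainder(shifted, gen)

    g = 0b1011                                # g(x) = x^3 + x + 1
    cw = encode_cyclic(0b1101, g, n=7, k=4)
    assert gf2_remainder(cw, g) == 0          # every codeword is divisible by g(x)
    print(format(cw, '07b'))                  # -> 1101001
    ```

    This divisibility property is exactly what shift-register encoders and the general decoder of Chapters 3 and 4 exploit.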

    Cooperative Radio Communications for Green Smart Environments

    The demand for mobile connectivity is continuously increasing, and by 2020 Mobile and Wireless Communications will serve not only very dense populations of mobile phones and nomadic computers, but also the expected multiplicity of devices and sensors located in machines, vehicles, health systems and city infrastructures. Future Mobile Networks are then faced with many new scenarios and use cases, which will load the networks with different data traffic patterns, in new or shared spectrum bands, creating new specific requirements. This book addresses both the techniques to model, analyse and optimise the radio links and transmission systems in such scenarios, together with the most advanced radio access, resource management and mobile networking technologies. This text summarises the work performed by more than 500 researchers from more than 120 institutions in Europe, America and Asia, from both academia and industry, within the framework of the COST IC1004 Action on "Cooperative Radio Communications for Green and Smart Environments". The book will appeal to graduates and researchers in the Radio Communications area, and also to engineers working in the wireless industry. Topics discussed in this book include:
    • Radio wave propagation phenomena in diverse urban, indoor, vehicular and body environments
    • Measurements, characterization, and modelling of radio channels beyond 4G networks
    • Key issues in Vehicle (V2X) communication
    • Wireless Body Area Networks, including specific Radio Channel Models for WBANs
    • Energy efficiency and resource management enhancements in Radio Access Networks
    • Definitions and models for the virtualised and cloud RAN architectures
    • Advances on feasible indoor localization and tracking techniques
    • Recent findings and innovations in antenna systems for communications
    • Physical Layer Network Coding for next-generation wireless systems
    • Methods and techniques for MIMO Over-the-Air (OTA) testing

    Universal codes in the shared-randomness model for channels with general distortion capabilities

    We put forth new models for universal channel coding. Unlike standard codes, which are designed for a specific type of channel, our most general universal code makes communication resilient on every channel, provided the noise level is below the tolerated bound, where the noise level t of a channel is the logarithm of its ambiguity (the maximum number of strings that can be distorted into a given one). The other, more restricted universal codes still work for large classes of natural channels. In a universal code, encoding is channel-independent, but the decoding function knows the type of channel. We allow the encoding and the decoding functions to share randomness, which is unavailable to the channel. There are two scenarios for the type of attack that a channel can perform. In the oblivious scenario, codewords belong to an additive group and the channel distorts a codeword by adding a vector from a fixed set. The selection is based on the message and the encoding function, but not on the codeword. In the Hamming scenario, the channel knows the codeword and is fully adversarial. For a universal code, there are two parameters of interest: the rate, which is the ratio between the message length k and the codeword length n, and the number of shared random bits. We show the existence in both scenarios of universal codes with rate 1 − t/n − o(1), which is optimal modulo the o(1) term. The number of shared random bits is O(log n) in the oblivious scenario and O(n) in the Hamming scenario, which, for typical values of the noise level, we show to be optimal, modulo the constant hidden in the O() notation. In both scenarios, the universal encoding is done in time polynomial in n, but the channel-dependent decoding procedures are in general not efficient. For some weaker classes of channels we construct universal codes with polynomial-time encoding and decoding.
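
    To give a feel for the noise-level parameter, the hedged sketch below computes t for one concrete channel, the channel that flips at most e of n bits, whose ambiguity is the sum of C(n, i) for i up to e; this channel is my example, not one singled out by the paper. The resulting rate 1 − t/n approaches the familiar 1 − h(e/n) bound.

    ```python
    import math

    def noise_level(n, e):
        """t = log2(ambiguity) for the 'at most e bit flips' channel."""
        return math.log2(sum(math.comb(n, i) for i in range(e + 1)))

    n, e = 1000, 100
    t = noise_level(n, e)
    print(f"t/n = {t / n:.3f}, universal-code rate ~ {1 - t / n:.3f}")
    ```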

    Complexity and second moment of the mathematical theory of communication

    The performance of an error correcting code is evaluated by its block error probability, code rate, and encoding and decoding complexity. The performance of a series of codes is evaluated by, as the block lengths approach infinity, whether their block error probabilities decay to zero, whether their code rates converge to channel capacity, and whether their growth in complexities stays under control. Over any discrete memoryless channel, I build codes such that: for one, their block error probabilities and code rates scale like random codes'; and for two, their encoding and decoding complexities scale like polar codes'. Quantitatively, for any constants π, ρ > 0 such that π + 2ρ < 1, I construct a series of error correcting codes with block length N approaching infinity, block error probability exp(−N^π), code rate N^(−ρ) less than the channel capacity, and encoding and decoding complexity O(N log N) per code block. Over any discrete memoryless channel, I also build codes such that: for one, they achieve channel capacity rapidly; and for two, their encoding and decoding complexities outperform all known codes over non-BEC channels. Quantitatively, for any constants τ, ρ > 0 such that 2ρ < 1, I construct a series of error correcting codes with block length N approaching infinity, block error probability exp(−(log N)^τ), code rate N^(−ρ) less than the channel capacity, and encoding and decoding complexity O(N log log N) per code block. The two aforementioned results are built upon two pillars: a versatile framework that generates codes on the basis of channel polarization, and a calculus–probability machinery that evaluates the performances of codes. The framework that generates codes and the machinery that evaluates codes can be extended to many other scenarios in network information theory. To name a few: lossless compression with side information, lossy compression, the Slepian–Wolf problem, the Wyner–Ziv problem, the multiple access channel, the wiretap channel of type I, and the broadcast channel. In each scenario, the adapted notions of block error probability and code rate approach their limits at the same paces as specified above.
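
    The polarization framework mentioned above is built on the Arıkan transform; the sketch below shows that transform alone, x = u·F^⊗n over GF(2) with kernel F = [[1,0],[1,1]] and the bit-reversal permutation omitted, as a minimal illustration. Frozen-bit selection and decoding, where the thesis's actual contributions lie, are not shown.

    ```python
    def polar_transform(u):
        """Butterfly computing u * F^{tensor n} over GF(2); len(u) must be 2^n."""
        x = list(u)
        h = 1
        while h < len(x):
            for i in range(0, len(x), 2 * h):
                for j in range(i, i + h):
                    x[j] ^= x[j + h]          # (u1, u2) -> (u1 XOR u2, u2)
            h *= 2
        return x

    print(polar_transform([1, 0, 1, 1]))      # -> [1, 1, 0, 1]
    ```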

    Multi-factor Physical Layer Security Authentication in Short Blocklength Communication

    Lightweight and low-latency security schemes at the physical layer that have recently attracted a lot of attention include: (i) physical unclonable functions (PUFs), (ii) localization-based authentication, and (iii) secret key generation (SKG) from wireless fading coefficients. In this paper, we focus on short blocklengths and propose a fast, privacy-preserving, multi-factor authentication protocol that uniquely combines PUFs, proximity estimation and SKG. We focus on delay-constrained applications and demonstrate the performance of the SKG scheme in the short blocklength by providing a numerical comparison of three families of channel codes, including half-rate low-density parity-check (LDPC), Bose–Chaudhuri–Hocquenghem (BCH), and polar Slepian–Wolf codes for n = 512, 1024. The SKG keys are incorporated in a zero-round-trip-time resumption protocol for fast re-authentication. All schemes of the proposed mutual authentication protocol are shown to be secure through formal proofs using Burrows–Abadi–Needham (BAN) and Mao–Boyd (MB) logic as well as the Tamarin prover.
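
    One standard way to turn noisy SKG observations into identical keys is the code-offset construction sketched below; the paper itself compares syndrome-based Slepian–Wolf decoders, so this is an illustrative stand-in rather than its exact scheme, and a 3x repetition code stands in for the LDPC/BCH/polar codes it benchmarks.

    ```python
    import secrets

    def rep3_encode(bits):
        """3x repetition code (stand-in for LDPC/BCH/polar)."""
        return [b for b in bits for _ in range(3)]

    def rep3_decode(bits):
        """Majority vote per 3-bit group; corrects one flip per group."""
        return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

    def xor(a, b):
        return [u ^ v for u, v in zip(a, b)]

    n_key = 8
    x = [secrets.randbelow(2) for _ in range(3 * n_key)]  # Alice's fading samples
    e = [0] * len(x); e[5] = 1                            # Bob's copy differs in one bit
    y = xor(x, e)

    k = [secrets.randbelow(2) for _ in range(n_key)]      # key to agree on
    helper = xor(x, rep3_encode(k))                       # public reconciliation message
    k_bob = rep3_decode(xor(y, helper))                   # (codeword XOR e) -> decode
    assert k_bob == k
    ```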

    Source Symbol Purging-Based Distributed Conditional Arithmetic Coding

    A distributed arithmetic coding algorithm based on source symbol purging and using a context model is proposed to solve the asymmetric Slepian–Wolf problem. The proposed scheme makes better use of both the correlation between adjacent symbols in the source sequence and the correlation between the corresponding symbols of the source and the side-information sequences to improve the coding performance of the source. Since the encoder purges a part of the symbols from the source sequence, a shorter codeword length can be obtained. The purged symbols are still used as the context of the subsequent symbols to be encoded. An improved calculation method for the posterior probability is also proposed based on the purging feature, such that the decoder can utilize the correlation within the source sequence to improve the decoding performance. In addition, this scheme achieves better error performance at the decoder by adding a forbidden symbol in the encoding process. The simulation results show that the encoding complexity and the minimum code rate required for lossless decoding are lower than those of traditional distributed arithmetic coding. When the internal correlation of the source is strong, the proposed scheme exhibits a better decoding performance than other distributed source coding (DSC) schemes at the same code rate.
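
    The forbidden-symbol mechanism is easy to see in a toy coder: a slice of probability mass eps is reserved and never assigned to a real symbol, so a decoder that lands in that slice knows the bitstream was corrupted. The sketch below is a hedged illustration with a hypothetical binary source and plain floats in place of a renormalising integer coder; it is not the paper's distributed scheme.

    ```python
    EPS = 0.05                  # mass reserved for the forbidden symbol
    P0 = 0.7 * (1 - EPS)        # real symbol probabilities scaled into 1 - EPS
    P1 = 0.3 * (1 - EPS)

    def split(low, high):
        """Partition [low, high): symbol 0, symbol 1, then the forbidden slice."""
        span = high - low
        c0 = low + span * P0
        c1 = c0 + span * P1     # [c1, high) is never produced by the encoder
        return c0, c1

    def encode(bits):
        low, high = 0.0, 1.0
        for b in bits:
            c0, c1 = split(low, high)
            low, high = (low, c0) if b == 0 else (c0, c1)
        return (low + high) / 2  # any value in the final interval works

    def decode(code, n):
        low, high = 0.0, 1.0
        out = []
        for _ in range(n):
            c0, c1 = split(low, high)
            if code < c0:
                out.append(0); high = c0
            elif code < c1:
                out.append(1); low, high = c0, c1
            else:               # only reachable if the stream was corrupted
                raise ValueError("forbidden symbol decoded: corrupted stream")
        return out

    bits = [0, 1, 0, 0, 1, 0]
    assert decode(encode(bits), len(bits)) == bits
    ```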