
    Spinal codes

    Spinal codes are a new class of rateless codes that enable wireless networks to cope with time-varying channel conditions in a natural way, without requiring any explicit bit rate selection. The key idea in the code is the sequential application of a pseudo-random hash function to the message bits to produce a sequence of coded symbols for transmission. This encoding ensures that two input messages that differ in even one bit lead to very different coded sequences after the point at which they differ, providing good resilience to noise and bit errors. To decode spinal codes, this paper develops an approximate maximum-likelihood decoder, called the bubble decoder, which runs in time polynomial in the message size and achieves the Shannon capacity over both additive white Gaussian noise (AWGN) and binary symmetric channel (BSC) models. Experimental results obtained from a software implementation of a linear-time decoder show that spinal codes achieve higher throughput than fixed-rate LDPC codes, rateless Raptor codes, and the layered rateless coding approach of Strider, across a range of channel conditions and message sizes. An early hardware prototype that can decode at 10 Mbits/s in FPGA demonstrates that spinal codes are a practical construction.

    Funding: Massachusetts Institute of Technology (Irwin and Joan Jacobs Presidential Fellowship); Massachusetts Institute of Technology (Claude E. Shannon Assistantship); Intel Corporation (Intel Fellowship).
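
    The hash-chain structure at the heart of the encoder can be sketched in a few lines of Python. This is an illustration of the idea only: SHA-256, the initial state, and the truncated-hash symbol map below are stand-ins for the paper's hash function, RNG, and constellation mapping, and the bubble decoder is not shown.

    import hashlib

    def spinal_encode(message_bits, k=4, passes=2):
        # Split the message into k-bit chunks.
        chunks = [message_bits[i:i + k] for i in range(0, len(message_bits), k)]
        spine = b"\x00" * 32  # initial spine state s_0 (illustrative)
        states = []
        for chunk in chunks:
            # s_i = h(s_{i-1}, m_i): flipping one message bit scrambles
            # every state (and hence every symbol) after that point.
            spine = hashlib.sha256(spine + bytes(chunk)).digest()
            states.append(spine)
        symbols = []
        for p in range(passes):  # rateless: emit more passes until decoded
            for s in states:
                word = hashlib.sha256(s + bytes([p])).digest()
                symbols.append(int.from_bytes(word[:2], "big"))  # toy symbol map
        return symbols

    # Example: encode a 16-bit message and inspect the first few symbols.
    print(spinal_encode([1, 0, 1, 1, 0, 0, 1, 0] * 2)[:4])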

    Viterbi algorithm in continuous-phase frequency shift keying

    The Viterbi algorithm, an application of dynamic programming, is widely used for estimation and detection problems in digital communications and signal processing. It is used to detect signals in communication channels with memory and to decode the sequential error-control codes that enhance the performance of digital communication systems. It is also used in speech and character recognition, where the speech signals or characters are modeled by hidden Markov models. This project explains the basics of the Viterbi algorithm as applied to digital communication systems and to speech and character recognition, and examines the operations and practical memory requirements of implementing the algorithm in real time. Convolutional coding with Viterbi decoding, a forward error correction technique, was explored: a behavioral model of a basic Viterbi decoder was built and simulated, the convolutional encoder, BPSK modulator, and AWGN channel were implemented in MATLAB, and the bit error rate (BER) was measured to evaluate decoding performance. The theory of the Viterbi algorithm is introduced through convolutional coding, its application to continuous-phase frequency shift keying (CPFSK) is presented, and the resulting performance is analyzed and compared with a conventional coherent estimator. The main contribution of this thesis is an RTL-level model of the Viterbi decoder, comprising the branch metric, add-compare-select, trace-back, decoding, and next-state blocks. Together, these models provide a thorough understanding of the Viterbi decoding algorithm.
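
    As a complement to the MATLAB and RTL models described above, a minimal hard-decision Viterbi decoder for a rate-1/2 convolutional code can be sketched as follows. The constraint length, generator polynomials, and path bookkeeping here are common textbook choices, not the thesis design; a hardware decoder would replace the per-state path lists with a fixed-depth trace-back memory.

    def viterbi_decode(received, K=3, polys=(0b111, 0b101)):
        n_states = 1 << (K - 1)

        def encode_step(state, bit):
            # Shift the new bit into the register and read one coded bit
            # per generator polynomial (parity of the tapped bits).
            reg = (bit << (K - 1)) | state
            out = tuple(bin(reg & p).count("1") & 1 for p in polys)
            return out, reg >> 1

        INF = float("inf")
        metric = [0.0] + [INF] * (n_states - 1)   # start in the all-zero state
        paths = [[] for _ in range(n_states)]
        for sym in received:                      # sym is a (bit, bit) pair
            new_metric = [INF] * n_states
            new_paths = [None] * n_states
            for state in range(n_states):
                if metric[state] == INF:
                    continue
                for bit in (0, 1):
                    out, nxt = encode_step(state, bit)
                    # Branch metric: Hamming distance to the received pair.
                    m = metric[state] + (out[0] != sym[0]) + (out[1] != sym[1])
                    if m < new_metric[nxt]:       # add-compare-select
                        new_metric[nxt] = m
                        new_paths[nxt] = paths[state] + [bit]
            metric, paths = new_metric, new_paths
        best = min(range(n_states), key=metric.__getitem__)
        return paths[best]                        # survivor path = decoded bits

    # Noiseless example: these pairs encode the message bits [1, 0, 1, 1].
    print(viterbi_decode([(1, 1), (1, 0), (0, 0), (0, 1)]))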

    Cross-layer wireless bit rate adaptation

    This paper presents SoftRate, a wireless bit rate adaptation protocol that is responsive to rapidly varying channel conditions. Unlike previous work that uses either frame receptions or signal-to-noise ratio (SNR) estimates to select bit rates, SoftRate uses confidence information calculated by the physical layer and exported to higher layers via the SoftPHY interface to estimate the prevailing channel bit error rate (BER). Senders use this BER estimate, calculated over each received packet (even when the packet has no bit errors), to pick good bit rates. SoftRate's novel BER computation works across different wireless environments and hardware without requiring any retraining. SoftRate also uses abrupt changes in the BER estimate to identify interference, enabling it to reduce the bit rate only in response to channel errors caused by attenuation or fading. Our experiments conducted using a software radio prototype show that SoftRate achieves 2X higher throughput than popular frame-level protocols such as SampleRate and RRAA. It also achieves 20% more throughput than an SNR-based protocol trained on the operating environment, and up to 4X higher throughput than an untrained SNR-based protocol. The throughput gains using SoftRate stem from its ability to react to channel variations within a single packet-time and its robustness to collision losses.

    Funding: National Science Foundation (U.S.) (Grant CNS-0721702); National Science Foundation (U.S.) (Grant CNS-0520032); Foxconn International Holdings Ltd.
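
    The core control loop can be illustrated with a short sketch: estimate the BER of each packet from per-bit confidences, then choose the fastest rate whose BER threshold holds. The LLR-based estimator and the rate table below are hypothetical stand-ins for the paper's SoftPHY hints and its actual thresholds.

    import math

    def estimate_ber(llrs):
        # For a log-likelihood ratio l, the probability that the bit
        # decision is wrong is 1 / (1 + e^{|l|}); averaging over the
        # packet yields a BER estimate even when it decodes cleanly.
        return sum(1.0 / (1.0 + math.exp(abs(l))) for l in llrs) / len(llrs)

    # Hypothetical table: (bit rate in Mbit/s, maximum tolerable BER).
    RATE_TABLE = [(54, 1e-5), (24, 1e-4), (12, 1e-3), (6, 1e-2), (1, 1.0)]

    def pick_rate(ber):
        # Choose the fastest rate whose BER requirement the channel meets.
        for rate, max_ber in RATE_TABLE:
            if ber <= max_ber:
                return rate
        return RATE_TABLE[-1][0]

    print(pick_rate(estimate_ber([4.2, -3.9, 5.1, -4.7, 3.8, -5.0])))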

    On the security of a code-based electronic signature based on the Stern identification protocol

    The paper provides a complete description of the digital signature scheme based on the Stern identification protocol. We also present a proof of the existential unforgeability of the scheme under chosen message attack (EUF-CMA) in the random oracle model (ROM). Finally, we discuss the choice of the signature parameters, in particular providing 70-bit security.
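
    Signatures of this kind are typically obtained from an identification protocol via the Fiat-Shamir transform, which matches the random oracle model mentioned above: the verifier's random challenges are replaced by a hash of the message and the prover's commitments. The generic sketch below assumes that route; SHA-256, the round count, and the dummy commitments are illustrative, and the actual commitment and response computations are those of Stern's code-based protocol as fixed in the paper.

    import hashlib

    def fs_challenges(message, commitments, rounds):
        # Fiat-Shamir: derive challenges from hash(message, commitments)
        # instead of a live verifier. Stern's protocol uses one ternary
        # challenge (0, 1, or 2) per round of the cut-and-choose proof.
        h = hashlib.sha256(message + b"".join(commitments)).digest()
        while len(h) < rounds:
            h += hashlib.sha256(h).digest()  # extend the stream if needed
        return [h[i] % 3 for i in range(rounds)]

    # Example with dummy commitments; a real signer derives them from the
    # code-based secret key and answers each challenge per Stern's protocol.
    print(fs_challenges(b"msg", [b"c0", b"c1"], rounds=8))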

    Joint Source-Channel Coding Optimized On End-to-End Distortion for Multimedia Source

    In order to achieve high efficiency, multimedia source coding usually relies on the use of predictive coding. While more efficient, source coding based on predictive coding has been considered more sensitive to errors during communication. With the current volume and importance of multimedia communication, minimizing the overall distortion during communication over an error-prone channel is critical. In addition, for real-time scenarios, it is necessary to consider additional constraints such as a fixed, small delay for a given bit rate. To comply with these requirements, we seek an efficient joint source-channel coding scheme. In this work, end-to-end distortion is studied for a first-order autoregressive synthetic source that represents general multimedia traffic. This study reveals that predictive coders achieve the same channel-induced distortion performance as memoryless codecs when applying optimal error concealment. We propose a joint source-channel system based on incremental redundancy that satisfies the fixed-delay and error-prone-channel constraints and combines DPCM as a source encoder with a rate-compatible punctured convolutional (RCPC) error control codec. To calculate the joint source-channel coding rate allocation that minimizes end-to-end distortion, we develop a Markov Decision Process (MDP) approach for delay-constrained feedback hybrid ARQ, and we use a Dynamic Programming (DP) technique. Our simulation results demonstrate an improvement in end-to-end distortion compared to a conventional Forward Error Control (FEC) approach with no feedback.
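
    To make the source-coding side concrete, the sketch below encodes a first-order autoregressive source with DPCM, the predictive coder that the proposed system pairs with the RCPC channel code. The AR coefficient, quantizer step, and sample count are illustrative values, not the paper's experimental settings.

    import random

    def dpcm_encode(samples, a=0.9, step=0.1):
        codes, x_hat = [], 0.0
        for x in samples:
            prediction = a * x_hat          # predict from the previous
            residual = x - prediction       # reconstruction, not from x
            q = round(residual / step)      # uniform scalar quantizer
            x_hat = prediction + q * step   # decoder-matched reconstruction
            codes.append(q)
        return codes

    # Synthetic first-order autoregressive source, x_t = a*x_{t-1} + w_t,
    # standing in for general multimedia traffic as in the study above.
    x, xs = 0.0, []
    for _ in range(1000):
        x = 0.9 * x + random.gauss(0.0, 1.0)
        xs.append(x)
    codes = dpcm_encode(xs)

    Because the decoder reconstructs from the same quantized residuals, prediction error does not drift, which is why channel losses (rather than the predictor itself) dominate the end-to-end distortion that the MDP rate allocation then minimizes.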

    Timely and Massive Communication in 6G: Pragmatics, Learning, and Inference

    5G has expanded the traditional focus of wireless systems to embrace two new connectivity types: ultra-reliable low-latency and massive communication. The technology context at the dawn of 6G differs from that of 5G, primarily due to the growing intelligence at the communicating nodes. This has driven the set of relevant communication problems beyond reliable transmission towards semantic and pragmatic communication. This paper puts the evolution of low-latency and massive communication towards 6G in the perspective of these new developments. First, semantic/pragmatic communication problems are presented by drawing parallels to linguistics. We elaborate upon the relation of semantic communication to the information-theoretic problems of source/channel coding, while generalized real-time communication is put in the context of cyber-physical systems and real-time inference. The evolution of massive access towards massive closed-loop communication is then elaborated upon, enabling interactive communication, learning, and cooperation among wireless sensors and actuators.

    Comment: Submitted for publication to IEEE BITS (revised version preprint).

    High-threshold and low-overhead fault-tolerant quantum memory

    Quantum error correction becomes a practical possibility only if the physical error rate is below a threshold value that depends on a particular quantum code, syndrome measurement circuit, and decoding algorithm. Here we present an end-to-end quantum error correction protocol that implements fault-tolerant memory based on a family of LDPC codes with a high encoding rate that achieves an error threshold of 0.8% for the standard circuit-based noise model. This is on par with the surface code, which has remained an uncontested leader in terms of its high error threshold for nearly 20 years. The full syndrome measurement cycle for a length-n code in our family requires n ancillary qubits and a depth-7 circuit composed of nearest-neighbor CNOT gates. The required qubit connectivity is a degree-6 graph that consists of two edge-disjoint planar subgraphs. As a concrete example, we show that 12 logical qubits can be preserved for ten million syndrome cycles using 288 physical qubits in total, assuming a physical error rate of 0.1%. We argue that achieving the same level of error suppression on 12 logical qubits with the surface code would require more than 4000 physical qubits. Our findings bring demonstrations of a low-overhead fault-tolerant quantum memory within the reach of near-term quantum processors.
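
    The classical core of any syndrome-based protocol is the map from an error pattern to its syndrome, which the measurement cycle extracts and the decoder consumes. A toy sketch with the length-3 repetition code follows; the paper's codes are quantum CSS LDPC codes whose syndromes are measured by the depth-7 nearest-neighbor circuit described above, not read off classically like this.

    import numpy as np

    # Toy parity-check matrix: the length-3 repetition code.
    H = np.array([[1, 1, 0],
                  [0, 1, 1]])

    def syndrome(error):
        # s = H e (mod 2): the syndrome depends only on the error pattern,
        # so repeated cycles can track errors without disturbing the data.
        return (H @ error) % 2

    assert list(syndrome(np.array([0, 1, 0]))) == [1, 1]  # middle-bit flip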

    Quantum Key Distribution Data Post-Processing with Limited Resources: Towards Satellite-Based Quantum Communication

    Quantum key distribution (QKD), a novel cryptographic technique for secure distribution of secret keys between two parties, is the first successful quantum technology to emerge from quantum information science. The security of QKD is guaranteed by fundamental properties of quantum mechanical systems, unlike public-key cryptography, whose security depends on mathematical problems that are difficult to solve, such as factoring. Current terrestrial quantum links are limited to about 250 km. However, QKD could soon be deployed on a global scale over free-space links to an orbiting satellite used as a trusted node. The Canadian Quantum Encryption and Science Satellite (QEYSSat), a collaborative project involving Canadian universities, the Canadian Space Agency (CSA), and industry partners, envisions a photonic uplink to a quantum receiver positioned on a low Earth orbit satellite. This thesis presents some of the research conducted towards feasibility studies of the QEYSSat mission. One of the main goals of this research is to develop technologies for data acquisition and processing required for a satellite-based QKD system. A working testbed system helps to establish firmly grounded estimates of the overall complexity, the computing resources necessary, and the bandwidth requirements of the classical communication channel. It can also serve as a good foundation for the design and development of a future payload computer onboard QEYSSat. This thesis describes the design and implementation of a QKD post-processing system which aims to minimize the computing requirements at one side of the link, unlike most traditional implementations, which assume symmetric computing resources at each end. The post-processing software features precise coincidence analysis, error correction based on low-density parity-check codes, privacy amplification employing Toeplitz hash functions, and a procedure for automated polarization alignment. The system's hardware and software components integrate fully with a quantum optical apparatus used to demonstrate the feasibility of QKD with a satellite uplink. Detailed computing resource requirements and QKD results from the operation of the entire system in high-loss regimes are presented here.
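
    The privacy amplification step named above is straightforward to sketch: a random Toeplitz matrix, built from a short public seed, compresses the error-corrected (and only partly secret) key into a shorter, near-uniform one. The key and output lengths below are illustrative; the thesis implementation additionally shifts such computation to the better-provisioned side of the link.

    import numpy as np

    def toeplitz_hash(key_bits, seed_bits, out_len):
        n = len(key_bits)
        assert len(seed_bits) == out_len + n - 1
        # T[i][j] depends only on i - j, so T is Toeplitz; this family of
        # matrices is 2-universal, which privacy amplification requires.
        T = np.array([[seed_bits[i - j + n - 1] for j in range(n)]
                      for i in range(out_len)])
        return (T @ np.asarray(key_bits)) % 2

    rng = np.random.default_rng(0)
    key = rng.integers(0, 2, 1024)              # error-corrected raw key
    seed = rng.integers(0, 2, 256 + 1024 - 1)   # public random seed
    final_key = toeplitz_hash(key, seed, 256)   # shorter secret key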