    Quickest Sequence Phase Detection

    A phase detection sequence is a length-n cyclic sequence such that the location of any length-k contiguous subsequence can be determined from a noisy observation of that subsequence. In this paper, we derive bounds on the minimal possible k in the limit of n → ∞, and describe some sequence constructions. We further consider multiple phase detection sequences, where the location of any length-k contiguous subsequence of each sequence can be determined simultaneously from a noisy mixture of those subsequences. We study the optimal trade-offs between the lengths of the sequences, and describe some sequence constructions. We compare these phase detection problems to their natural channel coding counterparts, and show a strict separation between the fundamental limits in the multiple sequence case. Both adversarial and probabilistic noise models are addressed. Comment: To appear in the IEEE Transactions on Information Theory.
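
    As a rough illustration of the problem setting (not one of the paper's constructions), the following Python sketch locates a length-k window of a known cyclic binary sequence from a noisy observation by minimum Hamming distance; the sequence length, window length and bit-flip probability are arbitrary choices for the example.

        import numpy as np

        def locate_window(sequence, noisy_window):
            """Return the cyclic offset whose length-k window of `sequence`
            is closest, in Hamming distance, to the noisy observation."""
            n, k = len(sequence), len(noisy_window)
            ext = np.concatenate([sequence, sequence[:k - 1]])  # unwrap the cycle
            dists = [np.sum(ext[i:i + k] != noisy_window) for i in range(n)]
            return int(np.argmin(dists))

        # Toy usage: a random length-64 sequence, one window observed through a BSC(0.1)
        rng = np.random.default_rng(0)
        seq = rng.integers(0, 2, 64)
        true_offset, k = 17, 12
        window = np.concatenate([seq, seq])[true_offset:true_offset + k]
        window = (window + (rng.random(k) < 0.1)) % 2       # probabilistic bit flips
        print(locate_window(seq, window), true_offset)      # estimated vs. true location

    For a sequence to qualify as a phase detection sequence in the paper's sense, every length-k window must remain identifiable under such noise, which is what drives the bounds on the minimal possible k.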

    A Novel Power Allocation Scheme for Two-User GMAC with Finite Input Constellations

    Constellation Constrained (CC) capacity regions of two-user Gaussian Multiple Access Channels (GMAC) have recently been reported, wherein an appropriate angle of rotation between the constellations of the two users is shown to enlarge the CC capacity region. We refer to such a scheme as the Constellation Rotation (CR) scheme. In this paper, we propose a novel scheme called the Constellation Power Allocation (CPA) scheme, wherein the instantaneous transmit powers of the two users are varied while maintaining their average power constraints. We show that the CPA scheme offers CC sum capacities equal to (at low SNR values) or close to (at high SNR values) those offered by the CR scheme, with reduced decoding complexity for QAM constellations. We study the robustness of the CPA scheme to random phase offsets in the channel and to unequal average power constraints for the two users. With random phase offsets in the channel, we show that the CC sum capacity offered by the CPA scheme exceeds that of the CR scheme at high SNR values. With unequal average power constraints, we show that the CPA scheme provides maximum gain when the power levels are close, and the advantage diminishes as the power difference increases. Comment: To appear in IEEE Transactions on Wireless Communications, 10 pages and 7 figures.
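
    To make the two schemes concrete, here is a small Python sketch (an illustrative proxy only, not the paper's CC capacity computation) that compares the composite received constellation under a rotation of user 2 (CR-style) against an unequal instantaneous power split with the same total power (CPA-style); the QPSK inputs, rotation angle and power split are assumed values.

        import numpy as np
        from itertools import product

        def composite_stats(points1, points2):
            """Distinct sum points and minimum distance between distinct sum points."""
            sums = np.array([a + b for a, b in product(points1, points2)])
            dists = np.abs(sums[:, None] - sums[None, :])
            return len(np.unique(np.round(sums, 9))), dists[dists > 1e-9].min()

        qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))      # unit-energy QPSK

        # No rotation, equal powers: several user-symbol pairs collapse onto the same sum point
        print(composite_stats(qpsk, qpsk))                              # only 9 distinct points

        # CR-style: rotate user 2 by an example angle
        print(composite_stats(qpsk, np.exp(1j * np.pi / 8) * qpsk))     # all 16 points distinct

        # CPA-style: unequal instantaneous powers with the same total power of 2
        p1, p2 = 1.5, 0.5
        print(composite_stats(np.sqrt(p1) * qpsk, np.sqrt(p2) * qpsk))  # all 16 points distinct

    Both rotation and power imbalance make every pair of user symbols distinguishable in the sum constellation; the distinct-point count and minimum distance reported here are only a crude proxy for the CC sum capacity analysed in the paper.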

    Applications of Coding Theory to Massive Multiple Access and Big Data Problems

    The broad theme of this dissertation is the design of schemes that admit low-complexity iterative algorithms for some new problems arising in massive multiple access and big data. Although bipartite Tanner graphs and low-complexity iterative algorithms such as peeling and message-passing decoders are very popular in the channel coding literature, they are not as widely used in these areas of study, and this dissertation serves as an important step toward bridging that gap. The contributions of this dissertation can be categorized into the following three parts. In the first part, a timely and interesting multiple access problem for a massive number of uncoordinated devices is considered, wherein the base station is interested only in recovering the list of messages without regard to the identity of the respective sources. A coding scheme with polynomial encoding and decoding complexities is proposed for this problem, the two main features of which are (i) the design of a close-to-optimal coding scheme for the T-user Gaussian multiple access channel and (ii) a successive interference cancellation decoder. The proposed coding scheme not only improves on the performance of the previously best known coding scheme by ≈ 13 dB but is only ≈ 6 dB away from the random Gaussian coding information rate. In the second part, Construction-D lattices are built whose underlying linear codes are nested binary spatially-coupled low-density parity-check (SC-LDPC) codes with uniform left and right degrees. It is shown that the proposed lattices achieve the Poltyrev limit under multistage belief propagation decoding. Leveraging this result, lattice codes constructed from these lattices are applied to the three-user symmetric interference channel. For channel gains within 0.39 dB of the very strong interference regime, the proposed lattice coding scheme with the iterative belief propagation decoder is only 2.6 dB away from the Shannon limit at target error rates of ≈ 10^-5. The third part focuses on support recovery in compressed sensing and on nonadaptive group testing (GT) problems. Prior to this work, sensing schemes based on left-regular sparse bipartite graphs and iterative recovery algorithms based on the peeling decoder were proposed for these problems. These schemes require O(K log N) and Ω(K log K log N) measurements, respectively, to recover the sparse signal with high probability (w.h.p.), where N and K denote the dimension and sparsity of the signal, respectively (K ≪ N). Also, the number of measurements required to recover at least a (1 − ε) fraction of the defective items w.h.p. (approximate GT) is shown to be c_ε K log(N/K). In this dissertation, instead of left-regular bipartite graphs, sensing schemes based on left-and-right-regular bipartite graphs are analyzed. It is shown that this design strategy achieves superior and sharper results. For the support recovery problem, the number of measurements is reduced to match the optimal lower bound of Ω(K log(N/K)). Similarly, for approximate GT, the proposed scheme only requires c_ε K log(N/K) measurements. For probabilistic GT, the proposed scheme requires O(K log K log(N/K)) measurements, which is only a log K factor away from the best known lower bound of Ω(K log(N/K)). Apart from the asymptotic regime, the proposed schemes also demonstrate significant improvements in the required number of measurements for finite values of K and N.
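
    As a flavour of the peeling-style iterative recovery discussed in the third part (a generic definite-defectives-type peeling sketch on a sparse random test design, not the dissertation's exact left-and-right-regular construction; all design parameters below are arbitrary), consider noiseless non-adaptive group testing:

        import numpy as np

        def peel_group_test(A, y):
            """Peeling-style decoder for noiseless non-adaptive group testing.
            A: (m, n) 0/1 test matrix (rows are tests), y: (m,) OR outcomes.
            Returns the set of items declared defective (possibly incomplete)."""
            n = A.shape[1]
            status = np.full(n, -1)                        # -1 unknown, 0 clean, 1 defective
            status[(A[y == 0] == 1).any(axis=0)] = 0       # items in a negative test are clean
            changed = True
            while changed:
                changed = False
                for t in np.where(y == 1)[0]:
                    members = np.where(A[t] == 1)[0]
                    if (status[members] == 1).any():       # test already explained, peel it off
                        continue
                    unknown = members[status[members] == -1]
                    if len(unknown) == 1:                  # sole unresolved item must be defective
                        status[unknown[0]] = 1
                        changed = True
            return set(np.where(status == 1)[0].tolist())

        # Toy usage with a sparse random design (recovery is w.h.p., not guaranteed)
        rng = np.random.default_rng(1)
        n_items, n_tests, k = 200, 80, 4
        A = (rng.random((n_tests, n_items)) < 3 / n_tests).astype(int)
        defectives = set(rng.choice(n_items, k, replace=False).tolist())
        y = (A[:, sorted(defectives)].sum(axis=1) > 0).astype(int)
        print(peel_group_test(A, y), defectives)

    The dissertation's measurement bounds quantify how large such a design must be, as a function of K and N, for this kind of iterative peeling to succeed with high probability.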

    Superposition Mapping & Related Coding Techniques

    Since Shannon's landmark paper in 1948, it has been known that the capacity of a Gaussian channel can be achieved if and only if the channel outputs are Gaussian. In the low signal-to-noise ratio (SNR) regime, conventional mapping schemes suffice for approaching the Shannon limit, while in the high SNR regime these mapping schemes, which produce uniformly distributed symbols, are insufficient to achieve the capacity. To solve this problem, researchers commonly resort to the technique of signal shaping, which reshapes the originally uniform symbol distribution into a Gaussian-like one. Superposition mapping (SM) refers to a class of mapping techniques that use linear superposition to load binary digits onto finite-alphabet symbols suitable for waveform transmission. Unlike conventional mapping schemes, the output symbols of a superposition mapper can easily be made Gaussian-like, which effectively eliminates the need for active signal shaping. For this reason, superposition mapping is of great interest for theoretical research as well as for practical implementations. It is an attractive alternative to signal shaping for approaching the channel capacity in the high SNR regime. This thesis aims to provide a deep insight into the principles of superposition mapping and to derive guidelines for systems adopting it. In particular, the influence of power allocation on system performance, with respect to both the achievable power efficiency and the supportable bandwidth efficiency, is made clear. Considerable effort is spent on finding code structures that are matched to SM. It is shown that currently prevalent code design concepts, which are mostly derived for coded transmission with bijective uniform mapping, do not really fit superposition mapping, which is often non-bijective and non-uniform. As the main contribution, a novel coding strategy called low-density hybrid-check (LDHC) coding is proposed. LDHC codes are optimal and universally applicable for SM with arbitrary types of power allocation.
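
    A minimal numerical sketch of the central idea (illustrative only; the number of layers and the power allocations are assumed, not the thesis's designs): superimposing several weighted antipodal bits yields a Gaussian-like symbol under equal power allocation, whereas a geometric allocation reproduces a uniform-like constellation.

        import numpy as np

        rng = np.random.default_rng(0)
        L = 8                                                   # number of superimposed bit layers
        bits = 2 * rng.integers(0, 2, size=(100_000, L)) - 1    # antipodal bits in {-1, +1}

        # Equal power allocation: the sum of equal-weight layers is binomial,
        # i.e. Gaussian-like by the central limit theorem (kurtosis approaches 3).
        equal = bits.sum(axis=1) / np.sqrt(L)

        # Geometric power allocation: the layers behave like the bits of a uniform
        # ASK constellation, so the output stays uniform-like (kurtosis near 1.8).
        w = 2.0 ** -np.arange(L)
        geometric = bits @ (w / np.linalg.norm(w))

        def kurtosis(x):
            return np.mean(x**4) / np.mean(x**2) ** 2

        print("kurtosis, equal power:    ", kurtosis(equal))      # ~2.75 for L = 8
        print("kurtosis, geometric power:", kurtosis(geometric))  # ~1.8

    This is why, with a suitable power allocation, a superposition mapper approaches a Gaussian-like output distribution without a separate shaping stage.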

    High Capacity CDMA and Collaborative Techniques

    The thesis investigates new approaches to increase the user capacity and improve the error performance of Code Division Multiple Access (CDMA) by employing adaptive interference cancellation and collaborative spreading and space diversity techniques. Collaborative Coding Multiple Access (CCMA) is also investigated as a separate technique and in combination with CDMA. The advantages and shortcomings of CDMA and CCMA are analysed, and new techniques for both the uplink and downlink are proposed and evaluated. The multiple access interference (MAI) problem in the uplink of CDMA is investigated first. The practical issues of multiuser detection (MUD) techniques are reviewed and a novel blind adaptive approach to interference cancellation (IC) is proposed. It exploits the constant modulus (CM) property of digital signals to blindly suppress interference during the despreading process and to obtain amplitude estimates with minimum mean squared error for use in the cancellation stages. Two new blind adaptive receiver designs employing successive and parallel interference cancellation architectures using the CM algorithm (CMA), referred to as ‘CMA-SIC’ and ‘BA-PIC’ respectively, are presented. These techniques are shown to offer near single-user performance for a large number of users, and to increase the user capacity approximately twofold compared with conventional IC receivers. The spectral efficiency analysis of the techniques, based on the output signal-to-interference-and-noise ratio (SINR), also shows significant gains in data rate. Furthermore, an effective and low-complexity blind adaptive subcarrier combining (BASC) technique using a simple gradient-descent-based algorithm is proposed for Multicarrier-CDMA. It suppresses MAI without any knowledge of the channel amplitudes and supports a larger number of users compared with the equal gain and maximum ratio combining techniques normally used in practice. New user collaborative schemes are proposed and analysed, theoretically and by simulations, in different channel conditions to achieve spatial diversity for the uplink of CCMA and CDMA. First, a simple transmitter diversity technique and its equivalent user collaborative diversity technique for CCMA are designed and analysed. Next, a new user collaborative scheme with successive interference cancellation for the uplink of CDMA, referred to as collaborative SIC (C-SIC), is investigated to reduce MAI and achieve improved diversity. To further improve the performance of C-SIC under high system loading conditions, a Collaborative Blind Adaptive SIC (C-BASIC) scheme is proposed. It is shown to minimize the residual MAI, leading to improved user capacity and a more robust system. It is known that collaborative diversity schemes incur a loss in throughput due to the need for orthogonal time/frequency slots for relaying the source's data. To address this problem, a novel near-unity-rate scheme, referred to as bandwidth-efficient collaborative diversity (BECD), is finally proposed and evaluated for CDMA. Under this scheme, pairs of users share a single spreading sequence to exchange and forward their data, employing simple superposition or space-time encoding methods. At the receiver, collaborative joint detection is performed to separate each pair of users' data. It is shown that the scheme can achieve full diversity gain at no extra bandwidth cost as the inter-user channel SNR becomes high. A novel approach of ‘User Collaboration’ is introduced to increase the user capacity of CDMA for both the downlink and uplink.
    First, a collaborative group spreading technique for the downlink of an overloaded CDMA system is introduced. It allows more than one user belonging to the same group to share a single spreading sequence, and is referred to as Collaborative Spreading CDMA downlink (CS-CDMA-DL). In this technique, T-user collaborative coding is used within each group to form a composite codeword signal of the users, and a single orthogonal sequence is then used for the group. At each user's receiver, decoding of the composite codeword is carried out to extract the user's own information while maintaining high SINR performance. To improve the bit error performance of CS-CDMA-DL in Rayleigh fading conditions, a Collaborative Space-Time Spreading (C-STS) technique is proposed by combining the collaborative coding multiple access and space-time coding principles. A new scheme for the uplink of CDMA using the ‘User Collaboration’ approach, referred to as CS-CDMA-UL, is presented next. When the users' channels are independent (uncorrelated), significantly higher user capacity can be achieved by grouping multiple users to share the same spreading sequence and performing MUD on a per-group basis, followed by low-complexity ML decoding at the receiver. This approach is shown to support a much higher number of users than the number of available sequences while also maintaining low receiver complexity. For improved performance under highly correlated channel conditions, T-user collaborative coding is also investigated within the CS-CDMA-UL system.
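
    As a pointer to how the constant modulus property can drive a blind despreader (a toy real-valued sketch, not the thesis's CMA-SIC or BA-PIC receivers; the spreading factor, codes, powers and step size are assumed for illustration), the following adapts a despreading filter with a stochastic-gradient CMA update:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 16                                   # spreading factor (assumed)
        c1 = rng.choice([-1.0, 1.0], N)          # desired user's spreading code
        c2 = rng.choice([-1.0, 1.0], N)          # interfering user's spreading code
        mu = 5e-4                                # step size

        w = c1 / N                               # start from the matched filter of user 1
        for _ in range(20_000):
            b1, b2 = rng.choice([-1.0, 1.0], 2)  # BPSK symbols of the two users
            x = b1 * c1 + b2 * c2 + 0.3 * rng.standard_normal(N)   # received chip vector
            y = w @ x                            # despreader output
            w -= mu * (y * y - 1.0) * y * x      # CMA(2,2) update: push |y|^2 towards 1

        # After adaptation the filter typically keeps a near-unit response to the desired
        # code and a small response to the interfering code, i.e. the MAI is suppressed.
        print(w @ c1, w @ c2)

    Which user such a blind filter locks onto depends on the initialisation and the relative powers; the thesis's receivers combine this kind of blind adaptation with successive or parallel cancellation stages.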

    Reliable Software for Unreliable Hardware - A Cross-Layer Approach

    A novel cross-layer reliability analysis, modeling, and optimization approach is proposed in this thesis. It leverages multiple layers of the system design abstraction (i.e., hardware, compiler, system software, and application program) to exploit the available reliability-enhancing potential at each system layer and to exchange this information across multiple system layers.

    Error-Correction Coding and Decoding: Bounds, Codes, Decoders, Analysis and Applications

    Coding; Communications; Engineering; Networks; Information Theory; Algorithm

    Network information theory for classical-quantum channels

    Network information theory is the study of communication problems involving multiple senders, multiple receivers and intermediate relay stations. The purpose of this thesis is to extend the main ideas of classical network information theory to the study of classical-quantum channels. We prove coding theorems for quantum multiple access channels, quantum interference channels, quantum broadcast channels and quantum relay channels. A quantum model for a communication channel more accurately describes the channel's ability to transmit information. By using physically faithful models for the channel outputs and the detection procedure, we obtain better communication rates than would be possible using a classical strategy. In this thesis, we are interested in the transmission of classical information, so we restrict our attention to the study of classical-quantum channels. These are channels with classical inputs and quantum outputs, and so the coding theorems we present will use classical encoding and quantum decoding. We study the asymptotic regime where many copies of the channel are used in parallel, and the uses are assumed to be independent. In this context, we can exploit information-theoretic techniques to calculate the maximum rates for error-free communication for any channel, given the statistics of the noise on that channel. These theoretical bounds can be used as a benchmark to evaluate the rates achieved by practical communication protocols. Most of the results in this thesis consider classical-quantum channels with finite-dimensional output systems, which are analogous to classical discrete memoryless channels. In the last chapter, we show some applications of our results to a practical optical communication scenario, in which the information is encoded in continuous quantum degrees of freedom, analogous to classical channels with Gaussian noise. Comment: Ph.D. Thesis, McGill University, School of Computer Science, July 2012, 223 pages, 18 figures, 36 TikZ diagrams.
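
    For background (a standard result quoted here for reference, not a contribution of the thesis), the classical capacity of a single classical-quantum channel x ↦ ρ_x is obtained by maximising the Holevo information over input distributions:

        % Holevo information of a cq channel x -> rho_x under input distribution p(x)
        \chi\big(\{p(x), \rho_x\}\big)
            = S\Big(\sum_x p(x)\,\rho_x\Big) - \sum_x p(x)\,S(\rho_x),
        \qquad S(\rho) = -\operatorname{Tr}\big(\rho \log \rho\big),

        % Holevo--Schumacher--Westmoreland capacity of the cq channel
        C = \max_{p(x)} \chi\big(\{p(x), \rho_x\}\big).

    The coding theorems for quantum multiple access, interference, broadcast and relay channels studied in the thesis generalise this single-user Holevo-Schumacher-Westmoreland benchmark to network settings.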