
    High rate space time code with linear decoding complexity for multiple transmitting antennas

    The multipath nature of the wireless channel results in a superposition of the signals of each path at the receiver. This can lead to either constructive or destructive interference. Strong destructive interference is frequently referred to as a deep fade and may result in temporary failure of communication due to the severe drop in the channel's signal-to-noise ratio (SNR). To avoid this situation, signal diversity may be introduced. With more than one antenna at the transmitter and/or receiver, forming a Multiple-Input Multiple-Output (MIMO) channel, spatial diversity can be employed to overcome the fading problem. Space-time block codes (STBC) are well suited to the MIMO channel. Each type of STBC is designed to optimize a different criterion, such as rate or diversity, while other characteristics of a code are its error performance and decoding computational complexity. The Orthogonal STBC (OSTBC) family of codes is known to achieve full diversity as well as a very simple implementation of the Maximum Likelihood (ML) decoder. However, it has been proven that, with complex symbol constellations, a full-rate code cannot be achieved when the number of transmitting antennas is larger than two. Quasi-OSTBC are full-rate codes, but at the cost of more complex decoding, and in general they do not achieve full diversity. In this work, new techniques for OSTBC transmission/decoding are explored, such that a full-rate code can be transmitted and decoded with linear complexity. The Row Elimination Method (REM) for OSTBC transmission is introduced, which involves the transmission of only part of the original OSTBC codeword, resulting in a full-rate code termed Semi-Orthogonal STBC (SSTBC). A novel decoding scheme is presented, such that the SSTBC decoding computational complexity remains linear although the transmitted codeword is no longer orthogonal. A new OSTBC, which complies with the new scheme's requirements, is presented for any number of transmit antennas. The performance of the new scheme is studied under various settings, such as systems with limited feedback and multiple antennas at the receiver.
    The general decoding techniques presented for STBC assume perfect channel knowledge at the receiver. It has been shown that the performance of any STBC system is severely degraded by partial channel state information resulting from imperfect channel estimation. To minimize the performance loss, one may lengthen the training sequences used for channel estimation, which inevitably results in some rate loss. In addition, complex decoding schemes can be used at the receiver to jointly decode the data while enhancing the channel estimate. This work suggests applying adaptive techniques to mitigate the performance loss without the penalty of additional rate loss or complex decoding. Namely, the bootstrap algorithm is used to further refine the received signals, resulting in better effective rate and performance in the presence of channel estimation errors. Modified implementations of the bootstrap's weight-calculation method are also presented, to improve the convergence rate of the algorithm as well as to maintain a very low computational burden.
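    The REM/SSTBC construction itself is not spelled out in the abstract above. As background, the sketch below shows the classic two-antenna Alamouti code, the simplest member of the OSTBC family referred to here, whose orthogonal structure is what makes symbol-by-symbol (linear-complexity) ML detection possible; the constellation, channel model, noise level, and variable names are illustrative assumptions, not the thesis's scheme.

```python
import numpy as np

# Minimal sketch: Alamouti OSTBC over 2 Tx antennas / 1 Rx antenna, with
# linear-complexity ML detection. All parameters are illustrative.
rng = np.random.default_rng(0)

# Two QPSK symbols, sent over two time slots from two antennas.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s1, s2 = rng.choice(qpsk, size=2)

# Flat Rayleigh fading coefficients, assumed perfectly known at the receiver.
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)

# Alamouti codeword: slot 1 transmits (s1, s2), slot 2 transmits (-s2*, s1*).
noise = 0.1 * (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
r1 = h1 * s1 + h2 * s2 + noise[0]
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]

# Linear combining: orthogonality decouples the symbols, so each one is
# detected independently -- this is the source of the linear ML complexity.
gain = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain

# Symbol-by-symbol nearest-neighbour (ML) decisions over the QPSK alphabet.
detect = lambda z: qpsk[np.argmin(np.abs(qpsk - z))]
print(detect(s1_hat) == s1, detect(s2_hat) == s2)
```

    The full-rate SSTBC/REM scheme in the thesis transmits only part of such an orthogonal codeword and restores linear decodability with a modified receiver; this toy example does not attempt to reproduce that step.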

    On the Derivation of Optimal Partial Successive Interference Cancellation

    The necessity of accurate channel estimation for Successive and Parallel Interference Cancellation is well known. Iterative channel estimation and channel decoding (for instance by means of the Expectation-Maximization algorithm) is particularly important for these multiuser detection schemes in the presence of time-varying channels, where a high density of pilots is necessary to track the channel. This paper presents a method to analytically derive a weighting factor α that improves the efficiency of interference cancellation in the presence of poor channel estimates. Moreover, this weighting factor effectively mitigates the effect of incorrect decisions at the output of the channel decoder. The analysis provides insight into the properties of such an interference cancellation scheme, and the proposed approach significantly increases the effectiveness of Successive Interference Cancellation in the presence of channel estimation errors, which leads to gains of up to 3 dB. Comment: IEEE GLOBECOM 201
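    The analytical derivation of the weighting factor is the paper's contribution and is not reproduced in the abstract, so the sketch below only shows where a scalar weight enters a successive interference cancellation stage operating with an imperfect channel estimate. The two-user BPSK model and the numeric value of alpha are placeholders, not the paper's result.

```python
import numpy as np

# Minimal sketch of partial successive interference cancellation (SIC) with a
# scalar weighting factor alpha. All parameters are illustrative assumptions.
rng = np.random.default_rng(1)
N = 10_000                       # number of symbols

# Two synchronous BPSK users over flat channels (user 1 is the stronger one).
b1 = rng.choice([-1.0, 1.0], N)
b2 = rng.choice([-1.0, 1.0], N)
h1, h2 = 1.0, 0.7                # true channel gains (illustrative)
r = h1 * b1 + h2 * b2 + 0.3 * rng.normal(size=N)

# Imperfect channel estimate for user 1 (models channel-estimation error).
h1_est = h1 + 0.2 * rng.normal()

# Stage 1: tentative hard decisions for the stronger user.
b1_hat = np.sign(r)

# Stage 2: subtract only a weighted fraction of the reconstructed interference.
alpha = 0.8                      # placeholder; the paper derives this value
r_clean = r - alpha * h1_est * b1_hat

# Decode the weaker user from the (partially) cleaned signal.
b2_hat = np.sign(r_clean)
print("user-2 bit error rate:", np.mean(b2_hat != b2))
```

    In the paper, the weighting factor is chosen analytically as a function of the reliability of the channel estimates and decoder decisions; here it is fixed merely to illustrate the partial-cancellation step.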

    Reliable Inference from Unreliable Agents

    Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues to ensure optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and to design strategies that ensure reliable inference in such systems. The inference process can be detection, estimation, or classification, and the components/agents in the system can be sensors and/or humans. The system components can be unreliable for a variety of reasons: faulty sensors, security attacks causing sensors to send falsified information, or unskilled human workers sending imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure reliable overall inference. In the first part of this thesis, we study the case when only sensors are present in the system, referred to as sensor networks. For sensor networks, the presence of malicious sensors, referred to as Byzantines, is considered. Byzantines are sensors that inject false information into the system. In such systems, the effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies. Game-theoretic formulations are explored to analyze two-player interactions. Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These mitigation schemes are of two kinds: Byzantine identification schemes and Byzantine-tolerant schemes. Using learning-based techniques, Byzantine identification schemes are designed that learn the identity of Byzantines in the network and use this information to improve system performance. When such schemes are not possible, Byzantine-tolerant schemes using error-correcting codes are developed that tolerate the effect of Byzantines and maintain good performance in the network. Error-correcting codes help in correcting the erroneous information from these Byzantines and thereby counter their attack. The second line of research in this thesis considers humans-only networks, referred to as human networks. A similar research strategy is adopted for human networks, where the effect of unskilled humans sharing beliefs with a central observer called the CEO is analyzed, and the loss in performance due to the presence of such unskilled humans is characterized. This problem falls within the family of problems in the information theory literature referred to as the CEO problem, but for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit where the number of agents L and the sum rate R tend to infinity. An intermediate regime of performance between the exponential behavior in discrete CEO problems and the 1/R behavior in Gaussian CEO problems is established. In short, sharing beliefs (uniform) is fundamentally easier in terms of convergence rate than sharing measurements (Gaussian), but sharing decisions is even easier (discrete). Besides theoretical analysis, experimental results are reported for experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network.
    The act of fusing decisions from multiple agents is observed for humans, and the behavior is statistically modeled using hierarchical Bayesian models. The implications of such modeling for the design of large human-machine systems are discussed. Furthermore, an error-correcting-code based scheme is proposed to improve system performance in the presence of unreliable humans in the inference process. For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of the proposed approach over the majority-voting based approach are highlighted using simulated and real datasets. In the final part of the thesis, a human-machine inference framework is developed in which humans and machines interact to perform complex tasks in a faster and more efficient manner. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is extremely important for current scenarios where humans and machines constantly interact with each other to perform even the simplest of tasks. While machines perform best in some tasks, humans still give better results in tasks such as identifying new patterns. By using humans and machines together, one can extract complete information about a phenomenon of interest. Such an architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for two cases of human-machine collaboration: machine as a coach and machine as a colleague. For simple systems, we demonstrate tangible performance gains from such a collaboration, which provides design modules for larger and more complex human-machine systems. However, the details of such larger systems need to be further explored.
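    As a small, self-contained illustration of the sensor-network portion of this work (and not the thesis's actual identification or coding algorithms), the sketch below simulates binary decision fusion with a fraction of Byzantine sensors that flip their local decisions, and compares plain majority voting against a crude reliability-weighted rule. The sensor counts, attack model, and weighting heuristic are all assumptions.

```python
import numpy as np

# Minimal sketch: binary hypothesis testing with unreliable (Byzantine) sensors.
# Honest sensors report the truth with probability p_d; Byzantines flip their
# reports. Majority voting is compared against a reliability-weighted rule
# (a stand-in for the learning-based identification schemes in the thesis).
rng = np.random.default_rng(2)
n_sensors, n_trials = 25, 5_000
p_d = 0.8                                   # honest-sensor detection probability
byzantine = rng.random(n_sensors) < 0.3     # roughly 30% Byzantine sensors

truth = rng.integers(0, 2, n_trials)                  # ground-truth bit per trial
correct = rng.random((n_trials, n_sensors)) < p_d     # would a sensor be right?
reports = np.where(correct, truth[:, None], 1 - truth[:, None])
reports[:, byzantine] = 1 - reports[:, byzantine]     # Byzantines flip reports

# Plain majority vote.
maj = (reports.mean(axis=1) > 0.5).astype(int)

# Reliability-weighted vote: weight each sensor by its empirical agreement with
# the majority decision (a crude proxy for learned sensor reliability).
agree = (reports == maj[:, None]).mean(axis=0)
w = np.clip(agree - 0.5, 0.0, None)                   # distrust sub-random sensors
weighted = ((reports * w).sum(axis=1) / w.sum() > 0.5).astype(int)

print("majority accuracy:", np.mean(maj == truth))
print("weighted accuracy:", np.mean(weighted == truth))
```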

    Near-capacity fixed-rate and rateless channel code constructions

    Fixed-rate and rateless channel code constructions are designed to satisfy conflicting design tradeoffs, leading to codes that lend themselves to practical implementation whilst offering a good bit error ratio (BER) and block error ratio (BLER) performance. More explicitly, two novel low-density parity-check (LDPC) code constructions are proposed; the first constitutes a family of quasi-cyclic protograph LDPC codes, which has a Vandermonde-like parity-check matrix (PCM). The second constitutes a specific class of protograph LDPC codes, termed multilevel structured (MLS) LDPC codes. These codes possess a PCM construction that allows the coexistence of pseudo-randomness and a structure requiring reduced memory. More importantly, it is also demonstrated that these benefits accrue without any compromise in the attainable BER/BLER performance. We also present the novel concept of separating multiple users by means of user-specific channel codes, referred to as channel code division multiple access (CCDMA), and provide an example based on MLS LDPC codes. In particular, we circumvent the difficulty of potentially high memory requirements, while ensuring that each user's bits in the CCDMA system are equally protected. With regard to rateless channel coding, we propose a novel family of codes, which we refer to as reconfigurable rateless codes, that are capable not only of varying their code rate but also of adaptively modifying their encoding/decoding strategy according to the near-instantaneous channel conditions. We demonstrate that the proposed reconfigurable rateless codes are capable of shaping their own degree distribution according to the near-instantaneous requirements imposed by the channel, without any explicit channel knowledge at the transmitter. Additionally, a generalised transmit-preprocessing-aided closed-loop downlink multiple-input multiple-output (MIMO) system is presented, in which both the channel coding components and the linear transmit precoder exploit knowledge of the channel state information (CSI). More explicitly, we embed a rateless code in a MIMO transmit preprocessing scheme, in order to attain near-capacity performance across a wide range of channel signal-to-noise ratios (SNRs), rather than only at a specific SNR. The performance of our scheme is further enhanced with the aid of a technique referred to as pilot symbol assisted rateless (PSAR) coding, whereby a predetermined fraction of pilot bits is appropriately interspersed with the original information bits at the channel coding stage, instead of multiplexing pilots at the modulation stage, as in classic pilot symbol assisted modulation (PSAM). We subsequently demonstrate that the PSAR code-aided transmit preprocessing scheme succeeds in gleaning more information from the inserted pilots than the classic PSAM technique, because the pilot bits are not only useful for sounding the channel at the receiver but are also beneficial for significantly reducing the computational complexity of the rateless channel decoder.
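    The Vandermonde-like and MLS constructions themselves are not detailed in the abstract, so the sketch below only illustrates the generic quasi-cyclic protograph mechanism they build upon: a small base matrix is lifted into a full parity-check matrix by replacing each entry with a Z x Z circulant permutation matrix (or an all-zero block). The base matrix and lifting size here are illustrative, not the thesis's codes.

```python
import numpy as np

def circulant(shift, Z):
    """Z x Z identity matrix cyclically shifted to the right by `shift`."""
    return np.roll(np.eye(Z, dtype=np.uint8), shift, axis=1)

def expand(base, Z):
    """Lift a protograph base matrix into a binary quasi-cyclic PCM."""
    rows = []
    for base_row in base:
        blocks = [np.zeros((Z, Z), dtype=np.uint8) if s < 0 else circulant(s, Z)
                  for s in base_row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Illustrative base matrix: -1 marks an all-zero block, other entries are shifts.
base = np.array([[ 0,  1,  2, -1],
                 [ 2, -1,  0,  1],
                 [-1,  2,  1,  0]])
H = expand(base, Z=5)
print(H.shape)        # (15, 20): 20 coded bits, design rate 1/4
print(H.sum(axis=0))  # column weights follow the protograph structure
```

    The structure is what keeps the memory footprint low: the whole PCM is specified by the small base matrix and the lifting size Z, rather than by storing every edge of a randomly constructed graph.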

    Learning to Decode the Surface Code with a Recurrent, Transformer-Based Neural Network

    Quantum error correction is a prerequisite for reliable quantum computation. Towards this goal, we present a recurrent, transformer-based neural network which learns to decode the surface code, the leading quantum error-correction code. Our decoder outperforms state-of-the-art algorithmic decoders on real-world data from Google's Sycamore quantum processor for distance-3 and distance-5 surface codes. At distances up to 11, the decoder maintains its advantage on simulated data with realistic noise including cross-talk, leakage, and analog readout signals, and sustains its accuracy far beyond the 25 cycles it was trained on. Our work illustrates the ability of machine learning to go beyond human-designed algorithms by learning from data directly, highlighting machine learning as a strong contender for decoding in quantum computers.
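    The paper's exact architecture is not given in the abstract, so the PyTorch sketch below only captures the general shape of a recurrent, transformer-based decoder: each round's syndrome bits are embedded per stabilizer, mixed by self-attention, carried across error-correction rounds through a recurrent state, and finally mapped to a logical-error prediction. Every dimension, layer choice, and name below is an assumption, not the published model.

```python
import torch
import torch.nn as nn

class RecurrentTransformerDecoder(nn.Module):
    """Toy recurrent, transformer-based surface-code decoder (illustrative)."""

    def __init__(self, n_stabilizers: int, d_model: int = 64):
        super().__init__()
        self.syndrome_embed = nn.Embedding(2, d_model)           # syndrome bit -> vector
        self.stab_pos = nn.Parameter(torch.randn(n_stabilizers, d_model))
        self.mixer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True)
        self.rnn = nn.GRUCell(d_model, d_model)                   # state across rounds
        self.head = nn.Linear(d_model, 1)                         # logical-flip logit

    def forward(self, syndromes: torch.Tensor) -> torch.Tensor:
        # syndromes: (batch, rounds, n_stabilizers) with entries in {0, 1}
        batch, rounds, _ = syndromes.shape
        state = syndromes.new_zeros((batch, self.rnn.hidden_size), dtype=torch.float)
        for t in range(rounds):
            x = self.syndrome_embed(syndromes[:, t]) + self.stab_pos
            x = self.mixer(x)                                     # (batch, stabs, d_model)
            state = self.rnn(x.mean(dim=1), state)                # pool, then recur
        return self.head(state).squeeze(-1)                       # one logit per shot

# Example: a distance-3 surface code has 8 stabilizers; 25 rounds, batch of 4.
model = RecurrentTransformerDecoder(n_stabilizers=8)
syn = torch.randint(0, 2, (4, 25, 8))
print(torch.sigmoid(model(syn)).shape)                            # torch.Size([4])
```

    Training such a model would pair simulated (or experimental) syndrome sequences with the known logical outcome and minimise a binary cross-entropy loss; that loop is not shown here.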

    A Critical Review of Physical Layer Security in Wireless Networking

    Wireless networking has kept evolving, with additional features and increasing capacity. Meanwhile, inherent characteristics of wireless networking make it more vulnerable than its wired counterpart. In this thesis we present an extensive and comprehensive review of physical layer security in wireless networking. Different from cryptography, physical layer security, emerging from the information-theoretic assessment of secrecy, can leverage the properties of the wireless channel for security purposes, by either enabling secret communication without the need for keys, or facilitating the key agreement process. Hence we categorize the existing literature into two main branches, namely keyless security and key-based security. We trace the evolution of this area from the early theoretical works on the wiretap channel to its generalizations to more complicated scenarios, including multiple-user, multiple-access and multiple-antenna systems, and introduce not only theoretical results but also practical implementations. We critically and systematically examine the existing knowledge by analyzing the fundamental mechanics of each approach. Hence we are able to highlight the advantages and limitations of the proposed techniques, as well as their interrelations, and bring insights into future developments of this area.
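    As a concrete anchor for the keyless-security branch discussed above, the standard Gaussian wiretap-channel result (due to Wyner and to Leung-Yan-Cheong and Hellman) expresses the secrecy capacity, per complex channel use, as the excess of the legitimate link's capacity over the eavesdropper's:

$$
C_s \;=\; \Big[\log_2\!\big(1 + \mathrm{SNR}_B\big) \;-\; \log_2\!\big(1 + \mathrm{SNR}_E\big)\Big]^{+},
$$

    where SNR_B and SNR_E are the signal-to-noise ratios at the legitimate receiver and at the eavesdropper, and [x]^+ = max(x, 0). A positive secrecy rate is therefore possible only when the legitimate channel is better than the eavesdropper's, which is the condition that many of the surveyed physical-layer techniques aim to create or exploit.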