
    Distributed space-time block codes for two-hop wireless relay networks

    Recently, the idea of space-time coding has been applied to wireless relay networks, wherein a set of geographically separated relay nodes cooperate to process the signal received from the source and forward it to the destination such that the signal received at the destination appears like a Space-Time Block Code (STBC). Such STBCs, referred to as Distributed Space-Time Block Codes (DSTBCs), are known to offer spatial diversity when appropriately designed. Different classes of DSTBCs can be designed depending primarily on (i) whether the Amplify and Forward (AF) protocol or the Decode and Forward (DF) protocol is employed at the relays and (ii) whether or not the relay nodes are synchronized. In this paper, we present a survey of the problems and results associated with the design of DSTBCs for the following classes of two-hop wireless relay networks: (i) synchronous relay networks with AF protocols, (ii) asynchronous relay networks with AF protocols, (iii) synchronous relay networks with DF protocols, and (iv) asynchronous relay networks with DF protocols.
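    To make the two-hop model concrete, here is a minimal numpy sketch (not taken from the survey) of how two amplify-and-forward relays can present an Alamouti-like STBC to the destination; the Rayleigh fading model, QPSK alphabet, noise level, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# QPSK symbols to send (illustrative alphabet)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s1, s2 = rng.choice(qpsk, 2)

# Hypothetical flat Rayleigh fades: f[i] source->relay i, g[i] relay i->destination
f = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
g = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
sigma = 0.05  # noise standard deviation (illustrative)

def awgn(shape):
    return sigma * (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

# Phase 1: source broadcasts s1, s2 over two slots; each relay observes both
r = np.array([f[i] * np.array([s1, s2]) + awgn(2) for i in range(2)])

# Phase 2 (AF): relay 1 forwards its observations unchanged, relay 2 forwards a
# conjugated/permuted version, so the destination sees an Alamouti-like structure
t1 = r[0]
t2 = np.array([-np.conj(r[1, 1]), np.conj(r[1, 0])])
y = g[0] * t1 + g[1] * t2 + awgn(2)

# Equivalent Alamouti channel: h1 = g1*f1, h2 = g2*conj(f2)
h1, h2 = g[0] * f[0], g[1] * np.conj(f[1])
H = np.array([[h1, -h2], [np.conj(h2), np.conj(h1)]])
z = np.array([y[0], np.conj(y[1])])
est = H.conj().T @ z / (abs(h1) ** 2 + abs(h2) ** 2)  # [s1_hat, conj(s2)_hat]
s1_hat, s2_hat = est[0], np.conj(est[1])

print("sent:     ", s1, s2)
print("estimated:", s1_hat, s2_hat)
```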

    Two-group decodable distributed differential space-time code for wireless relay networks based on SAST codes

    Space-time codes can be implemented in wireless relay networks when all relays cooperate to generate the code at the receiver; in this case, the result is called a distributed space-time code. If the channel response changes very quickly, differential space-time coding is needed to overcome the difficulty of updating the channel state information at the receiver. As a result, the transmitted signal can be demodulated without any knowledge of the channel state information at the relays or the receiver. In this paper, the development of new distributed differential space-time codes with low decoding complexity is considered. The developed codes are designed using semi-orthogonal algebraic space-time (SAST) codes. They work for networks with an even number of relays and have a two-group decodable maximum-likelihood receiver. The performance of the new codes is analyzed via MATLAB simulation, which demonstrates that they outperform both cyclic codes and circulant codes.
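    The differential principle can be illustrated with a toy sketch: each transmitted block is the previous block multiplied by a unitary information matrix, so the receiver detects by comparing consecutive received blocks without any channel knowledge. The sketch below uses a hypothetical 2x2 Alamouti-style unitary codebook, not the paper's SAST-based two-group decodable construction; the channel and noise parameters are made up.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

def alamouti_unitary(a, b):
    """2x2 unitary matrix built from two BPSK symbols (Alamouti structure)."""
    return np.array([[a, -np.conj(b)], [b, np.conj(a)]]) / np.sqrt(2)

# Hypothetical codebook: pairs of BPSK symbols -> 4 unitary information matrices
codebook = [alamouti_unitary(a, b) for a, b in product([1.0, -1.0], repeat=2)]

# Equivalent 2x2 channel (unknown to the receiver), constant over two blocks
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
sigma = 0.05

def noise():
    return sigma * (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)

# Differential encoding: S_t = S_{t-1} @ U_t, starting from a known reference S_0 = I
sent_idx = 2
S_prev = np.eye(2, dtype=complex)
S_curr = S_prev @ codebook[sent_idx]

Y_prev = H @ S_prev + noise()   # previously received block
Y_curr = H @ S_curr + noise()   # current block: Y_curr is approximately Y_prev @ U_t

# Non-coherent detection: no channel state information is used anywhere
detected = int(np.argmin([np.linalg.norm(Y_curr - Y_prev @ V) for V in codebook]))
print("sent:", sent_idx, "detected:", detected)
```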

    DMT Optimality of LR-Aided Linear Decoders for a General Class of Channels, Lattice Designs, and System Models

    The work identifies the first general, explicit, and non-random MIMO encoder-decoder structures that guarantee optimality with respect to the diversity-multiplexing tradeoff (DMT) without employing a computationally expensive maximum-likelihood (ML) receiver. Specifically, the work establishes the DMT optimality of a class of regularized lattice decoders and, more importantly, the DMT optimality of their lattice-reduction (LR)-aided linear counterparts. The results hold for all channel statistics, for all channel dimensions, and, most interestingly, irrespective of the particular lattice code applied. As a special case, it is established that the LLL-based LR-aided linear implementation of the MMSE-GDFE lattice decoder facilitates DMT-optimal decoding of any lattice code at a worst-case complexity that grows at most linearly in the data rate. This represents a fundamental reduction in decoding complexity compared to ML decoding, whose complexity is generally exponential in rate. The generality of the results makes them applicable to a plethora of pertinent communication scenarios such as quasi-static MIMO, MIMO-OFDM, ISI, cooperative-relaying, and MIMO-ARQ channels, in all of which the DMT optimality of the LR-aided linear decoder is guaranteed. The adopted approach yields insight, and motivates further study, into joint transceiver designs with an improved SNR gap to ML decoding. (Comment: 16 pages, 1 figure with 3 subfigures; submitted to the IEEE Transactions on Information Theory.)
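    As a rough illustration of LR-aided linear detection (a plain LLL reduction followed by zero-forcing, not the MMSE-GDFE lattice decoder analyzed in the paper), the sketch below reduces a toy real-valued channel matrix and rounds in the reduced basis; the channel, integer alphabet, and noise level are illustrative assumptions.

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Plain LLL reduction of the columns of B; returns (B_red, T) with B_red = B @ T, T unimodular."""
    B = B.astype(float)
    n = B.shape[1]
    T = np.eye(n, dtype=int)

    def gso(B):
        # Gram-Schmidt orthogonalization with the mu coefficients
        Q = np.zeros_like(B)
        mu = np.zeros((n, n))
        for i in range(n):
            Q[:, i] = B[:, i]
            for j in range(i):
                mu[i, j] = B[:, i] @ Q[:, j] / (Q[:, j] @ Q[:, j])
                Q[:, i] -= mu[i, j] * Q[:, j]
        return Q, mu

    k = 1
    while k < n:
        Q, mu = gso(B)
        for j in range(k - 1, -1, -1):           # size reduction
            q = round(mu[k, j])
            if q != 0:
                B[:, k] -= q * B[:, j]
                T[:, k] -= q * T[:, j]
                Q, mu = gso(B)
        if Q[:, k] @ Q[:, k] >= (delta - mu[k, k - 1] ** 2) * (Q[:, k - 1] @ Q[:, k - 1]):
            k += 1                               # Lovasz condition holds
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]  # swap the two basis vectors
            T[:, [k - 1, k]] = T[:, [k, k - 1]]
            k = max(k - 1, 1)
    return B, T

# LR-aided zero-forcing detection on a toy real-valued channel
rng = np.random.default_rng(2)
n = 4
H = rng.normal(size=(n, n))                      # channel matrix (illustrative)
x = rng.integers(-2, 3, size=n).astype(float)    # integer (PAM-like) data vector
y = H @ x + 0.01 * rng.normal(size=n)

Hr, T = lll_reduce(H)                            # Hr = H @ T has a better-conditioned basis
z_hat = np.rint(np.linalg.solve(Hr, y))          # linear detection + rounding in the reduced basis
x_hat = T @ z_hat                                # map back to the original lattice coordinates
print("sent:    ", x)
print("detected:", x_hat)
```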

    Instantly Decodable Network Coding: From Centralized to Device-to-Device Communications

    From its introduction to its quindecennial, network coding has built a strong reputation for enhancing packet recovery and achieving maximum information flow in both wired and wireless networks. Traditional studies focused on optimizing the throughput of the system by proposing elaborate schemes able to reach the network capacity. With the shift toward distributed computing on mobile devices, performance and complexity both become critical factors that affect the efficiency of a coding strategy. Instantly decodable network coding presents itself as a new paradigm in network coding that trades off these two aspects. This paper reviews instantly decodable network coding schemes by identifying, categorizing, and evaluating various algorithms proposed in the literature. The first part of the manuscript investigates conventional centralized systems, in which all decisions are carried out by a central unit, e.g., a base station. In particular, two successful approaches, known as strict and generalized instantly decodable network coding, are compared in terms of reliability, performance, complexity, and packet selection methodology. The second part considers the use of instantly decodable codes in a device-to-device communication network, in which devices speed up the recovery of the missing packets by exchanging network-coded packets. Although performance improvements come at the cost of increased computational complexity, numerous schemes that succeed from both the performance and complexity viewpoints are identified.
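    The core rule of instantly decodable network coding is that a receiver benefits from an XOR combination only if it is missing exactly one of the packets in it. The toy sketch below, with made-up receiver side information, checks that rule and brute-forces the best combination; it stands in for the far more elaborate selection algorithms surveyed in the paper.

```python
from itertools import combinations

# Hypothetical side information: which source packets each receiver already holds
packets = {1, 2, 3, 4}
has = {
    "rx1": {1, 2},
    "rx2": {2, 3},
    "rx3": {1, 4},
}

def instantly_decodable(coded_set, has_set, all_packets):
    """A receiver decodes an XOR of coded_set instantly iff it is missing exactly one packet in it."""
    missing = (all_packets - has_set) & coded_set
    return len(missing) == 1

def select_idnc(packets, has):
    """Brute-force the XOR combination that is instantly decodable at the most receivers (toy scale)."""
    best, best_count = None, -1
    for r in range(1, len(packets) + 1):
        for combo in combinations(sorted(packets), r):
            count = sum(instantly_decodable(set(combo), h, packets) for h in has.values())
            if count > best_count:
                best, best_count = combo, count
    return best, best_count

combo, count = select_idnc(packets, has)
print(f"broadcast XOR of packets {combo}: instantly decodable at {count} receivers")
```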

    Distributed Space Time Coding for Wireless Two-way Relaying

    We consider the wireless two-way relay channel, in which two-way data transfer takes place between the end nodes with the help of a relay. For the Denoise-And-Forward (DNF) protocol, it was shown by Koike-Akino et al. that adaptively changing the network coding map used at the relay greatly reduces the impact of multiple access interference at the relay. The harmful effect of deep channel fade conditions can be effectively mitigated by proper choice of these network coding maps at the relay. Alternatively, in this paper we propose a Distributed Space Time Coding (DSTC) scheme, which effectively removes most of the deep fade channel conditions at the transmitting nodes themselves, without any CSIT and without any need to adaptively change the network coding map used at the relay. It is shown that the deep fades occur when the channel fade coefficient vector falls in a finite number of vector subspaces of ℂ², which are referred to as the singular fade subspaces. A DSTC design criterion, referred to as the singularity minimization criterion, under which the number of such vector subspaces is minimized, is obtained. Also, a criterion to maximize the coding gain of the DSTC is obtained. Explicit low-decoding-complexity DSTC designs which satisfy the singularity minimization criterion and maximize the coding gain for QAM and PSK signal sets are provided. Simulation results show that at high Signal to Noise Ratio, the DSTC scheme provides large gains when compared to the conventional Exclusive OR network code and performs slightly better than the adaptive network coding scheme proposed by Koike-Akino et al. (Comment: 27 pages, 4 figures; a mistake in the proof of Proposition 3 given in Appendix B corrected.)
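    For orientation, the sketch below shows the conventional baseline the paper compares against: the exclusive-OR network coding map at a denoise-and-forward relay (BPSK, single antennas, and an error-free broadcast phase are simplifying assumptions). The fade states in which symbol pairs with different XOR values become nearly indistinguishable at the relay are exactly the singular fade conditions that the proposed DSTC is designed to remove.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

bpsk = {0: 1.0 + 0j, 1: -1.0 + 0j}

# Hypothetical uplink fades from end nodes A and B to the relay
hA = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
hB = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)

# Multiple access phase: both end nodes transmit simultaneously
bitA, bitB = 1, 0
yR = hA * bpsk[bitA] + hB * bpsk[bitB] + 0.05 * (rng.normal() + 1j * rng.normal())

# Denoise-and-forward with the XOR map: the relay jointly detects the bit pair
# and forwards only the XOR, never the individual bits
pairs = list(product([0, 1], repeat=2))
a_hat, b_hat = min(pairs, key=lambda p: abs(yR - (hA * bpsk[p[0]] + hB * bpsk[p[1]])))
xor_bit = a_hat ^ b_hat

# Broadcast phase (assumed error-free here): each node removes its own bit
recovered_at_A = xor_bit ^ bitA   # should equal bitB
recovered_at_B = xor_bit ^ bitB   # should equal bitA
print("relay XOR:", xor_bit, "A recovers:", recovered_at_A, "B recovers:", recovered_at_B)
```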

    Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy

    We consider a scenario involving computations over a massive dataset stored distributedly across multiple workers, which is at the core of distributed learning algorithms. We propose Lagrange Coded Computing (LCC), a new framework to simultaneously provide (1) resiliency against stragglers that may prolong computations; (2) security against Byzantine (or malicious) workers that deliberately modify the computation for their benefit; and (3) (information-theoretic) privacy of the dataset amidst possible collusion of workers. LCC, which leverages the well-known Lagrange polynomial to create computation redundancy in a novel coded form across workers, can be applied to any computation scenario in which the function of interest is an arbitrary multivariate polynomial of the input dataset, hence covering many computations of interest in machine learning. LCC significantly generalizes prior works to go beyond linear computations. It also enables secure and private computing in distributed settings, improving the computation and communication efficiency of the state of the art. Furthermore, we prove the optimality of LCC by showing that it achieves the optimal tradeoff between resiliency, security, and privacy, i.e., in terms of tolerating the maximum number of stragglers and adversaries, and providing data privacy against the maximum number of colluding workers. Finally, we show via experiments on Amazon EC2 that LCC speeds up the conventional uncoded implementation of distributed least-squares linear regression by up to 13.43×, and also achieves a 2.36×–12.65× speedup over the state-of-the-art straggler mitigation strategies.
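    A toy sketch of the Lagrange encoding/decoding idea follows: the data chunks are interpolated into a polynomial, each worker evaluates the polynomial function of interest on one coded evaluation point, and any sufficiently large subset of results lets the master interpolate back. The prime modulus, chunk values, worker count, and straggler pattern are made up, and the added randomness needed for the privacy and security guarantees is omitted.

```python
# Toy Lagrange coded computing over GF(p): f(x) = x^2 + 1 on scalar data chunks.
p = 2_000_003                      # prime modulus (illustrative)

def f(x):
    return (x * x + 1) % p         # degree-2 polynomial computation of interest

def lagrange_eval(xs, ys, z):
    """Evaluate the unique polynomial through points (xs[i], ys[i]) at z, over GF(p)."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (z - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

# Dataset split into k chunks (scalars here for simplicity)
X = [17, 42, 99, 123]
k = len(X)
alphas = list(range(1, k + 1))                 # interpolation points for the data
n_workers = 10                                 # extra workers add straggler resilience
betas = list(range(k + 1, k + 1 + n_workers))  # distinct evaluation points for the workers

# Encoding: worker j receives u(beta_j), where u is the Lagrange polynomial with u(alpha_i) = X_i
encoded = [lagrange_eval(alphas, X, b) for b in betas]

# Each worker computes f on its coded share; pretend the last 3 workers straggle
results = {b: f(e) for b, e in zip(betas, encoded)}
returned = dict(list(results.items())[:7])     # any deg(f)*(k-1)+1 = 7 responses suffice

# Decoding: interpolate f(u(z)) from the returned points and read off f(X_i) = f(u(alpha_i))
xs, ys = list(returned.keys()), list(returned.values())
decoded = [lagrange_eval(xs, ys, a) for a in alphas]
print("decoded:", decoded)
print("direct: ", [f(x) for x in X])           # matches the uncoded computation
```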