
    MIMO Transceivers With Decision Feedback and Bit Loading: Theory and Optimization

    This paper considers MIMO transceivers with linear precoders and decision feedback equalizers (DFEs), with bit allocation at the transmitter. Zero-forcing (ZF) is assumed. Considered first is the minimization of transmitted power, for a given total bit rate and a specified set of error probabilities for the symbol streams. The precoder and DFE matrices are optimized jointly with bit allocation. It is shown that the generalized triangular decomposition (GTD) introduced by Jiang, Li, and Hager offers an optimal family of solutions. The optimal linear transceiver (which has a linear equalizer rather than a DFE) with optimal bit allocation is a member of this family. This shows formally that, under optimal bit allocation, linear and DFE transceivers achieve the same minimum power. The DFE transceiver using the geometric mean decomposition (GMD) is another member of this optimal family, and is such that optimal bit allocation yields identical bits for all symbol streams (so no bit allocation is necessary) when the specified error probabilities are identical for all streams. The QR-based system used in VBLAST is yet another member of the optimal family and is particularly well suited when limited feedback is allowed from receiver to transmitter. Two other optimization problems are then considered: a) minimization of power for a specified set of bit rates and error probabilities (the QoS problem), and b) maximization of bit rate for a fixed set of error probabilities and power. It is shown in both cases that the GTD yields an optimal family of solutions.
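
    As a minimal numeric illustration of the bit-loading identity behind this claim (an assumed setup, not the paper's algorithm), the Python sketch below compares optimal bit loading over the SVD subchannels of a random channel with uniform loading over GMD subchannels, whose gains all equal the geometric mean of the singular values. It uses the high-rate QAM approximation P_k ≈ Gamma * 2^b_k / g_k^2 per stream with an assumed common SNR gap Gamma; the stream count, total rate, and gap are placeholder values.

        import numpy as np

        rng = np.random.default_rng(0)
        M, B, gamma = 4, 24, 4.0   # streams, total bits per vector symbol, SNR gap (assumed values)

        H = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
        g = np.linalg.svd(H, compute_uv=False)              # SVD subchannel gains

        # Optimal real-valued loading: b_k = B/M + log2(g_k^2) - mean_k log2(g_k^2).
        b = B / M + np.log2(g**2) - np.mean(np.log2(g**2))
        P_svd = np.sum(gamma * 2.0**b / g**2)

        # GMD subchannel gains all equal the geometric mean of the singular values,
        # so the optimal loading is uniform: B/M bits on every stream.
        g_gmd = np.exp(np.mean(np.log(g)))
        P_gmd = M * gamma * 2.0**(B / M) / g_gmd**2

        print("per-stream bits (SVD):", np.round(b, 2))
        print("total power, SVD + optimal loading:", P_svd)
        print("total power, GMD + uniform loading:", P_gmd)   # matches P_svd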

    Generalized Triangular Decomposition in Transform Coding

    A general family of optimal transform coders (TCs) is introduced here based on the generalized triangular decomposition (GTD) developed by Jiang, Li, and Hager. This family includes the Karhunen-Loeve transform (KLT) and the generalized version of the prediction-based lower triangular transform (PLT) introduced by Phoong and Lin as special cases. The coding gain of the entire family, with optimal bit allocation, is equal to that of the KLT and the PLT. Even though the original PLT introduced by Phoong and Lin is not applicable to vectors that are not blocked versions of scalar wide-sense stationary processes, the GTD-based family includes members that are natural extensions of the PLT, and therefore also enjoy the so-called MINLAB structure of the PLT, which has the unit noise-gain property. Other special cases of the GTD-TC are the geometric mean decomposition (GMD) and the bidiagonal decomposition (BID) transform coders. The GMD-TC in particular has the property that the optimum bit allocation is a uniform allocation; this is because all its transform domain coefficients have the same variance, implying that the dynamic ranges of the coefficients to be quantized are identical.
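
    As a small, hedged illustration (it builds only the KLT member of the family, not the GTD construction itself), the sketch below forms the KLT of an assumed AR(1) source, applies the standard high-rate optimal bit allocation, and evaluates the usual transform coding gain, i.e. the ratio of the arithmetic to the geometric mean of the transform-coefficient variances. The GMD-based coder described above attains the same gain while making all coefficient variances equal, so its optimal allocation is uniform. The block size, correlation coefficient, and bit budget are placeholder values.

        import numpy as np

        M, rho = 8, 0.9                                     # block size and AR(1) correlation (assumed)
        Rxx = rho ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))  # Toeplitz covariance

        # KLT: the eigenvectors of the covariance decorrelate the block.
        eigvals, klt = np.linalg.eigh(Rxx)                  # columns of klt form the KLT basis
        var = eigvals[::-1]                                 # coefficient variances, descending

        # High-rate optimal allocation for an average of b_avg bits per coefficient.
        b_avg = 3.0
        bits = b_avg + 0.5 * (np.log2(var) - np.mean(np.log2(var)))

        coding_gain = np.mean(var) / np.exp(np.mean(np.log(var)))   # arithmetic / geometric mean
        print("bit allocation:", np.round(bits, 2))
        print("KLT coding gain:", coding_gain, "=", 10 * np.log10(coding_gain), "dB")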

    Multiple Beamforming with Perfect Coding

    Perfect Space-Time Block Codes (PSTBCs) achieve full diversity, full rate, nonvanishing constant minimum determinant, uniform average transmitted energy per antenna, and good shaping. However, their high decoding complexity is a critical issue in practice. When the Channel State Information (CSI) is available at both the transmitter and the receiver, Singular Value Decomposition (SVD) is commonly applied to a Multiple-Input Multiple-Output (MIMO) system to enhance the throughput or the performance. In this paper, two novel techniques, Perfect Coded Multiple Beamforming (PCMB) and Bit-Interleaved Coded Multiple Beamforming with Perfect Coding (BICMB-PC), are proposed, employing both PSTBCs and SVD, without and with channel coding, respectively. With CSI at the transmitter (CSIT), the decoding complexity of PCMB is substantially reduced compared to a MIMO system employing PSTBC alone, providing a new perspective on the benefit of CSIT. In particular, because of a special property of the generator matrices, PCMB provides much lower decoding complexity than the state-of-the-art SVD-based uncoded technique in dimensions 2 and 4. Similarly, the decoding complexity of BICMB-PC is much lower than that of the state-of-the-art SVD-based coded technique in these two dimensions, and the complexity gain is greater than in the uncoded case. Moreover, these complexity reductions are achieved with only negligible or modest loss in performance.
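
    For context, the sketch below shows only the plain SVD-based multiple beamforming step that PCMB and BICMB-PC build on (an assumed 4x4 setup; it does not implement the perfect space-time codes or the proposed detectors). With full CSI at both ends, precoding with V and combining with U^H turns the MIMO channel into parallel subchannels whose gains are the singular values.

        import numpy as np

        rng = np.random.default_rng(1)
        Nt = Nr = 4                                         # antennas (assumed)

        H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
        U, s, Vh = np.linalg.svd(H)

        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
        x = rng.choice(qpsk, size=Nt)                       # one QPSK symbol per stream

        y = H @ (Vh.conj().T @ x)                           # transmit precoding with V
        r = U.conj().T @ y                                  # receive combining with U^H

        # Each stream now sees a scalar (noise-free here) subchannel: r_k = s_k * x_k.
        print(np.allclose(r, s * x))                        # True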

    Differential Inequalities in Multi-Agent Coordination and Opinion Dynamics Modeling

    Distributed algorithms for multi-agent coordination have attracted substantial attention from the research community; the simplest and most thoroughly studied of them are consensus protocols in the form of differential or difference equations over general time-varying weighted graphs. These graphs are usually characterized algebraically by their associated Laplacian matrices. Network algorithms with similar algebraic graph-theoretic structure, referred to in this paper as Laplacian-type, also arise in other related multi-agent control problems, such as aggregation and containment control, target surrounding, distributed optimization, and modeling of opinion evolution in social groups. In spite of their similarities, each of these algorithms has often been studied using separate mathematical techniques. In this paper, a novel approach is offered that provides a unified and elegant way to examine many Laplacian-type algorithms for multi-agent coordination. This approach is based on the analysis of differential or difference inequalities that have to be satisfied by certain "outputs" of the agents (e.g. the distances to the desired set in aggregation problems). Although such inequalities may have many unbounded solutions, under natural graph connectivity conditions all their bounded solutions converge (and even reach consensus), entailing the convergence of the corresponding distributed algorithms. In the theory of differential equations, the absence of bounded non-convergent solutions is referred to as the equation's dichotomy. In this paper, we establish dichotomy criteria for Laplacian-type differential and difference inequalities and show that these criteria enable one to extend a number of recent results concerned with Laplacian-type algorithms for multi-agent coordination and the modeling of opinion formation in social groups.
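
    A minimal sketch of the simplest Laplacian-type algorithm mentioned above, assuming a fixed undirected path graph (the paper treats general time-varying graphs through differential and difference inequalities): a discrete-time consensus iteration x <- x - eps * L x, whose bounded trajectories converge to the average of the initial states.

        import numpy as np

        # Adjacency matrix of a 5-node path graph (assumed example topology).
        A = np.zeros((5, 5))
        for i in range(4):
            A[i, i + 1] = A[i + 1, i] = 1.0

        L = np.diag(A.sum(axis=1)) - A                      # graph Laplacian
        eps = 0.3                                           # step size below 1/(max degree) for stability

        x = np.array([4.0, -1.0, 0.5, 2.0, -3.0])           # initial agent states
        for _ in range(200):
            x = x - eps * (L @ x)                           # each agent moves toward its neighbours

        print(np.round(x, 4))                               # every entry is close to the initial average 0.5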

    Virtual Machines Embedding for Cloud PON AWGR and Server Based Data Centres

    In this study, we investigate the embedding of various cloud applications in PON AWGR and Server Based Data Centres.

    Sparse Recovery from Combined Fusion Frame Measurements

    Sparse representations have emerged as a powerful tool in signal and information processing, culminating in the success of new acquisition and processing techniques such as Compressed Sensing (CS). Fusion frames are rich new signal representation methods that use collections of subspaces, instead of vectors, to represent signals. This work combines these exciting fields to introduce a new sparsity model for fusion frames. Signals that are sparse under the new model can be compressively sampled and uniquely reconstructed in ways similar to sparse signals using standard CS. The combination provides a promising new set of mathematical tools and signal models useful in a variety of applications. With the new model, a sparse signal has energy in very few of the subspaces of the fusion frame, although it does not need to be sparse within each of the subspaces it occupies. This sparsity model is captured using a mixed l1/l2 norm for fusion frames. A signal sparse in a fusion frame can be sampled using very few random projections and exactly reconstructed via a convex optimization that minimizes this mixed l1/l2 norm. The provided sampling conditions generalize the coherence and RIP conditions used in standard CS theory, and it is demonstrated that they are sufficient to guarantee sparse recovery of any signal that is sparse in our model. Moreover, a probabilistic analysis is provided using a stochastic model on the sparse signal, showing that under very mild conditions the probability of recovery failure decays exponentially with increasing dimension of the subspaces.
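
    As a hedged illustration of recovery under a block-sparsity model in the spirit of the mixed l1/l2 objective (assumptions: the "subspaces" are taken to be disjoint coordinate blocks, and recovery uses a proximal-gradient / group-lasso iteration rather than the paper's exact formulation; all dimensions are placeholders), the sketch below recovers a block-sparse signal from a modest number of random projections.

        import numpy as np

        rng = np.random.default_rng(2)
        n_blocks, block_dim, m = 20, 4, 48                  # 80-dim signal, 48 measurements (assumed)
        n = n_blocks * block_dim

        # Signal with energy in only 2 of the 20 blocks (dense inside each occupied block).
        x_true = np.zeros(n)
        for blk in rng.choice(n_blocks, size=2, replace=False):
            x_true[blk * block_dim:(blk + 1) * block_dim] = rng.standard_normal(block_dim)

        A = rng.standard_normal((m, n)) / np.sqrt(m)        # random projections
        y = A @ x_true

        # Proximal gradient (ISTA) with block soft-thresholding, the prox of lam * sum_b ||x_b||_2.
        lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(n)
        for _ in range(2000):
            z = (x - step * (A.T @ (A @ x - y))).reshape(n_blocks, block_dim)
            norms = np.linalg.norm(z, axis=1, keepdims=True)
            x = (np.maximum(1 - step * lam / np.maximum(norms, 1e-12), 0) * z).ravel()

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))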