
    On the Capacity of Vector Gaussian Channels With Bounded Inputs

    The capacity of a deterministic multiple-input multiple-output channel under peak and average power constraints is investigated. For the identity channel matrix, the approach of Shamai et al. is generalized to higher-dimensional settings to derive the necessary and sufficient conditions for the optimal input probability density function. This approach avoids relying on the identity theorem for holomorphic functions of several complex variables, which seems to fail in multi-dimensional scenarios. It is proved that the support of the capacity-achieving distribution is a finite set of hyper-spheres with mutually independent phases and amplitude in the spherical domain. Subsequently, it is shown that when the average power constraint is relaxed, if the number of antennas is large enough, the capacity has a closed-form solution and constant-amplitude signaling at the peak power achieves it. Moreover, it is observed that in a discrete-time memoryless Gaussian channel, the average-power-constrained capacity, which is achieved by a Gaussian input distribution, can be closely approached by an input whose magnitude has finite discrete support. Finally, we investigate some upper and lower bounds for the capacity of the non-identity channel matrix and evaluate their performance as a function of the condition number of the channel.
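
    To make the last observation concrete, here is a minimal numerical sketch (not taken from the paper): it computes the mutual information of a real scalar AWGN channel for an equiprobable four-point input normalized to unit power and compares it with the Gaussian-input capacity 0.5 log2(1 + P). The constellation and its probabilities are arbitrary illustrative choices, not the paper's optimal mass points.

        import numpy as np

        def awgn_mi_discrete(points, probs, noise_var=1.0):
            """Mutual information (bits/use) of Y = X + Z, Z ~ N(0, noise_var), for a
            discrete real input X on `points` with probabilities `probs`.
            I(X;Y) = h(Y) - h(Z), with h(Y) evaluated by a Riemann sum."""
            points = np.asarray(points, float)
            probs = np.asarray(probs, float)
            y = np.linspace(-12, 12, 4001)
            dy = y[1] - y[0]
            # Output density: a Gaussian mixture centered at the input mass points.
            p_y = np.exp(-(y[:, None] - points) ** 2 / (2 * noise_var)) @ probs
            p_y /= np.sqrt(2 * np.pi * noise_var)
            h_y = -np.sum(p_y * np.log2(p_y + 1e-300)) * dy
            h_z = 0.5 * np.log2(2 * np.pi * np.e * noise_var)
            return h_y - h_z

        # Equiprobable 4-point input scaled to average power P = 1 (illustrative only).
        pts = np.array([-3.0, -1.0, 1.0, 3.0])
        pts /= np.sqrt(np.mean(pts ** 2))
        print("discrete-input rate:", awgn_mi_discrete(pts, np.full(4, 0.25)))
        print("Gaussian capacity  :", 0.5 * np.log2(1 + np.mean(pts ** 2)))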

    A digital interface for Gaussian relay and interference networks: Lifting codes from the discrete superposition model

    For every Gaussian network, there exists a corresponding deterministic network called the discrete superposition network. We show that this discrete superposition network provides a near-optimal digital interface for operating a class consisting of many Gaussian networks, in the sense that any code for the discrete superposition network can be naturally lifted to a corresponding code for the Gaussian network while achieving a rate that is no more than a constant number of bits less than the rate it achieves on the discrete superposition network. This constant depends only on the number of nodes in the network and not on the channel gains or SNR. Moreover, the capacities of the two networks are within a constant of each other, again independent of channel gains and SNR. We show that the class of Gaussian networks for which this interface property holds includes relay networks with a single source-destination pair, interference networks, multicast networks, and the counterparts of these networks with multiple transmit and receive antennas. The code for the Gaussian relay network can be obtained from any code for the discrete superposition network simply by pruning it. This lifting scheme establishes that the superposition model can indeed potentially serve as a strong surrogate for designing codes for Gaussian relay networks. We present similar results for the K × K Gaussian interference network, MIMO Gaussian interference networks, MIMO Gaussian relay networks, and multicast networks, with the constant gap depending additionally on the number of antennas in the case of MIMO networks. Comment: Final version.
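
    As a rough single-link analogy to the constant-gap guarantee described above (this is not the paper's discrete superposition model), the Gaussian capacity log2(1 + SNR) and the deterministic-style approximation max(0, log2 SNR) stay within one bit of each other at every SNR; the sketch below simply tabulates that gap.

        import numpy as np

        # Toy illustration of a gap bounded by a constant independent of SNR
        # (an analogy only; the paper's gap depends on the number of network nodes).
        snr_db = np.arange(-10, 41, 5)
        snr = 10.0 ** (snr_db / 10)
        gaussian = np.log2(1 + snr)
        approx = np.maximum(0.0, np.log2(snr))
        for d, g, a in zip(snr_db, gaussian, approx):
            print(f"SNR {d:>3} dB: {g:6.3f} vs {a:6.3f} bits, gap {g - a:5.3f}")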

    On the Gaussian Many-to-One X Channel

    In this paper, the Gaussian many-to-one X channel, which is a special case of the general multiuser X channel, is studied. In the Gaussian many-to-one X channel, communication links exist between all transmitters and one of the receivers, along with a communication link between each transmitter and its corresponding receiver. As per the X channel assumption, transmission of messages is allowed on all the links of the channel. This communication model is different from the corresponding many-to-one interference channel (IC). Transmission strategies which involve using Gaussian codebooks and treating interference from a subset of transmitters as noise are formulated for this channel. Sum-rate is used as the criterion of optimality for evaluating the strategies. Initially, a 3 × 3 many-to-one X channel is considered and three transmission strategies are analyzed. The first two strategies are shown to achieve sum-rate capacity under certain channel conditions. For the third strategy, a sum-rate outer bound is derived and the gap between the outer bound and the achieved rate is characterized. These results are later extended to the K × K case. Next, a region in which the many-to-one X channel can be operated as a many-to-one IC without loss of sum-rate is identified. Further, in this region, it is shown that using Gaussian codebooks and treating interference as noise achieves a rate point that is within K/2 - 1 bits of the sum-rate capacity. Subsequently, some implications of these results for the Gaussian many-to-one IC are discussed. Transmission strategies for the many-to-one IC are formulated and channel conditions under which the strategies achieve sum-rate capacity are obtained. A region where the sum-rate capacity can be characterized to within K/2 - 1 bits is also identified. Comment: Submitted to IEEE Transactions on Information Theory; revised and updated version of the original draft.
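
    A minimal sketch of the "Gaussian codebooks, treat interference as noise" computation for a many-to-one Gaussian channel follows. The powers and cross gains are assumed for illustration and are not the channel conditions identified in the paper; direct gains are normalized to one.

        import numpy as np

        def tin_sum_rate(powers, cross_gains):
            """Sum rate (bits/use) of a real Gaussian many-to-one channel when all
            transmitters use Gaussian codebooks and receiver 1, the only receiver
            that hears every transmitter, treats the interference as noise.
            Direct gains are normalized to 1; cross_gains[i] is the gain from
            transmitter i + 2 into receiver 1."""
            powers = np.asarray(powers, float)
            cross = np.asarray(cross_gains, float)
            interference = np.sum(cross ** 2 * powers[1:])
            r1 = 0.5 * np.log2(1 + powers[0] / (1 + interference))
            others = 0.5 * np.log2(1 + powers[1:])   # interference-free direct links
            return r1 + others.sum()

        powers = [10.0, 10.0, 10.0]      # assumed per-user transmit powers
        cross_gains = [0.4, 0.6]         # assumed cross gains into receiver 1
        print("TIN sum rate      :", tin_sum_rate(powers, cross_gains))
        print("interference-free :", sum(0.5 * np.log2(1 + p) for p in powers))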

    Gaussian Multiple and Random Access in the Finite Blocklength Regime

    This paper presents finite-blocklength achievability bounds for the Gaussian multiple access channel (MAC) and random access channel (RAC) under average-error and maximal-power constraints. Using random codewords uniformly distributed on a sphere and a maximum likelihood decoder, the derived MAC bound on each transmitter's rate matches the MolavianJazi-Laneman bound (2015) in its first- and second-order terms, improving the remaining terms to ½ log n/n + O(1/n) bits per channel use. The result then extends to a RAC model in which neither the encoders nor the decoder knows which of K possible transmitters are active. In the proposed rateless coding strategy, decoding occurs at a time n_t that depends on the decoder's estimate t of the number of active transmitters k. Single-bit feedback from the decoder to all encoders at each potential decoding time n_i, i ≤ t, informs the encoders when to stop transmitting. For this RAC model, the proposed code achieves the same first-, second-, and third-order performance as the best known result for the Gaussian MAC in operation.
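
    As a point of reference for the order of the terms quoted above, the sketch below evaluates the standard single-user Gaussian normal approximation (capacity minus a dispersion back-off plus a ½ log n/n third-order term). It is an analogy for the shape of such bounds, not the paper's multi-transmitter expressions, and the SNR and target error probability are assumed.

        from math import e, log2, sqrt
        from statistics import NormalDist

        def gaussian_normal_approx(snr, n, eps):
            """Single-user AWGN normal approximation in bits per channel use:
            C - sqrt(V/n) * Qinv(eps) + (1/2) * log2(n) / n."""
            C = 0.5 * log2(1 + snr)
            V = (snr * (snr + 2)) / (2 * (snr + 1) ** 2) * log2(e) ** 2  # dispersion, bits^2
            q_inv = NormalDist().inv_cdf(1 - eps)                        # Q^{-1}(eps)
            return C - sqrt(V / n) * q_inv + 0.5 * log2(n) / n

        for n in (100, 1000, 10000):
            print(n, round(gaussian_normal_approx(snr=10.0, n=n, eps=1e-3), 4))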