
    Wireless Network Simplification: the Gaussian N-Relay Diamond Network

    We consider the Gaussian N-relay diamond network, where a source wants to communicate with a destination node through a layer of N relay nodes. We investigate the following question: what fraction of the capacity can we maintain by using only k out of the N available relays? We show that, independent of the channel configurations and the operating SNR, we can always find a subset of k relays that alone provides a rate (kC/(k+1)) - G, where C is the information-theoretic cut-set upper bound on the capacity of the whole network and G is a constant that depends only on N and k (logarithmic in N and linear in k). In particular, for k = 1, this means that half the capacity of any N-relay diamond network can be approximately achieved by routing information over a single relay. We also show that this fraction is tight: there are configurations of the N-relay diamond network where every subset of k relays alone can provide at most approximately a fraction k/(k+1) of the total capacity. These high-capacity k-relay subnetworks can also be discovered efficiently. We propose an algorithm that computes a constant-gap approximation to the capacity of the Gaussian N-relay diamond network in O(N log N) running time and discovers a high-capacity k-relay subnetwork in O(kN) running time. This result also provides a new approximation to the capacity of the Gaussian N-relay diamond network which is hybrid in nature: it has both multiplicative and additive gaps. In the intermediate SNR regime, this hybrid approximation is tighter than existing purely additive or purely multiplicative approximations to the capacity of this network.
    Comment: Submitted to Transactions on Information Theory in October 2012. The new version includes discussions on the algorithmic complexity of discovering a high-capacity subnetwork and on the performance of amplify-and-forward.
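
    As a concrete illustration of the k = 1 case, the sketch below (Python; the helper names and channel gains are hypothetical, and per-link rates are modeled as scalar AWGN capacities) picks the single relay whose two-hop route has the largest bottleneck capacity. This is the simple routing strategy the abstract refers to, not the paper's O(kN) discovery algorithm itself.

        import math

        def awgn_capacity(gain, snr):
            # Point-to-point AWGN capacity in bits per channel use.
            return math.log2(1.0 + (abs(gain) ** 2) * snr)

        def best_single_relay(src_gains, dst_gains, snr):
            # src_gains[i]: gain from the source to relay i
            # dst_gains[i]: gain from relay i to the destination
            # Routing over the best relay achieves roughly C/2 - G,
            # where C is the cut-set bound (the paper's k = 1 result).
            rates = [min(awgn_capacity(hs, snr), awgn_capacity(hd, snr))
                     for hs, hd in zip(src_gains, dst_gains)]
            best = max(range(len(rates)), key=lambda i: rates[i])
            return best, rates[best]

        # Toy 4-relay diamond with made-up gains.
        relay, rate = best_single_relay([1.0, 0.3, 2.1, 0.8],
                                        [0.5, 1.9, 0.4, 1.2], snr=10.0)
        print(f"route via relay {relay}, bottleneck rate = {rate:.2f} bits/use")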

    Approximate Capacity of Gaussian Relay Networks

    We present an achievable rate for general Gaussian relay networks. We show that the achievable rate is within a constant number of bits from the information-theoretic cut-set upper bound on the capacity of these networks. This constant depends on the topology of the network, but not on the values of the channel gains. Therefore, we uniformly characterize the capacity of Gaussian relay networks within a constant number of bits, for all channel parameters.
    Comment: This paper was submitted to the 2008 IEEE International Symposium on Information Theory (ISIT 2008). In the revised version, the approximation gap (\kappa) is sharpened.
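
    In symbols (reconstructed here only to show the shape of the statement; the precise gap expression is derived in the paper): writing the cut-set upper bound as

        \bar{C} = \min_{\Omega} I(X_{\Omega};\, Y_{\Omega^c} \mid X_{\Omega^c}),

    the scheme achieves a rate R with \bar{C} - \kappa \le R \le C \le \bar{C}, where the constant \kappa depends only on the network topology and not on the channel gains.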

    Low Complexity Scheduling and Coding for Wireless Networks

    The advent of wireless communication technologies has created a paradigm shift in the accessibility of communication. With it has come an increased demand for throughput, a trend that is likely to continue in the future. A key aspect of these challenges is to develop low-complexity algorithms and architectures that can take advantage of features of the wireless medium such as broadcasting and physical-layer cooperation. In this thesis, we consider several problems in the domain of low-complexity coding, relaying and scheduling for wireless networks.

    We formulate the Pliable Index Coding problem, which models a server trying to send one or more new messages over a noiseless broadcast channel to a set of clients that already have a subset of the messages as side information. We show, through theoretical bounds and algorithms, that it is possible to design short codes, poly-logarithmic in the number of clients, to solve this problem. This code length is exponentially better than what is possible in a traditional index coding setup.

    Next, we consider several aspects of low-complexity relaying in half-duplex diamond networks. In such networks, the source transmits information to the destination through n half-duplex intermediate relays arranged in a single layer. The half-duplex nature of the relays implies that each can be in either a listening or a transmitting state at any point in time. To achieve high rates, there is the additional complexity of optimizing the schedule (i.e. the relative time fractions) of the relaying states, which can be 2^n in number (see the sketch after this abstract). Using approximate capacity expressions derived from the quantize-map-forward scheme for physical-layer cooperation, we show that for networks with n ≤ 6 relays, the optimal schedule has at most n+1 active states. This is an exponential improvement over the possible 2^n active states in a schedule. We also show that it is possible to achieve at least half the (approximate) capacity of such networks by employing simple routing strategies that use only two relays and two scheduling states. These results imply that the complexity of relaying in half-duplex diamond networks can be significantly reduced by using fewer scheduling states or fewer relays without adversely affecting throughput. Both these results assume centralized processing of the channel state information of all the relays. We take the first steps in analyzing the performance of relaying schemes where each relay switches between the listening and transmitting states randomly and optimizes the relative time fractions using only local channel state information. We show that even with such simple scheduling, we can achieve a significant fraction of the capacity of the network.

    Next, we look at the dual problem of selecting the subset of relays of a given size that has the highest capacity in a general layered full-duplex relay network. We formulate this as an optimization problem and derive efficient approximation algorithms to solve it.

    We end the thesis with the design and implementation of a practical relaying scheme called QUILT. In it, the relay opportunistically decodes or quantizes its received signal and transmits the resulting sequence in cooperation with the source. To keep the complexity of the system low, we use LDPC codes at the source, interleaving at the relays and belief-propagation decoding at the destination. We evaluate our system through testbed experiments over WiFi.
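
    The half-duplex schedule optimization mentioned above is, given fixed per-state cut capacities, a linear program: choose the time fractions of the listen/transmit states to maximize the minimum, over cuts, of the time-averaged cut capacities. The sketch below (Python with NumPy/SciPy; the per-state cut values are made-up toy numbers, not derived from a channel model) solves it for a 2-relay diamond. A basic optimal solution of this LP has few nonzero fractions, which is the structure behind the at-most n+1 active states result.

        import numpy as np
        from scipy.optimize import linprog

        # cut_cap[m][w]: approximate capacity of cut w when the relays are in
        # listen/transmit state m (hypothetical numbers, bits per channel use).
        # For n = 2 relays there are 2^n = 4 states and 2^n = 4 cuts.
        cut_cap = np.array([
            [0.0, 1.0, 2.0, 3.0],   # state LL (both relays listen)
            [1.5, 0.8, 2.5, 1.2],   # state LT
            [2.0, 2.2, 0.7, 1.1],   # state TL
            [3.0, 1.4, 1.3, 0.0],   # state TT (both relays transmit)
        ])
        n_states, n_cuts = cut_cap.shape

        # Variables x = [t, lam_0, ..., lam_3]: maximize t subject to
        # t <= sum_m lam_m * cut_cap[m][w] for every cut w, sum_m lam_m = 1.
        c = np.zeros(1 + n_states)
        c[0] = -1.0                                # linprog minimizes, so use -t
        A_ub = np.hstack([np.ones((n_cuts, 1)), -cut_cap.T])
        b_ub = np.zeros(n_cuts)
        A_eq = np.hstack([np.zeros((1, 1)), np.ones((1, n_states))])
        b_eq = [1.0]
        bounds = [(None, None)] + [(0.0, 1.0)] * n_states

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        schedule = res.x[1:]
        print("rate:", -res.fun, "schedule:", np.round(schedule, 3))
        print("active states:", int((schedule > 1e-9).sum()))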

    Scaling up virtual MIMO systems

    Multiple-input multiple-output (MIMO) systems are a mature technology that has been incorporated into current wireless broadband standards to improve channel capacity and link reliability. Nevertheless, the continuously increasing demand for wireless data traffic calls for new strategies. Very large MIMO antenna arrays represent a paradigm shift in terms of theory and implementation, where the use of tens or hundreds of antennas provides significant improvements in throughput and radiated energy efficiency compared to single-antenna setups. Since design constraints limit the number of usable antennas, virtual systems are a promising technique due to their ability to mimic and exploit the gains of multi-antenna systems by means of wireless cooperation. With these arguments in mind, this work presents energy-efficient coding and network design for large virtual MIMO systems.

    Firstly, a cooperative virtual MIMO (V-MIMO) system that uses a large multi-antenna transmitter and implements compress-and-forward (CF) relay cooperation is investigated. Since constructing a reliable codebook is the most computationally complex task performed by the relay nodes in CF cooperation, reduced-complexity quantisation techniques are introduced. The analysis focuses on the block error probability (BLER) and the computational complexity of the uniform scalar quantiser (U-SQ) and the Lloyd-Max algorithm (LM-SQ). Numerical results show that the LM-SQ is simpler to design and can achieve a BLER performance comparable to the optimal vector quantiser. Furthermore, due to its low complexity, the U-SQ could be considered particularly suitable for very large wireless systems.

    Even though very large MIMO systems enhance the spectral efficiency of wireless networks, this comes at the expense of linearly increasing power consumption due to the multiple radio-frequency chains needed to support the antennas. Thus, the energy efficiency and throughput of the cooperative V-MIMO system are analysed and the impact of imperfect channel state information (CSI) on the system’s performance is studied. Finally, a power allocation algorithm is implemented to reduce the total power consumption. Simulation results show that wireless cooperation between users is more energy efficient than using a high-modulation-order transmission, and that the larger the number of transmit antennas, the lower the impact of imperfect CSI on the system’s performance.

    Finally, the application of cooperative systems is extended to wireless self-backhauling heterogeneous networks, where the decode-and-forward (DF) protocol is employed to provide a cost-effective and reliable backhaul. The associated trade-offs for a heterogeneous network with inhomogeneous user distributions are investigated through the use of sleeping strategies. Three different policies for switching off base stations are considered: random, load-based and greedy algorithms. The probability of coverage under the random and load-based sleeping policies is derived. Moreover, an energy-efficient base station deployment and operation approach is presented. Numerical results show that the average number of base stations required to support the traffic load at peak time can be reduced by using the greedy algorithm for base station deployment, and that highly clustered networks exhibit a smaller average serving distance and thus a better probability of coverage.
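
    As a sketch of the LM-SQ design step mentioned above (Python; training on synthetic Gaussian samples is an assumption made here for self-containment, whereas the thesis quantises the relay's received signal), the Lloyd-Max iteration alternates the two scalar-quantiser optimality conditions:

        import numpy as np

        def lloyd_max(samples, levels, iters=100):
            # Initialize reproduction points at interior quantiles of the data.
            points = np.quantile(samples, np.linspace(0, 1, levels + 2)[1:-1])
            for _ in range(iters):
                # Condition 1: boundaries at midpoints of neighbouring points.
                boundaries = (points[:-1] + points[1:]) / 2.0
                idx = np.digitize(samples, boundaries)
                # Condition 2: each point at the centroid of its region.
                for j in range(levels):
                    cell = samples[idx == j]
                    if cell.size:
                        points[j] = cell.mean()
            boundaries = (points[:-1] + points[1:]) / 2.0
            return points, boundaries

        samples = np.random.default_rng(0).normal(size=100_000)
        points, boundaries = lloyd_max(samples, levels=4)
        reconstruction = points[np.digitize(samples, boundaries)]
        print("MSE:", np.mean((samples - reconstruction) ** 2))  # about 0.12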

    Achievable schemes for cost/performance trade-offs in networks

    A common pattern in communication networks (both wired and wireless) is the collection of distributed state information from various network elements. This network state is needed for both analytics and operator policy, and its collection consumes network resources, both to measure the relevant state and to transmit the measurements back to the data sink. The design of simple achievable schemes is considered, with the goal of minimizing the overhead from data collection and/or trading off performance against overhead. Where possible, these schemes are compared with the optimal trade-off curve.

    The optimal transmission of distributed correlated discrete memoryless sources across a network with capacity constraints is considered first. Previously unreported properties of jointly optimal compression rates and transmission schemes are established. Additionally, an explicit relationship is given between the conditional independence relations of the distributed sources and the number of vertices of the Slepian-Wolf rate region.

    Motivated by recent work applying rate-distortion theory to computing the optimal performance-overhead trade-off, the use of distributed scalar quantization is investigated for lossy encoding of state, where a central estimation officer (CEO) wishes to compute an extremization function of a collection of sources. The superiority of a simple heterogeneous (across users) quantizer design over the optimal homogeneous quantizer design is proven (see the sketch after this abstract).

    Interactive communication enables an alternative framework in which the communicating parties can send messages back and forth over multiple rounds. This back-and-forth messaging can reduce the rate required to compute an extremum/extrema of the sources, at the cost of increased delay. Again, scalar quantization followed by entropy encoding is considered as an achievable scheme for a collection of distributed users talking to a CEO in the context of interactive communication. The design of optimal quantizers is formulated as the solution of a minimum-cost dynamic program. It is established that, asymptotically, the costs for the CEO to compute the different extremization functions are equal. The existence of a simpler search space, which is asymptotically sufficient for minimizing the cost of computing the selected extremization functions, is proven.
    Ph.D., Electrical Engineering -- Drexel University, 201
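
    To make the heterogeneous-versus-homogeneous comparison concrete, here is an illustrative Monte Carlo harness (Python; the uniform sources, midpoint reconstruction, and the particular half-cell stagger are assumptions made for this sketch, not the thesis's optimal designs): two users scalar-quantize their observations and the CEO estimates the maximum from the reproduction values.

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.uniform(0.0, 1.0, size=(200_000, 2))   # two users' observations
        true_max = x.max(axis=1)

        def quantize(v, edges):
            # Map each value to the midpoint of its quantizer cell.
            mids = (edges[:-1] + edges[1:]) / 2.0
            return mids[np.digitize(v, edges[1:-1])]

        K = 4                                          # cells per user
        grid = np.linspace(0.0, 1.0, K + 1)            # homogeneous design
        shifted = np.clip(grid + 0.5 / K, 0.0, 1.0)    # user 2 staggered half a cell

        est_homo = np.maximum(quantize(x[:, 0], grid), quantize(x[:, 1], grid))
        est_het = np.maximum(quantize(x[:, 0], grid), quantize(x[:, 1], shifted))
        print("homogeneous MSE: ", np.mean((true_max - est_homo) ** 2))
        print("heterogeneous MSE:", np.mean((true_max - est_het) ** 2))

    The naive midpoint estimator here only hints at the effect; the thesis's result compares optimally designed quantizers under the CEO's optimal estimator.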