
    Interference Channel with Intermittent Feedback

    We investigate how to exploit intermittent feedback for interference management. Focusing on the two-user linear deterministic interference channel, we completely characterize the capacity region. We find that the characterization depends only on the forward channel parameters and the marginal probability distribution of each feedback link. The proposed scheme makes use of block Markov encoding and quantize-map-and-forward at the transmitters, and backward decoding at the receivers. Matching outer bounds are derived based on novel genie-aided techniques. As a consequence, the perfect-feedback capacity can be achieved once the two feedback links are active with sufficiently large probabilities.
    Comment: Extended version of the same-titled paper that appears in IEEE International Symposium on Information Theory (ISIT) 201
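    A minimal, self-contained sketch of the channel model named above (the two-user linear deterministic interference channel), assuming the standard shift-matrix formulation in which each receiver observes a modulo-2 sum of down-shifted binary input vectors; the function names, parameter values, and use of NumPy below are illustrative, not from the paper.

        import numpy as np

        def shift_matrix(q):
            # q x q down-shift matrix S: (S x)[i] = x[i-1], (S x)[0] = 0.
            S = np.zeros((q, q), dtype=int)
            for i in range(1, q):
                S[i, i - 1] = 1
            return S

        def ldic_outputs(x1, x2, n11, n12, n21, n22):
            # Two-user linear deterministic IC: receiver k observes
            # y_k = S^(q-n_k1) x1  XOR  S^(q-n_k2) x2, so only the top n_kj
            # bit levels of each input survive the shift (arithmetic over GF(2)).
            q = len(x1)
            S = shift_matrix(q)
            P = np.linalg.matrix_power
            y1 = (P(S, q - n11) @ x1 + P(S, q - n12) @ x2) % 2
            y2 = (P(S, q - n21) @ x1 + P(S, q - n22) @ x2) % 2
            return y1, y2

        # Example: q = 5 bit levels, strong direct links (4) and weaker cross links (2).
        x1 = np.array([1, 0, 1, 1, 0])
        x2 = np.array([0, 1, 1, 0, 1])
        y1, y2 = ldic_outputs(x1, x2, n11=4, n12=2, n21=2, n22=4)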

    Two-Way Interference Channel Capacity: How to Have the Cake and Eat it Too

    Two-way communication is prevalent, and its fundamental limits were first studied in the point-to-point setting by Shannon [1]. One natural extension is a two-way interference channel (IC) with four independent messages: two associated with each direction of communication. In this work, we explore a deterministic two-way IC which captures key properties of the wireless Gaussian channel. Our main contribution is the complete characterization of the capacity region of the two-way IC (with respect to the forward and backward sum-rate pair) via a new achievable scheme and a new converse. One surprising consequence of this result is that not only can we obtain an interaction gain over the one-way non-feedback capacities, we can sometimes get all the way to the perfect-feedback capacities in both directions simultaneously. In addition, our novel outer bound characterizes channel regimes in which interaction has no bearing on capacity.
    Comment: Presented in part at the IEEE International Symposium on Information Theory 201
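    As a point of reference for the "non-feedback" and "perfect feedback" benchmarks mentioned above (these expressions are standard results for the symmetric two-user linear deterministic IC and are not quoted from the abstract), with direct-link strength n, cross-link strength m, and \alpha = m/n, the normalized symmetric capacities are

        \frac{C^{\text{no-fb}}_{\text{sym}}}{n} =
        \begin{cases}
        1-\alpha, & 0 \le \alpha \le 1/2,\\
        \alpha, & 1/2 \le \alpha \le 2/3,\\
        1-\alpha/2, & 2/3 \le \alpha \le 1,\\
        \alpha/2, & 1 \le \alpha \le 2,\\
        1, & \alpha \ge 2,
        \end{cases}
        \qquad
        \frac{C^{\text{fb}}_{\text{sym}}}{n} = \max\left(1-\frac{\alpha}{2},\ \frac{\alpha}{2}\right).

    An interaction gain in the abstract's sense means operating above the no-feedback curve; reaching the feedback curve in both directions simultaneously corresponds to the "have the cake and eat it too" claim in the title.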

    On Constant Gaps for the Two-way Gaussian Interference Channel

    We introduce the two-way Gaussian interference channel, in which there are four nodes with four independent messages: two messages to be transmitted over a Gaussian interference channel in the forward (→) direction, simultaneously with two messages to be transmitted over an interference channel (in-band, full-duplex) in the backward (←) direction. In such a two-way network, all nodes are transmitters and receivers of messages, allowing them to adapt current channel inputs to previously received channel outputs. We propose two new outer bounds on the symmetric sum-rate for the two-way Gaussian interference channel with complex channel gains: one under full adaptation (all 4 nodes are permitted to adapt inputs to previous outputs), and one under partial adaptation (only 2 nodes are permitted to adapt; the other 2 are restricted). We show that simple non-adaptive schemes such as the Han-Kobayashi scheme, where inputs are functions of messages only and not of past outputs, used in each direction are sufficient to achieve within a constant gap of these fully or partially adaptive outer bounds for all channel regimes.
    Comment: Presented at the 50th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, October 201
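    To make the adaptation constraints above concrete (a standard formulation, not text from the abstract): under full adaptation a node's channel input at time i may depend on its message and on everything it has received so far, while a restricted (non-adaptive) node encodes from its message alone,

        X_{k,i} = f_{k,i}\big(W_k,\, Y_k^{\,i-1}\big) \quad \text{(full adaptation)},
        \qquad
        X_{k,i} = f_{k,i}(W_k) \quad \text{(restricted / non-adaptive)},

    where W_k is the message at node k and Y_k^{i-1} denotes the channel outputs that node k has observed up to time i-1.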

    Feedback through Overhearing

    In this paper we examine the value of feedback that comes from overhearing, without dedicated feedback resources. We focus on a simple model for this purpose: a deterministic two-hop interference channel, where feedback comes from overhearing the forward links. A new aspect brought by this setup is the dual role of the relay signal: while the relay signal needs to convey the source message to its corresponding destination, it can also serve as a feedback signal that can potentially increase the capacity of the first hop. We derive inner and outer bounds on the sum capacity which match for a large range of parameter values. Our results identify the parameter ranges in which overhearing provides a capacity gain and can even achieve the performance of dedicated feedback resources. The results also provide insights into which transmissions are most useful to overhear.

    Computation in Multicast Networks: Function Alignment and Converse Theorems

    The classical problem in network coding theory considers communication over multicast networks: multiple transmitters send independent messages to multiple receivers, each of which decodes the same set of messages. In this work, computation over multicast networks is considered: each receiver decodes an identical function of the original messages. For a countably infinite class of two-transmitter two-receiver single-hop linear deterministic networks, the computing capacity is characterized for a linear function (modulo-2 sum) of Bernoulli sources. Inspired by the geometric concept of interference alignment in networks, a new achievable coding scheme called function alignment is introduced. A new converse theorem is established that is tighter than cut-set-based and genie-aided bounds. Computation (as opposed to communication) over multicast networks requires additional analysis to account for multiple receivers sharing a network's computational resources. We also develop a network decomposition theorem which identifies elementary parallel subnetworks that together constitute the original network without loss of optimality. The decomposition theorem provides a conceptually simpler algebraic proof of achievability that generalizes to L-transmitter L-receiver networks.
    Comment: to appear in the IEEE Transactions on Information Theory
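    For concreteness (a standard formulation of the computation task described above, not text from the abstract): with independent Bernoulli source streams S_1 and S_2 at the two transmitters, both receivers must reproduce the modulo-2 sum of the sources, and the computing rate counts recovered function symbols per channel use,

        f(S_{1,t}, S_{2,t}) = S_{1,t} \oplus S_{2,t},
        \qquad
        R_{\text{comp}} = \frac{k}{N} \ \text{ (both receivers decode } (S_{1,t} \oplus S_{2,t})_{t=1}^{k} \text{ from } N \text{ channel uses)},

    with the computing capacity defined as the supremum of achievable computing rates.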