    Topological Interference Management with Transmitter Cooperation

    Interference networks with no channel state information at the transmitter (CSIT) except for knowledge of the connectivity graph have recently been studied under the topological interference management (TIM) framework. In this paper, we consider a similar problem with topological knowledge but in a distributed broadcast channel setting, i.e., a network where transmitter cooperation is enabled. We show that the topological information can also be exploited in this case to strictly improve the degrees of freedom (DoF) as long as the network is not fully connected, which is a reasonable assumption in practice. Achievability schemes based on selective graph coloring, interference alignment, and hypergraph covering are proposed. Together with outer bounds built upon the generator sequence, the concept of compound channel settings, and the relation to index coding, we characterize the symmetric DoF for so-called regular networks with a constant number of interfering links, and identify sufficient and/or necessary conditions for arbitrary network topologies to achieve a certain amount of symmetric DoF. Comment: 46 pages, 10 figures; a short version was presented at the International Symposium on Information Theory 201
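
    For reference, the symmetric DoF metric used above is conventionally defined as the common pre-log factor achievable by every user simultaneously; in standard notation (textbook background, not notation taken from the paper itself),

    \[
    d_{\mathrm{sym}} \;=\; \limsup_{P \to \infty} \frac{R_{\mathrm{sym}}(P)}{\log P},
    \]

    where R_{\mathrm{sym}}(P) is the largest rate that all users can achieve simultaneously at transmit power P.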

    Cooperative Compute-and-Forward

    We examine the benefits of user cooperation under compute-and-forward. Much as in network coding, receivers in a compute-and-forward network recover finite-field linear combinations of the transmitters' messages. Recovery is enabled by linear codes: transmitters map messages to a linear codebook, and receivers attempt to decode the incoming superposition of signals to an integer combination of codewords. However, the achievable computation rates are low if the channel gains do not correspond to a suitable linear combination. In response to this challenge, we propose a cooperative approach to compute-and-forward. We devise a lattice-coding approach to block Markov encoding with which we construct a decode-and-forward style computation strategy. Transmitters broadcast lattice codewords, decode each other's messages, and then cooperatively transmit resolution information to aid receivers in decoding the integer combinations. Using our strategy, we show that cooperation offers a significant improvement both in the achievable computation rate and in the diversity-multiplexing tradeoff. Comment: submitted to the IEEE Transactions on Information Theory
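
    For background on why unfavourable channel gains hurt, the well-known computation rate of non-cooperative compute-and-forward (quoted here for real-valued channels with unit noise variance and per-transmitter power P; this is background material, not a result of the paper above) is

    \[
    R(\mathbf{h}, \mathbf{a}) \;=\; \frac{1}{2}\log^{+}\!\left(\left(\|\mathbf{a}\|^{2} - \frac{P\,(\mathbf{h}^{\mathsf{T}}\mathbf{a})^{2}}{1 + P\,\|\mathbf{h}\|^{2}}\right)^{-1}\right),
    \]

    which collapses when the channel vector \mathbf{h} is poorly aligned with every integer vector \mathbf{a}; this is precisely the penalty the cooperative resolution-information strategy described above is meant to mitigate.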

    Dynamical structure in neural population activity

    The question of how the collective activity of neural populations in the brain gives rise to complex behaviour is fundamental to neuroscience. At the core of this question lie considerations about how neural circuits can perform computations that enable sensory perception, motor control, and decision making. It is thought that such computations are implemented by the dynamical evolution of distributed activity in recurrent circuits. Thus, identifying and interpreting dynamical structure in neural population activity is a key challenge towards a better understanding of neural computation. In this thesis, I make several contributions in addressing this challenge. First, I develop two novel methods for neural data analysis. Both methods aim to extract trajectories of low-dimensional computational state variables directly from the unbinned spike-times of simultaneously recorded neurons on single trials. The first method separates inter-trial variability in the low-dimensional trajectory from variability in the timing of progression along its path, and thus offers a quantification of inter-trial variability in the underlying computational process. The second method simultaneously learns a low-dimensional portrait of the underlying nonlinear dynamics of the circuit, as well as the system's fixed points and locally linearised dynamics around them. This approach facilitates extracting interpretable low-dimensional hypotheses about computation directly from data. Second, I turn to the question of how low-dimensional dynamical structure may be embedded within a high-dimensional neurobiological circuit with excitatory and inhibitory cell types. I analyse how such circuit-level features shape population activity, with particular focus on responses to targeted optogenetic perturbations of the circuit. Third, I consider the problem of implementing multiple computations in a single dynamical system. I address this in the framework of multi-task learning in recurrently connected networks and demonstrate that a careful organisation of low-dimensional, activity-defined subspaces within the network can help to avoid interference across tasks.
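
    As a point of contrast with the spike-time methods developed in the thesis, the conventional baseline for visualising low-dimensional population trajectories is to bin spike counts and project them with PCA. The sketch below is a hypothetical illustration of that baseline using only NumPy; it is not one of the thesis's algorithms, and the function name and parameters are made up for this example.

    # Conventional baseline (illustrative only): bin each neuron's spike times,
    # centre the counts, and project onto the top principal components to obtain
    # a low-dimensional population trajectory.
    import numpy as np

    def low_dim_trajectory(spike_times, t_start, t_stop, bin_width=0.02, n_components=3):
        """spike_times: list of 1-D arrays of spike times (s), one array per neuron."""
        edges = np.arange(t_start, t_stop + bin_width, bin_width)
        counts = np.stack([np.histogram(st, bins=edges)[0] for st in spike_times], axis=1)
        counts = counts - counts.mean(axis=0, keepdims=True)        # centre each neuron
        _, _, vt = np.linalg.svd(counts, full_matrices=False)       # PCA via SVD
        return counts @ vt[:n_components].T                         # (n_bins, n_components)

    # Usage with synthetic Poisson spike trains (50 neurons, 2 s of activity):
    rng = np.random.default_rng(0)
    spikes = [np.sort(rng.uniform(0.0, 2.0, size=rng.poisson(40))) for _ in range(50)]
    print(low_dim_trajectory(spikes, 0.0, 2.0).shape)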

    Interference Mitigation in Large Random Wireless Networks

    A central problem in the operation of large wireless networks is how to deal with interference -- the unwanted signals being sent by transmitters that a receiver is not interested in. This thesis looks at ways of combating such interference. In Chapters 1 and 2, we outline the necessary information and communication theory background, including the concept of capacity. We also include an overview of a new set of schemes for dealing with interference known as interference alignment, paying special attention to a channel-state-based strategy called ergodic interference alignment. In Chapter 3, we consider the operation of large regular and random networks by treating interference as background noise. We consider the local performance of a single node, and the global performance of a very large network. In Chapter 4, we use ergodic interference alignment to derive the asymptotic sum-capacity of large random dense networks. These networks are derived from a physical model of node placement where signal strength decays over the distance between transmitters and receivers. (See also arXiv:1002.0235 and arXiv:0907.5165.) In Chapter 5, we look at methods of reducing the long time delays incurred by ergodic interference alignment. We analyse the tradeoff between reducing delay and lowering the communication rate. (See also arXiv:1004.0208.) In Chapter 6, we outline a problem that is equivalent to the problem of pooled group testing for defective items. We then present some new work that uses information-theoretic techniques to attack group testing. We introduce for the first time the concept of the group testing channel, which allows a wide range of statistical error models for testing to be modelled. We derive new results on the number of tests required to accurately detect defective items, including when using sequential 'adaptive' tests. Comment: PhD thesis, University of Bristol, 201
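
    As background for the group-testing results mentioned in Chapter 6 (a standard benchmark, not a contribution of the thesis), the counting bound states that reliably identifying k defective items among n requires at least on the order of

    \[
    T \;\gtrsim\; \log_2 \binom{n}{k} \;\approx\; k \log_2 \frac{n}{k}
    \]

    tests in the noiseless setting; the group-testing channel viewpoint described above is what allows bounds of this kind to be extended to noisy test models.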