Interference Mitigation in Large Random Wireless Networks
A central problem in the operation of large wireless networks is how to deal
with interference -- the unwanted signals sent by transmitters other than the
one a receiver is trying to hear. This thesis looks at ways of combating such
interference.
In Chapters 1 and 2, we outline the necessary information and communication
theory background, including the concept of capacity. We also include an
overview of a new set of schemes for dealing with interference known as
interference alignment, paying special attention to a channel-state-based
strategy called ergodic interference alignment.
In Chapter 3, we consider the operation of large regular and random networks
by treating interference as background noise. We consider the local performance
of a single node, and the global performance of a very large network.
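The treat-interference-as-noise approach of Chapter 3 can be illustrated with a small numerical sketch. This is not the thesis's model -- the geometry, path-loss exponent, powers, and noise level below are all assumed for illustration; the point is only that each node's achievable rate becomes log2(1 + SINR), with aggregate interference entering the denominator alongside background noise.

```python
import random, math

random.seed(0)

# Illustrative sketch (assumed parameters, not the thesis's model):
# a receiver at the origin, one desired transmitter at unit distance,
# and 50 interferers placed uniformly in a disc of radius 10.
ALPHA = 4.0      # path-loss exponent (assumed)
NOISE = 1e-9     # background noise power (assumed)
P = 1.0          # per-node transmit power (assumed)

def path_gain(d, alpha=ALPHA):
    """Simple power-law path loss: received power falls as d^-alpha."""
    return d ** (-alpha)

signal = P * path_gain(1.0)

interference = 0.0
for _ in range(50):
    # radius with density ~ r gives points uniform in the disc
    r = 10 * math.sqrt(random.random())
    # small guard radius avoids the singularity of the power law at d = 0
    interference += P * path_gain(max(r, 0.1))

# Treating interference as extra noise, the Shannon rate of the link is:
sinr = signal / (NOISE + interference)
rate = math.log2(1 + sinr)
print(f"SINR = {sinr:.3g}, rate = {rate:.3g} bits/channel use")
```

Averaging such per-link rates over many random placements is one way to probe the "local" performance of a single node; scaling the number of interferers probes the "global" regime.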
In Chapter 4, we use ergodic interference alignment to derive the asymptotic
sum-capacity of large random dense networks. These networks are derived from a
physical model of node placement where signal strength decays over the distance
between transmitters and receivers. (See also arXiv:1002.0235 and
arXiv:0907.5165.)
In Chapter 5, we look at methods of reducing the long time delays incurred by
ergodic interference alignment. We analyse the tradeoff between reducing delay
and lowering the communication rate. (See also arXiv:1004.0208.)
In Chapter 6, we outline a problem that is equivalent to the problem of
pooled group testing for defective items. We then present some new work that
uses information theoretic techniques to attack group testing. We introduce for
the first time the concept of the group testing channel, which allows for
modelling of a wide range of statistical error models for testing. We derive
new results on the number of tests required to accurately detect defective
items, including when using sequential `adaptive' tests.
Comment: PhD thesis, University of Bristol, 201
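The pooled group testing problem of Chapter 6 is easy to simulate. The sketch below is a standard textbook setup, not the thesis's own scheme: a nonadaptive Bernoulli test design over a noiseless OR channel (a pool is positive iff it contains a defective), decoded with the simple COMP rule that declares non-defective any item appearing in a negative pool. All sizes are assumed.

```python
import random

random.seed(1)

# Assumed problem sizes: n items, k defectives, T pooled tests.
n, k, T = 100, 5, 60
defective = set(random.sample(range(n), k))

# Bernoulli design: each item joins each pool independently with
# probability p (p = 1/k is a common design choice).
p = 1.0 / k
pools = [{i for i in range(n) if random.random() < p} for _ in range(T)]

# Noiseless OR channel: a test is positive iff its pool hits a defective.
outcomes = [bool(pool & defective) for pool in pools]

# COMP decoding: start by suspecting everyone, then clear any item
# that appears in a negative pool.
declared = set(range(n))
for pool, positive in zip(pools, outcomes):
    if not positive:
        declared -= pool

print(sorted(declared))
```

In the noiseless case COMP never misses a defective (the declared set always contains the true one), so the interesting question -- the one the information-theoretic analysis addresses -- is how many tests T are needed before the declared set shrinks to exactly the defective set, and how this changes under noisy test outcomes.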
Beyond Support in Two-Stage Variable Selection
Numerous variable selection methods rely on a two-stage procedure, where a
sparsity-inducing penalty is used in the first stage to predict the support,
which is then conveyed to the second stage for estimation or inference
purposes. In this framework, the first stage screens variables to find a set of
possibly relevant variables and the second stage operates on this set of
candidate variables, to improve estimation accuracy or to assess the
uncertainty associated with the selection of variables. We advocate that more
information can be conveyed from the first stage to the second one: we use the
magnitude of the coefficients estimated in the first stage to define an
adaptive penalty that is applied at the second stage. We give two examples of
procedures that can benefit from the proposed transfer of information, in
estimation and inference problems respectively. Extensive simulations
demonstrate that this transfer is particularly efficient when each stage
operates on distinct subsamples. This separation plays a crucial role in the
computation of calibrated p-values, allowing control of the False Discovery
Rate. In this setup, the proposed transfer yields sensitivity gains ranging
from 50% to 100% compared with state-of-the-art procedures.
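The magnitude-transfer idea can be sketched with an adaptive-lasso-style construction. This is an illustration under assumed choices (penalty level, weight formula, synthetic data), not the paper's exact procedure: stage 1 runs a lasso on one subsample, and stage 2 runs a weighted lasso on a distinct subsample with per-variable penalties inversely proportional to the stage-1 coefficient magnitudes, so variables that looked strong in stage 1 are penalised less.

```python
import random, math

random.seed(2)

def lasso_cd(X, y, lam_j, iters=200):
    """Coordinate-descent lasso with a per-coordinate penalty lam_j[j],
    minimising 0.5*||y - X b||^2 + sum_j lam_j[j] * |b_j|."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(iters):
        for j in range(p):
            if col_sq[j] == 0:
                continue
            # correlation of column j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][l] * beta[l]
                      for l in range(p) if l != j)) for i in range(n))
            # soft-thresholding update
            beta[j] = math.copysign(max(abs(rho) - lam_j[j], 0.0),
                                    rho) / col_sq[j]
    return beta

# Synthetic data (assumed): only the first 2 of 6 variables matter.
n, p = 80, 6
true_beta = [3.0, -2.0, 0.0, 0.0, 0.0, 0.0]
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [sum(X[i][j] * true_beta[j] for j in range(p)) + random.gauss(0, 0.5)
     for i in range(n)]

# Distinct subsamples for the two stages -- the separation the
# abstract emphasises for calibrated p-values.
X1, y1, X2, y2 = X[:40], y[:40], X[40:], y[40:]

lam, eps = 5.0, 1e-3
beta1 = lasso_cd(X1, y1, [lam] * p)              # stage 1: plain lasso
weights = [lam / (abs(b) + eps) for b in beta1]  # magnitude transfer
beta2 = lasso_cd(X2, y2, weights)                # stage 2: weighted lasso

print([round(b, 2) for b in beta2])
```

Compared with passing only the support, the weights retain how strong each variable looked in stage 1: a variable with a large first-stage coefficient is barely penalised in stage 2, while one with a near-zero coefficient faces a very large penalty and is effectively screened out.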