Degrees-of-Freedom of the MIMO Three-Way Channel with Node-Intermittency
The characterization of fundamental performance bounds of many-to-many
communication systems in which participating nodes are active in an
intermittent way is one of the major challenges in communication theory. In
order to address this issue, we introduce the multiple-input multiple-output
(MIMO) three-way channel (3WC) with an intermittent node and study its
degrees-of-freedom (DoF) region and sum-DoF. We devise a non-adaptive encoding
scheme based on zero-forcing, interference alignment and erasure coding, and
show its DoF region (and thus sum-DoF) optimality for non-intermittent 3WCs and
its sum-DoF optimality for (node-)intermittent 3WCs. However, we show by
example that in general some DoF tuples in the intermittent 3WC can only be
achieved by adaptive schemes, such as decode-forward relaying. This shows that
non-adaptive encoding is sufficient for the non-intermittent 3WC and for the
sum-DoF of intermittent 3WCs, but adaptive encoding is necessary for the DoF
region of intermittent 3WCs. Our work contributes to a better understanding of
the fundamental limits of multi-way communication systems with intermittency
and the impact of adaptation therein.
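As a toy illustration of zero-forcing, one ingredient of the encoding scheme described above (the antenna counts and dimensions here are illustrative, not taken from the paper), a transmitter can null its signal at an unintended receiver by precoding in the null space of the cross channel:

```python
import numpy as np

# A 4-antenna transmitter nulls its stream at a 2-antenna receiver it must
# not interfere with, by choosing a precoder in the null space of the
# cross channel (toy numbers, not the paper's setup).
rng = np.random.default_rng(0)
H_cross = rng.standard_normal((2, 4))   # channel to the receiver to protect

# Orthonormal basis of null(H_cross) from the SVD: the last right-singular
# vectors span the null space of a full-rank 2x4 matrix.
_, _, Vt = np.linalg.svd(H_cross)
null_basis = Vt[2:].T                   # 4 x 2
v = null_basis[:, 0]                    # unit-norm zero-forcing precoder

leak = np.linalg.norm(H_cross @ v)      # interference leakage, numerically ~0
print(leak)
```

Interference alignment and erasure coding then operate on top of such zero-forced directions in the scheme the abstract describes.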
Quantum Achievability Proof via Collision Relative Entropy
In this paper, we provide a simple framework for deriving one-shot achievable
bounds for some problems in quantum information theory. Our framework is based
on the joint convexity of the exponential of the collision relative entropy,
and is a (partial) quantum generalization of the technique of Yassaee et al.
(2013) from classical information theory. Based on this framework, we derive
one-shot achievable bounds for the problems of communication over
classical-quantum channels, quantum hypothesis testing, and classical data
compression with quantum side information. We argue that our one-shot
achievable bounds are strong enough to give the asymptotic achievable rates of
these problems even up to the second order.
Comment: 12 pages.
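The central quantity above, the collision relative entropy (the sandwiched Rényi divergence of order 2), can be sketched numerically; this snippet is a generic illustration, and the example states are hypothetical:

```python
import numpy as np

# Collision relative entropy:
#   D_2(rho||sigma) = log2 Tr[(sigma^{-1/4} rho sigma^{-1/4})^2].
# For commuting states it reduces to the classical log2(sum_i p_i^2 / q_i).

def collision_relative_entropy(rho, sigma):
    w, U = np.linalg.eigh(sigma)
    sigma_m14 = U @ np.diag(w ** -0.25) @ U.conj().T   # sigma^{-1/4}
    A = sigma_m14 @ rho @ sigma_m14
    return np.log2(np.real(np.trace(A @ A)))

p = np.array([0.5, 0.5])
q = np.array([0.9, 0.1])
rho, sigma = np.diag(p), np.diag(q)     # commuting (diagonal) test states
quantum = collision_relative_entropy(rho, sigma)
classical = np.log2(np.sum(p ** 2 / q))
print(quantum, classical)               # agree in the commuting case
```

The commuting case is a useful sanity check precisely because the quantum quantity must collapse to its classical counterpart there.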
Uplink CoMP under a Constrained Backhaul and Imperfect Channel Knowledge
Coordinated Multi-Point (CoMP) is known to be a key technology for next
generation mobile communications systems, as it makes it possible to overcome
the burden of inter-cell interference. Especially in the uplink, interference
exploitation schemes are likely to be used in the near future, as they can be
used with legacy terminals and require little or no change in
standardization. Major drawbacks, however, are the extent of additional
backhaul infrastructure needed, and the sensitivity to imperfect channel
knowledge. This paper jointly addresses both issues in a new framework
incorporating a multitude of proposed theoretical uplink CoMP concepts, which
are then put into perspective with practical CoMP algorithms. This
comprehensive analysis provides new insight into the potential usage of uplink
CoMP in next generation wireless communications systems.
Comment: Submitted to IEEE Transactions on Wireless Communications in February 201
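The uplink cooperation idea can be caricatured in a few lines; this toy model (two single-antenna users, two base stations, noiseless reception) is purely illustrative and is not the framework of the paper:

```python
import numpy as np

# With full backhaul, a central unit can jointly zero-force both users from
# the stacked base-station observations; without cooperation, each base
# station detects its own user while treating the other as interference.
rng = np.random.default_rng(1)
H = rng.standard_normal((2, 2))         # H[i, j]: gain from user j to BS i
x = np.array([1.0, -1.0])               # transmitted symbols
y = H @ x                               # received signals at the two BSs

# Joint detection (requires exchanging y over the backhaul).
x_joint = np.linalg.solve(H, y)

# Per-cell detection: scale by the own-user gain, ignoring interference.
x_local = y / np.diag(H)
print(x_joint, x_local)
```

The gap between `x_joint` and `x_local` is the interference burden the abstract refers to; constrained backhaul and imperfect channel knowledge erode the joint-detection gain.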
Sketching and Neural Networks
High-dimensional sparse data present computational and statistical challenges
for supervised learning. We propose compact linear sketches for reducing the
dimensionality of the input, followed by a single layer neural network. We show
that any sparse polynomial function can be computed, on nearly all sparse
binary vectors, by a single layer neural network that takes a compact sketch of
the vector as input. Consequently, when a set of sparse binary vectors is
approximately separable using a sparse polynomial, there exists a single-layer
neural network that takes a short sketch as input and correctly classifies
nearly all the points. Previous work has proposed using sketches to reduce
dimensionality while preserving the hypothesis class. However, the sketch size
has an exponential dependence on the degree in the case of polynomial
classifiers. In stark contrast, our approach of improper learning with a
larger hypothesis class allows the sketch size to have a logarithmic
dependence on the degree. Even in the linear case, our approach allows us to
improve on the pesky dependence of random projections on the margin. We
empirically show that our approach leads to more compact neural networks than
related methods such as feature hashing, at equal or better performance.
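A minimal count sketch of a sparse binary vector, of the generic kind discussed above, can be written in a few lines; the dimensions and the downstream single-layer network are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Count sketch: each coordinate is hashed to one of k buckets with a random
# sign. The sketch is linear, and sketched inner products approximate the
# originals in expectation, which is what lets a small network consume it.
rng = np.random.default_rng(0)
d, k = 10_000, 64                       # input dimension, sketch size
h = rng.integers(0, k, size=d)          # bucket per coordinate
s = rng.choice([-1.0, 1.0], size=d)     # random signs

def sketch(x):
    out = np.zeros(k)
    np.add.at(out, h, s * x)            # each coordinate lands in one bucket
    return out

x = np.zeros(d)
x[[3, 17, 4242]] = 1.0                  # sparse binary input
z = sketch(x)                           # k-dim input to a single-layer net
print(z @ z, x @ x)                     # close up to hash collisions
```

The single-layer network then operates on `z` instead of the `d`-dimensional input.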
Performance Limits of a Cloud Radio
Cooperation in a cellular network is seen as a key technique for managing
other-cell interference and thereby improving the achievable rate. In this paper, we
present the achievable rate regions for a cloud radio network using a
sub-optimal zero forcing equalizer with dirty paper precoding. We show that
when complete channel state information is available at the cloud, rates close
to those achievable with total interference cancellation can be achieved. With
mean capacity gains of up to two-fold over the conventional cellular network in
both uplink and downlink, this precoding scheme shows great promise for
implementation in a cloud radio network. To simplify the analysis, we use a
stochastic geometric framework based on Poisson point processes instead of the
traditional grid based cellular network model.
We also study the impact of limiting the channel state information and
geographical clustering to limit the cloud size on the achievable rate. We have
observed that using this zero forcing-dirty paper coding technique, the adverse
effect of inter-cluster interference can be minimized thereby transforming an
interference limited network into a noise limited network as experienced by an
average user in the network for low operating signal-to-noise-ratios. However,
for higher signal-to-noise-ratios, both the average achievable rate and
cell-edge achievable rate saturate, as observed in the literature. As the
implementation of dirty paper coding is practically not feasible, we present a
practical design of a cloud radio network that uses a minimum mean square
error equalizer at the cloud for processing the uplink streams and a
Tomlinson-Harashima precoder as a sub-optimal substitute for a dirty paper
precoder in the downlink.
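The uplink processing mentioned above can be sketched as a linear MMSE equalizer applied at the cloud to the stacked received signals; the dimensions, unit-power BPSK symbols, and noise level are illustrative assumptions:

```python
import numpy as np

# Linear MMSE equalizer: W = (H^H H + sigma^2 I)^{-1} H^H, applied to the
# signals all cooperating antennas forward to the cloud (toy dimensions).
rng = np.random.default_rng(2)
n_rx, n_users, sigma2 = 8, 4, 0.01
H = rng.standard_normal((n_rx, n_users))
x = rng.choice([-1.0, 1.0], size=n_users)            # BPSK symbols
y = H @ x + np.sqrt(sigma2) * rng.standard_normal(n_rx)

W = np.linalg.solve(H.T @ H + sigma2 * np.eye(n_users), H.T)
x_hat = W @ y                                        # soft symbol estimates
print(x_hat, x)
```

At low noise the MMSE estimates land close to the transmitted symbols, which is why such a cloud equalizer is a practical stand-in for the optimal receiver.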
Convolutional Codes in Rank Metric with Application to Random Network Coding
Random network coding has recently attracted attention as a technique to
disseminate information in a network. This paper considers a non-coherent
multi-shot network, where the unknown and time-variant network is used several
times. In order to create dependencies between the different shots, particular
convolutional codes in rank metric are used. These codes are so-called
(partial) unit memory ((P)UM) codes, i.e., convolutional codes with memory one.
First, distance measures for convolutional codes in rank metric are shown and
two constructions of (P)UM codes in rank metric based on the generator matrices
of maximum rank distance codes are presented. Second, an efficient
error-erasure decoding algorithm for these codes is presented. Its guaranteed
decoding radius is derived and its complexity is bounded. Finally, it is shown
how to apply these codes for error correction in random linear and affine
network coding.
Comment: Presented in part at NetCod 2012; submitted to IEEE Transactions on Information Theory.
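The metric underlying these codes can be illustrated concretely; this is a generic sketch over GF(2) (the paper's constructions live over extension fields, and the example matrices are made up): the rank distance between two m x n matrices A and B is rank(A - B) over the field.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rows, cols = M.shape
    rank, col = 0, 0
    while rank < rows and col < cols:
        pivot = np.nonzero(M[rank:, col])[0]
        if pivot.size:
            # Swap a pivot row into place, then clear the column by XOR.
            M[[rank, rank + pivot[0]]] = M[[rank + pivot[0], rank]]
            for r in range(rows):
                if r != rank and M[r, col]:
                    M[r] ^= M[rank]
            rank += 1
        col += 1
    return rank

A = np.array([[1, 0, 1], [0, 1, 1]], dtype=np.uint8)
B = np.array([[0, 0, 1], [1, 1, 0]], dtype=np.uint8)
d_rank = gf2_rank((A - B) % 2)          # rank distance between A and B
print(d_rank)                            # 2
```

Maximum rank distance codes maximize the minimum of exactly this quantity over codeword pairs, which is what the (P)UM constructions above build on.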
What is needed to exploit knowledge of primary transmissions?
Recently, Tarokh and others have raised the possibility that a cognitive
radio might know the interference signal being transmitted by a strong primary
user in a non-causal way, and use this knowledge to increase its data rates.
However, there is a subtle difference between knowing the signal transmitted by
the primary and knowing the actual interference at our receiver, since a
wireless channel lies between these two points. We show that even an unknown phase
results in a substantial decrease in the data rates that can be achieved, and
thus there is a need to feedback interference channel estimates to the
cognitive transmitter. We then consider the case of fading channels. We derive
an upper bound on the rate for given outage error probability for faded dirt.
We give a scheme that uses appropriate "training" to obtain such estimates and
quantify this scheme's required overhead as a function of the relevant
coherence time and interference power.
Comment: Combination of two papers submitted to ISIT'07 and DySPAN '0
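A back-of-the-envelope version of the phase issue (a simplification, not the paper's rate bound): if the cognitive transmitter pre-subtracts the primary's signal assuming it arrives with phase 0 but the channel rotates it by theta, a residual interference term survives.

```python
import numpy as np

# The receiver sees s rotated by e^{j*theta}; pre-cancellation assuming
# theta = 0 leaves residual interference of power |e^{j*theta} - 1|^2 * P_s,
# which for an unknown phase can dwarf the noise. P_s is illustrative.
P_s = 1.0                               # interference power at our receiver
for theta in (0.0, np.pi / 4, np.pi / 2, np.pi):
    residual = abs(np.exp(1j * theta) - 1) ** 2 * P_s
    print(f"theta={theta:.2f}  residual interference power={residual:.3f}")
```

At theta = pi the residual power is four times the interference power itself, which is why feeding back channel estimates to the cognitive transmitter matters.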
On The Performance of Random Block Codes over Finite-State Fading Channels
As the mobile application landscape expands, wireless networks are tasked
with supporting various connection profiles, including real-time communications
and delay-sensitive traffic. Among many ensuing engineering challenges is the
need to better understand the fundamental limits of forward error correction in
non-asymptotic regimes. This article seeks to characterize the performance of
block codes over finite-state channels with memory. In particular, classical
results from information theory are revisited in the context of channels with
rate transitions, and bounds on the probabilities of decoding failure are
derived for random codes. This study offers new insights about the potential
impact of channel correlation over time on overall performance.
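A finite-state channel with memory of the kind studied above can be simulated directly; the Gilbert-Elliott model is used here as a generic example, with illustrative parameters:

```python
import numpy as np

# Two-state Markov chain (good/bad) selecting the crossover probability of a
# BSC. The resulting bursty error patterns are what make non-asymptotic
# analysis of block codes over such channels delicate.
rng = np.random.default_rng(3)
p_stay = np.array([0.99, 0.95])         # self-transition prob per state
eps = np.array([0.001, 0.1])            # crossover prob per state

n, state = 100_000, 0
errors = np.zeros(n, dtype=bool)
for t in range(n):
    errors[t] = rng.random() < eps[state]
    if rng.random() > p_stay[state]:
        state = 1 - state
print("empirical bit error rate:", errors.mean())
```

With these parameters the chain spends about 1/6 of its time in the bad state, so the long-run error rate is near 0.0175 even though errors arrive in bursts rather than independently.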
Relay Channel with Orthogonal Components and Structured Interference Known at the Source
A relay channel with orthogonal components that is affected by an
interference signal that is noncausally available only at the source is
studied. The interference signal has structure in that it is produced by
another transmitter communicating with its own destination. Moreover, the
interferer is not willing to adjust its communication strategy to minimize the
interference. Knowledge of the interferer's signal may be acquired by the
source, for instance, by exploiting HARQ retransmissions on the interferer's
link. The source can then utilize the relay not only for communicating its own
message, but also for cooperative interference mitigation at the destination by
informing the relay about the interference signal. Proposed transmission
strategies are based on partial decode-and-forward (PDF) relaying and leverage
the interference structure. Achievable schemes are derived for discrete
memoryless models, Gaussian and Ricean fading channels. Furthermore, optimal
strategies are identified in some special cases. Finally, numerical results
bring insight into the advantages of utilizing the interference structure at
the source, relay, or destination.
Comment: Submitted to the IEEE Transactions on Communications; 28 pages, 11 figures.
Learning Immune-Defectives Graph through Group Tests
This paper deals with an abstraction of a unified problem of drug discovery
and pathogen identification. Pathogen identification involves identification of
disease-causing biomolecules. Drug discovery involves finding chemical
compounds, called lead compounds, that bind to pathogenic proteins and
eventually inhibit the function of the protein. In this paper, the lead
compounds are abstracted as inhibitors, pathogenic proteins as defectives, and
the mixture of "ineffective" chemical compounds and non-pathogenic proteins as
normal items. A defective could be immune to the presence of an inhibitor in a
test. So, a test containing a defective is positive iff it does not contain its
"associated" inhibitor. The goal of this paper is to identify the defectives,
inhibitors, and their "associations" with high probability, or in other words,
learn the Immune Defectives Graph (IDG) efficiently through group tests. We
propose a probabilistic non-adaptive pooling design, a probabilistic two-stage
adaptive pooling design and decoding algorithms for learning the IDG. For the
two-stage adaptive pooling design, we show that the number of tests required to
guarantee recovery of the inhibitors, defectives,
and their associations with high probability, i.e., the upper bound, exceeds
the proposed lower bound by a logarithmic multiplicative factor in the number
of items. For the non-adaptive pooling design too, we show that the upper bound
exceeds the proposed lower bound by at most a logarithmic multiplicative factor
in the number of items.
Comment: Double column, 17 pages. Updated with tighter lower bounds and other minor edits.