Performance Limits of Compressive Sensing Channel Estimation in Dense Cloud RAN
Towards reducing the training signaling overhead in large scale and dense
cloud radio access networks (CRAN), various approaches have been proposed based
on the channel sparsification assumption, namely, only a small subset of the
deployed remote radio heads (RRHs) are of significance to any user in the
system. Motivated by the potential of compressive sensing (CS) techniques in
this setting, this paper provides a rigorous description of the performance
limits of many practical CS algorithms by considering the performance of the
so-called oracle estimator, which knows a priori which RRHs are of
significance but not their corresponding channel values. Using tools from
stochastic geometry, a closed-form analytical expression for the oracle
estimator's performance is obtained, averaged over the distribution of RRH positions
and channel statistics. Apart from a bound on practical CS algorithms, the
analysis provides important design insights, e.g., on how the training sequence
length affects performance, and identifies the operational conditions where the
channel sparsification assumption is valid. It is shown that the latter is true
only in operational conditions with sufficiently large path loss exponents.
Comment: 6 pages, two-column format; ICC 201
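The oracle estimator described above can be sketched numerically: with the significant-RRH support known, channel estimation reduces to least squares restricted to that support. The sizes, noise level, and Gaussian training matrix below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rrh, train_len, n_sig = 64, 16, 4  # illustrative sizes (assumed)

# Training observation matrix: one column per RRH.
A = rng.standard_normal((train_len, n_rrh)) / np.sqrt(train_len)

# Sparse channel: only a few RRHs are significant to this user.
h = np.zeros(n_rrh)
support = rng.choice(n_rrh, size=n_sig, replace=False)
h[support] = rng.standard_normal(n_sig)

y = A @ h + 0.01 * rng.standard_normal(train_len)  # noisy training output

# Oracle estimator: least squares restricted to the known support.
h_hat = np.zeros(n_rrh)
h_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)

mse = float(np.mean((h_hat - h) ** 2))
```

Because the support is given, the error comes only from noise on a small system, which is why the oracle serves as a performance bound for practical CS recovery algorithms.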
Decentralized Erasure Codes for Distributed Networked Storage
We consider the problem of constructing an erasure code for storage over a
network when the data sources are distributed. Specifically, we assume that
there are n storage nodes with limited memory and k<n sources generating the
data. We want a data collector, who can appear anywhere in the network, to
query any k storage nodes and be able to retrieve the data. We introduce
Decentralized Erasure Codes, which are linear codes with a specific randomized
structure inspired by network coding on random bipartite graphs. We show that
decentralized erasure codes are optimally sparse, and lead to reduced
communication, storage and computation cost over random linear coding.
Comment: to appear in IEEE Transactions on Information Theory, Special Issue:
Networking and Information Theor
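The construction can be illustrated with a small sketch: each source forwards its symbol to a few random storage nodes, each node stores a random linear combination of what it receives, and a collector inverts a k x k system. Real decentralized erasure codes operate over a finite field and guarantee decodability from any k nodes with high probability; the sketch below uses real coefficients and searches for an invertible k-subset purely to keep the illustration short.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
k, n, degree = 3, 6, 3          # k sources, n storage nodes (assumed sizes)
data = rng.standard_normal(k)   # one data symbol per source

# Sparse random bipartite connectivity: each source reaches `degree` nodes;
# each node stores a random linear combination of the symbols it receives.
G = np.zeros((n, k))
for src in range(k):
    targets = rng.choice(n, size=degree, replace=False)
    G[targets, src] = rng.standard_normal(degree)

stored = G @ data               # contents of the n storage nodes

# A data collector queries k nodes whose combinations are independent
# and solves the resulting k x k linear system to recover the data.
for nodes in combinations(range(n), k):
    sub = G[list(nodes), :]
    if np.linalg.matrix_rank(sub) == k:
        recovered = np.linalg.solve(sub, stored[list(nodes)])
        break
```

The sparsity of G is what reduces communication and encoding cost relative to dense random linear coding, while still leaving the queried subsystem invertible with high probability.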
Deploy-As-You-Go Wireless Relay Placement: An Optimal Sequential Decision Approach using the Multi-Relay Channel Model
We use information theoretic achievable rate formulas for the multi-relay
channel to study the problem of as-you-go deployment of relay nodes. The
achievable rate formulas are for full-duplex radios at the relays and for
decode-and-forward relaying. Deployment is done along the straight line joining
a source node and a sink node at an unknown distance from the source. The
problem is for a deployment agent to walk from the source to the sink,
deploying relays as he walks, given that the distance to the sink is
exponentially distributed with known mean. As a precursor, we apply the
multi-relay channel achievable rate formula to obtain the optimal power
allocation to relays placed along a line, at fixed locations. This permits us
to obtain the optimal placement of a given number of nodes when the distance
between the source and sink is given. Numerical work suggests that, at low
attenuation, the relays are mostly clustered near the source in order to be
able to cooperate, whereas at high attenuation they are uniformly placed and
work as repeaters. We also prove that the effect of path-loss can be entirely
mitigated if a large enough number of relays are placed uniformly between the
source and the sink. The structure of the optimal power allocation for a given
placement of the nodes, then motivates us to formulate the problem of as-you-go
placement of relays along a line of exponentially distributed length, and with
the exponential path-loss model, so as to minimize a cost function that is
additive over hops. The hop cost trades off a capacity limiting term, motivated
from the optimal power allocation solution, against the cost of adding a relay
node. We formulate the problem as a total cost Markov decision process,
establish results for the value function, and provide insights into the
placement policy and the performance of the deployed network via numerical
exploration.
Comment: 21 pages. arXiv admin note: substantial text overlap with
arXiv:1204.432
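The as-you-go setting can be sketched as a simulation: the agent walks toward a sink whose distance is exponential with known mean, deciding at each step whether to place a relay. The fixed-threshold rule below is a hypothetical stand-in for the MDP-optimal policy, used only to make the sequential structure concrete.

```python
import random

random.seed(0)

def deploy_as_you_go(mean_sink_dist=10.0, step=0.5, threshold=2.0):
    # Sink distance is exponentially distributed with known mean,
    # as in the problem formulation; the threshold rule itself is
    # an assumed placeholder policy, not the paper's optimal one.
    sink = random.expovariate(1.0 / mean_sink_dist)
    pos, last, relays = 0.0, 0.0, []
    while pos < sink:
        pos = min(pos + step, sink)
        # Place a relay whenever the gap since the last node
        # reaches the threshold (and the sink is not yet reached).
        if pos < sink and pos - last >= threshold:
            relays.append(pos)
            last = pos
    return sink, relays

sink, relays = deploy_as_you_go()
```

In the paper's Markov decision process, the per-hop cost trades a capacity-limiting term against the cost of each added relay; the simulation skeleton above is where such a cost would be accumulated hop by hop.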
Adaptive Matching for Expert Systems with Uncertain Task Types
A matching in a two-sided market often incurs an externality: a matched
resource may become unavailable to the other side of the market, at least for a
while. This is especially an issue in online platforms involving human experts
as the expert resources are often scarce. The efficient utilization of experts
in these platforms is made challenging by the fact that the information
available about the parties involved is usually limited.
To address this challenge, we develop a model of a task-expert matching
system where a task is matched to an expert using not only the prior
information about the task but also the feedback obtained from the past
matches. In our model the tasks arrive online while the experts are fixed and
constrained by a finite service capacity. For this model, we characterize the
maximum task resolution throughput a platform can achieve. We show that the
natural greedy approach, in which each expert is assigned the task most suitable to
her skill, is suboptimal, as it does not internalize the above externality. We
develop a throughput optimal backpressure algorithm which does so by accounting
for the `congestion' among different task types. Finally, we validate our model
and confirm our theoretical findings with data-driven simulations via logs of
Math.StackExchange, a Stack Exchange forum dedicated to mathematics.
Comment: A part of it was presented at the Allerton Conference 2017, 18 page
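The contrast between the greedy rule and the backpressure rule can be sketched in a few lines. The experts, task types, and success probabilities below are hypothetical; the point is only that weighting by queue backlog ("congestion") can overturn the greedy choice.

```python
from collections import defaultdict

# Hypothetical skill matrix: P(expert resolves a task of each type).
success_prob = {
    "alice": {"algebra": 0.9, "topology": 0.5},
    "bob":   {"algebra": 0.8, "topology": 0.2},
}

queues = defaultdict(int)  # backlog of unresolved tasks per type

def greedy_assign(expert):
    # Greedy rule: pick the task type the expert is best at,
    # ignoring how congested each task type's queue is.
    probs = success_prob[expert]
    return max(probs, key=probs.get)

def backpressure_assign(expert):
    # Backpressure-style rule: weight each task type by its current
    # backlog times the expert's success probability.
    probs = success_prob[expert]
    return max(probs, key=lambda t: queues[t] * probs[t])

queues["algebra"], queues["topology"] = 1, 10
```

With a long topology backlog, the backpressure rule routes alice to topology even though algebra is her strongest type, which is exactly the externality the greedy rule fails to internalize.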