Broadcast Caching Networks with Two Receivers and Multiple Correlated Sources
The correlation among the content distributed across a cache-aided broadcast
network can be exploited to reduce the delivery load on the shared wireless
link. This paper considers a two-user three-file network with correlated
content, and studies its fundamental limits for the worst-case demand. A class
of achievable schemes based on a two-step source coding approach is proposed.
Library files are first compressed using Gray-Wyner source coding, and then
cached and delivered using a combination of correlation-unaware cache-aided
coded multicast schemes. The second step is interesting in its own right and
considers a multiple-request caching problem, whose solution requires coding in
the placement phase. A lower bound on the optimal peak rate-memory trade-off is
derived, which is used to evaluate the performance of the proposed scheme. It
is shown that for symmetric sources the two-step strategy achieves the lower
bound for large cache capacities, and it is within half of the joint entropy of
two of the sources conditioned on the third source for all other cache sizes.
Comment: in Proceedings of Asilomar Conference on Signals, Systems and Computers, Pacific Grove, California, November 201
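The additive gap in the last claim is expressed through a conditional joint entropy H(X1, X2 | X3). As a toy illustration only (the binary correlated-source model below is an assumption for the example, not the library model studied in the paper), this sketch computes that quantity from a joint pmf:

```python
import math
from itertools import product

def entropy(pmf):
    """Shannon entropy in bits of a distribution given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idxs):
    """Marginalize a joint pmf over tuples onto the coordinates in idxs."""
    out = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in idxs)
        out[key] = out.get(key, 0.0) + p
    return out

# Hypothetical correlated model: X1 uniform; X2 and X3 each equal X1 w.p. 0.9.
joint = {}
for x1, x2, x3 in product([0, 1], repeat=3):
    p = 0.5 * (0.9 if x2 == x1 else 0.1) * (0.9 if x3 == x1 else 0.1)
    joint[(x1, x2, x3)] = p

H_123 = entropy(joint)
H_3 = entropy(marginal(joint, [2]))
H_12_given_3 = H_123 - H_3  # H(X1, X2 | X3) = H(X1, X2, X3) - H(X3)
print(f"H(X1,X2|X3) = {H_12_given_3:.3f} bits; additive gap <= {H_12_given_3/2:.3f} bits")
```

The chain-rule identity H(X1, X2 | X3) = H(X1, X2, X3) − H(X3) is what makes the computation a simple difference of entropies.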
Beyond the Cut-Set Bound: Uncertainty Computations in Network Coding with Correlated Sources
Cut-set bounds on achievable rates for network communication protocols are
not in general tight. In this paper we introduce a new technique for proving
converses for the problem of transmission of correlated sources in networks,
that results in bounds that are tighter than the corresponding cut-set bounds.
We also define the concept of "uncertainty region" which might be of
independent interest. We provide a full characterization of this region for the
case of two correlated random variables. The bounding technique works as
follows: on one hand we show that if the communication problem is solvable, the
uncertainty of certain random variables in the network with respect to
imaginary parties that have partial knowledge of the sources must satisfy some
constraints that depend on the network architecture. On the other hand, the
same uncertainties have to satisfy constraints that only depend on the joint
distribution of the sources. Matching these two leads to restrictions on the
statistical joint distribution of the sources in communication problems that
are solvable over a given network architecture.Comment: 12 pages, A short version appears in ISIT 201
On the rate loss and construction of source codes for broadcast channels
In this paper, we first define and bound the rate loss of source codes for broadcast channels. Our broadcast channel model comprises one transmitter and two receivers; the transmitter is connected to each receiver by a private channel and to both receivers by a common channel. The transmitter sends a description of source (X, Y) through these channels, receiver 1 reconstructs X with distortion D1, and receiver 2 reconstructs Y with distortion D2. Suppose the rates of the common channel and private channels 1 and 2 are R0, R1, and R2, respectively. The work of Gray and Wyner gives a complete characterization of all achievable rate triples (R0,R1,R2) given any distortion pair (D1,D2). In this paper, we define the rate loss as the gap between the achievable region and the outer bound composed of the rate-distortion functions, i.e., R0+R1+R2 ≥ RX,Y(D1,D2), R0+R1 ≥ RX(D1), and R0+R2 ≥ RY(D2). We upper bound the rate loss for general sources by functions of the distortions, and upper bound the rate loss for Gaussian sources by constants, which implies that although the outer bound is generally not achievable, it may be quite close to the achievable region. This also bounds the gap between the achievable region and the inner bound proposed by Gray and Wyner, and bounds the performance penalty associated with using separate decoders rather than joint decoders. We then construct such source codes using entropy-constrained dithered quantizers. The resulting implementation has low complexity and performance close to the theoretical optimum. In particular, the gap between its performance and the theoretical optimum can be bounded from above by constants for Gaussian sources.
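The outer bound above becomes concrete for scalar Gaussian marginals, where R_X(D) = max(0, ½ log2(σ²/D)). A minimal sketch (unit variances and the distortion values are illustrative assumptions; the sum-rate lower bound R_{X,Y}(D1,D2) is left as a parameter since it depends on the joint law of (X, Y)):

```python
import math

def R_gauss(var, D):
    """Rate-distortion function of a scalar Gaussian source under MSE, in bits."""
    return max(0.0, 0.5 * math.log2(var / D))

# Illustrative setup: unit-variance marginals, target distortions D1 = D2 = 0.25.
D1, D2 = 0.25, 0.25
lower_R1 = R_gauss(1.0, D1)  # outer bound: R0 + R1 >= R_X(D1)
lower_R2 = R_gauss(1.0, D2)  # outer bound: R0 + R2 >= R_Y(D2)

def satisfies_outer_bound(R0, R1, R2, R_sum_lb):
    """Check the three outer-bound inequalities; R_sum_lb stands in for R_{X,Y}(D1,D2)."""
    return (R0 + R1 >= lower_R1) and (R0 + R2 >= lower_R2) and (R0 + R1 + R2 >= R_sum_lb)

print(f"R_X(D1) = {lower_R1} bits, R_Y(D2) = {lower_R2} bits")
```

The rate loss is then how far an operating point (R0, R1, R2) sits above these three hyperplanes; the abstract's result bounds that distance by a constant in the Gaussian case.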
On rate-distortion with mixed types of side information
In this correspondence, we consider rate-distortion examples in the presence of side information. For a system with some side information known at both the encoder and decoder, and some known only at the decoder, we evaluate the rate-distortion function for both Gaussian and binary sources. While the Gaussian example is a straightforward generalization of the corresponding result by Wyner, the binary example proves more difficult and is solved using a multidimensional optimization approach. Leveraging the insights gained from the binary example, we then solve the more complicated binary Heegard and Berger problem of decoding when side information may be present. The results demonstrate the existence of a new type of successive refinement in which the refinement information is decoded together with side information that is not available for the initial description.
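For jointly Gaussian (X, Y) with side information Y known at both the encoder and decoder, the rate-distortion function is the usual Gaussian R(D) evaluated at the conditional variance var(X|Y) = σ²(1 − ρ²). A short sketch of this standard formula (the particular variance, correlation, and distortion values are illustrative assumptions):

```python
import math

def cond_rd_gauss(var_x, rho, D):
    """Rate-distortion function in bits for a Gaussian source X with jointly
    Gaussian side information Y (correlation rho) at both encoder and decoder.
    The conditional variance var(X|Y) = var_x * (1 - rho^2) replaces the
    source variance in the usual Gaussian R(D) = 0.5 * log2(var / D)."""
    cond_var = var_x * (1 - rho ** 2)
    return max(0.0, 0.5 * math.log2(cond_var / D))

# Stronger side information (larger |rho|) means less rate at the same distortion.
print(cond_rd_gauss(1.0, 0.0, 0.05))  # no side information
print(cond_rd_gauss(1.0, 0.9, 0.05))  # highly correlated side information
```

With rho = 0 the expression reduces to the side-information-free Gaussian rate-distortion function, which is a quick sanity check on the formula.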