
    Refinement of the random coding bound

    An improved pre-factor for the random coding bound is proved. Specifically, for channels whose critical rate is not equal to capacity, if a regularity condition is satisfied (resp. not satisfied), then for any $\epsilon > 0$ a pre-factor of $O\big(N^{-\frac{1}{2}(1 - \epsilon + \bar{\rho}^\ast_R)}\big)$ (resp. $O(N^{-\frac{1}{2}})$) is achievable for rates above the critical rate, where $N$ and $R$ are the blocklength and rate, respectively. The extra term $\bar{\rho}^\ast_R$ is related to the slope of the random coding exponent. Further, the relation of these bounds to the authors' recent refinement of the sphere-packing bound, as well as the pre-factor for the random coding bound below the critical rate, is discussed. Comment: Submitted to IEEE Trans. Inform. Theory.
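    A quick numeric sketch of what the improvement means: since $\bar{\rho}^\ast_R > 0$, the improved exponent $\frac{1}{2}(1 - \epsilon + \bar{\rho}^\ast_R)$ exceeds $\frac{1}{2}$, so the improved pre-factor decays strictly faster than the classical $O(N^{-\frac{1}{2}})$ one. The values of `rho` and `eps` below are illustrative assumptions, not taken from the paper.

    ```python
    # Compare the improved pre-factor N^(-(1/2)(1 - eps + rho)) with the
    # classical N^(-1/2). rho stands in for the slope-related term
    # \bar{rho}^*_R; rho = 0.5 and eps = 0.01 are assumed for illustration.
    rho = 0.5
    eps = 0.01

    for N in (100, 1000, 10000):
        improved = N ** (-0.5 * (1 - eps + rho))
        classical = N ** (-0.5)
        # the improved pre-factor is smaller at every blocklength
        print(f"N={N:>6}  improved={improved:.3e}  classical={classical:.3e}")
    ```

    Any positive slope term makes the gap widen polynomially in the blocklength.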

    A New Stable Peer-to-Peer Protocol with Non-persistent Peers

    Recent studies have suggested that the stability of peer-to-peer networks may rely on persistent peers, who remain in the network after they obtain the entire file. In the absence of such peers, one piece becomes extremely rare in the network, which leads to instability. Technological developments, however, are poised to reduce the incidence of persistent peers, giving rise to a need for a protocol that guarantees stability with non-persistent peers. We propose a novel peer-to-peer protocol, the group suppression protocol, to ensure the stability of peer-to-peer networks when all peers adopt non-persistent behavior. Using a suitable Lyapunov potential function, the group suppression protocol is proven to be stable when the file is broken into two pieces, and detailed experiments demonstrate the stability of the protocol for an arbitrary number of pieces. We define and simulate a decentralized version of this protocol for practical applications. A straightforward incorporation of the group suppression protocol into BitTorrent, retaining most of BitTorrent's core mechanisms, is also presented. Subsequent simulations show that under certain assumptions, BitTorrent with the official protocol cannot escape the missing piece syndrome, but BitTorrent with group suppression does. Comment: There are only a couple of minor changes in this version. The simulation tool is specified this time. Some repetitive figures are removed.

    A Rate-Distortion Approach to Index Coding

    We approach index coding as a special case of rate-distortion with multiple receivers, each with some side information about the source. Specifically, using techniques developed for the rate-distortion problem, we provide two upper bounds and one lower bound on the optimal index coding rate. The upper bounds involve specific choices of the auxiliary random variables in the best existing scheme for the rate-distortion problem. The lower bound is based on a new lower bound for the general rate-distortion problem. The bounds are shown to coincide for a number of (groupcast) index coding instances, including all instances for which the number of decoders does not exceed three. Comment: Substantially extended version. Submitted to IEEE Transactions on Information Theory.
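    For readers new to the problem, a classic toy instance (not one from the paper) shows why side information helps: if receiver $i$ wants bit $x_i$ and already knows all other bits, a single broadcast of the XOR of all bits replaces $n$ uncoded transmissions.

    ```python
    # Toy groupcast index coding instance: n receivers, receiver i wants x[i]
    # and holds every other bit as side information. One XOR broadcast suffices.
    from functools import reduce
    from operator import xor

    x = [1, 0, 1, 1]                 # source bits, one per receiver
    broadcast = reduce(xor, x)       # the single coded symbol sent to all

    def decode(side_info, b):
        # XOR the broadcast with all known bits; the wanted bit remains
        return reduce(xor, side_info, b)

    for i in range(len(x)):
        side = [x[j] for j in range(len(x)) if j != i]
        assert decode(side, broadcast) == x[i]
    print("all receivers decoded their bit from one broadcast symbol")
    ```

    The general problem, finding the minimum broadcast rate for arbitrary side-information patterns, is what the rate-distortion bounds above address.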

    The third-order term in the normal approximation for singular channels

    For a singular and symmetric discrete memoryless channel with positive dispersion, the third-order term in the normal approximation is shown to be upper bounded by a constant. This finding completes the characterization of the third-order term for symmetric discrete memoryless channels. The proof method is extended to asymmetric and singular channels with constant composition codes, and its connection to existing results, as well as its limitation in the error exponents regime, are discussed. Comment: Submitted to IEEE Trans. Inform. Theory.
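    In the standard notation (not verbatim from the paper), the normal approximation referenced here expands the maximum code size $M^*(n,\epsilon)$ at blocklength $n$ and error probability $\epsilon$ around the capacity $C$ and dispersion $V$:

    ```latex
    % Normal approximation; Q^{-1} is the inverse Gaussian tail function.
    % The abstract's claim: for singular symmetric DMCs with V > 0, the
    % third-order term is bounded by a constant, i.e. O(1).
    \log M^*(n,\epsilon) = nC - \sqrt{nV}\, Q^{-1}(\epsilon)
        + \underbrace{O(1)}_{\text{third-order term}}
    ```

    For non-singular channels the third-order term is instead known to grow as $\frac{1}{2}\log n$, which is what makes the constant bound for the singular case a completion of the characterization.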

    Erasure Multiple Descriptions

    We consider a binary erasure version of the n-channel multiple descriptions problem with symmetric descriptions, i.e., the rates of the n descriptions are the same and the distortion constraint depends only on the number of messages received. We consider the case where there is no excess rate for every k out of n descriptions. Our goal is to characterize the achievable distortions $D_1, D_2, \ldots, D_n$. We measure the fidelity of reconstruction using two distortion criteria: an average-case distortion criterion, under which distortion is measured by taking the average of the per-letter distortion over all source sequences, and a worst-case distortion criterion, under which distortion is measured by taking the maximum of the per-letter distortion over all source sequences. We present achievability schemes, based on random binning for average-case distortion and systematic MDS (maximum distance separable) codes for worst-case distortion, and prove optimality results for the corresponding achievable distortion regions. We then use the binary erasure multiple descriptions setup to propose a layered coding framework for multiple descriptions, which we then apply to vector Gaussian multiple descriptions and prove its optimality for symmetric scalar Gaussian multiple descriptions with two levels of receivers and no excess rate for the central receiver. We also prove a new outer bound for the general multi-terminal source coding problem and use it to prove an optimality result for the robust binary erasure CEO problem. For the latter, we provide a tight lower bound on the distortion for $\ell$ messages for any coding scheme that achieves the minimum achievable distortion for k messages, where k is less than or equal to $\ell$. Comment: 48 pages, 2 figures, submitted to IEEE Trans. Inf. Theory.
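    The defining property of the systematic MDS codes used in the worst-case scheme is that any k of the n descriptions suffice for exact reconstruction. A minimal sketch with assumed parameters (n, k) = (3, 2) over bits, where the third description is an XOR parity, illustrates this; the paper's actual construction is more general.

    ```python
    # Toy systematic (n, k) = (3, 2) MDS code over bits: two systematic
    # descriptions plus one XOR parity. Any 2 of the 3 descriptions recover
    # the source exactly, which is the MDS property at these parameters.

    def encode(a, b):
        return [a, b, a ^ b]        # descriptions 0, 1 systematic; 2 = parity

    def decode(received):
        # received: dict mapping description index -> bit, with >= 2 entries
        if 0 in received and 1 in received:
            return received[0], received[1]
        if 0 in received:           # have a and parity p: b = a ^ p
            return received[0], received[0] ^ received[2]
        return received[1] ^ received[2], received[1]   # have b and parity

    a, b = 1, 0
    d = encode(a, b)
    for lost in range(3):           # erase each description in turn
        rx = {i: d[i] for i in range(3) if i != lost}
        assert decode(rx) == (a, b)
    print("any 2 of 3 descriptions reconstruct the source")
    ```

    With fewer than k descriptions, nothing beyond the received systematic bits is recoverable, which is why the distortion constraint in the abstract depends only on the number of messages received.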