
    A Universal Decoder Relative to a Given Family of Metrics

    Consider the following framework of universal decoding suggested in [MerhavUniversal]. Given a family of decoding metrics and a random coding distribution (prior), a single universal decoder is optimal if, for any possible channel, the average error probability when using this decoder is no worse than the error probability attained by the best decoder in the family, up to a subexponential multiplicative factor. We describe a general universal decoder in this framework and compute the penalty for using it. The universal metric is constructed as follows. For each metric, a canonical metric is defined, and conditions for the given prior to be normal are given. A sub-exponential set of canonical metrics with a normal prior can be merged into a single universal optimal metric. We provide an example where this decoder is optimal while the decoder of [MerhavUniversal] is not.
    Comment: Accepted to ISIT 201
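
    A minimal sketch of the merge-and-maximize idea described above, assuming a family of additive decoding metrics and a toy codebook; both are illustrative placeholders, not the paper's canonical-metric construction or its normality conditions:

    def merged_metric(x, y, metrics):
        # Merge the family by taking the largest score any member assigns.
        return max(m(x, y) for m in metrics)

    def universal_decode(y, codebook, metrics):
        # Decode with the single merged metric: pick the highest-scoring codeword.
        return max(codebook, key=lambda x: merged_metric(x, y, metrics))

    # Toy family of metrics on binary words (higher is better).
    hamming = lambda x, y: -sum(a != b for a, b in zip(x, y))
    weighted = lambda x, y: -sum((2.0 if i % 2 else 1.0) * (a != b)
                                 for i, (a, b) in enumerate(zip(x, y)))
    codebook = [(0, 0, 0, 0), (1, 1, 1, 1), (1, 0, 1, 0)]
    print(universal_decode((1, 1, 0, 1), codebook, [hamming, weighted]))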

    Coding and Decoding Schemes for MSE and Image Transmission

    In this work we explore coding and decoding schemes tailored to mean-squared-error evaluation of the error in contexts such as image transmission. To do so, we introduce a loss function that expresses the overall performance of a coding and decoding scheme for discrete channels and that replaces the usual goal of minimizing the error probability with that of minimizing the expected loss. In this setting we explore the use of ordered decoders to create message-wise unequal error protection (UEP), in which the most valuable information is protected by placing near it information words that differ only by a small-valued error. We give explicit examples, using gray-scale images, including small-scale performance analysis and visual simulations for the BSMC.
    Comment: Submitted to IEEE Transactions on Information Theory
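
    A minimal sketch of the expected-loss criterion described above, assuming uniformly distributed messages, a binary symmetric channel, and a squared-error loss on message values; the toy encoder and decoder are placeholders, not the schemes studied in the paper:

    from itertools import product

    def bsc_prob(x, y, p):
        # Probability of receiving word y when word x is sent over a BSC(p).
        flips = sum(a != b for a, b in zip(x, y))
        return (p ** flips) * ((1 - p) ** (len(x) - flips))

    def expected_loss(encode, decode, n, num_msgs, p):
        # E[(m - m_hat)^2], averaged over messages and channel noise.
        total = 0.0
        for m in range(num_msgs):
            x = encode(m)
            for y in product((0, 1), repeat=n):
                total += bsc_prob(x, y, p) * (m - decode(y)) ** 2
        return total / num_msgs

    # Toy scheme: 2-bit messages mapped to 3-bit words, nearest-codeword decoding.
    codebook = {0: (0, 0, 0), 1: (0, 1, 1), 2: (1, 0, 1), 3: (1, 1, 0)}
    encode = lambda m: codebook[m]
    decode = lambda y: min(codebook, key=lambda m: sum(a != b for a, b in zip(codebook[m], y)))
    print(expected_loss(encode, decode, n=3, num_msgs=4, p=0.1))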

    Improving Variational Encoder-Decoders in Dialogue Generation

    Variational encoder-decoders (VEDs) have shown promising results in dialogue generation. However, the latent variable distributions are usually approximated by a much simpler model than the powerful RNN structure used for encoding and decoding, yielding the KL-vanishing problem and an inconsistent training objective. In this paper, we separate training into two phases: the first phase learns to autoencode discrete texts into continuous embeddings, from which the second phase learns to generalize latent representations by reconstructing the encoded embedding. In this case, latent variables are sampled by transforming Gaussian noise through multi-layer perceptrons and are trained with a separate VED model, which has the potential of realizing a much more flexible distribution. We compare our model with current popular models, and the experiments demonstrate substantial improvement in both metric-based and human evaluations.
    Comment: Accepted by AAAI201
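
    A minimal sketch of the latent sampling step described above: Gaussian noise pushed through a multi-layer perceptron to produce latent codes. Layer sizes, the tanh activation, and the random (untrained) weights are illustrative assumptions; the two-phase training procedure itself is not reproduced:

    import numpy as np

    rng = np.random.default_rng(0)

    def mlp_sampler(noise_dim, hidden_dim, latent_dim):
        # Random weights stand in for parameters learned in the second phase.
        w1 = rng.normal(size=(noise_dim, hidden_dim))
        w2 = rng.normal(size=(hidden_dim, latent_dim))
        def sample(batch_size):
            eps = rng.standard_normal((batch_size, noise_dim))  # Gaussian noise
            return np.tanh(eps @ w1) @ w2                       # transformed latent codes
        return sample

    sample_latents = mlp_sampler(noise_dim=16, hidden_dim=64, latent_dim=32)
    z = sample_latents(batch_size=8)  # latent codes to be consumed by the decoder
    print(z.shape)                    # (8, 32)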

    A B-ISDN-compatible modem/codec

    Coded modulation techniques for the development of a broadband integrated services digital network (B-ISDN)-compatible modem/codec are investigated. The selected baseband processor system must support transmission of 155.52 Mbit/s of data over an INTELSAT 72-MHz transponder. Performance objectives and fundamental system parameters, including the channel symbol rate, code rate, and modulation scheme, are determined. From several candidate codes, a concatenated coding system consisting of coded octal phase shift keying (8-PSK) modulation as the inner code and a high-rate Reed-Solomon code as the outer code is selected, and its bit error rate performance is analyzed by computer simulation. The hardware implementation of the decoder for the selected code is also described.
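
    A minimal sketch of the rate arithmetic behind such a concatenated scheme. The specific rates below (a rate-5/6 coded 8-PSK inner code and an RS(255,239) outer code) are placeholder assumptions, not the parameters selected in the study:

    INFO_RATE = 155.52e6          # B-ISDN payload, bit/s
    BITS_PER_8PSK_SYMBOL = 3      # channel bits carried per 8-PSK symbol

    inner_rate = 5.0 / 6.0        # assumed inner (coded modulation) code rate
    outer_rate = 239.0 / 255.0    # assumed outer Reed-Solomon code rate
    overall_rate = inner_rate * outer_rate

    # Channel symbol rate needed to deliver the payload through both codes.
    symbol_rate = INFO_RATE / (BITS_PER_8PSK_SYMBOL * overall_rate)
    print(f"overall code rate   ~ {overall_rate:.3f}")
    print(f"channel symbol rate ~ {symbol_rate / 1e6:.1f} Msym/s "
          f"(to be accommodated within the 72-MHz transponder)")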

    Minimum Rates of Approximate Sufficient Statistics

    Given a sufficient statistic for a parametric family of distributions, one can estimate the parameter without access to the data. However, the memory or code size for storing the sufficient statistic may nonetheless still be prohibitive. Indeed, for $n$ independent samples drawn from a $k$-nomial distribution with $d=k-1$ degrees of freedom, the length of the code scales as $d\log n+O(1)$. In many applications, we may not have a useful notion of sufficient statistics (e.g., when the parametric family is not an exponential family) and we also may not need to reconstruct the generating distribution exactly. By adopting a Shannon-theoretic approach in which we allow a small error in estimating the generating distribution, we construct various {\em approximate sufficient statistics} and show that the code length can be reduced to $\frac{d}{2}\log n+O(1)$. We consider errors measured according to the relative entropy and variational distance criteria. For the code constructions, we leverage Rissanen's minimum description length principle, which yields a non-vanishing error measured according to the relative entropy. For the converse parts, we use Clarke and Barron's formula for the relative entropy between a parametrized distribution and the corresponding mixture distribution. However, this method only yields a weak converse for the variational distance. We develop new techniques to achieve vanishing errors and we also prove strong converses. The latter means that even if the code is allowed to have a non-vanishing error, its length must still be at least $\frac{d}{2}\log n$.
    Comment: To appear in the IEEE Transactions on Information Theory
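
    A small worked comparison of the two code-length scalings quoted above, ignoring the $O(1)$ terms and taking logarithms base 2 so the lengths read as bits; the values of $k$ and $n$ are arbitrary illustrative choices:

    from math import log2

    k = 3            # k-nomial (here, trinomial) source
    d = k - 1        # degrees of freedom
    n = 10 ** 6      # number of i.i.d. samples

    exact = d * log2(n)          # ~ d log n: exact sufficient statistic
    approx = (d / 2) * log2(n)   # ~ (d/2) log n: approximate sufficient statistic
    print(f"exact  ~ {exact:.1f} bits")
    print(f"approx ~ {approx:.1f} bits")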