7 research outputs found

    A Sphere-Packing Error Exponent for Mismatched Decoding

    We derive a sphere-packing error exponent for coded transmission over discrete memoryless channels with a fixed decoding metric. By studying the error probability of the code over an auxiliary channel, we find a lower bound to the probability of error of mismatched decoding. The bound is shown to decay exponentially for coding rates smaller than a new upper bound to the mismatch capacity. For rates higher than the new upper bound, the error probability is shown to be bounded away from zero. The new upper bound is shown to improve over previous upper bounds to the mismatch capacity.
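    For context, a minimal sketch of the standard mismatched-decoding setting in generic textbook notation; the channel W, metric q, rate R, and exponent E_sp below are generic symbols used for illustration, not necessarily the paper's exact definitions.

    % Mismatched decoding over a DMC W(y|x) with a fixed metric q(x,y):
    % the decoder ranks messages by accumulated metric rather than true likelihood.
    \[
      \hat{m} \;=\; \arg\max_{m} \; \prod_{i=1}^{n} q\bigl(x_i(m),\, y_i\bigr)
    \]
    % A sphere-packing-type result lower-bounds the error probability of any
    % rate-R code of blocklength n as
    \[
      P_{\mathrm{e}}(n, R) \;\geq\; e^{-n\left(E_{\mathrm{sp}}(R) + o(1)\right)},
    \]
    % where E_sp(R) is finite only for rates R below an upper bound on the mismatch
    % capacity; above that rate the error probability stays bounded away from zero.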

    Let's be Honest: An Optimal No-Regret Framework for Zero-Sum Games

    We revisit the problem of solving two-player zero-sum games in the decentralized setting. We propose a simple algorithmic framework that simultaneously achieves the best rates for honest regret as well as adversarial regret, and in addition resolves the open problem of removing the logarithmic terms in convergence to the value of the game. We achieve this goal in three steps. First, we provide a novel analysis of optimistic mirror descent (OMD), showing that it can be modified to guarantee fast convergence for both honest regret and the value of the game when the players are playing collaboratively. Second, we propose a new algorithm, dubbed robust optimistic mirror descent (ROMD), which attains optimal adversarial regret without knowing the time horizon beforehand. Finally, we propose a simple signaling scheme, which enables us to bridge OMD and ROMD to achieve the best of both worlds. Numerical examples are presented to support our theoretical claims and show that our non-adaptive ROMD algorithm can be competitive with OMD with adaptive step-size selection. Comment: Proceedings of the 35th International Conference on Machine Learning.
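    As a rough illustration of the optimistic mirror descent ingredient only, here is a minimal optimistic multiplicative-weights self-play loop on a small matrix game. This is a standard instance of OMD with entropic regularization, not the paper's ROMD algorithm or its signaling scheme; the payoff matrix, step size eta, and horizon T are arbitrary values chosen for the sketch.

    import numpy as np

    def optimistic_mwu_selfplay(A, eta=0.1, T=1000):
        """Optimistic multiplicative-weights self-play on the zero-sum game
        min_x max_y x^T A y. Returns the time-averaged strategies of both players."""
        m, n = A.shape
        x = np.ones(m) / m          # row player's mixed strategy (minimizer)
        y = np.ones(n) / n          # column player's mixed strategy (maximizer)
        gx_prev = np.zeros(m)       # previous loss vector seen by the row player
        gy_prev = np.zeros(n)       # previous loss vector seen by the column player
        x_avg, y_avg = np.zeros(m), np.zeros(n)
        for _ in range(T):
            gx = A @ y              # row player's losses
            gy = -A.T @ x           # column player's losses (negated payoffs)
            # Optimistic step: use the last gradient as a prediction of the next one,
            # i.e. update with 2*g_t - g_{t-1} instead of g_t.
            x = x * np.exp(-eta * (2 * gx - gx_prev))
            x /= x.sum()
            y = y * np.exp(-eta * (2 * gy - gy_prev))
            y /= y.sum()
            gx_prev, gy_prev = gx, gy
            x_avg += x
            y_avg += y
        return x_avg / T, y_avg / T

    # Example: matching pennies; the averaged strategies approach (1/2, 1/2),
    # and the value of the game (here 0) is approached without adaptive step sizes.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    x_bar, y_bar = optimistic_mwu_selfplay(A)
    print(x_bar, y_bar)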

    Let’s be honest: An optimal no-regret framework for zero-sum games

    We revisit the problem of solving two-player zero-sum games in the decentralized setting. We propose a simple algorithmic framework that simultaneously achieves the best rates for honest regret as well as adversarial regret, and in addition resolves the open problem of removing the logarithmic terms in convergence to the value of the game. We achieve this goal in three steps. First, we provide a novel analysis of optimistic mirror descent (OMD), showing that it can be modified to guarantee fast convergence for both honest regret and the value of the game when the players are playing collaboratively. Second, we propose a new algorithm, dubbed robust optimistic mirror descent (ROMD), which attains optimal adversarial regret without knowing the time horizon beforehand. Finally, we propose a simple signaling scheme, which enables us to bridge OMD and ROMD to achieve the best of both worlds. Numerical examples are presented to support our theoretical claims and show that our non-adaptive ROMD algorithm can be competitive with OMD with adaptive step-size selection.

    A sphere-packing exponent for mismatched decoding

    Paper presented at the 2021 IEEE International Symposium on Information Theory (ISIT), held virtually from 12 to 20 July 2021. We derive a sphere-packing error exponent for mismatched decoding over discrete memoryless channels. We find a lower bound to the probability of error of mismatched decoding that decays exponentially for coding rates smaller than a new upper bound to the mismatch capacity. For rates higher than the new upper bound, the error probability is shown to be bounded away from zero. The new upper bound is shown to improve over previous upper bounds to the mismatch capacity. This work was supported in part by the European Research Council under Grant 725411.

    Minimum probability of error of list M-ary hypothesis testing

    We study a variation of Bayesian M-ary hypothesis testing in which the test outputs a list of L candidates out of the M possible upon processing the observation. We study the minimum error probability of list hypothesis testing, where an error is defined as the event that the true hypothesis is not in the list output by the test. We derive two exact expressions of the minimum probability of error. The first is expressed as the error probability of a certain non-Bayesian binary hypothesis test and is reminiscent of the meta-converse bound by Polyanskiy, Poor and Verdú (2010). The second is expressed as the tail probability of the likelihood ratio between the two distributions involved in the aforementioned non-Bayesian binary hypothesis test. Keywords: hypothesis testing, error probability, information theory. This work was supported by the European Research Council (Grant 725411) and the Spanish Ministry of Economy and Competitiveness (Grant PID2020-116683GB-C22).
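    For orientation, a standard baseline fact (textbook material, not either of the paper's two new exact expressions): with prior P_M and observation Y, the optimal Bayesian list test outputs the L hypotheses with the largest posteriors, which gives the minimum list error probability sketched below.

    % Minimum error probability of list hypothesis testing with list size L:
    % the optimal test keeps the L hypotheses with the largest posteriors P(m | y).
    \[
      \epsilon_L \;=\; 1 \;-\; \mathbb{E}_{Y}\!\left[\, \max_{\substack{\mathcal{S}\subseteq\{1,\dots,M\} \\ |\mathcal{S}| = L}} \;\sum_{m\in\mathcal{S}} P_{M\mid Y}(m\mid Y) \right]
    \]
    % For L = 1 this reduces to the usual minimum error probability of Bayesian
    % M-ary hypothesis testing, 1 - E_Y[ max_m P_{M|Y}(m | Y) ].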