1,194 research outputs found

    Rényi Divergence and Kullback-Leibler Divergence

    Full text link
    Rényi divergence is related to Rényi entropy much like Kullback-Leibler divergence is related to Shannon's entropy, and comes up in many settings. It was introduced by Rényi as a measure of information that satisfies almost the same axioms as Kullback-Leibler divergence, and depends on a parameter that is called its order. In particular, the Rényi divergence of order 1 equals the Kullback-Leibler divergence. We review and extend the most important properties of Rényi divergence and Kullback-Leibler divergence, including convexity, continuity, limits of $\sigma$-algebras and the relation of the special order 0 to the Gaussian dichotomy and contiguity. We also show how to generalize the Pythagorean inequality to orders different from 1, and we extend the known equivalence between channel capacity and minimax redundancy to continuous channel inputs (for all orders) and present several other minimax results. Comment: To appear in IEEE Transactions on Information Theory.
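
    A standard statement of the definition behind this abstract (not quoted from the paper, whose conventions for the extended orders may differ): for distributions $P$ and $Q$ with densities $p$ and $q$, the Rényi divergence of order $\alpha \neq 1$ is
        $$D_{\alpha}(P\|Q) = \frac{1}{\alpha - 1} \log \int p^{\alpha} q^{1-\alpha} \, d\mu,$$
    and letting $\alpha \to 1$ recovers the Kullback-Leibler divergence $D(P\|Q) = \int p \log(p/q) \, d\mu$, which is the sense in which the order-1 case above equals Kullback-Leibler divergence.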

    Uniform Continuity of the Value of Zero-Sum Games with Differential Information

    Get PDF
    We establish uniform continuity of the value for zero-sum games with differential information, when the distance between changing information fields of each player is measured by the Boylan (1971) pseudo-metric. We also show that the optimal strategy correspondence is upper semi-continuous when the information fields of players change (even with the weak topology on players' strategy sets), and is approximately lower semi-continuous. Keywords: Zero-Sum Games, Differential Information, Value, Optimal Strategies, Uniform Continuity
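
    For context, Boylan's pseudo-metric (as standardly stated; this formula is a recollection of Boylan 1971, not quoted from the paper) measures the distance between sub-$\sigma$-fields $\mathcal{F}$ and $\mathcal{G}$ of a probability space $(\Omega, \Sigma, P)$ by
        $$d(\mathcal{F}, \mathcal{G}) = \sup_{A \in \mathcal{F}} \inf_{B \in \mathcal{G}} P(A \triangle B) + \sup_{B \in \mathcal{G}} \inf_{A \in \mathcal{F}} P(A \triangle B),$$
    so two information fields are close when every event discernible under one is approximated in probability by an event discernible under the other.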

    A non-convex relaxed version of minimax theorems

    Full text link
    Given a subset $A \times B$ of a locally convex space $X \times Y$ (with $A$ compact) and a function $f: A \times B \rightarrow \overline{\mathbb{R}}$ such that $f(\cdot, y)$, $y \in B$, are concave and upper semicontinuous, the minimax inequality $\max_{x \in A} \inf_{y \in B} f(x,y) \geq \inf_{y \in B} \sup_{x \in A_{0}} f(x,y)$ is shown to hold, provided that $A_{0}$ is the set of $x \in A$ such that $f(x, \cdot)$ is proper, convex and lower semi-continuous. Moreover, if in addition $A \times B \subset f^{-1}(\mathbb{R})$, then we can take as $A_{0}$ the set of $x \in A$ such that $f(x, \cdot)$ is convex. The relation to Moreau's biconjugate representation theorem is discussed, and some applications to convex duality are provided. Key words: Minimax theorem, Moreau theorem, conjugate function, convex optimization
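
    For orientation (a standard fact, not a claim of this paper): the opposite comparison is automatic, since for any $f$ and any $A_{0} \subset A$,
        $$\sup_{x \in A_{0}} \inf_{y \in B} f(x,y) \leq \inf_{y \in B} \sup_{x \in A_{0}} f(x,y).$$
    The content of the relaxed theorem is thus the reverse bound, obtained by enlarging the left-hand maximization from $A_{0}$ to all of $A$ while keeping the right-hand supremum on the well-behaved slice $A_{0}$.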

    Advance research on control systems for the Saturn launch vehicle, Final report, Jan. 1964 - May 1965

    Get PDF
    Minimax problem in control systems for the Saturn launch vehicle

    Statistical minimax theorems via nonstandard analysis

    Full text link
    For statistical decision problems with finite parameter space, it is well-known that the upper value (minimax value) agrees with the lower value (maximin value). Only under a generalized notion of prior does such an equivalence carry over to the case of infinite parameter spaces, provided nature can play a prior distribution and the statistician can play a randomized strategy. Various such extensions of this classical result have been established, but they are subject to technical conditions such as compactness of the parameter space or continuity of the risk functions. Using nonstandard analysis, we prove a minimax theorem for arbitrary statistical decision problems. Informally, we show that for every statistical decision problem, the standard upper value equals the lower value when the supremum is taken over the collection of all internal priors, which may assign infinitesimal probability to (internal) events. Applying our nonstandard minimax theorem, we derive several standard minimax theorems: a minimax theorem on compact parameter spaces with continuous risk functions, a finitely additive minimax theorem with bounded risk functions, and a minimax theorem on totally bounded metric parameter spaces with Lipschitz risk functions.
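
    To make the finite starting point of this abstract concrete, the sketch below (not code from the paper; the random risk matrix and the use of scipy.optimize.linprog are this sketch's own choices) checks via linear programming that the upper and lower values of a finite decision problem coincide:

        # Minimal sketch: finite parameter space (rows) vs. finite set of
        # decision rules (columns). R[theta, a] is the risk of rule a under
        # parameter theta; the statistician minimizes, nature maximizes.
        import numpy as np
        from scipy.optimize import linprog

        R = np.random.default_rng(0).random((4, 5))
        n_theta, n_act = R.shape

        # Upper (minimax) value: randomized rule q minimizing worst-case risk.
        # Variables (q, v): minimize v subject to R @ q <= v, q >= 0, sum(q) = 1.
        c = np.r_[np.zeros(n_act), 1.0]
        upper = linprog(c,
                        A_ub=np.hstack([R, -np.ones((n_theta, 1))]),
                        b_ub=np.zeros(n_theta),
                        A_eq=np.r_[np.ones(n_act), 0.0].reshape(1, -1),
                        b_eq=[1.0],
                        bounds=[(0, None)] * n_act + [(None, None)])

        # Lower (maximin) value: prior p maximizing the Bayes risk of the
        # best-responding rule. Variables (p, w): maximize w subject to
        # R.T @ p >= w, p >= 0, sum(p) = 1.
        c2 = np.r_[np.zeros(n_theta), -1.0]
        lower = linprog(c2,
                        A_ub=np.hstack([-R.T, np.ones((n_act, 1))]),
                        b_ub=np.zeros(n_act),
                        A_eq=np.r_[np.ones(n_theta), 0.0].reshape(1, -1),
                        b_eq=[1.0],
                        bounds=[(0, None)] * n_theta + [(None, None)])

        print(upper.x[-1], -lower.fun)  # the two values agree

    The equality of the two printed values is exactly linear-programming duality, i.e. the classical finite minimax theorem that the nonstandard result above generalizes.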