
    Strict Monotonicity and Convergence Rate of Titterington's Algorithm for Computing D-optimal Designs

    We study a class of multiplicative algorithms introduced by Silvey et al. (1978) for computing D-optimal designs. Strict monotonicity is established for a variant considered by Titterington (1978). A formula for the rate of convergence is also derived. This is used to explain why modifications considered by Titterington (1978) and Dette et al. (2008) usually converge faster.
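
    For concreteness, a minimal sketch of the basic multiplicative iteration from Silvey et al. (1978), with a power parameter standing in for the kind of variant Titterington (1978) analysed; the function name, the default `lam`, and the stopping rule are illustrative choices, not the paper's:

```python
import numpy as np

def multiplicative_d_optimal(X, lam=1.0, max_iter=1000, tol=1e-10):
    """Multiplicative algorithm for D-optimal design weights.

    X: (N, m) array of candidate design points x_1, ..., x_N.
    lam: exponent in w_i <- w_i * (d_i(w)/m)**lam; lam = 1 recovers the
    basic Silvey et al. (1978) update (illustrative, not necessarily the
    paper's exact variant).
    """
    N, m = X.shape
    w = np.full(N, 1.0 / N)                       # start from the uniform design
    for _ in range(max_iter):
        M = X.T @ (w[:, None] * X)                # information matrix M(w)
        Minv = np.linalg.inv(M)
        d = np.einsum('ij,jk,ik->i', X, Minv, X)  # d_i = x_i' M(w)^{-1} x_i
        w_new = w * (d / m) ** lam                # multiplicative update
        w_new /= w_new.sum()                      # renormalise (no-op when lam = 1)
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w
```

    For lam = 1 the updated weights already sum to one, since sum_i w_i d_i(w) = tr(M(w)^{-1} M(w)) = m; the renormalisation only matters for other exponents.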

    Critical Slowing Down Near Topological Transitions in Rate-Distortion Problems

    In Rate-Distortion (RD) problems, one seeks reduced representations of a source that meet a target distortion constraint. Such optimal representations undergo topological transitions at certain critical rate values, where their cardinality or dimensionality changes. We study the convergence time of the Arimoto-Blahut alternating-projection algorithms used to solve such problems near those critical points, in both the Rate-Distortion and Information Bottleneck settings. We argue that they suffer from Critical Slowing Down -- a diverging number of iterations for convergence -- near the critical points. This phenomenon can have theoretical and practical implications for both Machine Learning and Data Compression problems.
    Comment: 9 pages, 2 figures, ISIT 2021 submission.
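
    As a reference point, a minimal sketch of the standard Blahut-Arimoto fixed-point iteration for the rate-distortion problem at a given multiplier beta, returning the iteration count whose divergence near critical beta values is the slowdown the abstract describes (function and parameter names are mine):

```python
import numpy as np

def blahut_arimoto_rd(p, d, beta, tol=1e-9, max_iter=100000):
    """Standard Blahut-Arimoto fixed point for the rate-distortion problem
    at multiplier beta.  p: (X,) source distribution; d: (X, Xhat)
    distortion matrix.  Returns (rate_nats, distortion, n_iter), where
    n_iter is the iteration count that diverges near critical beta values
    in the critical-slowing-down picture.
    """
    _, Xhat = d.shape
    q = np.full(Xhat, 1.0 / Xhat)                # marginal over reproductions
    expd = np.exp(-beta * d)                     # Boltzmann factors
    for it in range(1, max_iter + 1):
        cond = q[None, :] * expd                 # unnormalised q(xhat | x)
        cond /= cond.sum(axis=1, keepdims=True)
        q_new = p @ cond                         # updated marginal q(xhat)
        done = np.max(np.abs(q_new - q)) < tol
        q = q_new
        if done:
            break
    joint = p[:, None] * cond
    D = float(np.sum(joint * d))                 # expected distortion
    with np.errstate(divide='ignore', invalid='ignore'):
        R = float(np.nansum(joint * np.log(cond / q[None, :])))  # I(X; Xhat) in nats
    return R, D, it
```

    Sweeping beta across a critical value and plotting n_iter makes the diverging convergence time directly visible.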

    Analytical calculation formulas for capacities of classical and classical-quantum channels

    We derive an analytical calculation formula for the channel capacity of a classical channel that requires no iteration, whereas the existing algorithms are iterative and their number of iterations depends on the required precision level. Hence, ours is the first analytical formula for this capacity without any iteration. We apply the obtained formula to examples and see how it works in these cases. Then, we extend it to the channel capacity of a classical-quantum (cq-) channel. Many existing studies have proposed algorithms for a cq-channel, and all of them require iterations. Our extended analytical formula likewise involves no iteration and outputs the exact optimum value.
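
    The abstract does not reproduce the closed-form expression itself; for contrast, here is a sketch of the classical iterative Blahut-Arimoto capacity computation whose precision-dependent iteration count the paper's formula avoids (an assumed baseline, not the paper's method):

```python
import numpy as np

def ba_channel_capacity(W, tol=1e-12, max_iter=100000):
    """Classical Blahut-Arimoto iteration for the capacity (in nats) of a
    discrete memoryless channel W, with W[x, y] = Pr(y | x) and rows
    summing to one.  Iterative baseline only; the paper's closed-form
    expression is not reproduced here.
    """
    n_in, _ = W.shape
    p = np.full(n_in, 1.0 / n_in)                # input distribution

    def kl_rows(q):
        # D(W(.|x) || q) for every input letter x, with 0 log 0 := 0
        with np.errstate(divide='ignore', invalid='ignore'):
            t = W * (np.log(W) - np.log(q)[None, :])
        return np.where(W > 0, t, 0.0).sum(axis=1)

    for _ in range(max_iter):
        d = kl_rows(p @ W)
        p_new = p * np.exp(d)                    # multiplicative reweighting
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return float(p @ kl_rows(p @ W))             # C = sum_x p(x) D(W(.|x) || q)
```

    For example, for a binary symmetric channel with crossover probability 0.1, `ba_channel_capacity(np.array([[0.9, 0.1], [0.1, 0.9]]))` returns approximately log 2 - H(0.1) ≈ 0.368 nats.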

    A Constrained BA Algorithm for Rate-Distortion and Distortion-Rate Functions

    The Blahut-Arimoto (BA) algorithm has played a fundamental role in the numerical computation of rate-distortion (RD) functions. This algorithm possesses a desirable monotonic convergence property by alternately minimizing its Lagrangian with a fixed multiplier. In this paper, we propose a novel modification of the BA algorithm, wherein the multiplier is updated through a one-dimensional root-finding step using a monotonic univariate function, efficiently implemented by Newton's method in each iteration. Consequently, the modified algorithm directly computes the RD function for a given target distortion, without exploring the entire RD curve as in the original BA algorithm. Moreover, this modification presents a versatile framework, applicable to a wide range of problems, including the computation of distortion-rate (DR) functions. Theoretical analysis shows that the outputs of the modified algorithms still converge to the solutions of the RD and DR functions with rate $O(1/n)$, where $n$ is the number of iterations. Additionally, these algorithms provide $\varepsilon$-approximation solutions with $O\left(\frac{MN\log N}{\varepsilon}(1+\log|\log\varepsilon|)\right)$ arithmetic operations, where $M, N$ are the sizes of the source and reproduced alphabets, respectively. Numerical experiments demonstrate that the modified algorithms exhibit significant acceleration compared with the original BA algorithms and showcase commendable performance across classical source distributions such as discretized Gaussian, Laplacian and uniform sources.
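
    A simplified sketch of the idea as the abstract states it: between BA updates, re-solve a one-dimensional root-finding problem in the multiplier beta so that the expected distortion hits the target (for fixed q, the map beta -> D(beta) is monotone decreasing, so Newton's method applies); the paper's exact update order and safeguards may differ:

```python
import numpy as np

def constrained_ba_rd(p, d, D_target, outer=200, newton=25, tol=1e-10):
    """Sketch of a distortion-targeted BA iteration: each outer step solves
    D(beta) = D_target in beta by Newton's method (for fixed q, one has
    dD/dbeta = -E_x Var(d | x) <= 0, so D(beta) is monotone decreasing),
    then applies the usual BA marginal update.  A simplified reading of the
    abstract, not the paper's exact scheme.  Returns (rate_nats, beta).
    """
    X, Xhat = d.shape
    q = np.full(Xhat, 1.0 / Xhat)
    beta = 1.0

    def conditional(beta):
        c = q[None, :] * np.exp(-beta * d)        # q(xhat) exp(-beta d(x, xhat))
        return c / c.sum(axis=1, keepdims=True)

    for _ in range(outer):
        for _ in range(newton):                   # 1-D root-finding in beta
            cond = conditional(beta)
            Dmean = np.sum(p[:, None] * cond * d)
            Ed = (cond * d).sum(axis=1)           # E[d | x]
            Ed2 = (cond * d * d).sum(axis=1)      # E[d^2 | x]
            dD = -(p @ (Ed2 - Ed ** 2))           # dD/dbeta = -E_x Var(d | x)
            if abs(Dmean - D_target) < tol or dD == 0.0:
                break
            beta = max(beta - (Dmean - D_target) / dD, 0.0)  # Newton step
        cond = conditional(beta)
        q_new = p @ cond                          # standard BA update of q(xhat)
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    cond = conditional(beta)
    joint = p[:, None] * cond
    with np.errstate(divide='ignore', invalid='ignore'):
        rate = np.nansum(joint * np.log(cond / q[None, :]))
    return float(rate), float(beta)
```

    The sketch assumes D_target lies in the achievable range (between the distortions attained as beta grows large and at beta = 0); outside that range the Newton loop simply pushes beta toward a boundary.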

    Deciding What to Model: Value-Equivalent Sampling for Reinforcement Learning

    The quintessential model-based reinforcement-learning agent iteratively refines its estimates or prior beliefs about the true underlying model of the environment. Recent empirical successes in model-based reinforcement learning with function approximation, however, eschew the true model in favor of a surrogate that, while ignoring various facets of the environment, still facilitates effective planning over behaviors. Recently formalized as the value equivalence principle, this algorithmic technique is perhaps unavoidable as real-world reinforcement learning demands consideration of a simple, computationally-bounded agent interacting with an overwhelmingly complex environment, whose underlying dynamics likely exceed the agent's capacity for representation. In this work, we consider the scenario where agent limitations may entirely preclude identifying an exactly value-equivalent model, immediately giving rise to a trade-off between identifying a model simple enough to learn and incurring only bounded sub-optimality. To address this problem, we introduce an algorithm that, using rate-distortion theory, iteratively computes an approximately-value-equivalent, lossy compression of the environment which an agent may feasibly target in lieu of the true model. We prove an information-theoretic, Bayesian regret bound for our algorithm that holds for any finite-horizon, episodic sequential decision-making problem. Crucially, our regret bound can be expressed in one of two possible forms, providing a performance guarantee for finding either the simplest model that achieves a desired sub-optimality gap or, alternatively, the best model given a limit on agent capacity.
    Comment: Accepted to Neural Information Processing Systems (NeurIPS) 2022.
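
    As a toy illustration of the rate-distortion framing only (not the paper's algorithm), one can reuse the `blahut_arimoto_rd` sketch from the rate-distortion entry above, with a posterior over a small set of candidate models as the source and a made-up matrix of value gaps as the distortion:

```python
import numpy as np

# Toy numbers, purely illustrative: prior[i] is a posterior over three
# candidate models; value_gap[i, j] is the sub-optimality incurred by
# planning with surrogate model j when the true model is i.
prior = np.array([0.5, 0.3, 0.2])
value_gap = np.array([[0.0, 0.4, 0.9],
                      [0.4, 0.0, 0.5],
                      [0.9, 0.5, 0.0]])

# Trade off description length against value loss at multiplier beta,
# using the blahut_arimoto_rd sketch defined earlier in this listing.
R, D, _ = blahut_arimoto_rd(prior, value_gap, beta=5.0)
print(f"rate {R:.3f} nats, expected value gap {D:.3f}")
```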