    Relative entropy under mappings by stochastic matrices

    Abstract

    The relative g-entropy of two finite, discrete probability distributions x = (x_1,…,x_n) and y = (y_1,…,y_n) is defined as H_g(x,y) = Σ_k x_k g(y_k/x_k − 1), where g:(−1,∞)→R is convex and g(0) = 0. When g(t) = −log(1 + t), then H_g(x,y) = Σ_k x_k log(x_k/y_k), the usual relative entropy. Let P_n = {x ∈ R^n : Σ_i x_i = 1, x_i > 0 ∀i}. Our major result is that, for any m × n column-stochastic matrix A, the contraction coefficient defined as η_g(A) = sup{H_g(Ax,Ay)/H_g(x,y) : x,y ∈ P_n, x ≠ y} satisfies η_g(A) ⩽ 1 − α(A), where α(A) = min_{j,k} Σ_i min(a_ij, a_ik) is Dobrushin's coefficient of ergodicity. Consequently, η_g(A) < 1 if and only if A is scrambling. Upper and lower bounds on η_g(A) are established. Analogous results hold for Markov chains in continuous time.
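
    A minimal numerical sketch of these definitions, assuming NumPy (the function names are illustrative, not from the paper): it computes H_g for the default choice g(t) = −log(1 + t), computes Dobrushin's coefficient α(A), and checks the stated contraction bound H_g(Ax,Ay) ⩽ (1 − α(A))·H_g(x,y) on random positive data.

    ```python
    import numpy as np

    def relative_g_entropy(x, y, g=lambda t: -np.log1p(t)):
        # H_g(x, y) = sum_k x_k * g(y_k/x_k - 1); the default g(t) = -log(1+t)
        # recovers the usual relative entropy sum_k x_k log(x_k/y_k)
        t = y / x - 1.0
        return float(np.sum(x * g(t)))

    def dobrushin_alpha(A):
        # alpha(A) = min over column pairs (j, k) of sum_i min(a_ij, a_ik)
        n = A.shape[1]
        return min(np.minimum(A[:, j], A[:, k]).sum()
                   for j in range(n) for k in range(n))

    rng = np.random.default_rng(0)
    A = rng.random((4, 3))
    A /= A.sum(axis=0)              # normalize columns: A is column-stochastic
    x, y = rng.random(3), rng.random(3)
    x, y = x / x.sum(), y / y.sum() # x, y lie in P_3 (positive, sum to 1)

    lhs = relative_g_entropy(A @ x, A @ y)
    rhs = (1.0 - dobrushin_alpha(A)) * relative_g_entropy(x, y)
    print(lhs <= rhs + 1e-12)       # contraction bound holds: prints True
    ```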