Convergence rate to a lower tail dependence coefficient of a skew-t distribution
We examine the rate of decay to the limit of the tail dependence coefficient
of a bivariate skew-t distribution, which always displays asymptotic tail
dependence. It contains as a special case the usual bivariate symmetric t
distribution, and hence is an appropriate (skew) extension. The rate is
asymptotically power-law. The second-order structure of the univariate quantile
function for such a skew-t distribution is a central issue.
Explicit forms for ergodicity coefficients of stochastic matrices
Motivated by explicit expressions appearing in the work of A. Rhodius (1993) for n×n stochastic matrices P, it is shown that ordinary matrix norms on R^(n−1) for (n−1)×(n−1) matrices of the form APB can be used to generate results of this kind.
Tail asymptotics for the bivariate skew normal in the general case
The present paper is a sequel to and generalization of Fung and Seneta (2016),
whose main result gives the lower-tail asymptotic behaviour of the joint
distribution function of the bivariate skew normal distribution in the
equi-skew case, in terms of the off-diagonal entry of the correlation matrix
and the marginal cdf's. A paper of Beranger et al. (2017) enunciates an
upper-tail version which does not contain the equi-skew constraint but
requires a further constraint in particular. The proof, in their Appendix A.3,
is very condensed. When translated to the lower-tail setting of Fung and
Seneta (2016), we find that the exponents in the regularly varying asymptotic
expressions do agree, but the slowly varying components are not asymptotically
equivalent. Our general approach encompasses all possibilities.
Chapter 10: Interview with Eugene Seneta
Eugene Seneta, emeritus professor at the University of Sydney (School of Mathematics and Statistics), is renowned for his contributions to probability and statistics, some of which have led to applications in finance. A member of the Australian Academy of Science since 1985, he has also contributed extensively to the history of probability and statistics; in this interview he looks back on his collaborations with François Jongmans as well as with Henri Breny, Be..
Topics in the theory and applications of Markov chains
The dissertation which follows is concerned with various aspects
of behaviour within a set of states, J, of a discrete-time Markov
chain, {Xn}, on a denumerable state space, S. A basic assumption
with regard to J is that escape from any state of J into S-J may
occur in a finite number of steps with positive probability. Since we
are concerned only with behaviour within J, we may in general take
J = {1,2,3,...} and represent S-J as a single absorbing state {0}.
Thus without loss of generality S = {0,1,2,...} with the states
1,2,3,... being transient. In addition, we frequently assume in
the sequel that J is a single irreducible (i.e. intercommunicating)
class, and sometimes that this class is aperiodic, these assumptions
corresponding to the situations of greatest theoretical and practical
importance. The subject matter which we treat falls naturally into two parts,
according to which the thesis is divided. The aim of Part One is to
develop results and techniques applicable to a wide class of problems,
under general assumptions. This is done in the first three chapters, in
which specific chains enter only as examples. On the other hand,
specialized techniques are often applicable to specific chains of wide
interest, such as certain models in genetics. This is particularly true of the Galton-Watson process, which is the subject of the following
three chapters (Part Two) of the thesis.
Three distinct but related aspects of transient behaviour within J
are studied in the first part, each corresponding to a chapter.
Relative entropy under mappings by stochastic matrices
The relative g-entropy of two finite, discrete probability distributions x = (x_1,…,x_n) and y = (y_1,…,y_n) is defined as Hg(x,y) = Σ_k x_k g(y_k/x_k − 1), where g:(−1,∞)→R is convex and g(0) = 0. When g(t) = −log(1 + t), then Hg(x,y) = Σ_k x_k log(x_k/y_k), the usual relative entropy. Let Pn = {x ∈ R^n : Σ_i x_i = 1, x_i > 0 ∀i}. Our major result is that, for any m × n column-stochastic matrix A, the contraction coefficient defined as ηg(A) = sup{Hg(Ax,Ay)/Hg(x,y) : x,y ∈ Pn, x ≠ y} satisfies ηg(A) ⩽ 1 − α(A), where α(A) = min_{j,k} Σ_i min(a_ij, a_ik) is Dobrushin's coefficient of ergodicity. Consequently, ηg(A) < 1 if and only if A is scrambling. Upper and lower bounds on ηg(A) are established. Analogous results hold for Markov chains in continuous time.
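The quantities in this abstract are easy to compute directly. The following is a minimal sketch, not taken from the paper, that evaluates Dobrushin's coefficient α(A) and numerically checks the contraction bound Hg(Ax,Ay) ⩽ (1 − α(A))·Hg(x,y) for the usual relative entropy (the case g(t) = −log(1 + t)); the matrix A and the distributions x, y are invented for illustration.

```python
# Sketch (illustrative, not from the paper): Dobrushin's coefficient
# alpha(A) = min_{j,k} sum_i min(a_ij, a_ik) for a column-stochastic A,
# and a check of the contraction H(Ax, Ay) <= (1 - alpha(A)) * H(x, y)
# for the usual relative entropy.
import numpy as np

def dobrushin_alpha(A):
    """alpha(A): minimum over column pairs (j, k) of sum_i min(a_ij, a_ik)."""
    n = A.shape[1]
    return min(np.minimum(A[:, j], A[:, k]).sum()
               for j in range(n) for k in range(n))

def rel_entropy(x, y):
    """H(x, y) = sum_k x_k log(x_k / y_k), for strictly positive x, y."""
    return float(np.sum(x * np.log(x / y)))

# Hypothetical example: a 3x3 column-stochastic matrix and two distributions.
A = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.5],
              [0.2, 0.3, 0.3]])
x = np.array([0.2, 0.3, 0.5])
y = np.array([0.4, 0.4, 0.2])

alpha = dobrushin_alpha(A)          # here 0.7 > 0, so A is scrambling
lhs = rel_entropy(A @ x, A @ y)     # entropy after one application of A
rhs = (1 - alpha) * rel_entropy(x, y)
print(alpha > 0, lhs <= rhs)        # prints: True True
```

Since every pair of columns of A overlaps, α(A) > 0 and the matrix is scrambling, so the relative entropy strictly contracts under the map x ↦ Ax, consistent with the abstract's ηg(A) < 1 characterization.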