    Extended generalised variances, with applications

    We consider a measure ψk of dispersion which extends the notion of Wilks' generalised variance for a d-dimensional distribution, and is based on the mean squared volume of simplices of dimension k ≤ d formed by k+1 independent copies. We show how ψk can be expressed in terms of the eigenvalues of the covariance matrix of the distribution, also when an n-point sample is used for its estimation, and prove its concavity when raised to a suitable power. Some properties of dispersion-maximising distributions are derived, including a necessary and sufficient condition for optimality. Finally, we show how this measure of dispersion can be used for the design of optimal experiments, with equivalence to A- and D-optimal design for k = 1 and k = d, respectively. Simple illustrative examples are presented.
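
    As a quick numerical illustration of the eigenvalue connection, the sketch below estimates the mean squared volume of random k-simplices by Monte Carlo and divides it by the k-th elementary symmetric polynomial ek(λ) of the covariance eigenvalues. The Gaussian sampler, the sample size and the absence of any normalising constant are our own choices for illustration, not the paper's definition of ψk; classical results such as Wilks' (k = d) pin the proportionality constant down exactly.

    ```python
    import numpy as np
    from math import factorial
    from itertools import combinations

    rng = np.random.default_rng(0)
    d = 4
    A = rng.standard_normal((d, d))
    Sigma = A @ A.T                    # an arbitrary covariance matrix
    lam = np.linalg.eigvalsh(Sigma)    # its eigenvalues

    def sq_volume(pts):
        # squared k-volume of the simplex with vertices pts ((k+1) x d),
        # via the Gram determinant of the edge vectors from the first vertex
        B = pts[1:] - pts[0]
        return np.linalg.det(B @ B.T) / factorial(len(B)) ** 2

    def e_k(vals, k):
        # k-th elementary symmetric polynomial of the eigenvalues
        return sum(np.prod(c) for c in combinations(vals, k))

    for k in range(1, d + 1):
        sims = rng.multivariate_normal(np.zeros(d), Sigma, size=(20000, k + 1))
        ratio = np.mean([sq_volume(s) for s in sims]) / e_k(lam, k)
        print(f"k = {k}: E[V^2] / e_k(lambda) = {ratio:.3f}")
    ```

    The printed ratio stabilises to a constant depending only on k: since the squared volume has degree at most two in each independent vertex, its expectation depends on the distribution only through the covariance matrix, which is why an eigenvalue representation exists at all.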

    Simplicial variances, potentials and Mahalanobis distances

    The average squared volume of simplices formed by k independent copies from the same probability measure µ on R^d defines an integral measure of dispersion ψk(µ), which is a concave functional of µ after suitable normalization. When k = 1 it corresponds to tr(Σµ) and when k = d we obtain the usual generalized variance det(Σµ), with Σµ the covariance matrix of µ. The dispersion ψk(µ) generates a notion of simplicial potential at any x ∈ R^d, dependent on µ. We show that this simplicial potential is a quadratic convex function of x, with minimum value at the mean aµ of µ, and that the potential at aµ defines a central measure of scatter similar to ψk(µ), thereby generalizing results by Wilks (1960) and van der Vaart (1965) for the generalized variance. Simplicial potentials define generalized Mahalanobis distances, expressed as weighted sums of such distances in every k-margin, and we show that the matrix involved in the generalized distance is a particular generalized inverse of Σµ, constructed from its characteristic polynomial, when k = rank(Σµ). Finally, we show how simplicial potentials can be used to define simplicial distances between two distributions, depending on their means and covariances, with interesting features when the distributions are close to singularity.
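
    The "quadratic with minimum at the mean" property can be checked numerically. Below we take the potential at x to be the mean squared volume of the simplex with one vertex pinned at x and k vertices drawn i.i.d. from µ; this reading of the construction, the omitted normalisation and the Gaussian example are our assumptions for illustration.

    ```python
    import numpy as np
    from math import factorial

    rng = np.random.default_rng(1)
    d, k, N = 3, 2, 200000
    mean = np.array([1.0, -2.0, 0.5])
    Sigma = np.diag([3.0, 1.0, 0.5])
    pts = rng.multivariate_normal(mean, Sigma, size=(N, k))  # k i.i.d. vertices per draw

    def potential(x):
        # mean squared k-volume of the simplex {x, Y_1, ..., Y_k}
        B = pts - x                             # edge vectors from the pinned vertex
        G = np.einsum('nij,nkj->nik', B, B)     # Gram matrices, shape (N, k, k)
        return np.linalg.det(G).mean() / factorial(k) ** 2

    for t in [-2.0, -1.0, 0.0, 1.0, 2.0]:
        print(f"t = {t:+.1f}  potential = {potential(mean + t * np.eye(d)[0]):.3f}")
    ```

    Moving away from the mean in either direction increases the potential symmetrically and quadratically, consistent with the potential inducing a generalized Mahalanobis-type squared distance from the mean.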

    Stochastic global optimization

    Optimal Design and Related Areas in Optimization and Statistics

    This edited volume, dedicated to Henry P. Wynn, reflects his broad range of research interests, focusing in particular on the applications of optimal design theory in optimization and statistics. It covers algorithms for constructing optimal experimental designs, general gradient-type algorithms for convex optimization, majorization and stochastic ordering, algebraic statistics, Bayesian networks and nonlinear regression. Written by leading specialists in the field, each chapter contains a survey of the existing literature along with substantial new material.

    Algorithmic construction of optimal designs on compact sets for concave and differentiable criteria

    We consider the problem of construction of optimal experimental designs (approximate theory) on a compact subset X of R^d with nonempty interior, for a concave and Lipschitz differentiable design criterion ϕ(·) based on the information matrix. The proposed algorithm combines (a) convex optimization for the determination of optimal weights on a support set, (b) sequential updating of this support using local optimization, and (c) finding new support candidates using properties of the directional derivative of ϕ(·). The algorithm makes use of the compactness of X and relies on a finite grid Xℓ ⊂ X for checking optimality. By exploiting the Lipschitz continuity of the directional derivatives of ϕ(·), efficiency bounds on X are obtained and ϵ-optimality on X is guaranteed. The effectiveness of the method is illustrated on a series of examples.
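
    For a flavour of ingredient (a) and the grid-based optimality check, here is a much-simplified sketch: a fixed finite grid (no support updating or local optimization, unlike the paper's algorithm), the D-optimal criterion ϕ(M) = log det M, the classical multiplicative weight update, and the directional-derivative stopping rule from the Kiefer-Wolfowitz equivalence theorem. The model, grid and ϵ are illustrative choices, not the paper's examples.

    ```python
    import numpy as np

    t = np.linspace(-1, 1, 41)                          # finite grid on X = [-1, 1]
    X = np.column_stack([np.ones_like(t), t, t ** 2])   # quadratic regression model
    n, p = X.shape
    w = np.full(n, 1.0 / n)                             # initial uniform design weights
    eps = 1e-5

    for it in range(20000):
        M = X.T @ (w[:, None] * X)                      # information matrix of the design
        var = np.einsum('ij,jk,ik->i', X, np.linalg.inv(M), X)  # variance function d(x, w)
        if var.max() <= p * (1 + eps):                  # equivalence-theorem certificate:
            break                                       # design is eps-optimal on the grid
        w *= var / p                                    # multiplicative update (keeps sum = 1)

    keep = w > 1e-3
    print("support:", t[keep], "weights:", w[keep].round(3))
    ```

    On this example the weights concentrate on {-1, 0, 1} with mass 1/3 each, the known D-optimal design for quadratic regression on [-1, 1].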

    Self-regenerative Markov chain Monte Carlo with adaptation

    A new method of construction of Markov chains with a given stationary distribution is proposed. The method is based on constructing an auxiliary chain with some other stationary distribution and picking elements of this auxiliary chain a suitable number of times. The proposed method is easy to implement and analyse; it may be more efficient than other related Markov chain Monte Carlo techniques. The main attractive feature of the associated Markov chain is that it regenerates whenever it accepts a new proposed point. This makes the algorithm easy to adapt and tune for practical problems. A theoretical study and numerical comparisons with some other available Markov chain Monte Carlo techniques are presented.

    Estimation of spectral bounds in gradient algorithms

    We consider the solution of linear systems of equations Ax = b, with A a symmetric positive-definite matrix in ℝ^{n×n}, through Richardson-type iterations or, equivalently, the minimization of the convex quadratic function (1/2)(Ax, x) − (b, x) with a gradient algorithm. The use of step-sizes asymptotically distributed with the arcsine distribution on the spectrum of A then yields an asymptotic rate of convergence after k < n iterations, k → ∞, that coincides with that of the conjugate-gradient algorithm in the worst case. However, the spectral bounds m and M are generally unknown and thus need to be estimated to allow the construction of simple and cost-effective gradient algorithms with fast convergence. It is the purpose of this paper to analyse the properties of estimators of m and M based on moments of probability measures νk defined on the spectrum of A and generated by the algorithm on its way towards the optimal solution. A precise analysis of the behaviour of the rate of convergence of the algorithm is also given. Two situations are considered: (i) the sequence of step-sizes corresponds to i.i.d. random variables; (ii) they are generated through a dynamical system (fractional parts of the golden ratio) producing a low-discrepancy sequence. In the first case, properties of random walks can be used to prove the convergence of simple spectral bound estimators based on the first moment of νk. The second option requires a more careful choice of spectral bound estimators but is shown to produce much smaller fluctuations in the rate of convergence of the algorithm.
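
    A minimal sketch of setting (i), assuming the step-size at each iteration is the reciprocal of a point drawn i.i.d. from the arcsine distribution on [m, M]; the true spectral bounds are plugged in here, whereas the paper's subject is precisely their estimation from the moments of νk along the run.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m_, M_ = 50, 1.0, 100.0
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = Q @ np.diag(np.linspace(m_, M_, n)) @ Q.T   # SPD matrix, spectrum in [m, M]
    b = rng.standard_normal(n)

    x = np.zeros(n)
    for k in range(1, 301):
        # point with the arcsine density on [m, M] (cosine of a uniform angle)
        s = 0.5 * (m_ + M_) + 0.5 * (M_ - m_) * np.cos(np.pi * rng.random())
        x -= (A @ x - b) / s                        # Richardson step with step-size 1/s
        if k % 50 == 0:
            print(k, np.linalg.norm(A @ x - b))
    ```

    The residual decreases at the conjugate-gradient-like asymptotic rate but with visible fluctuations along the run, which is exactly what motivates the low-discrepancy alternative (ii).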

    Self Regenerative Markov Chain Monte Carlo

    In this article we propose a new algorithm, called SR (Self Regenerative), with a different philosophy for MCMC computations. Given a draw from the proposal density, we compute how many times we want to keep the proposed point in the sample; this number is a draw from the geometric distribution with an appropriate success probability. Once this has been done, we simulate another independent candidate point from the proposal distribution and iterate.
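
    A minimal sketch of the replication mechanism under our own parametrisation: with importance ratio w(y) = π(y)/q(y) and geometric success probability κ/(κ + w(y)), the number of retained copies has mean w(y)/κ, so each candidate is represented in proportion to the target. The target, proposal and κ below are illustrative choices, not the paper's example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def target(x):                 # unnormalised target density pi
        return np.exp(-0.5 * x ** 2) * (1 + np.sin(3 * x) ** 2)

    def q_pdf(x):                  # proposal density q = N(0, 2^2)
        return np.exp(-x ** 2 / 8) / np.sqrt(8 * np.pi)

    kappa = 5.0                    # tuning constant (our choice)
    sample = []
    while len(sample) < 100000:
        y = 2.0 * rng.standard_normal()                   # independent candidate from q
        w = target(y) / q_pdf(y)                          # importance ratio
        copies = rng.geometric(kappa / (kappa + w)) - 1   # geometric, mean w / kappa
        sample.extend([y] * copies)                       # keep the point `copies` times
    print(np.mean(sample), np.var(sample))
    ```

    The chain regenerates whenever a candidate receives at least one copy, which is what makes the proposal easy to adapt and tune in the adaptive version described earlier.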