100 research outputs found

    Extended generalised variances, with applications

    We consider a measure ψk of dispersion which extends the notion of Wilks' generalised variance for a d-dimensional distribution, and is based on the mean squared volume of simplices of dimension k≤d formed by k+1 independent copies. We show how ψk can be expressed in terms of the eigenvalues of the covariance matrix of the distribution, also when an n-point sample is used for its estimation, and prove its concavity when raised to a suitable power. Some properties of dispersion-maximising distributions are derived, including a necessary and sufficient condition for optimality. Finally, we show how this measure of dispersion can be used for the design of optimal experiments, with equivalence to A- and D-optimal design for k=1 and k=d, respectively. Simple illustrative examples are presented.
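    The eigenvalue expression mentioned in the abstract can be illustrated numerically. The sketch below assumes that ψk is the mean squared volume of the k-dimensional simplex and uses (k+1)/k! · e_k(λ) as an assumed closed form, where e_k is the k-th elementary symmetric polynomial of the eigenvalues of the covariance matrix; the constant and all variable names are illustrative, not taken from the paper.

    # Monte Carlo illustration of an eigenvalue expression for the simplicial
    # dispersion measure psi_k. The constant (k+1)/k! below is an assumption
    # for this sketch, not a statement of the paper's theorem.
    import numpy as np
    from itertools import combinations
    from math import factorial

    rng = np.random.default_rng(0)
    d, k, n_rep = 4, 2, 100_000

    # An arbitrary covariance matrix Sigma.
    A = rng.standard_normal((d, d))
    Sigma = A @ A.T / d

    def squared_simplex_volume(points):
        """Squared k-volume of the simplex with vertices `points` ((k+1) x d)."""
        edges = points[1:] - points[0]          # k x d edge matrix
        gram = edges @ edges.T                  # k x k Gram matrix
        return np.linalg.det(gram) / factorial(points.shape[0] - 1) ** 2

    # Mean squared volume of simplices formed by k+1 i.i.d. copies of N(0, Sigma).
    samples = rng.multivariate_normal(np.zeros(d), Sigma, size=(n_rep, k + 1))
    psi_mc = np.mean([squared_simplex_volume(s) for s in samples])

    # k-th elementary symmetric polynomial of the eigenvalues of Sigma.
    lam = np.linalg.eigvalsh(Sigma)
    e_k = sum(np.prod(lam[list(idx)]) for idx in combinations(range(d), k))

    print(psi_mc, (k + 1) / factorial(k) * e_k)   # the two values should be close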

    Stochastic algorithms for solving structured low-rank approximation problems

    In this paper, we investigate the complexity of the numerical construction of the Hankel structured low-rank approximation (HSLRA) problem, and develop a family of algorithms to solve this problem. Briefly, HSLRA is the problem of finding the closest (in some pre-defined norm) rank-r approximation of a given Hankel matrix, which is also of Hankel structure. We demonstrate that finding optimal solutions of this problem is very hard. For example, we argue that if HSLRA is considered as a problem of estimating parameters of damped sinusoids, then the associated optimization problem is basically unsolvable. We discuss what is known as the orthogonality condition, which solutions to the HSLRA problem should satisfy, and describe how any approximation may be corrected to achieve this orthogonality. Unlike many other methods described in the literature, the family of algorithms we propose has the property of guaranteed convergence.
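    For context, the classical baseline for HSLRA is Cadzow-style alternating projections: alternately project onto the set of rank-r matrices (truncated SVD) and onto the set of Hankel matrices (anti-diagonal averaging). The sketch below shows that baseline only; it is not the stochastic algorithms proposed in the paper, and the test signal and tolerances are illustrative.

    # Minimal Cadzow-style alternating projections for Hankel structured
    # low-rank approximation (illustrative baseline, not the paper's method).
    import numpy as np

    def hankel(c):
        """Build an L x K Hankel matrix from a series c of length L + K - 1."""
        L = (len(c) + 1) // 2
        K = len(c) - L + 1
        return np.array([c[i:i + K] for i in range(L)])

    def project_rank(X, r):
        """Closest rank-r matrix in Frobenius norm via truncated SVD."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r]

    def project_hankel(X):
        """Closest Hankel matrix: average along each anti-diagonal."""
        L, K = X.shape
        c = np.array([np.mean(np.diag(X[:, ::-1], k)) for k in range(K - 1, -L, -1)])
        return hankel(c)

    def cadzow(series, r, n_iter=200):
        X = hankel(np.asarray(series, dtype=float))
        for _ in range(n_iter):
            X = project_hankel(project_rank(X, r))
        return X

    # Example: the Hankel matrix of a sum of two damped sinusoids has rank 4.
    t = np.arange(40)
    signal = np.exp(-0.02 * t) * np.sin(0.8 * t) + 0.5 * np.exp(-0.05 * t) * np.cos(0.3 * t)
    noisy = signal + 0.05 * np.random.default_rng(1).standard_normal(t.size)
    X_hat = cadzow(noisy, r=4)
    print(np.linalg.svd(X_hat, compute_uv=False)[4])   # fifth singular value: small after convergence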

    Simplicial variances, potentials and Mahalanobis distances

    The average squared volume of simplices formed by k+1 independent copies from the same probability measure µ on R^d defines an integral measure of dispersion ψk(µ), which is a concave functional of µ after suitable normalization. When k = 1 it corresponds to tr(Σµ) and when k = d we obtain the usual generalized variance det(Σµ), with Σµ the covariance matrix of µ. The dispersion ψk(µ) generates a notion of simplicial potential at any x ∈ R^d, dependent on µ. We show that this simplicial potential is a quadratic convex function of x, with minimum value at the mean aµ of µ, and that the potential at aµ defines a central measure of scatter similar to ψk(µ), thereby generalizing results by Wilks (1960) and van der Vaart (1965) for the generalized variance. Simplicial potentials define generalized Mahalanobis distances, expressed as weighted sums of such distances in every k-margin, and we show that the matrix involved in the generalized distance is a particular generalized inverse of Σµ, constructed from its characteristic polynomial, when k = rank(Σµ). Finally, we show how simplicial potentials can be used to define simplicial distances between two distributions, depending on their means and covariances, with interesting features when the distributions are close to singularity.
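    The sketch below illustrates the stated property that the simplicial potential is minimised at the mean. It takes the potential at x to be the mean squared volume of the simplex formed by x together with k independent copies from µ, which is an assumed reading of the abstract's definition; the distribution, point choices, and sample sizes are illustrative.

    # Monte Carlo check (under an assumed definition of the simplicial potential)
    # that the potential is smallest at the mean of mu.
    import numpy as np
    from math import factorial

    rng = np.random.default_rng(2)
    d, k, n_rep = 3, 2, 50_000
    mean = np.array([1.0, -0.5, 2.0])
    A = rng.standard_normal((d, d))
    Sigma = A @ A.T / d

    def sq_volume(vertices):
        edges = vertices[1:] - vertices[0]
        return np.linalg.det(edges @ edges.T) / factorial(len(vertices) - 1) ** 2

    def potential(x):
        copies = rng.multivariate_normal(mean, Sigma, size=(n_rep, k))
        return np.mean([sq_volume(np.vstack([x[None, :], c])) for c in copies])

    # The potential evaluated at the mean should be (numerically) the smallest.
    for x in [mean, mean + 0.5, mean - np.array([1.0, 0.0, 0.5])]:
        print(x, potential(x))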

    Extension of the Schoenberg theorem to integrally conditionally positive definite functions

    The celebrated Schoenberg theorem establishes a relation between positive definite and conditionally positive definite functions. In this paper, we consider the classes of real-valued functions P(J) and CP(J), which are positive definite and conditionally positive definite, respectively, with respect to a given class of test functions J. For suitably chosen J, the classes P(J) and CP(J) contain classically positive definite (respectively, conditionally positive definite) functions, as well as functions which are singular at the origin. The main result of the paper is a generalization of Schoenberg's theorem to such function classes.
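    As background for the distinction the abstract relies on, the sketch below contrasts two classical radial functions (not functions from the paper): the Gaussian exp(-r^2), which is positive definite, and -r, which is only conditionally positive definite, i.e. its quadratic form is nonnegative on weight vectors summing to zero.

    # Numerical illustration of positive definite vs. conditionally positive
    # definite radial functions on a random point set.
    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.standard_normal((30, 2))                      # 30 points in the plane
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

    K_gauss = np.exp(-r ** 2)                             # positive definite kernel
    K_neg_r = -r                                          # conditionally positive definite

    print(np.linalg.eigvalsh(K_gauss).min())              # > 0 (up to round-off)
    print(np.linalg.eigvalsh(K_neg_r).min())              # can be negative

    # Restrict -r to the subspace {w : sum(w) = 0}: the form becomes nonnegative.
    n = len(x)
    P = np.eye(n) - np.ones((n, n)) / n                   # projector onto sum-zero vectors
    print(np.linalg.eigvalsh(P @ K_neg_r @ P).min())      # >= 0 (up to round-off)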

    Self-adaptive combination of global tabu search and local search for nonlinear equations

    Solving systems of nonlinear equations is an important task, since such problems emerge mostly from the mathematical modeling of real problems that arise naturally in many branches of engineering and in the physical sciences. The problem can be naturally reformulated as a global optimization problem. In this paper, we show that a self-adaptive combination of a metaheuristic with a classical local search method is able to converge on some difficult problems that are not solved by Newton-type methods.
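    The reformulation mentioned in the abstract is to recast a system F(x) = 0 as the global optimization problem min ||F(x)||^2, whose global minima (with value 0) are the roots. The sketch below illustrates that idea with a crude global/local hybrid: random multistart (a stand-in for the paper's self-adaptive tabu search) combined with a classical local least-squares solver. The test system is illustrative, and this is not the authors' algorithm.

    # Reformulate F(x) = 0 as min ||F(x)||^2 and combine a crude global phase
    # (random restarts) with a local Gauss-Newton-type refinement.
    import numpy as np
    from scipy.optimize import least_squares

    def F(z):
        x, y = z
        return np.array([x**2 + y**2 - 4.0, np.exp(x) + y - 1.0])

    def merit(z):
        return float(np.sum(F(z) ** 2))       # global minima (value 0) are the roots

    rng = np.random.default_rng(4)
    best = None
    for _ in range(50):                       # global phase: random starting points
        z0 = rng.uniform(-3.0, 3.0, size=2)
        res = least_squares(F, z0)            # local phase: least-squares refinement
        if best is None or merit(res.x) < merit(best):
            best = res.x

    print(best, merit(best))                  # a root of the system, merit ~ 0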

    Bayesian Optimization Approaches for Massively Multi-modal Problems

    The optimization of massively multi-modal functions is a challenging task, particularly for problems where the search space can lead the optimization process to local optima. While evolutionary algorithms have been extensively investigated for these optimization problems, Bayesian Optimization algorithms have not been explored to the same extent. In this paper, we study the behavior of Bayesian Optimization as part of a hybrid approach for solving several massively multi-modal functions. We use well-known benchmarks and metrics to evaluate how different variants of Bayesian Optimization deal with multi-modality.
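    For readers unfamiliar with the method, the sketch below is a plain Bayesian Optimization loop (Gaussian-process surrogate plus an expected-improvement acquisition maximised over random candidates) on a small multi-modal test function. It is a generic baseline for illustration, not the hybrid approach studied in the paper; the objective, budget, and kernel choice are assumptions.

    # Generic Bayesian Optimization loop with a GP surrogate and expected improvement.
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def f(x):                                          # multi-modal objective (minimised)
        return np.sin(3 * x) + 0.3 * np.sin(15 * x) + 0.05 * x ** 2

    rng = np.random.default_rng(5)
    X = rng.uniform(-4, 4, size=(5, 1))                # initial design
    y = f(X).ravel()

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

    for _ in range(25):                                # BO iterations
        gp.fit(X, y)
        cand = rng.uniform(-4, 4, size=(2000, 1))      # candidate pool for the acquisition
        mu, sd = gp.predict(cand, return_std=True)
        best = y.min()
        z = (best - mu) / np.maximum(sd, 1e-12)
        ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next)[0])

    print(X[np.argmin(y)], y.min())                    # best point found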