
    Fast algorithms for computing defects and their derivatives in the Regge calculus

    Any practical attempt to solve the Regge equations, these being a large system of non-linear algebraic equations, will almost certainly employ a Newton-Raphson-like scheme. In such cases it is essential that efficient algorithms be used when computing the defect angles and their derivatives with respect to the leg-lengths. The purpose of this paper is to present details of such an algorithm. Comment: 38 pages, 10 figures
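
    The Newton-Raphson driver the abstract refers to is standard. As a rough, generic sketch (not the defect/derivative algorithm of the paper), one iteration for a nonlinear system F(x) = 0 might look like the following, with the residual F and its Jacobian J assumed to be supplied by the caller.

```python
# Minimal sketch of a Newton-Raphson iteration for a system F(x) = 0,
# assuming a callable residual F and its Jacobian J are available.
# Generic illustration only, not the paper's defect/derivative algorithm.
import numpy as np

def newton_raphson(F, J, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 by Newton-Raphson, given the Jacobian J(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        # Each step solves the linearized system J(x) dx = -F(x); in a
        # Regge-type application F would be the defect-based field equations
        # and J their derivatives with respect to the leg-lengths.
        dx = np.linalg.solve(J(x), -r)
        x = x + dx
    return x

# Toy example: intersection of a circle and a line.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
print(newton_raphson(F, J, [1.0, 0.5]))   # ~ [0.7071, 0.7071]
```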

    Numerical resolution of some BVP using Bernstein polynomials

    In this work we present a method, based on the use of Bernstein polynomials, for the numerical resolution of some boundary value problems. The computations do not need particular approximations of derivatives, such as finite differences, or particular techniques, such as finite elements. Nor does the method require the use of matrices, as in the resolution of linear algebraic systems, or of Newton-like algorithms, as in the resolution of nonlinear sets of equations. An initial equation is solved only once; thereafter the method relies on iterated evaluations of appropriate polynomials. Comment: 7 pages, 3 figures
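
    For orientation, a minimal sketch of evaluating a polynomial given in the Bernstein basis is shown below; the de Casteljau recurrence and the example coefficients are illustrative assumptions, not the paper's BVP iteration itself.

```python
# Minimal sketch of Bernstein-basis evaluation on [0, 1] via the de Casteljau
# recurrence; the coefficients `c` are hypothetical and the paper's iterative
# boundary-value scheme is not reproduced here.
def bernstein_eval(c, t):
    """Evaluate sum_k c[k] * B_{k,n}(t), with B the Bernstein basis of degree n = len(c) - 1."""
    b = list(c)
    n = len(b) - 1
    for r in range(1, n + 1):
        for k in range(n - r + 1):
            b[k] = (1.0 - t) * b[k] + t * b[k + 1]
    return b[0]

# Example: the degree-2 polynomial p(t) = t**2 has Bernstein coefficients [0, 0, 1].
print(bernstein_eval([0.0, 0.0, 1.0], 0.5))   # 0.25
```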

    Variable selection using MM algorithms

    Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests. Comment: Published at http://dx.doi.org/10.1214/009053605000000200 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
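
    As an illustration of the perturb-then-minorize idea described above, here is a toy MM sketch for an L1-type penalty, where |b| is replaced by the differentiable sqrt(b^2 + eps) and then majorized by a quadratic at the current iterate. The penalty choice and all names are assumptions for illustration, not the paper's general penalty class or its examples.

```python
# Toy MM (minorize-maximize, here written as majorize-minimize for a
# least-squares loss) sketch for an L1-type penalty, following the idea of
# perturbing |b| to the differentiable sqrt(b^2 + eps).
import numpy as np

def mm_lasso(X, y, lam, eps=1e-8, n_iter=100):
    """Minimize ||y - Xb||^2 / 2 + lam * sum_j sqrt(b_j^2 + eps) by MM."""
    p = X.shape[1]
    b = np.zeros(p)
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        # Quadratic majorizer of the perturbed penalty at the current iterate:
        # sqrt(b^2 + eps) <= sqrt(b0^2 + eps) + (b^2 - b0^2) / (2 * sqrt(b0^2 + eps)).
        w = lam / np.sqrt(b**2 + eps)
        # Each MM step is then a weighted ridge regression.
        b = np.linalg.solve(XtX + np.diag(w), Xty)
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
beta_true = np.array([2.0, 0.0, -1.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=50)
print(np.round(mm_lasso(X, y, lam=5.0), 3))   # sparse estimate near beta_true
```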

    Batch and median neural gas

    Neural Gas (NG) constitutes a very robust clustering algorithm for Euclidean data which does not suffer from the problem of local minima like simple vector quantization, or from topological restrictions like the self-organizing map. Based on the cost function of NG, we introduce a batch variant of NG which shows much faster convergence and which can be interpreted as an optimization of the cost function by the Newton method. This formulation has the additional benefit that, based on the notion of the generalized median in analogy to Median SOM, a variant for non-vectorial proximity data can be introduced. We prove convergence of batch and median versions of NG, SOM, and k-means in a unified formulation, and we investigate the behavior of the algorithms in several experiments. Comment: In Special Issue after the WSOM 05 Conference, 5-8 September 2005, Paris
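
    A minimal sketch of the batch update described above, in which prototypes are recomputed as rank-weighted means of the data, is given below; the exponential neighbourhood function and the annealing schedule are standard assumptions rather than details taken from the paper.

```python
# Minimal sketch of batch Neural Gas: each epoch recomputes every prototype as
# a rank-weighted average of the Euclidean data. Schedules and convergence
# checks are simplified assumptions.
import numpy as np

def batch_ng(X, n_prototypes=3, n_epochs=30, lam0=2.0, lam_final=0.1, seed=0):
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_prototypes, replace=False)].copy()
    for t in range(n_epochs):
        lam = lam0 * (lam_final / lam0) ** (t / max(n_epochs - 1, 1))
        # Rank of each prototype for each data point (0 = closest).
        d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
        ranks = np.argsort(np.argsort(d, axis=1), axis=1)
        h = np.exp(-ranks / lam)                     # neighbourhood weights
        # Batch update: weighted mean of the data for every prototype.
        W = (h.T @ X) / h.sum(axis=0)[:, None]
    return W

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(40, 2))
               for m in ([0, 0], [3, 0], [0, 3])])
print(np.round(batch_ng(X), 2))   # prototypes near the three cluster centres
```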

    Computing coset leaders and leader codewords of binary codes

    In this paper we use the Gröbner representation of a binary linear code $\mathcal C$ to give efficient algorithms for computing the whole set of coset leaders, denoted by $\mathrm{CL}(\mathcal C)$, and the set of leader codewords, denoted by $\mathrm{L}(\mathcal C)$. The first algorithm can be adapted to provide not only the Newton radius and the covering radius of $\mathcal C$ but also the coset leader weight distribution. Moreover, the set of leader codewords provides a test-set for decoding by a gradient-like decoding algorithm. Another contribution of this article is the relation established between zero neighbours and leader codewords.
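
    To fix the terminology, the following naive brute-force computation of coset leaders for a small example code (the [7,4] Hamming code, an assumed example) enumerates words in order of increasing weight; it is only an illustration and does not reflect the efficiency of the Gröbner-representation algorithms of the paper.

```python
# Naive illustration: a coset leader is a minimum-weight word in its coset,
# i.e. a minimum-weight word with a given syndrome. Brute force over F_2^n.
import itertools
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (an assumed example).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def coset_leaders(H):
    """Map each syndrome to one minimum-weight vector producing it."""
    n = H.shape[1]
    leaders = {}
    # Enumerate words by increasing Hamming weight, so the first word seen
    # for each syndrome is a coset leader.
    for w in range(n + 1):
        for support in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(support)] = 1
            s = tuple(H @ e % 2)
            leaders.setdefault(s, e)
    return leaders

for syndrome, leader in coset_leaders(H).items():
    print(syndrome, leader)
```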

    Approximate Newton Methods for Policy Search in Markov Decision Processes

    Approximate Newton methods are standard optimization tools which aim to maintain the benefits of Newton's method, such as a fast rate of convergence, while alleviating its drawbacks, such as computationally expensive calculation or estimation of the inverse Hessian. In this work we investigate approximate Newton methods for policy optimization in Markov decision processes (MDPs). We first analyse the structure of the Hessian of the total expected reward, which is a standard objective function for MDPs. We show that, like the gradient, the Hessian exhibits useful structure in the context of MDPs, and we use this analysis to motivate two Gauss-Newton methods for MDPs. Like the Gauss-Newton method for non-linear least squares, these methods drop certain terms in the Hessian. The approximate Hessians possess desirable properties, such as negative definiteness, and we demonstrate several important performance guarantees including guaranteed ascent directions, invariance to affine transformation of the parameter space, and convergence guarantees. We finally provide a unifying perspective of key policy search algorithms, demonstrating that our second Gauss-Newton algorithm is closely related to both the EM algorithm and natural gradient ascent applied to MDPs, but performs significantly better in practice on a range of challenging domains.
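
    For reference, the classical Gauss-Newton method for non-linear least squares that the abstract uses as its analogy keeps the J^T J part of the Hessian and drops the residual-curvature term; a minimal sketch is below. The MDP policy-search variants themselves are not reproduced here.

```python
# Minimal sketch of classical Gauss-Newton for non-linear least squares,
# illustrating the "drop part of the Hessian" idea the abstract refers to.
import numpy as np

def gauss_newton(residual, jacobian, theta0, n_iter=20):
    """Minimize 0.5 * ||r(theta)||^2 using the Gauss-Newton approximation."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = residual(theta)
        J = jacobian(theta)
        # Full Hessian: J^T J + sum_i r_i * Hess(r_i); Gauss-Newton drops the
        # second term, keeping a positive semi-definite approximation.
        step = np.linalg.solve(J.T @ J, J.T @ r)
        theta = theta - step
    return theta

# Example: fit y = exp(a * t) to noisy data (a is the single parameter).
t = np.linspace(0, 1, 30)
y = np.exp(1.5 * t) + 0.01 * np.random.default_rng(2).normal(size=30)
residual = lambda th: np.exp(th[0] * t) - y
jacobian = lambda th: (t * np.exp(th[0] * t))[:, None]
print(gauss_newton(residual, jacobian, [0.0]))   # ~ [1.5]
```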