105,890 research outputs found

    A New Derivation and Recursive Algorithm Based on Wronskian Matrix for Vandermonde Inverse Matrix

    A new derivation of the analytical expression for the inverse of the Vandermonde matrix, based on the Wronskian matrix and the Lagrange interpolation polynomial basis, is presented. A recursive formula and implementation cases for the direct formula of the Vandermonde inverse are given, building on a unified formula for the inverse of the Wronskian matrix. For symbolic Vandermonde matrices, a comparison of computing times in Mathematica shows that the direct formula and the recursive method are more efficient than Mathematica's own symbolic inversion. The steps of the recursive algorithm are comparatively simple, and the derivation and its underlying idea have both theoretical and practical value for the inversion of Vandermonde and generalized Vandermonde matrices.
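    As a rough illustration of the Lagrange-basis viewpoint mentioned in the abstract (not the paper's Wronskian-based recursion), the sketch below inverts a Vandermonde matrix by reading off the coefficients of each Lagrange basis polynomial; the node values and the `vandermonde_inverse` helper are hypothetical.

```python
import numpy as np

def vandermonde_inverse(nodes):
    """Invert V[i, j] = nodes[i]**j: column i of the inverse holds the
    coefficients (lowest degree first) of the Lagrange basis polynomial L_i."""
    n = len(nodes)
    inv = np.zeros((n, n))
    for i in range(n):
        coeffs = np.array([1.0])
        denom = 1.0
        for k in range(n):
            if k == i:
                continue
            # Multiply the current polynomial by (x - nodes[k]).
            coeffs = np.convolve(coeffs, np.array([-nodes[k], 1.0]))
            denom *= nodes[i] - nodes[k]
        inv[:, i] = coeffs / denom
    return inv

nodes = np.array([0.5, 1.0, 2.0, 3.0])          # illustrative nodes
V = np.vander(nodes, increasing=True)           # V[i, j] = nodes[i]**j
print(np.allclose(vandermonde_inverse(nodes) @ V, np.eye(len(nodes))))
```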

    Quasi-Newton Steps for Efficient Online Exp-Concave Optimization

    The aim of this paper is to design computationally efficient and optimal algorithms for online and stochastic exp-concave optimization. Typical algorithms for these settings, such as the Online Newton Step (ONS), guarantee an $O(d\ln T)$ bound on the regret after $T$ rounds, where $d$ is the dimension of the feasible set. However, such algorithms perform so-called generalized projections whenever their iterates step outside the feasible set. These generalized projections require $\Omega(d^3)$ arithmetic operations even for simple sets such as a Euclidean ball, making the worst-case total runtime of ONS of order $d^3 T$ after $T$ rounds. In this paper, we side-step generalized projections by using a self-concordant barrier as a regularizer when computing the Newton steps; this keeps the iterates inside the feasible set without requiring projections. The approach still requires the inverse of the Hessian of the barrier at every step. However, using the stability properties of the Newton steps, we show that the inverses of the Hessians can be efficiently approximated via Taylor expansions for most rounds, resulting in an $O(d^2 T + d^\omega \sqrt{T})$ total computational complexity, where $\omega$ is the exponent of matrix multiplication. In the stochastic setting, this translates into an $O(d^3/\epsilon)$ computational complexity for finding an $\epsilon$-suboptimal point, answering an open question by Koren (2013). We first establish these results for the simple case where the feasible set is a Euclidean ball; to handle general convex sets, we then use a reduction to Online Convex Optimization over the Euclidean ball. Our final algorithm can be viewed as a more efficient version of ONS.
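    To make the projection-free idea concrete, here is a minimal sketch, assuming a Euclidean-ball feasible set and a standard log-barrier; the step size `eta`, the barrier weight `nu`, and the function name are illustrative, and this is not the paper's exact algorithm (in particular it omits the Taylor-expansion approximation of the Hessian inverses). Adding the barrier's gradient and Hessian to the Newton step keeps the iterate strictly inside the ball without a generalized projection.

```python
import numpy as np

def barrier_newton_step(x, grad, A, eta=0.5, radius=1.0, nu=1.0):
    """One projection-free Newton-type step over the ball ||x|| <= radius.

    The constraint is enforced through the self-concordant log-barrier
    R(x) = -nu * ln(radius**2 - ||x||**2): its gradient and Hessian are added
    to the loss gradient and the accumulated curvature matrix A, so the step
    stays strictly interior. Hypothetical illustration only.
    """
    s = radius**2 - x @ x                          # slack of the ball constraint
    barrier_grad = 2.0 * nu * x / s
    barrier_hess = (2.0 * nu / s) * np.eye(len(x)) + (4.0 * nu / s**2) * np.outer(x, x)
    H = A + barrier_hess
    return x - eta * np.linalg.solve(H, grad + barrier_grad)

# Toy usage on a fixed quadratic loss f(x) = ||x - target||^2 (hypothetical).
d, target = 3, np.array([0.6, -0.2, 0.4])
x, A = np.zeros(d), np.eye(d)
for _ in range(200):
    g = 2.0 * (x - target)
    A += np.outer(g, g)                            # ONS-style curvature accumulation
    x = barrier_newton_step(x, g, A)
print(x, np.linalg.norm(x))                        # the iterate stays inside the unit ball
```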

    An Efficient Implementation of the Gauss-Newton Method Via Generalized Krylov Subspaces

    The solution of nonlinear inverse problems is a challenging task in numerical analysis. In most cases, such problems are solved by iterative procedures that, at each iteration, linearize the problem in a neighborhood of the current approximation of the solution. The linearized problem is then solved by a direct or iterative method. Among this class of solution methods, the Gauss-Newton method is one of the most popular. We propose an efficient implementation of this method for large-scale problems. Our implementation is based on projecting the nonlinear problem onto a sequence of nested subspaces, referred to as Generalized Krylov Subspaces, whose dimension increases with the number of iterations, except when restarts are carried out. When the computation of the Jacobian matrix is expensive, we combine our iterative method with secant (Broyden) updates to further reduce the computational cost. We show convergence of the proposed solution methods and provide a few numerical examples that illustrate their performance.
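    For reference, a plain full-space Gauss-Newton iteration looks like the sketch below: a hypothetical minimal version that linearizes and solves a dense least-squares problem at each step, without the Generalized Krylov Subspace projection or the Broyden updates described in the abstract; the model-fitting example is likewise illustrative.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, max_iter=50, tol=1e-10):
    """Plain Gauss-Newton for min_x ||F(x)||^2: at each iteration, linearize F
    around the current iterate and solve the resulting least-squares problem."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        F = residual(x)
        J = jacobian(x)
        # Step s minimizes ||J s + F||^2 (the linearized problem).
        s, *_ = np.linalg.lstsq(J, -F, rcond=None)
        x = x + s
        if np.linalg.norm(s) < tol:
            break
    return x

# Small example: fit y = a * exp(b * t) to synthetic data (hypothetical).
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(res, jac, x0=[1.0, 0.0]))    # converges to roughly [2.0, -1.5]
```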