2,893 research outputs found

    GPGCD: An iterative method for calculating approximate GCD of univariate polynomials

    We present an iterative algorithm for calculating an approximate greatest common divisor (GCD) of univariate polynomials with real or complex coefficients. For a given pair of polynomials and a degree, our algorithm finds a pair of polynomials that has a GCD of the given degree and whose coefficients are perturbed as little as possible from those of the original inputs, along with the GCD itself. The approximate GCD problem is transferred to a constrained minimization problem, which is then solved iteratively with the so-called modified Newton method, a generalization of the gradient-projection method. We demonstrate that, in some test cases, our algorithm calculates an approximate GCD with perturbations as small as those computed by a method based on the structured total least norm (STLN) method and by the UVGCD method, while running up to approximately 30 and 10 times faster, respectively, than their implementations. We also show that our algorithm properly handles some ill-conditioned polynomials whose GCD has a small or large leading coefficient.
    Comment: Preliminary versions have been presented as doi:10.1145/1576702.1576750 and arXiv:1007.183
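    As a rough illustration of the gradient-projection family of iterations that the modified Newton method above generalizes, the following minimal NumPy sketch projects the gradient onto the tangent space of the constraint set and then restores feasibility. The toy constraint and all names are illustrative assumptions, not the GPGCD formulation itself.

        import numpy as np

        def gradient_projection_step(x, grad_f, h, jac_h, step=0.1):
            """One gradient-projection iteration for min f(x) subject to h(x) = 0:
            step along the gradient projected onto the tangent space of the
            constraints, then pull the iterate back toward feasibility."""
            J = jac_h(x)                              # (m, n) constraint Jacobian
            g = grad_f(x)
            lam = np.linalg.solve(J @ J.T, J @ g)
            p = g - J.T @ lam                         # tangential part of the gradient
            x = x - step * p
            J = jac_h(x)                              # one Newton-like restoration step
            return x - J.T @ np.linalg.solve(J @ J.T, h(x))

        # Toy problem: closest point to x0 on the unit sphere.
        x0 = np.array([2.0, 1.0, 0.5])
        grad_f = lambda x: 2.0 * (x - x0)
        h = lambda x: np.array([x @ x - 1.0])
        jac_h = lambda x: 2.0 * x[None, :]

        x = np.array([1.0, 0.0, 0.0])
        for _ in range(50):
            x = gradient_projection_step(x, grad_f, h, jac_h)
        # x approaches x0 / ||x0||, the constrained minimizer.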

    A Riemannian Trust Region Method for the Canonical Tensor Rank Approximation Problem

    The canonical tensor rank approximation problem (TAP) consists of approximating a real-valued tensor by one of low canonical rank, a challenging non-linear, non-convex, constrained optimization problem whose constraint set forms a non-smooth semi-algebraic set. We introduce a Riemannian Gauss-Newton method with trust region for solving small-scale, dense TAPs. The novelty of our approach is threefold. First, we parametrize the constraint set as the Cartesian product of Segre manifolds, thereby formulating the TAP as a Riemannian optimization problem, and we argue why this parametrization is among the theoretically best possible. Second, an original ST-HOSVD-based retraction operator is proposed. Third, we introduce a hot restart mechanism that efficiently detects when the optimization process is tending toward an ill-conditioned tensor rank decomposition and that often yields a quick escape path from such spurious decompositions. Numerical experiments show improvements of up to three orders of magnitude in the expected time to compute a successful solution over existing state-of-the-art methods.
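    For intuition about the objective being minimized, the NumPy sketch below builds a rank-R tensor from factor matrices (the Segre/CP parametrization) and takes one damped Euclidean Gauss-Newton step with a finite-difference Jacobian. This is only an assumed toy stand-in: the paper's method is Riemannian and uses a trust region and an ST-HOSVD-based retraction, none of which is reproduced here.

        import numpy as np

        def cp_tensor(A, B, C):
            """Rank-R tensor as a sum of R outer products a_r (x) b_r (x) c_r."""
            return np.einsum('ir,jr,kr->ijk', A, B, C)

        def residual(x, T, shape, R):
            """Vectorized residual of the TAP objective ||cp(A, B, C) - T||."""
            I, J, K = shape
            A = x[:I*R].reshape(I, R)
            B = x[I*R:(I+J)*R].reshape(J, R)
            C = x[(I+J)*R:].reshape(K, R)
            return (cp_tensor(A, B, C) - T).ravel()

        def gauss_newton_step(x, T, shape, R, eps=1e-6, damping=1e-8):
            """x <- x - (J^T J + damping I)^{-1} J^T r, Jacobian by finite differences."""
            r = residual(x, T, shape, R)
            Jac = np.empty((r.size, x.size))
            for i in range(x.size):
                xp = x.copy()
                xp[i] += eps
                Jac[:, i] = (residual(xp, T, shape, R) - r) / eps
            H = Jac.T @ Jac + damping * np.eye(x.size)
            return x - np.linalg.solve(H, Jac.T @ r)

    In practice such steps must be safeguarded, e.g. by the paper's trust region, since unguarded Gauss-Newton steps toward ill-conditioned decompositions can be arbitrarily poor.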

    Second order adjoints for solving PDE-constrained optimization problems

    Inverse problems are of utmost importance in many fields of science and engineering. In the variational approach, inverse problems are formulated as PDE-constrained optimization problems, where the optimal estimate of the uncertain parameters is the minimizer of a certain cost functional subject to the constraints posed by the model equations. The numerical solution of such optimization problems requires the computation of derivatives of the model output with respect to the model parameters. The first order derivatives of a cost functional (defined on the model output) with respect to a large number of model parameters can be calculated efficiently through first order adjoint sensitivity analysis. Second order adjoint models give second derivative information in the form of matrix-vector products between the Hessian of the cost functional and user-defined vectors. Traditionally, the construction of second order derivatives for large-scale models has been considered too costly. Consequently, data assimilation applications employ optimization algorithms that use only first order derivative information, such as nonlinear conjugate gradient and quasi-Newton methods. In this paper we discuss the mathematical foundations of second order adjoint sensitivity analysis and show that it provides an efficient approach to obtaining Hessian-vector products. We study the benefits of using second order information in the numerical optimization process for data assimilation applications. The numerical studies are performed in a twin-experiment setting with a two-dimensional shallow water model. Different scenarios are considered with different discretization approaches, observation sets, and noise levels. Optimization algorithms that employ second order derivatives are tested against widely used methods that require only first order derivatives. Conclusions are drawn regarding the potential benefits and the limitations of using higher-order information in large-scale data assimilation problems.
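    To make the role of Hessian-vector products concrete, here is a minimal NumPy sketch of a matrix-free Newton-CG step that needs only such products. The product is approximated with a central difference of gradients purely for illustration; an exact second order adjoint model would supply the same quantity without the differencing error. All names are assumptions, not code from the paper.

        import numpy as np

        def hess_vec(grad, x, v, eps=1e-6):
            """Hessian-vector product H v via a central difference of gradients
            (a second order adjoint model would return this product exactly)."""
            return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)

        def newton_cg_step(grad, x, cg_iters=50, tol=1e-12):
            """Solve H p = -g by conjugate gradients, using only H v products,
            then take the Newton step x + p."""
            g = grad(x)
            p = np.zeros_like(x)
            r = -g.copy()          # residual of H p = -g at p = 0
            d = r.copy()
            rr = r @ r
            for _ in range(cg_iters):
                Hd = hess_vec(grad, x, d)
                alpha = rr / (d @ Hd)
                p += alpha * d
                r -= alpha * Hd
                rr_new = r @ r
                if rr_new < tol:
                    break
                d = r + (rr_new / rr) * d
                rr = rr_new
            return x + p

        # On a quadratic cost the single step is exact: x_opt ≈ A^{-1} b below.
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        b = np.array([1.0, 1.0])
        grad = lambda x: A @ x - b
        x_opt = newton_cg_step(grad, np.zeros(2))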