22 research outputs found

    The singular value expansion for arbitrary bounded linear operators

    Get PDF
    The singular value decomposition (SVD) is a basic tool for analyzing matrices. Regarding a general matrix as defining a linear operator and choosing appropriate orthonormal bases for the domain and co-domain allows the operator to be represented as multiplication by a diagonal matrix. It is well known that the SVD extends naturally to a compact linear operator mapping one Hilbert space to another; the resulting representation is known as the singular value expansion (SVE). It is less well known that a general bounded linear operator defined on Hilbert spaces also has a singular value expansion. This SVE allows a simple analysis of a variety of questions about the operator, such as whether it defines a well-posed linear operator equation and how to regularize the equation when it is not well posed.
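The finite-dimensional case described above can be checked numerically. A minimal sketch using NumPy (the matrix below is an arbitrary example, not taken from the paper): in the orthonormal bases given by the columns of U and V, the operator becomes a rectangular "diagonal" matrix of singular values.

```python
import numpy as np

# An arbitrary 3x2 matrix, viewed as a linear operator R^2 -> R^3.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Represent A in the bases given by the columns of U and V:
# the result is rectangular-diagonal, with the singular values on the diagonal.
Sigma = U.T @ A @ Vt.T
assert np.allclose(Sigma[:2, :], np.diag(s))
assert np.allclose(Sigma[2, :], 0.0)
```

This is exactly the diagonalization the abstract describes; the SVE extends the same picture to compact (and, per the paper, general bounded) operators between Hilbert spaces.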

    An Infeasible Point Method for Minimizing the Lennard-Jones Potential

    Full text link
    Minimizing the Lennard-Jones potential, the most-studied model problem for molecular conformation, is an unconstrained global optimization problem with a large number of local minima. In this paper, the problem is reformulated as an equality constrained nonlinear programming problem with only linear constraints. This formulation allows the solution to be approached through infeasible configurations, increasing the basin of attraction of the global solution. In this way the likelihood of finding a global minimizer is increased. An algorithm for solving this nonlinear program is discussed, and results of numerical tests are presented. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/44788/1/10589_2004_Article_140555.pd
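For reference, the pair potential in question is commonly written in reduced units as v(r) = r^(-12) - 2 r^(-6), with pair minimum v(1) = -1. A minimal sketch of the unconstrained objective only (the paper's constrained reformulation with distance variables is not reproduced here):

```python
import numpy as np

def lj_energy(x):
    """Total Lennard-Jones energy of a cluster, in reduced units.

    x is an (n, 3) array of atom positions; the energy sums
    v(r) = r^(-12) - 2*r^(-6) over all distinct pairs.
    """
    n = len(x)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r2 = np.sum((x[i] - x[j]) ** 2)   # squared pair distance
            e += r2 ** -6 - 2.0 * r2 ** -3    # r^-12 - 2 r^-6 via r^2
    return e

# Two atoms at unit separation achieve the pair minimum of -1.
pair = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0]])
```

The many local minima of this energy surface for larger clusters are what motivate the infeasible-point reformulation described in the abstract.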

    Finite-dimensional linear algebra / Mark S. Gockenbach.

    No full text
    "A Chapman & Hall book."Includes bibliographical references and index.Book fair 2013.xxi, 650 p.

    An Abstract Analysis of Differential Semblance Optimization

    No full text
    This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/16728. Differential Semblance Optimization (DSO) is a novel way of approaching a class of inverse problems arising in exploration seismology. The promising feature of the DSO method is that it replaces a nonsmooth, highly nonconvex cost function (the Output Least-Squares (OLS) objective function) with a smooth cost function that is amenable to standard (local) optimization algorithms. The OLS problem can be written abstractly as a partially linear least-squares problem with linear constraints. The DSO objective function is derived from the associated quadratic penalty function. It is shown that one way to view the DSO objective function is as a regularization of a function that is dual (in a certain sense) to the OLS objective function. By viewing the DSO problem as a perturbation of this dual problem, this method can be shown to be effective. In particular, it is demonstrated that, under suitable assumptions, the DSO method defines a parameterized path of minimizers converging to the desired solution, and that for certain values of the parameter, standard optimization techniques can be used to find a point on the path. The predictions of the theory are motivated and illustrated on two simple model problems for seismic velocity inversion, the plane wave detection problem and the "layer-over-half-space" problem. It is shown that the theory presented in this thesis extends the existing theory for the plane wave detection problem.
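In the abstract form described, with illustrative notation (not necessarily the thesis's own symbols), the partially linear least-squares problem and its associated quadratic penalty can be sketched as:

```latex
% Partially linear least-squares problem with linear constraints:
% u is the linear variable, m the nonlinear (model) variable.
\min_{u,\,m} \ \tfrac{1}{2}\|S u - d\|^2
\quad\text{subject to}\quad W(m)\,u = 0

% Associated quadratic penalty function, from which the smooth
% DSO-style objective is derived:
J_\sigma(u,m) = \tfrac{1}{2}\|S u - d\|^2
              + \tfrac{1}{2\sigma^2}\,\|W(m)\,u\|^2
```

The penalty parameter plays the role of the parameter along the path of minimizers mentioned in the abstract.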

    Object-Oriented Design for optimization and inversion software: a proposal

    No full text
    The usefulness of mathematics for solving problems of applied science is derived largely from the fact that many seemingly different problems lead to the same mathematical formulation. There is therefore an inherent efficiency in mathematical analysis: the solution of a single mathematical problem can potentially resolve many scientific questions. However, this efficiency is frequently not realized when mathematical software is designed, because abstract mathematical objects such as vectors and linear operators require concrete representations in the computer code. These representations frequently vary from problem to problem, with the effect that the codes must be rewritten when the representations change. In this paper, we explore the use of Object-Oriented Design principles to overcome the above problem. We propose a scheme for defining abstract classes of vectors and linear operators, and discuss their use in codes for optimization and inversion. The approach presented in this paper...
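A minimal sketch of the kind of abstraction proposed (the class and method names here are hypothetical, not the paper's actual interfaces): optimization code written against abstract vector and operator classes runs unchanged over any concrete representation.

```python
from abc import ABC, abstractmethod

class Vector(ABC):
    """Abstract vector: only the operations optimization code needs."""
    @abstractmethod
    def axpy(self, a, y):
        """In-place update: self <- self + a * y."""
    @abstractmethod
    def dot(self, y):
        """Inner product <self, y>."""

class LinearOperator(ABC):
    """Abstract linear operator acting on Vectors."""
    @abstractmethod
    def apply(self, x):
        """Return A x as a new Vector."""

# One concrete representation, backed by a Python list; a distributed or
# out-of-core representation could implement the same interface.
class ListVector(Vector):
    def __init__(self, data):
        self.data = list(data)
    def axpy(self, a, y):
        self.data = [s + a * t for s, t in zip(self.data, y.data)]
    def dot(self, y):
        return sum(s * t for s, t in zip(self.data, y.data))

v = ListVector([1.0, 2.0])
w = ListVector([3.0, 4.0])
v.axpy(2.0, w)   # v is now [7.0, 10.0]
```

An algorithm such as conjugate gradients written in terms of `axpy`, `dot`, and `apply` alone never needs rewriting when the representation changes, which is the efficiency the abstract describes.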

    Implementing functionals in HCL

    No full text
    In this report, we explain how to implement a functional J in an HCL class. To make the discussion as useful as possible, we illustrate the complete process with two realistic examples. The companion report [2] explains how to implement operators in HCL classes.

    Implementing nonlinear operators in HCL

    No full text
    In this report, we explain, in some detail, how to implement a nonlinear operator.

    An Abstract Framework for Elliptic Inverse Problems: Part 2. An Augmented Lagrangian Approach

    No full text
    The coefficient in a linear elliptic partial differential equation can be estimated from interior measurements of the solution. Posing the estimation problem as a constrained optimization problem with the PDE as the constraint allows the use of the augmented Lagrangian method, which is guaranteed to converge. Moreover, the convergence analysis encompasses discretization by finite element methods, so the proposed algorithm can be implemented and will produce a solution to the constrained minimization problem. All of these properties hold in an abstract framework that encompasses several interesting problems: the standard (scalar) elliptic BVP in divergence form, the system of isotropic elasticity, and others. Moreover, the analysis allows for the use of total variation regularization, so rapidly varying or even discontinuous coefficients can be estimated.
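The augmented Lagrangian idea can be illustrated on a toy equality-constrained problem (a hedged sketch only, not the paper's PDE-constrained setting): minimize x² + y² subject to x + y = 1, whose solution is (0.5, 0.5). The augmented Lagrangian adds both a multiplier term and a quadratic penalty on the constraint, and the multiplier is updated from the constraint residual.

```python
# Toy augmented Lagrangian method for:
#   minimize x^2 + y^2  subject to  c(x, y) = x + y - 1 = 0.
# Augmented Lagrangian: L = x^2 + y^2 + lam*c + (mu/2)*c^2,
# with multiplier update lam <- lam + mu * c at each outer iteration.
def solve(mu=10.0, iters=20):
    lam = 0.0
    for _ in range(iters):
        # By symmetry the inner minimizer has x = y; setting the gradient
        # 2x + lam + mu*(2x - 1) = 0 gives the closed-form inner solve:
        x = (mu - lam) / (2.0 + 2.0 * mu)
        c = 2.0 * x - 1.0          # constraint residual
        lam = lam + mu * c         # multiplier update
    return x, x

x, y = solve()
```

Here the multiplier converges to its exact value (-1) at a fixed penalty parameter, which is the convergence behavior (without driving the penalty to infinity) that motivates the method in the abstract.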

    Coherent Noise Suppression in Velocity Inversion

    No full text
    Data components with well-defined moveout other than primary reflections are sometimes called coherent noise. Coherent noise makes velocity analysis ambiguous, since no single velocity function explains incompatible moveouts simultaneously. Contemporary data processing treats the control of coherent noise influence on velocity as an interpretive step. Dual regularization theory suggests an alternative, automatic inversion algorithm for suppression of coherent noise when primary reflection phases dominate the data. Experiments with marine data illustrate the robustness and effectiveness of the algorithm.