
    Discontinuous Parameter Estimates with Least Squares Estimators

    We discuss weighted least squares estimates of ill-conditioned linear inverse problems where the weights are chosen to be inverse error covariance matrices. The least squares estimator is the maximum likelihood estimate for normally distributed data and parameters, but here we do not assume particular probability distributions. Weights for the estimator are found by requiring that its minimum follow a χ² distribution. Previous work with this approach has shown that it is competitive with regularization methods such as the L-curve and Generalized Cross Validation (GCV) [20]. In this work we extend the method to find diagonal weighting matrices rather than a scalar regularization parameter. Diagonal weighting matrices are advantageous because they give piecewise smooth least squares estimates, and hence are a mechanism through which least squares can be used to estimate discontinuous parameters. This is explained by viewing least squares estimation as a constrained optimization problem. Results with diagonal weighting matrices are given for a benchmark discontinuous inverse problem from [13]. In addition, the method is used to estimate soil moisture from data collected in the Dry Creek Watershed near Boise, Idaho. Parameter estimates are found that combine two different types of measurements, and weighting matrices are found that incorporate uncertainty due to spatial variation, so that the parameters can be used over larger scales than those that were measured.
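    The weight-selection step can be illustrated in a few lines. The sketch below (hypothetical helper `chi2_regularized_lsq`, with assumed bracket bounds for the root-finder) implements only a simplified scalar-weight case with identity covariances: it chooses the Tikhonov parameter so that the minimum of the regularized functional equals its expected χ² value. It is a minimal sketch of the weighting idea, not the paper's diagonal-matrix extension.

```python
import numpy as np
from scipy.optimize import brentq

def chi2_regularized_lsq(A, b, m_dof=None, lam_bounds=(1e-8, 1e8)):
    """Pick a scalar regularization weight lam so that the minimum of
    J(x) = ||A x - b||^2 + lam * ||x||^2 equals its expected chi^2
    value (the number of data points). Simplified scalar version of
    the chi^2 weighting idea; assumes whitened (identity-covariance)
    noise and that the target is bracketed by lam_bounds."""
    m, n = A.shape
    target = m_dof if m_dof is not None else m  # E[chi^2_m] = m

    def cost_at_min(lam):
        # Minimizer of the Tikhonov functional for this lam
        x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
        return np.sum((A @ x - b) ** 2) + lam * np.sum(x ** 2)

    # The minimum value is monotonically increasing in lam,
    # so a bracketing root-finder locates the chi^2-matching weight.
    lam = brentq(lambda lam: cost_at_min(lam) - target, *lam_bounds)
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    return x, lam
```

    Monotonicity of the minimum value in the weight is what makes a simple bracketing search sufficient here; the diagonal-matrix case in the paper requires solving for many such weights simultaneously.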

    Hybrid Projection Methods for Large-scale Inverse Problems with Mixed Gaussian Priors

    When solving ill-posed inverse problems, a good choice of the prior is critical for the computation of a reasonable solution. A common approach is to include a Gaussian prior, which is defined by a mean vector and a symmetric positive definite covariance matrix, and to use iterative projection methods to solve the corresponding regularized problem. However, a main challenge for many of these iterative methods is that the prior covariance matrix must be known and fixed (up to a constant) before starting the solution process. In this paper, we develop hybrid projection methods for inverse problems with mixed Gaussian priors, where the prior covariance matrix is a convex combination of matrices and neither the mixing parameter nor the regularization parameter needs to be known in advance. Such scenarios may arise when data are used to generate a sample prior covariance matrix (e.g., in data assimilation) or when different priors are needed to capture different qualities of the solution. The proposed hybrid methods are based on a mixed Golub-Kahan process, an extension of the generalized Golub-Kahan bidiagonalization, and a distinctive feature of the proposed approach is that both the regularization parameter and the weighting parameter for the covariance matrix can be estimated automatically during the iterative process. Furthermore, for problems where training data are available, various data-driven covariance matrices (including those based on learned covariance kernels) can be easily incorporated. Numerical examples from tomographic reconstruction demonstrate the potential of these methods.
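    To make the mixed-prior idea concrete, here is a small direct-solve sketch (hypothetical function `mixed_prior_tikhonov`; the candidate-parameter grids are assumptions) that forms the convex combination of two prior covariances and selects both the mixing and regularization parameters by a GCV-style grid search. The paper instead estimates both parameters automatically inside a mixed Golub-Kahan iteration, which this sketch does not implement.

```python
import numpy as np

def mixed_prior_tikhonov(A, b, Q1, Q2, alphas, thetas):
    """Tikhonov/MAP estimate with a mixed Gaussian prior covariance
    Q(theta) = theta * Q1 + (1 - theta) * Q2, choosing the mixing
    parameter theta and regularization parameter alpha by a GCV-style
    grid search. A direct-solve illustration only, suitable for small
    problems; not the paper's large-scale projection method."""
    m, n = A.shape
    best = (np.inf, None, None, None)
    for theta in thetas:
        Q = theta * Q1 + (1.0 - theta) * Q2   # convex combination of priors
        Qinv = np.linalg.inv(Q)
        for alpha in alphas:
            # MAP estimate: (A^T A + alpha^2 Q^{-1}) x = A^T b
            K = A.T @ A + alpha**2 * Qinv
            x = np.linalg.solve(K, A.T @ b)
            # GCV score: ||A x - b||^2 / trace(I - A K^{-1} A^T)^2
            H = A @ np.linalg.solve(K, A.T)   # influence matrix
            gcv = np.sum((A @ x - b) ** 2) / (m - np.trace(H)) ** 2
            if gcv < best[0]:
                best = (gcv, x, alpha, theta)
    _, x, alpha, theta = best
    return x, alpha, theta
```

    The grid search costs a dense solve per candidate pair, which is exactly the overhead the paper's hybrid projection approach avoids by estimating both parameters within the Krylov iterations.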

    A Matrix Factorization Approach for Learning Semidefinite-Representable Regularizers

    Get PDF
    Regularization techniques are widely employed in optimization-based approaches for solving ill-posed inverse problems in data analysis and scientific computing. These methods are based on augmenting the objective with a penalty function, which is specified based on prior domain-specific expertise to induce a desired structure in the solution. We consider the problem of learning suitable regularization functions from data in settings in which precise domain knowledge is not directly available. Previous work under the headings of 'dictionary learning' and 'sparse coding' may be viewed as learning a regularization function that can be computed via linear programming. We describe generalizations of these methods that learn regularizers computable and optimizable via semidefinite programming. Our framework for learning such semidefinite regularizers is based on obtaining structured factorizations of data matrices, and our algorithmic approach for computing these factorizations combines recent techniques for rank minimization with an operator analog of Sinkhorn scaling. Under suitable conditions on the input data, our algorithm provides a locally linearly convergent method for identifying the correct regularizer, i.e., the one that promotes the type of structure contained in the data. Our analysis is based on the stability properties of operator Sinkhorn scaling and their relation to geometric aspects of determinantal varieties (in particular, tangent spaces with respect to these varieties). The regularizers obtained using our framework can be employed effectively in semidefinite programming relaxations for solving inverse problems.
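    The operator analog of Sinkhorn scaling that the algorithm relies on can be sketched directly. The function below (hypothetical name `operator_sinkhorn`, assuming square, generically positioned input matrices so the normalizing factors stay invertible) alternately rescales a family of matrices on the left and right until the associated completely positive map is approximately doubly stochastic. It is only the scaling subroutine, not the full factorization method.

```python
import numpy as np
from scipy.linalg import inv, sqrtm

def operator_sinkhorn(As, n_iter=50):
    """Operator Sinkhorn scaling: alternately rescale square matrices
    {A_i} on the left and right so that sum_i A_i A_i^T = I and
    sum_i A_i^T A_i = I (a 'doubly stochastic' completely positive
    map). Assumes the sums stay invertible at every step."""
    As = [np.asarray(A, dtype=float) for A in As]
    for _ in range(n_iter):
        # Left normalization: enforce sum_i A_i A_i^T = I
        S = sum(A @ A.T for A in As)
        L = inv(np.real(sqrtm(S)))
        As = [L @ A for A in As]
        # Right normalization: enforce sum_i A_i^T A_i = I
        T = sum(A.T @ A for A in As)
        R = inv(np.real(sqrtm(T)))
        As = [A @ R for A in As]
    return As
```

    Each half-step fixes one marginal exactly and perturbs the other, in direct analogy with classical Sinkhorn scaling of nonnegative matrices; the stability of this fixed-point iteration is what the paper's local convergence analysis builds on.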