
    Iterative regularization in nonparametric instrumental regression

    We consider the nonparametric regression model with an additive error that is correlated with the explanatory variables. We suppose the existence of instrumental variables that are used in this model for the identification and estimation of the regression function. Nonparametric estimation by instrumental variables is an ill-posed linear inverse problem with an unknown but estimable operator. We provide a new estimator of the regression function using an iterative regularization method (the Landweber-Fridman method). The optimal number of iterations and the convergence of the mean square error of the resulting estimator are derived under both mild and severe degrees of ill-posedness. A Monte-Carlo exercise shows the impact of some parameters on the estimator and demonstrates its reasonable finite-sample performance.
    Keywords: nonparametric estimation, instrumental variable, ill-posed inverse problem, iterative method, estimation by projection
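    The Landweber-Fridman scheme regularizes an ill-posed problem by running a gradient-type iteration and stopping early, so the iteration count plays the role of the regularization parameter. A minimal sketch for a discretized linear problem K x = y, with the matrix K standing in for an estimated conditional-expectation operator, might look as follows (the function name and plain-matrix setting are illustrative, not the authors' implementation):

    ```python
    import numpy as np

    def landweber(K, y, n_iter, tau=None):
        """Landweber iteration for the ill-posed linear system K x = y.

        Early stopping (the choice of n_iter) acts as the regularization
        parameter, mirroring the optimal iteration number derived in the paper.
        """
        if tau is None:
            # Convergence requires 0 < tau < 2 / ||K||^2.
            tau = 1.0 / np.linalg.norm(K, 2) ** 2
        x = np.zeros(K.shape[1])
        for _ in range(n_iter):
            x = x + tau * K.T @ (y - K @ x)  # gradient step on ||K x - y||^2 / 2
        return x
    ```

    Too few iterations oversmooth, while too many amplify the noise in the estimated operator, which is why the optimal stopping index depends on the degree of ill-posedness.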

    On the Singular Neumann Problem in Linear Elasticity

    The Neumann problem of linear elasticity is singular, with a kernel formed by the rigid motions of the body. Several tricks are commonly used to obtain a non-singular linear system, but they often reduce accuracy or lead to poor convergence of the iterative solvers. In this paper, different well-posed formulations of the problem are studied through discretization by the finite element method, and preconditioning strategies based on operator preconditioning are discussed. For each formulation we derive preconditioners that are independent of the discretization parameter. Preconditioners that are robust with respect to the first Lamé constant are constructed for the pure displacement formulations, while a preconditioner that is robust in both Lamé constants is constructed for the mixed formulation. It is shown that, for convergence in the first Sobolev norm, it is crucial to respect the orthogonality constraint derived from the continuous problem. Based on this observation, a modification to the conjugate gradient method is proposed that achieves optimal error convergence of the computed solution.
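    The orthogonality constraint can be enforced directly inside conjugate gradients by projecting the iterates onto the complement of the rigid motions. A minimal sketch, assuming an orthonormal basis Z for the kernel is available as a dense array; this illustrates the general idea of a kernel-respecting CG, not the paper's specific modification:

    ```python
    import numpy as np

    def cg_orthogonal_to_kernel(A, b, Z, tol=1e-10, maxiter=500):
        """CG for the singular system A x = b, with iterates kept orthogonal
        to the kernel spanned by the (orthonormal) columns of Z."""
        def project(v):
            # Remove the kernel component: v <- (I - Z Z^T) v.
            return v - Z @ (Z.T @ v)

        b = project(b)              # make the right-hand side consistent
        x = np.zeros_like(b)
        r = project(b - A @ x)
        p = r.copy()
        rs = r @ r
        for _ in range(maxiter):
            Ap = project(A @ p)     # guard against round-off drift into the kernel
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return project(x)
    ```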

    Sharp high-frequency estimates for the Helmholtz equation and applications to boundary integral equations

    We consider three problems for the Helmholtz equation in interior and exterior domains in R^d (d=2,3): the exterior Dirichlet-to-Neumann and Neumann-to-Dirichlet problems for outgoing solutions, and the interior impedance problem. We derive sharp estimates for solutions to these problems that, in combination, give bounds on the inverses of the combined-field boundary integral operators for exterior Helmholtz problems.
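    For context, the combined-field operators whose inverses are bounded here take, in one standard convention, the following form, where S_k, D_k, and D_k' are the single-layer, double-layer, and adjoint double-layer boundary integral operators and η ≠ 0 is the coupling parameter (the notation is illustrative; the paper's own conventions may differ):

    ```latex
    A_{k,\eta} = \tfrac{1}{2} I + D_k - i \eta S_k,
    \qquad
    A'_{k,\eta} = \tfrac{1}{2} I + D_k' - i \eta S_k
    ```

    Bounds on the norms of these inverses translate the solution estimates for the three boundary value problems into conditioning information for the integral equations.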

    Fast global convergence of gradient methods for high-dimensional statistical recovery

    Many statistical M-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient and composite gradient methods for solving such problems, working within a high-dimensional framework that allows the data dimension p to grow with (and possibly exceed) the sample size n. This high-dimensional structure precludes the usual global assumptions (namely, strong convexity and smoothness conditions) that underlie much of classical optimization analysis. We define appropriately restricted versions of these conditions, and show that they are satisfied with high probability for various statistical models. Under these conditions, our theory guarantees that projected gradient descent has a globally geometric rate of convergence up to the statistical precision of the model, meaning the typical distance between the true unknown parameter θ* and an optimal solution θ̂. This result is substantially sharper than previous convergence results, which yielded sublinear convergence, or linear convergence only up to the noise level. Our analysis applies to a wide range of M-estimators and statistical models, including sparse linear regression using the Lasso (ℓ₁-regularized regression); group Lasso for block sparsity; log-linear models with regularization; low-rank matrix recovery using nuclear norm regularization; and matrix decomposition. Overall, our analysis reveals interesting connections between statistical precision and computational efficiency in high-dimensional estimation.
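    For the Lasso case, the composite gradient method analyzed here reduces to proximal gradient descent with coordinatewise soft thresholding. A minimal sketch under illustrative names and a spectral-norm step size (not the paper's notation or constants):

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1 (coordinatewise soft thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def composite_gradient_lasso(X, y, lam, n_iter=500):
        """Composite gradient descent for the Lasso:
            min_theta ||y - X theta||^2 / (2 n) + lam * ||theta||_1
        """
        n, p = X.shape
        L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
        theta = np.zeros(p)
        for _ in range(n_iter):
            grad = X.T @ (X @ theta - y) / n
            theta = soft_threshold(theta - grad / L, lam / L)
        return theta
    ```

    The point of the analysis is that, despite the lack of global strong convexity when p > n, iterates of this kind still contract at a geometric rate until they are within statistical precision of θ*.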

    Robust and efficient preconditioners for the discontinuous Galerkin time-stepping method

    Get PDF
    The discontinuous Galerkin time-stepping method has many advantageous properties for solving parabolic equations. However, its practical use has been limited by the large and challenging nonsymmetric systems that must be solved at each time-step. This work develops a fully robust and efficient preconditioning strategy for solving these systems. We first construct a left preconditioner, based on inf-sup theory, that transforms the linear system into a symmetric positive definite problem that can be solved by the preconditioned conjugate gradient (PCG) algorithm. We then prove that the transformed system can be further preconditioned by an ideal block-diagonal preconditioner, leading to a condition number κ bounded by 4 for any time-step size, any approximation order, and any positive self-adjoint spatial operator. Numerical experiments demonstrate the low condition numbers and fast convergence of the algorithm for both ideal and approximate preconditioners, and show the feasibility of high-order solution of large problems.
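    Each time-step thus reduces to a PCG solve on the symmetrized system. A generic sketch of PCG with a user-supplied preconditioner (the callables are placeholders, not the paper's construction):

    ```python
    import numpy as np

    def pcg(apply_A, b, apply_Pinv, tol=1e-8, maxiter=200):
        """Preconditioned conjugate gradients for A x = b.

        apply_A(v) and apply_Pinv(v) return A @ v and P^{-1} @ v; in the
        setting above, A would be the symmetrized DG time-step system and
        P the block-diagonal preconditioner.
        """
        x = np.zeros_like(b)
        r = b - apply_A(x)
        z = apply_Pinv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(maxiter):
            Ap = apply_A(p)
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = apply_Pinv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x
    ```

    With κ ≤ 4, the classical CG bound guarantees an error reduction factor of at least (√κ - 1)/(√κ + 1) ≤ 1/3 per iteration, independent of the time-step size and approximation order.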
