
    Superior convergence domains for the p-cyclic SSOR majorizer

    Abstract: Thus far, for an n × n complex nonsingular matrix A, the symmetric successive overrelaxation (SSOR) majorizing operator has been used to establish convergence properties of the SSOR method, mostly in the case where A is an H-matrix. In this paper we use (actually, a similarity transformation of) the SSOR majorizer to investigate convergence properties of the block SSOR method when A is a block p-cyclic matrix. Let J_A denote the block Jacobi iteration matrix and let ν = ϱ(|J_A|). We establish regions in the (ν, ω)-plane where ϱ(S_{ω,A}) ⩽ ϱ(Q_{ω,A}) < |ω − 1| [⩽ ϱ(L_{ω,A})]. Here S_{ω,A} is the block SSOR iteration operator associated with A, L_{ω,A} is the block successive overrelaxation (SOR) iteration operator associated with A, and Q_{ω,A} is a convenient similarity transformation of the majorizing operator for S_{ω,A}. Of special interest to us are the values of ν for which the above inequality holds for the corresponding value of the relaxation parameter ω(A) = 2/(1 + ν), the latter being an important quantity in the SOR-SSOR theory for H-matrices.
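    A minimal numpy sketch of the quantities the abstract compares, assuming the point SSOR operator for simplicity (the block version partitions D, L, and U conformally with the p-cyclic structure). The test matrix and the sampled values of ω are hypothetical; only the constructions of S_ω, the Jacobi matrix J_A, and ν = ϱ(|J_A|) follow the definitions above.

```python
import numpy as np

def ssor_iteration_matrix(A, omega):
    """S_omega = (D - wU)^{-1}[(1-w)D + wL] (D - wL)^{-1}[(1-w)D + wU]
    for the splitting A = D - L - U (diagonal, strictly lower, strictly upper)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    forward  = np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)
    backward = np.linalg.solve(D - omega * U, (1 - omega) * D + omega * L)
    return backward @ forward

def rho(M):
    """Spectral radius."""
    return max(abs(np.linalg.eigvals(M)))

# Hypothetical strictly diagonally dominant test matrix (hence an H-matrix).
A = np.array([[ 4., -1., -1.],
              [-1.,  4., -1.],
              [-1., -1.,  4.]])
J  = np.eye(len(A)) - np.linalg.solve(np.diag(np.diag(A)), A)  # Jacobi matrix J_A
nu = rho(np.abs(J))                                            # nu = rho(|J_A|)
for omega in (0.8, 1.2, 2 / (1 + nu)):
    print(f"omega={omega:.3f}  rho(S)={rho(ssor_iteration_matrix(A, omega)):.4f}"
          f"  |omega-1|={abs(omega - 1):.4f}")
```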

    The Young-Eidson Algorithm: Applications and Extensions


    The relationship between the Jacobi and the successive overrelaxation (SOR) matrices of a k-cyclic matrix

    Abstract: Let A be a (k−1, 1)-generalized consistently ordered matrix with T and L_ω its associated Jacobi and SOR matrices, whose eigenvalues μ and λ satisfy the well-known relationship (λ + ω − 1)^k = ω^k μ^k λ^{k−1}. For a subclass of the above matrices A we prove that the matrix analogue of this relationship holds. Exploiting the matrix relationship, we show that the SOR method is equivalent to a certain monoparametric k-step iterative method when used for the solution of the fixed-point problem x = Tx + c.
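    As a quick numerical illustration of the eigenvalue relationship (not taken from the paper), the sketch below uses a tridiagonal matrix, which is 2-cyclic and consistently ordered, so the relationship holds with k = 2; the matrix and the value of ω are hypothetical choices.

```python
import numpy as np

# Hypothetical consistently ordered 2-cyclic example (k = 2).
A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
k, omega = 2, 1.1

D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)
T    = np.linalg.solve(D, L + U)                                    # Jacobi matrix T
L_om = np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)  # SOR matrix

for lam in np.linalg.eigvals(L_om):
    # (lam + omega - 1)^k should equal omega^k mu^k lam^(k-1) for some
    # eigenvalue mu of T, so the smallest residual should vanish numerically.
    residual = min(abs((lam + omega - 1)**k - omega**k * mu**k * lam**(k - 1))
                   for mu in np.linalg.eigvals(T))
    print(np.round(lam, 6), "residual:", residual)
```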

    Deep Bilevel Learning

    We present a novel regularization approach to train neural networks that enjoys better generalization and lower test error than standard stochastic gradient descent. Our approach is based on the principles of cross-validation, where a validation set is used to limit overfitting of the model. We formulate these principles as a bilevel optimization problem, which allows us to define the optimization of a cost on the validation set subject to another optimization on the training set. Overfitting is controlled by introducing weights on each mini-batch in the training set and by choosing their values so that they minimize the error on the validation set. In practice, these weights define mini-batch learning rates in a gradient descent update equation that favor gradients with better generalization capabilities. Because of its simplicity, this approach can be integrated with other regularization methods and training schemes. We extensively evaluate our proposed algorithm on several neural network architectures and datasets, and find that it consistently improves the generalization of the model, especially when labels are noisy. Comment: ECCV 2018.
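    To make the idea concrete (this is not the paper's actual bilevel solver), here is a toy sketch in which each mini-batch gradient is weighted by its agreement with a validation-set gradient, on a linear regression problem with noisy labels; the model, the clipped inner-product weighting rule, and all constants are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(X, y, w):
    """Closed-form gradient of the mean squared error for a linear model."""
    return X.T @ (X @ w - y) / len(y)

d = 5
w_true = rng.normal(size=d)
X_tr, X_val = rng.normal(size=(200, d)), rng.normal(size=(50, d))
y_tr  = X_tr @ w_true + 0.5 * rng.normal(size=200)  # noisy training labels
y_val = X_val @ w_true                              # clean validation labels

w, lr = np.zeros(d), 0.1
for step in range(200):
    idx     = rng.choice(200, size=20, replace=False)  # draw a mini-batch
    g_batch = grad(X_tr[idx], y_tr[idx], w)
    g_val   = grad(X_val, y_val, w)
    # Assumed weighting rule: scale the step by how well the mini-batch
    # gradient agrees with the validation gradient, clipped to [0, 1].
    weight = np.clip(g_batch @ g_val / (g_batch @ g_batch + 1e-12), 0.0, 1.0)
    w -= lr * weight * g_batch

print("validation MSE:", np.mean((X_val @ w - y_val) ** 2))
```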

    On some extensions of the accelerated overrelaxation (AOR) theory

    This paper extends the convergence theory of the Accelerated Overrelaxation (AOR) method to cases analogous to those considered first by Ostrowski and then by Varga in connection with the Successive Overrelaxation (SOR) method. Among other results, the Ostrowski theorem, some of Varga's theorems on extensions of the SOR theory, and some recent results by Niethammer and by the authors are obtained as special cases of the work presented in this paper. In addition, several points are raised that suggest further research.
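    For reference, a minimal numpy sketch of the two-parameter AOR iteration the abstract builds on: with A = D − L − U, the iteration matrix is L_{r,ω} = (D − rL)^{-1}[(1 − ω)D + (ω − r)L + ωU], where r = ω recovers SOR and r = 0 an extrapolated Jacobi method. The test system and parameter values below are hypothetical.

```python
import numpy as np

def aor_solve(A, b, r, omega, tol=1e-10, maxit=500):
    """Accelerated Overrelaxation (AOR): iterate x <- T x + c with
    T = (D - rL)^{-1}[(1-omega)D + (omega-r)L + omega U] and
    c = omega (D - rL)^{-1} b, so any fixed point satisfies A x = b."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = D - r * L
    T = np.linalg.solve(M, (1 - omega) * D + (omega - r) * L + omega * U)
    c = omega * np.linalg.solve(M, b)
    x = np.zeros_like(b)
    for it in range(maxit):
        x_new = T @ x + c
        if np.linalg.norm(x_new - x) < tol:
            return x_new, it + 1
        x = x_new
    return x, maxit

# Hypothetical strictly diagonally dominant test system.
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
b = np.array([1., 2., 3.])
x, iters = aor_solve(A, b, r=0.9, omega=1.1)
print(iters, "iterations, residual:", np.linalg.norm(A @ x - b))
```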

    Is A ∈ C^{n,n} a General H-Matrix?

    Abstract: H-matrices play an important role in the theory and applications of Numerical Linear Algebra, so it is very useful to know whether a given matrix A ∈ C^{n,n}, usually the coefficient matrix of a complex linear system of algebraic equations or of a Linear Complementarity Problem (A ∈ R^{n,n}, with a_ii > 0 for i = 1, 2, ..., n in this case), is an H-matrix; if it is, most of the classical iterative methods for the solution of the problem at hand converge. In recent years the set of H-matrices has been extended to what is now known as the set of general H-matrices, and a partition of this set into three different classes has been made. The main objective of this work is to develop an algorithm that determines the H-matrix character of a given matrix A ∈ C^{n,n} and identifies the class to which it belongs; in addition, some results on the classes of general H-matrices and a partition of the set of non-H-matrices are presented.
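    As background for the question in the title, a minimal sketch of the classical (invertible-class) criterion: A is an H-matrix of that class iff its comparison matrix is a nonsingular M-matrix, which for a nonzero diagonal is equivalent to ϱ(|D|^{-1}|B|) < 1, with D the diagonal and B the off-diagonal part of A. The paper's algorithm covers the general H-matrix classes (including singular diagonals), which this simple spectral test does not; the test matrix below is hypothetical.

```python
import numpy as np

def is_invertible_class_h_matrix(A, tol=1e-12):
    """Classical H-matrix test via the comparison matrix <A>:
    <A> is a nonsingular M-matrix iff rho(|D|^{-1} |B|) < 1,
    where D = diag(A) and B holds the off-diagonal entries."""
    d = np.abs(np.diag(A))
    if np.any(d < tol):          # zero diagonal entry: outside this simple test
        return False
    B = np.abs(A - np.diag(np.diag(A)))
    J = B / d[:, None]           # |D|^{-1} |B|
    return max(abs(np.linalg.eigvals(J))) < 1 - tol

# Hypothetical strictly diagonally dominant complex matrix: an H-matrix.
A = np.array([[4.+1j, -1., 0.],
              [2.,     5., -2.],
              [0.,     1.,  3.]])
print(is_invertible_class_h_matrix(A))  # True
```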