
    New Modification of the Hestenes-Stiefel with Strong Wolfe Line Search

    Nonlinear conjugate gradient methods are widely used for large-scale unconstrained optimization because they solve such problems without requiring large memory storage. In this paper, we propose a new modification of the Hestenes-Stiefel conjugate gradient parameter that satisfies the sufficient descent condition under the strong Wolfe-Powell line search. The conjugate gradient method with the proposed parameter also requires few iterations and little CPU time compared with classical conjugate gradient parameters. Numerical results show that the conjugate gradient method with the proposed parameter performs better than the method with classical conjugate gradient parameters.
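    The modified parameter itself is not given in this listing, so the sketch below only illustrates the framework the abstract refers to: a nonlinear CG loop with the classical Hestenes-Stiefel formula and a strong Wolfe line search (scipy's line_search, which enforces the strong Wolfe conditions). The safeguard and constants are assumptions, not the authors' method.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def cg_hestenes_stiefel(f, grad, x0, tol=1e-6, max_iter=1000):
    """Nonlinear CG with the classical Hestenes-Stiefel beta and strong Wolfe steps."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # scipy's line_search returns a step satisfying the strong Wolfe conditions
        alpha, *_ = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)
        if alpha is None:                    # line search failed: restart along -g
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        beta_hs = (g_new @ y) / (d @ y)      # classical Hestenes-Stiefel parameter
        d = -g_new + max(beta_hs, 0.0) * d   # non-negativity safeguard (HS+ style restart)
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function from the origin.
x_star = cg_hestenes_stiefel(rosen, rosen_der, np.zeros(2))
```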

    A Three-Term Conjugate Gradient Method with Sufficient Descent Property for Unconstrained Optimization

    Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems because they do not require the storage of matrices. In this paper, we propose a general form of three-term conjugate gradient methods which always generate a sufficient descent direction. We give a sufficient condition for the global convergence of the proposed general method. Moreover, we present a specific three-term conjugate gradient method based on the multi-step quasi-Newton method. Finally, some numerical results of the proposed method are given.
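    The paper's general form and its quasi-Newton-based instance are not reproduced here; as an illustration of how a third term can enforce sufficient descent, the sketch below uses a well-known three-term Hestenes-Stiefel direction (in the style of Zhang, Zhou, and Li), for which g_{k+1}^T d_{k+1} = -||g_{k+1}||^2 holds by construction, independently of the line search.

```python
import numpy as np

def three_term_direction(g_new, g_old, d_old):
    """Three-term Hestenes-Stiefel search direction with built-in sufficient descent."""
    y = g_new - g_old
    dy = d_old @ y
    if abs(dy) < 1e-16:                        # degenerate denominator: restart
        return -g_new
    beta = (g_new @ y) / dy                    # Hestenes-Stiefel parameter
    theta = (g_new @ d_old) / dy               # coefficient of the third term
    return -g_new + beta * d_old - theta * y

# The two extra terms cancel in g_new^T d, so g_new @ d == -(g_new @ g_new) exactly:
# g^T(beta*d - theta*y) = [(g^T y)(g^T d) - (g^T d)(g^T y)] / (d^T y) = 0.
```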

    Differentiating through Conjugate Gradient

    This is the pre-print version of an article published by Taylor & Francis in Optimization Methods and Software on 6 January 2018, available online at https://doi.org/10.1080/10556788.2018.1425862. We show that, although the Conjugate Gradient (CG) algorithm has a singularity at the solution, it is possible to differentiate forward through the algorithm automatically by re-declaring all the variables as truncated Taylor series, the type of active variable widely used in Automatic Differentiation (AD) tools such as ADOL-C. If exact arithmetic is used, this approach gives a complete sequence of correct directional derivatives of the solution, to arbitrary order, in a single cycle of at most n iterations, where n is the number of dimensions. In the inexact case, the approach emphasizes the need for a means by which the programmer can communicate certain conditions involving derivative values directly to an AD tool. Peer reviewed.
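    As a rough illustration of the idea (not the ADOL-C machinery the paper uses), the sketch below propagates first-order truncated Taylor coefficients through an unpreconditioned CG solve by hand: every quantity is carried as a (value, directional derivative) pair, with the product and quotient rules applied at each step. The test matrix and direction are arbitrary assumptions.

```python
import numpy as np

def cg_with_forward_derivative(A, dA, b, db, tol=1e-12, max_iter=None):
    """Unpreconditioned CG on (value, directional-derivative) pairs.

    Every scalar and vector q of plain CG is replaced by the first-order
    truncated Taylor pair (q, dq/dt), where t parametrizes A(t) and b(t)
    with A'(0) = dA and b'(0) = db.
    """
    n = len(b)
    max_iter = max_iter or n
    x, dx = np.zeros(n), np.zeros(n)
    r, dr = b.astype(float), db.astype(float)                 # r = b - A x with x = 0
    p, dp = r.copy(), dr.copy()
    rs, drs = r @ r, 2 * (r @ dr)
    for _ in range(max_iter):
        Ap, dAp = A @ p, dA @ p + A @ dp                      # product rule for the matvec
        pAp, dpAp = p @ Ap, dp @ Ap + p @ dAp
        alpha = rs / pAp
        dalpha = (drs * pAp - rs * dpAp) / pAp**2             # quotient rule
        x, dx = x + alpha * p, dx + dalpha * p + alpha * dp
        r, dr = r - alpha * Ap, dr - dalpha * Ap - alpha * dAp
        rs_new, drs_new = r @ r, 2 * (r @ dr)
        if np.sqrt(rs_new) < tol:                             # stop before the singular point
            break
        beta = rs_new / rs
        dbeta = (drs_new * rs - rs_new * drs) / rs**2
        p, dp = r + beta * p, dr + dbeta * p + beta * dp
        rs, drs = rs_new, drs_new
    return x, dx

# Check against the closed form d/dt [A(t)^{-1} b(t)] = A^{-1} (db - dA @ x):
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)                                   # SPD test matrix
dA, b, db = np.eye(5), rng.standard_normal(5), rng.standard_normal(5)
x, dx = cg_with_forward_derivative(A, dA, b, db)
print(np.max(np.abs(dx - np.linalg.solve(A, db - dA @ x))))   # small, up to rounding error
```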

    Numerically Stable Recurrence Relations for the Communication Hiding Pipelined Conjugate Gradient Method

    Pipelined Krylov subspace methods (also referred to as communication-hiding methods) have been proposed in the literature as a scalable alternative to classic Krylov subspace algorithms for iteratively computing the solution to a large linear system in parallel. For symmetric and positive definite system matrices the pipelined Conjugate Gradient method outperforms its classic Conjugate Gradient counterpart on large scale distributed memory hardware by overlapping global communication with essential computations like the matrix-vector product, thus hiding global communication. A well-known drawback of the pipelining technique is the (possibly significant) loss of numerical stability. In this work a numerically stable variant of the pipelined Conjugate Gradient algorithm is presented that avoids the propagation of local rounding errors in the finite precision recurrence relations that construct the Krylov subspace basis. The multi-term recurrence relation for the basis vector is replaced by two-term recurrences, improving stability without increasing the overall computational cost of the algorithm. The proposed modification ensures that the pipelined Conjugate Gradient method is able to attain a highly accurate solution independently of the pipeline length. Numerical experiments demonstrate a combination of excellent parallel performance and improved maximal attainable accuracy for the new pipelined Conjugate Gradient algorithm. This work thus resolves one of the major practical restrictions for the usability of pipelined Krylov subspace methods. Comment: 15 pages, 5 figures, 1 table, 2 algorithms.
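    For reference, here is a serial sketch of the original (unpreconditioned, non-stabilized) pipelined CG recurrences in the style of Ghysels and Vanroose: in a distributed implementation, the two dot products and the extra matrix-vector product q = A w can be overlapped, which is the communication hiding the abstract describes. The stabilized two-term recurrences proposed in the paper are not reproduced here.

```python
import numpy as np

def pipelined_cg(A, b, x0=None, tol=1e-8, max_iter=None):
    """Unpreconditioned pipelined CG (original, non-stabilized recurrences)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    max_iter = max_iter or 2 * n
    r = b - A @ x
    w = A @ r                                  # w_i tracks A r_i via the recurrence below
    z, s, p = np.zeros(n), np.zeros(n), np.zeros(n)
    gamma_prev = alpha_prev = 1.0
    for i in range(max_iter):
        gamma = r @ r                          # global reduction 1
        delta = w @ r                          # global reduction 2
        q = A @ w                              # matvec that overlaps both reductions in parallel
        if np.sqrt(gamma) < tol:
            break
        if i > 0:
            beta = gamma / gamma_prev
            alpha = gamma / (delta - beta * gamma / alpha_prev)
        else:
            beta, alpha = 0.0, gamma / delta
        z = q + beta * z                       # z_i = A s_i (recurrence, no extra matvec)
        s = w + beta * s                       # s_i = A p_i
        p = r + beta * p
        x = x + alpha * p
        r = r - alpha * s
        w = w - alpha * z                      # w_{i+1} = A r_{i+1}
        gamma_prev, alpha_prev = gamma, alpha
    return x

# Usage on a small, well-conditioned SPD system (placeholder data):
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = pipelined_cg(A, b)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))   # relative residual, about the tolerance
```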

    Minimizing inner product data dependencies in conjugate gradient iteration

    The amount of concurrency available in conjugate gradient iteration is limited by the summations required in the inner product computations. The inner product of two vectors of length N requires time c log(N) if N or more processors are available. This paper describes an algebraic restructuring of the conjugate gradient algorithm which minimizes data dependencies due to inner product calculations. After an initial start-up, the new algorithm can perform a conjugate gradient iteration in time c log(log(N)).
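    The c log(N) term comes from evaluating the inner-product sum as a binary reduction tree: with all partial products available in parallel, each level halves the number of terms, leaving log2(N) sequential combining steps. The toy sketch below (serial, with the vector padded to a power of two) counts that dependency depth; it illustrates the baseline cost only, not the paper's restructured iteration.

```python
import numpy as np

def tree_dot(u, v):
    """Inner product evaluated as a binary reduction tree; returns (value, tree depth)."""
    prods = u * v                             # N independent multiplications
    m = 1 << int(np.ceil(np.log2(len(prods))))
    prods = np.pad(prods, (0, m - len(prods)))
    depth = 0
    while len(prods) > 1:                     # each pass corresponds to one parallel step
        prods = prods[0::2] + prods[1::2]
        depth += 1
    return prods[0], depth                    # depth == log2(m)

rng = np.random.default_rng(1)
u, v = rng.standard_normal(1000), rng.standard_normal(1000)
val, depth = tree_dot(u, v)
print(np.isclose(val, u @ v), depth)          # True, 10  (1000 padded to 1024)
```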

    Conjugate Gradient Optimization of a Backpropagation Neural Network for Tobacco Leaf Quality Detection

    Tobacco is a plantation commodity with high economic value, primarily as the main ingredient of cigarettes. Cigarette production influences the economies of several countries. Before cigarettes are produced, tobacco leaves must be classified by quality so that the right composition of raw materials is obtained. This quality assessment involves two factors, human sensory and human vision, carried out by a grader. Current information technology enables image processing that can maximize the human vision factor, which is expected to save time and cost. In this study, tobacco leaf quality detection is based on two extracted leaf features: shape and texture. These two features are then classified using Conjugate Gradient optimization of a Backpropagation Neural Network. As a result, the method improves the accuracy of tobacco leaf quality detection; the backpropagation neural network achieves a classification accuracy of up to 77.50% for tobacco leaf grades.
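    A minimal sketch of the training idea, assuming placeholder data and network sizes (the shape/texture features and the authors' actual network configuration are not available from the abstract): a one-hidden-layer backpropagation network whose loss and gradient are handed to scipy's nonlinear conjugate gradient optimizer. The names X_train, y_train, n_hidden, and n_classes below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def make_mlp_loss(X, y, n_hidden, n_classes):
    """Build a softmax-cross-entropy loss (and its backpropagated gradient) for a 1-hidden-layer MLP."""
    n_in = X.shape[1]
    sizes = [(n_hidden, n_in), (n_hidden,), (n_classes, n_hidden), (n_classes,)]

    def unpack(w):
        parts, i = [], 0
        for s in sizes:
            k = int(np.prod(s))
            parts.append(w[i:i + k].reshape(s))
            i += k
        return parts

    def loss_and_grad(w):
        W1, b1, W2, b2 = unpack(w)
        H = np.tanh(X @ W1.T + b1)                       # hidden layer
        logits = H @ W2.T + b2
        logits = logits - logits.max(axis=1, keepdims=True)
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)                # softmax probabilities
        n = len(y)
        loss = -np.log(P[np.arange(n), y]).mean()
        # backpropagation of the cross-entropy loss
        dlogits = P.copy()
        dlogits[np.arange(n), y] -= 1
        dlogits /= n
        dW2, db2 = dlogits.T @ H, dlogits.sum(0)
        dZ1 = (dlogits @ W2) * (1 - H**2)                # tanh derivative
        dW1, db1 = dZ1.T @ X, dZ1.sum(0)
        grad = np.concatenate([dW1.ravel(), db1, dW2.ravel(), db2])
        return loss, grad

    n_params = sum(int(np.prod(s)) for s in sizes)
    return loss_and_grad, n_params

# Hypothetical usage with placeholder feature/label arrays X_train, y_train:
# loss_and_grad, n_params = make_mlp_loss(X_train, y_train, n_hidden=16, n_classes=4)
# w0 = 0.1 * np.random.default_rng(0).standard_normal(n_params)
# res = minimize(loss_and_grad, w0, jac=True, method='CG')   # nonlinear CG training
```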