176 research outputs found

    On methods for minimizing a function without calculating its derivatives

    Mixed nonderivative algorithms for unconstrained optimization

    A general technique is developed to restart nonderivative algorithms in unconstrained optimization. Application of the technique is shown to result in mixed algorithms that are considerably more robust than their component procedures. A general mixed algorithm is developed and its convergence is demonstrated. A uniform computational comparison is given for the new mixed algorithms and for a collection of procedures from the literature. (Abstract, page ii)
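
    As a minimal sketch of the restarting idea (not the thesis's actual algorithm), the example below mixes two off-the-shelf derivative-free procedures, SciPy's Nelder-Mead and Powell methods, restarting each from the best point found so far and stopping when a full round brings no improvement; the name mixed_minimize and its tolerances are assumptions for illustration.

        import numpy as np
        from scipy.optimize import minimize

        def mixed_minimize(f, x0, rounds=5, tol=1e-10):
            # Sketch of a 'mixed' derivative-free scheme: alternate two component
            # procedures, restarting each from the best point found so far.
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            for _ in range(rounds):
                improved = False
                for method in ("Nelder-Mead", "Powell"):  # component procedures
                    res = minimize(f, x, method=method)   # restart from current best point
                    if res.fun < fx - tol:
                        x, fx = res.x, res.fun
                        improved = True
                if not improved:
                    break                                 # a full round made no progress
            return x, fx

        # Example: the Rosenbrock function
        rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
        print(mixed_minimize(rosen, [-1.2, 1.0]))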

    New Direct Search Method for Unconstrained Function Optimization

    Electrical Engineering

    Scheduling physicians in an outpatient clinic modeled as a transient queue

    A feed-forward neural network approach for matrix computations

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. A new neural network approach for performing matrix computations is presented. The idea of this approach is to construct a feed-forward neural network (FNN) and then train it by matching a desired set of patterns; the solution of the problem is the converged weights of the FNN. Accordingly, unlike conventional FNN research, which concentrates on the external properties (mappings) of the networks, this study concentrates on the internal properties (weights) of the network. The present network is linear and its weights are usually strongly constrained; hence, a complicated overlapped network needs to be constructed. It should be noted, however, that the present approach depends heavily on the training algorithm of the FNN. Unfortunately, the available training methods, such as the original back-propagation (BP) algorithm, encounter many deficiencies when applied to matrix algebra problems, e.g., slow convergence due to an improper choice of learning rate (LR). Thus, this study focuses on the development of new, efficient, and accurate FNN training methods.

    One improvement suggested to alleviate the problem of LR choice is the use of a line search with the steepest descent method, namely bracketing with the golden-section method; this provides an optimal LR as training progresses. Another improvement proposed in this study is the use of conjugate gradient (CG) methods to speed up the training process of the neural network. The computational feasibility of these methods is assessed on two matrix problems: the LU-decomposition of both band and square ill-conditioned unsymmetric matrices, and the inversion of square ill-conditioned unsymmetric matrices. Two performance indices have been considered: learning speed and convergence accuracy. Extensive computer simulations have been carried out using the following training methods: the steepest descent with line search (SDLS) method, the conventional back-propagation (BP) algorithm, and CG methods, specifically the Fletcher-Reeves conjugate gradient (CGFR) method and the Polak-Ribière conjugate gradient (CGPR) method.

    The performance comparisons between these minimization methods demonstrate that the CG training methods give better convergence accuracy and are by far superior with respect to learning time, offering speed-ups of between 3 and 4 over SDLS depending on the severity of the error goal chosen and the size of the problem. Furthermore, when Powell's restart criteria are used with the CG methods, the problem of wrong convergence directions usually encountered in pure CG learning methods is alleviated. In general, CG methods with restarts have shown the best performance among all methods in training the FNN for LU-decomposition and matrix inversion. Consequently, it is concluded that CG methods, in particular the Polak-Ribière conjugate gradient method with Powell's restart criteria, are good candidates for training FNNs for matrix computations.
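
    The training scheme the abstract describes can be sketched generically: a Polak-Ribière conjugate-gradient loop whose learning rate comes from a golden-section line search, with Powell's restart test falling back to steepest descent, applied here to the matrix-inversion formulation (train weights W so that WA approximates I). This is a minimal sketch, assuming an illustrative line-search bracket [0, 1], the commonly used 0.2 constant in Powell's test, and a toy 2-by-2 matrix; none of these values come from the thesis.

        import numpy as np

        def golden_section(phi, a=0.0, b=1.0, tol=1e-6):
            # Golden-section search for the learning rate on a bracketed interval.
            g = (np.sqrt(5.0) - 1.0) / 2.0
            c, d = b - g * (b - a), a + g * (b - a)
            while b - a > tol:
                if phi(c) < phi(d):
                    b, d = d, c
                    c = b - g * (b - a)
                else:
                    a, c = c, d
                    d = a + g * (b - a)
            return (a + b) / 2.0

        def cg_pr_train(grad, loss, w0, iters=200, restart=0.2):
            # Polak-Ribiere CG with a golden-section line search for the learning
            # rate and Powell's restart test (constants here are illustrative).
            w = np.asarray(w0, dtype=float)
            g = grad(w)
            d = -g
            for _ in range(iters):
                lr = golden_section(lambda t: loss(w + t * d))
                w_new = w + lr * d
                g_new = grad(w_new)
                if np.linalg.norm(g_new) < 1e-8:
                    return w_new
                # Powell restart: successive gradients far from orthogonal
                if abs(g_new @ g) >= restart * (g_new @ g_new):
                    d = -g_new                              # fall back to steepest descent
                else:
                    beta = (g_new @ (g_new - g)) / (g @ g)  # Polak-Ribiere beta (clipped at 0)
                    d = -g_new + max(beta, 0.0) * d
                w, g = w_new, g_new
            return w

        # Toy use: train weights W so that W @ A ~= I (matrix inversion as least squares).
        A = np.array([[4.0, 1.0], [2.0, 3.0]])
        n = A.shape[0]
        loss = lambda w: 0.5 * np.sum((w.reshape(n, n) @ A - np.eye(n))**2)
        grad = lambda w: ((w.reshape(n, n) @ A - np.eye(n)) @ A.T).ravel()
        W = cg_pr_train(grad, loss, np.zeros(n * n)).reshape(n, n)
        print(np.linalg.norm(W @ A - np.eye(n)))  # residual should be near zero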

    From genetic algorithms to efficient optimization

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (leaves 51-53). By Deniz Yuret.

    Direction set based algorithms for adaptive least squares problems: improvements and innovations

    The main objective of this research is to provide mathematically tractable solutions to the adaptive filtering problem by formulating it as an adaptive least squares problem. This approach follows the work of Chen (1998) in his study of the direction-set (DS) based adaptive filtering algorithm. Through this formulation, we relate the DS algorithm to a class of projection methods. In particular, a simplified version of the algorithm, the Euclidean direction search (EDS) algorithm, is shown to be related to a class of iterative methods called relaxation methods. This finding allows us to improve the EDS algorithm into an accelerated EDS, in which an acceleration parameter is introduced to optimize the step size during each line search.
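
    The relaxation connection stated above can be sketched directly: on the least-squares normal equations, a cycle through the Euclidean (coordinate) directions with an exact line search is Gauss-Seidel iteration, and scaling each step by an acceleration parameter mu gives the relaxed, 'accelerated' variant. The sketch below is a minimal illustration under these assumptions; the name eds_step, the normal-equations form, and mu are illustrative, not the thesis's code.

        import numpy as np

        def eds_step(R, p, w, mu=1.0):
            # One sweep of Euclidean direction search on the least-squares normal
            # equations R w = p (R = X^T X, p = X^T b). With mu = 1 this is exactly
            # Gauss-Seidel; mu != 1 plays the role of the acceleration (relaxation)
            # parameter in the accelerated EDS idea.
            n = len(w)
            for i in range(n):                 # Euclidean direction e_i
                g_i = R[i] @ w - p[i]          # directional derivative along e_i
                alpha = -g_i / R[i, i]         # exact line-search step
                w[i] += mu * alpha
            return w

        # Toy adaptive least-squares problem
        rng = np.random.default_rng(0)
        X = rng.standard_normal((100, 4))
        w_true = np.array([1.0, -2.0, 0.5, 3.0])
        b = X @ w_true
        R, p = X.T @ X, X.T @ b
        w = np.zeros(4)
        for _ in range(50):
            w = eds_step(R, p, w, mu=1.0)
        print(np.round(w, 4))                  # should approach w_true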