10 research outputs found

    Implementation of gradient methods for optimization of underage costs in aviation industry

    Underage costs are not easily quantifiable in spare parts management. These costs occur when a spare part is required and none are available in inventory. This paper provides another approach to underage cost optimization for subassemblies and assemblies in the aviation industry. The quantity of spare parts is determined using a method for airplane spare parts forecasting based on Rayleigh's model. From that quantity, the underage cost per unit is determined using the newsvendor model. Then, by implementing a transformed accelerated double-step-size gradient method, the underage costs for spare subassemblies and assemblies in the airline industry are optimized.
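
    The newsvendor model mentioned above ties a stocking quantity to an underage cost through the critical ratio F(Q*) = Cu / (Cu + Co). As a hedged illustration only, the sketch below inverts that ratio to back out an implied per-unit underage cost Cu from a given quantity Q; the Rayleigh-distributed demand, the overage cost Co, and all numbers are assumptions for the example, not values from the paper.

        # Hedged sketch: inverting the newsvendor critical ratio to back out an
        # implied per-unit underage cost Cu, given a stocking quantity Q and an
        # assumed demand distribution.  The Rayleigh demand model and all numbers
        # are illustrative, not taken from the paper.
        from scipy.stats import rayleigh

        def implied_underage_cost(Q, overage_cost, demand_dist):
            """At the newsvendor optimum, F(Q) = Cu / (Cu + Co); solve for Cu."""
            service_level = demand_dist.cdf(Q)            # F(Q): probability that demand <= Q
            return overage_cost * service_level / (1.0 - service_level)

        demand = rayleigh(scale=20.0)                     # hypothetical demand for one subassembly
        Cu = implied_underage_cost(Q=35.0, overage_cost=120.0, demand_dist=demand)
        print(f"implied underage cost per unit: {Cu:.2f}")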

    A Transformation of Accelerated Double Step Size Method for Unconstrained Optimization

    A reduction of the original double step size iteration to a single step length scheme is derived under a proposed condition that relates the two step lengths in the accelerated double step size gradient descent scheme. The proposed transformation is numerically tested. The obtained results confirm substantial progress in comparison with the single step size accelerated gradient descent method defined in the classical way, with respect to all analyzed characteristics: number of iterations, CPU time, and number of function evaluations. Linear convergence of the derived method is proved.
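
    The abstract above concerns collapsing a two-step-length gradient iteration into a single-step-length one. The sketch below is only a generic single-step-length gradient descent with Armijo backtracking on a test quadratic; it illustrates the kind of single-step-size baseline being compared against and does not reproduce the paper's accelerated scheme or the condition relating the two step lengths.

        # Generic single-step-length gradient descent with Armijo backtracking
        # (illustrative baseline only; not the paper's transformed accelerated scheme).
        import numpy as np

        def gradient_descent_single_step(f, grad, x0, tol=1e-6, max_iter=1000):
            x = np.asarray(x0, dtype=float)
            for k in range(max_iter):
                g = grad(x)
                if np.linalg.norm(g) < tol:
                    break
                t = 1.0                                            # single effective step length t_k
                while f(x - t * g) > f(x) - 1e-4 * t * (g @ g):    # Armijo sufficient-decrease test
                    t *= 0.5
                x = x - t * g
            return x, k

        # Simple convex quadratic test function.
        A = np.diag([1.0, 10.0])
        f = lambda x: 0.5 * x @ A @ x
        grad = lambda x: A @ x
        print(gradient_descent_single_step(f, grad, x0=np.array([3.0, -2.0])))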

    Application of mathematical models as an information technology tool for the assessment of spare parts inventory in the aviation industry


    Choice of parameters in gradient methods for the unconstrained optimization problems

    The problem under consideration is the unconstrained optimization problem. Many different methods have been developed to solve optimization problems; the investigation here is motivated by the need for methods that converge fast. The main goal is a systematization of known results, together with a theoretical and numerical analysis of the possibilities of introducing parameters into gradient methods. First, the minimization of a convex function of several variables is considered. This problem is solved here without computing the Hessian, which is particularly relevant for large-scale systems and for optimization problems in which neither the exact value of the objective function nor the exact gradient is available. Part of the motivation also lies in problems in which the objective function is the result of simulations. Numerical results, presented in Chapter 6, show that introducing a certain parameter can be useful, i.e., it accelerates the optimization method in question. A new hybrid conjugate gradient method is also presented, in which the conjugate gradient parameter is a convex combination of two known conjugate gradient parameters. The first chapter describes the motivation and the basic concepts needed for the remaining chapters. The second chapter surveys some first-order and second-order gradient methods. The fourth chapter surveys basic concepts and results related to conjugate gradient methods. These chapters review known results, while the original contribution is presented in Chapters 3, 5, and 6. The third chapter describes a modification of a particular method that uses a randomly chosen multiplicative parameter; linear convergence of the resulting method is proved. The fifth chapter contains original results on conjugate gradient methods, namely the new hybrid conjugate gradient method obtained as a convex combination of two known conjugate gradient methods. The sixth chapter presents numerical experiments, performed on a set of test functions, for the methods from Chapters 3 and 5. All considered algorithms are implemented in MATHEMATICA; the comparison criterion is CPU time.
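
    The hybrid conjugate gradient idea above takes the conjugate gradient parameter as a convex combination of two known parameters. As a hedged sketch only, the fragment below mixes the Fletcher-Reeves and Polak-Ribiere-Polyak parameters with a weight theta in [0, 1]; this particular pair and a fixed weight are illustrative assumptions, not necessarily the combination studied in the thesis.

        # Hedged sketch of a hybrid conjugate gradient parameter formed as a convex
        # combination of two classical parameters (FR and PRP chosen for illustration;
        # the thesis's specific pair and rule for choosing theta are not reproduced here).
        import numpy as np

        def hybrid_cg_beta(g_new, g_old, theta=0.5):
            """beta_hybrid = theta * beta_FR + (1 - theta) * beta_PRP, with 0 <= theta <= 1."""
            beta_fr = (g_new @ g_new) / (g_old @ g_old)
            beta_prp = (g_new @ (g_new - g_old)) / (g_old @ g_old)
            return theta * beta_fr + (1.0 - theta) * beta_prp

        def next_direction(d_old, g_new, g_old, theta=0.5):
            """Conjugate gradient update d_{k+1} = -g_{k+1} + beta_k * d_k."""
            return -g_new + hybrid_cg_beta(g_new, g_old, theta) * d_old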

    Evaluation of different alternatives for determining confidence regions of parameter estimates

    Parameter estimation consists of determining the optimal set of parameter values for a specific model, i.e., the values that minimize the deviation between the experimentally measured variables and those predicted by the model. Since the experimentally measured values are subject to uncertainty, so are the estimated values, making it necessary to quantify their uncertainty. For models with a single parameter, confidence ranges or intervals can be defined that contain values statistically equal to the optimal value. For models with more than one parameter, the most appropriate approach is to determine confidence regions, since the likely presence of correlation between the estimated parameters invalidates an analysis based on confidence intervals alone. In this work, several methodologies for determining confidence regions were evaluated. First, the impact of using the Gauss-Newton approximation in the calculation of elliptical confidence regions was assessed, in comparison with the calculation of the complete Hessian matrix. The use of the approximation proved justifiable in the cases studied, since no significant differences were observed between the regions obtained with the two methods. The quality of the confidence regions obtained by the likelihood ratio method was also evaluated. The Contour method was used, which traverses the boundary of the confidence region and determines its outline almost exactly. This method works well for the large majority of models, but presents some difficulties when the regions have abrupt variations in their geometry. The Profile method was also used, which consists of determining confidence intervals based on the likelihood ratio, followed by interpolation to determine the confidence regions. However, this methodology was unable to delimit confidence regions with more complex geometry. Finally, the Bootstrap method was used, which consists of estimating new optimal values from normally distributed perturbations of the experimental variables. The regions generated by this method consist of points concentrated mainly in the vicinity of the optimal point, although more dispersed points also occur. Although the outline of the confidence region is not evident, these points can be used to assess the probability distribution of the parameter estimates.
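
    The bootstrap procedure described above re-estimates the parameters from normally perturbed copies of the measurements and uses the resulting cloud of estimates to characterise their joint distribution. The sketch below illustrates that idea on a deliberately simple exponential model; the model, the noise level, and the number of resamples are assumptions for the example, not values from the thesis.

        # Hedged sketch of the bootstrap idea: perturb the measured responses with
        # normally distributed noise, re-estimate the parameters for each perturbed
        # data set, and inspect the cloud of re-estimated parameters.  The exponential
        # model and noise level are illustrative assumptions.
        import numpy as np
        from scipy.optimize import curve_fit

        def model(x, a, b):
            return a * np.exp(-b * x)

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 5.0, 30)
        y_meas = model(x, 2.0, 0.7) + rng.normal(scale=0.05, size=x.size)  # synthetic "measurements"

        theta_hat, _ = curve_fit(model, x, y_meas, p0=[1.0, 1.0])          # reference optimum

        boot = []
        for _ in range(500):
            y_pert = y_meas + rng.normal(scale=0.05, size=x.size)          # perturbed data set
            theta_b, _ = curve_fit(model, x, y_pert, p0=theta_hat)
            boot.append(theta_b)
        boot = np.array(boot)                    # each row is one re-estimated pair (a, b)
        print("bootstrap mean:", boot.mean(axis=0), "spread:", boot.std(axis=0))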

    Studying the rate of convergence of gradient optimisation algorithms via the theory of optimal experimental design

    The most common class of methods for solving quadratic optimisation problems is the class of gradient algorithms, the most famous of which is the Steepest Descent algorithm. The development of a particular gradient algorithm, the Barzilai-Borwein algorithm, has sparked a great deal of research in the area in recent years, and many algorithms now exist with faster rates of convergence than the Steepest Descent algorithm. The tools for effectively analysing and comparing the asymptotic rates of convergence of gradient algorithms are, however, limited, and so it is somewhat unclear from the literature which algorithms possess the faster rates of convergence. In this thesis, methodology is developed to enable better analysis of the asymptotic rates of convergence of gradient algorithms applied to quadratic optimisation problems. This methodology stems from a link with the theory of optimal experimental design. It is established that gradient algorithms can be related to algorithms for constructing optimal experimental designs for linear regression models. Furthermore, the asymptotic rates of convergence of these gradient algorithms can be expressed through the asymptotic behaviour of multiplicative algorithms for constructing optimal experimental designs. The described connection to optimal experimental design has also been used to motivate the creation of several new gradient algorithms that would not otherwise have been intuitively thought of. The asymptotic rates of convergence of these algorithms are studied extensively, and insight is given into how some gradient algorithms are able to converge faster than others. It is demonstrated that the worst rates are obtained when the corresponding multiplicative procedure for updating the designs converges to the optimal design. Simulations reveal that the asymptotic rates of convergence of some of these new algorithms compare favourably with those of existing gradient-type algorithms such as the Barzilai-Borwein algorithm.
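
    The Barzilai-Borwein algorithm referred to above chooses each step length from the two most recent iterates and gradients. The sketch below applies the standard BB1 step, alpha_k = (s^T s)/(s^T y) with s = x_k - x_{k-1} and y = g_k - g_{k-1}, to a small convex quadratic; the test problem and starting point are illustrative.

        # Minimal sketch of the Barzilai-Borwein (BB1) gradient method on a convex
        # quadratic f(x) = 0.5 x^T A x - b^T x.  The BB1 step formula is standard;
        # the particular A, b, and starting point are illustrative.
        import numpy as np

        def barzilai_borwein(A, b, x0, tol=1e-8, max_iter=500):
            x = np.asarray(x0, dtype=float)
            g = A @ x - b                        # gradient of the quadratic
            alpha = 1.0                          # initial step length
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                x_new = x - alpha * g
                g_new = A @ x_new - b
                s, y = x_new - x, g_new - g
                alpha = (s @ s) / (s @ y)        # BB1 step for the next iteration
                x, g = x_new, g_new
            return x

        A = np.diag([1.0, 5.0, 25.0])            # ill-conditioned test problem
        b = np.array([1.0, 1.0, 1.0])
        print(barzilai_borwein(A, b, x0=np.zeros(3)))   # should approach A^{-1} b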
