A recent approach in the construction of nonlinear optimization software has been to allow an algorithm to choose between two possible models of the objective function at each iteration. The model-switching algorithm NL2SOL of Dennis, Gay, and Welsch and the hybrid algorithms of Al-Baali and Fletcher have proven highly effective in practice. Although not explicitly formulated as multi-model methods, many other algorithms implicitly perform a model switch under certain circumstances (e.g., resetting a secant model to the exact value of the Hessian). We present a trust region formulation for multi-model methods which allows the efficient incorporation of an arbitrary number of models. Global convergence can be shown for three classes of algorithms under different assumptions on the models. First, essentially any multi-model algorithm is globally convergent if each of the models is sufficiently well behaved. Second, algorithms based on the central feature of the NL2SOL switching system are globally convergent if one model is well behaved and each other model obeys a "sufficient predicted decrease" condition. No requirement is made that these alternate models be quadratic. Third, algorithms of the second type which directly enforce the "sufficient predicted decrease" condition are globally convergent if a single model is sufficiently well behaved.
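To make the abstract's idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm) of a multi-model trust-region iteration in one dimension. Each candidate Hessian approximation yields a quadratic model; the step with the largest predicted decrease is chosen, loosely in the spirit of the NL2SOL switching criterion, and the trust region radius is updated by the standard actual-versus-predicted ratio test. All names (`multi_model_tr`, `eta`, `delta0`) are illustrative assumptions.

```python
import math

def multi_model_tr(f, grad, hessians, x0, delta0=1.0, eta=0.1,
                   tol=1e-8, max_iter=200):
    """Hypothetical 1-D multi-model trust-region sketch.

    At each iterate, one quadratic model is built per Hessian
    approximation in `hessians`; each model's minimizer is clipped to
    the trust region, and the candidate step with the largest
    predicted decrease is kept (a switching rule loosely inspired by
    NL2SOL).  The step is accepted or rejected, and the radius
    adjusted, by the usual actual-vs-predicted ratio test.
    """
    x, delta = x0, delta0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        best_s, best_pred = 0.0, 0.0
        for hess in hessians:
            b = hess(x)
            # Unconstrained model minimizer if convex; otherwise step
            # downhill to the trust-region boundary.
            s = -g / b if b > 0 else -math.copysign(delta, g)
            s = max(-delta, min(delta, s))
            # Predicted decrease of this quadratic model at step s.
            pred = -(g * s + 0.5 * b * s * s)
            if pred > best_pred:
                best_s, best_pred = s, pred
        if best_pred <= 0:
            delta *= 0.5          # no model predicts decrease: shrink
            continue
        rho = (f(x) - f(x + best_s)) / best_pred
        if rho >= eta:
            x += best_s           # accept the step
            if rho > 0.75:
                delta *= 2.0      # model fits well: expand the region
        else:
            delta *= 0.5          # poor agreement: shrink the region
    return x
```

For example, minimizing f(x) = (x - 2)^2 with two alternate Hessian approximations (the exact value 2 and a deliberately poor constant 1) converges to x = 2; the switching rule simply picks whichever model predicts the larger decrease at each iterate.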