Implementation of a fixing strategy and parallelization in a recent global optimization method
The Electromagnetism-like Mechanism (EM) heuristic is a population-based stochastic global optimization method inspired by the attraction-repulsion mechanism of electromagnetism theory. EM was originally proposed for solving continuous global optimization problems with bound constraints, and it has been shown that the algorithm performs quite well compared to some other global optimization methods. In this work, we propose two extensions to improve the performance of the original algorithm. First, we introduce a fixing strategy that provides a mechanism for escaping local minima and thus improves the effectiveness of the search. Second, we use the proposed fixing strategy to parallelize the algorithm and carry out a cooperative parallel search of the solution space. We then evaluate the performance of our approach under three criteria: the quality of the solutions, the number of function evaluations, and the number of local minima obtained. Test problems are generated by an algorithm suggested in the literature that builds test problems with varying degrees of difficulty. Finally, we benchmark our results against those of the Knitro solver with the multistart option set.
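The attraction-repulsion mechanics described above can be sketched in a few lines. This is a minimal illustration of the general EM scheme only, not the fixing strategy or parallel variant proposed in the paper; the charge formula follows the style of the original EM proposal, and the population size, step rule, and quadratic test function are all illustrative choices.

```python
import numpy as np

# Minimal sketch of the general EM heuristic for bound-constrained
# minimization. All parameters here are illustrative assumptions.
def em_search(f, lo, hi, pop=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    x = rng.uniform(lo, hi, (pop, dim))
    for _ in range(iters):
        fx = np.array([f(p) for p in x])
        best = int(np.argmin(fx))
        denom = np.sum(fx - fx[best]) or 1.0
        q = np.exp(-dim * (fx - fx[best]) / denom)   # better point -> larger charge
        F = np.zeros_like(x)
        for i in range(pop):
            for j in range(pop):
                if i == j:
                    continue
                d = x[j] - x[i]
                r2 = d @ d + 1e-12
                # attraction toward better points, repulsion from worse ones
                F[i] += (d if fx[j] < fx[i] else -d) * q[i] * q[j] / r2
        for i in range(pop):
            if i == best:                # keep the incumbent untouched
                continue
            n = np.linalg.norm(F[i])
            if n > 0.0:
                step = rng.uniform() * (hi - lo)
                x[i] = np.clip(x[i] + step * F[i] / n, lo, hi)
    fx = np.array([f(p) for p in x])
    return x[np.argmin(fx)], float(fx.min())

xbest, fbest = em_search(lambda v: float(np.sum(v ** 2)), [-5, -5], [5, 5])
```

Because the incumbent best point is never moved, the best objective value is non-increasing across iterations, which is the elitism that the paper's fixing strategy builds on.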
Input/Output of Ab-initio Nuclear Structure Calculations for Improved Performance and Portability
Many modern scientific applications rely on highly computation-intensive calculations. However, most applications do not pay as much attention to the role that input/output operations can play in performance and portability. Parallelizing the input/output of large files can significantly improve the performance of parallel applications where sequential I/O is a bottleneck. A proper choice of I/O library also offers scope for making input/output operations portable across different architectures. Thus, using parallel I/O libraries to organize the I/O of large data files offers great scope for improving the performance and portability of applications. In particular, sequential I/O has been identified as a bottleneck for the highly scalable MFDn (Many Fermion Dynamics for nuclear structure) code performing ab-initio nuclear structure calculations. We develop interfaces and parallel I/O procedures to use a well-known parallel I/O library in MFDn. As a result, we gain efficient I/O of large datasets along with portability and ease of use in down-stream processing. Even in situations where the amount of data to be written is not huge, proper use of input/output operations can boost the performance of scientific applications. Application checkpointing offers substantial performance improvement and flexibility at the cost of a negligible amount of I/O to disk. Checkpointing saves and resumes application state in such a manner that, in most cases, the application is unaware that there has been an interruption to its execution. This helps save a large amount of previously completed work and lets execution continue where it left off; the small amount of I/O involved provides substantial time savings by offering restart/resume capability to applications. The need for checkpointing in the optimization code NEWUOA has been identified, and checkpoint/restart capability has been implemented in NEWUOA using simple file I/O.
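The checkpoint/restart pattern via plain file I/O can be illustrated as follows. The abstract does not show the NEWUOA code, so the file name, state layout, and loop below are hypothetical, a generic sketch of the idea rather than the actual implementation.

```python
import json, os

# Hypothetical illustration of checkpoint/restart via plain file I/O.
# File name and state layout are made up for this sketch.
CKPT = "optimizer_state.json"

def save_checkpoint(state, path=CKPT):
    tmp = path + ".tmp"
    with open(tmp, "w") as fh:
        json.dump(state, fh)     # write the full state
    os.replace(tmp, path)        # atomic rename: a crash never leaves a torn file

def load_checkpoint(path=CKPT):
    if os.path.exists(path):
        with open(path) as fh:
            return json.load(fh)
    return None

# Resumable loop: every iteration persists its result, so a restarted run
# picks up where the interrupted one stopped, skipping completed work.
state = load_checkpoint() or {"iteration": 0, "best": 3600}
for k in range(state["iteration"], 100):
    value = (k - 60) ** 2        # stand-in for one expensive evaluation
    state = {"iteration": k + 1, "best": min(state["best"], value)}
    save_checkpoint(state)
```

Writing to a temporary file and renaming it over the old checkpoint is what makes the interruption invisible to the application: at any instant the checkpoint on disk is a complete, consistent state.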
Parallel algorithms for nonlinear optimization
Parallel algorithm design is a very active research topic in optimization, as parallel computer architectures have recently become easily accessible. This thesis presents an approach for designing parallel nonlinear programming algorithms. The main idea is to benefit from parallelization in designing new algorithms rather than considering direct parallelizations of existing methods. We give a general framework following this approach, and then give distinct algorithms that fit into the framework. The example algorithms we have designed either use procedures of existing methods within a multistart scheme, or they are completely new, inherently parallel algorithms. In doing so, we try to show how it is possible to achieve parallelism in algorithm structure (at different levels) so that the resulting algorithms have good solution performance in terms of robustness, quality of steps, and scalability. We complement our discussion with convergence proofs of the proposed algorithms.
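A multistart scheme of the kind mentioned can be sketched as follows. This is a generic version, not the thesis's algorithm: the local solver is hand-written fixed-step gradient descent, concurrency is thread-based, and the convex test function is an illustrative assumption.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Generic multistart sketch: independent local searches launched concurrently
# from random starting points; the best local minimum found is returned.
def local_descent(f, grad, x0, lr=0.1, steps=200):
    x = np.asarray(x0, float)
    for _ in range(steps):
        x = x - lr * grad(x)     # plain fixed-step gradient descent
    return x, float(f(x))

def multistart(f, grad, lo, hi, dim, n_starts=8, seed=0):
    rng = np.random.default_rng(seed)
    starts = rng.uniform(lo, hi, (n_starts, dim))
    with ThreadPoolExecutor() as ex:           # one local search per task
        results = list(ex.map(lambda s: local_descent(f, grad, s), starts))
    return min(results, key=lambda r: r[1])    # best of all local minima

# Smooth convex test function with its minimum at x = (1, 1, 1)
f = lambda x: float(np.sum((x - 1.0) ** 2))
g = lambda x: 2.0 * (x - 1.0)
xbest, fbest = multistart(f, g, -5.0, 5.0, dim=3)
```

The starts are fully independent, so this level of parallelism scales with the number of starting points; the cooperative schemes in the thesis go further by letting the searches exchange information.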
Modified Chebyshev-Picard Iteration Methods for Solution of Initial Value and Boundary Value Problems
The solution of initial value problems (IVPs) provides the evolution of dynamic system state history for given initial conditions. Solving boundary value problems (BVPs) requires finding the system behavior where elements of the states are defined at different times. This dissertation presents a unified framework that applies modified Chebyshev-Picard iteration (MCPI) methods for solving both IVPs and BVPs.
Existing methods for solving IVPs and BVPs have not been very successful in exploiting parallel computation architectures. One important reason is that most of the integration methods implemented on parallel machines are only modified versions of forward integration approaches, which are typically poorly suited for parallel computation.
The proposed MCPI methods are inherently parallel algorithms. Using Chebyshev polynomials, it is straightforward to distribute the computation of force functions and polynomial coefficients to different processors. Combining Chebyshev polynomials with Picard iteration, MCPI methods iteratively refine estimates of the solutions until the iteration converges. The developed vector-matrix form makes MCPI methods computationally efficient.
The power of MCPI methods for solving IVPs is illustrated through a small perturbation from the sinusoid motion problem and satellite motion propagation problems. Compared with a Runge-Kutta 4-5 forward integration method implemented in MATLAB, MCPI methods generate solutions with better accuracy as well as orders-of-magnitude speedups, prior to parallel implementation. Modifying the algorithm to do double integration for second-order systems and using orthogonal polynomials to approximate position states lead to additional speedups. Finally, introducing perturbation motions relative to a reference motion results in further speedups.
The advantages of using MCPI methods to solve BVPs are demonstrated by addressing the classical Lambert's problem and an optimal trajectory design problem. MCPI methods generate solutions that satisfy both dynamic equation constraints and boundary conditions with high accuracy. Although the convergence of MCPI methods in solving BVPs is not guaranteed, using the proposed nonlinear transformations, linearization approach, or correction control methods enlarges the convergence domain.
Parallel realization of MCPI methods is implemented using a graphics card that provides a parallel computation architecture. The benefit of the parallel implementation is demonstrated using several example problems. Larger speedups are achieved when either force functions become more complicated or higher-order polynomials are used to approximate the solutions.
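A bare-bones version of the Chebyshev-Picard idea for a scalar IVP can be written with NumPy's Chebyshev utilities. This is a sketch under stated assumptions: the node count, tolerance, and test equation are illustrative, and the dissertation's vector-matrix formulation, second-order machinery, and BVP extensions are omitted.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sketch of Chebyshev-Picard iteration for a scalar IVP
# dx/dt = f(t, x), x(t0) = x0 on [t0, tf].
def cpi(f, x0, t0, tf, deg=20, iters=50, tol=1e-12):
    tau = np.cos(np.pi * np.arange(deg + 1) / deg)   # Chebyshev-Lobatto nodes on [-1, 1]
    t = 0.5 * (tf - t0) * (tau + 1.0) + t0           # nodes mapped to [t0, tf]
    x = np.full_like(t, float(x0))                   # initial guess: constant
    for _ in range(iters):
        c = C.chebfit(tau, f(t, x), deg)             # fit f along current estimate
        ci = C.chebint(c) * 0.5 * (tf - t0)          # term-wise integral, scaled by dt/dtau
        # Picard update: x(t) = x0 + integral of f from t0 to t
        x_new = x0 + C.chebval(tau, ci) - C.chebval(-1.0, ci)
        done = np.max(np.abs(x_new - x)) < tol
        x = x_new
        if done:
            break
    return t, x

# dx/dt = -x, x(0) = 1 has the solution exp(-t); node 0 corresponds to t = tf
t, x = cpi(lambda t, x: -x, 1.0, 0.0, 1.0)
```

The parallelism is visible in the structure: the evaluations of f at all nodes are independent within each sweep, which is exactly what the dissertation distributes across processors.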
Efficient Nonlinear Optimization with Rigorous Models for Large Scale Industrial Chemical Processes
Large-scale nonlinear programming (NLP) has proven to be an effective framework for obtaining profit gains through optimal process design and operations in chemical engineering. While the classical SQP and interior point methods have been successfully applied to solve many optimization problems, the focus of both academia and industry on larger and more complicated problems requires further development of numerical algorithms that provide improved computational efficiency.
The primary purpose of this dissertation is to develop effective problem formulations and advanced numerical algorithms for the efficient solution of these challenging problems. As problem sizes increase, there is a need for tailored algorithms that can exploit problem-specific structure. Furthermore, computer chip manufacturers are no longer focusing on increased clock speeds, but rather on hyperthreading and multi-core architectures. Therefore, to see continued performance improvement, we must focus on algorithms that can exploit emerging parallel computing architectures.
In this dissertation, we develop an advanced parallel solution strategy for nonlinear programming problems with block-angular structure. The effectiveness of this strategy and of modern off-the-shelf tools is demonstrated on a wide range of problem classes.
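Block-angular structure is typically exploited through a Schur-complement decomposition, in which each block solve is independent and therefore parallelizable. The sketch below shows only the linear-algebra kernel on a random, well-conditioned system; it is a generic illustration of the technique, not the dissertation's solver, and all dimensions are made up.

```python
import numpy as np

# Schur-complement solve of a block-angular linear system:
#   A_i x_i + B_i y = b_i   for each block i,
#   D y + sum_i C_i x_i = d    (coupling equations in y).
rng = np.random.default_rng(1)
nb, n, m = 3, 4, 2                      # blocks, block size, coupling size
A = [rng.standard_normal((n, n)) + 4 * np.eye(n) for _ in range(nb)]
B = [rng.standard_normal((n, m)) for _ in range(nb)]
C = [rng.standard_normal((m, n)) for _ in range(nb)]
D = rng.standard_normal((m, m)) + 4 * np.eye(m)
b = [rng.standard_normal(n) for _ in range(nb)]
d = rng.standard_normal(m)

# Per-block solves: embarrassingly parallel in a real implementation
AinvB = [np.linalg.solve(A[i], B[i]) for i in range(nb)]
Ainvb = [np.linalg.solve(A[i], b[i]) for i in range(nb)]

# Small dense Schur complement in the coupling variables y
S = D - sum(C[i] @ AinvB[i] for i in range(nb))
r = d - sum(C[i] @ Ainvb[i] for i in range(nb))
y = np.linalg.solve(S, r)
x = [Ainvb[i] - AinvB[i] @ y for i in range(nb)]  # independent back-solves

# Check both sets of equations of the original monolithic system
residual = max(np.linalg.norm(A[i] @ x[i] + B[i] @ y - b[i]) for i in range(nb))
coupling = np.linalg.norm(D @ y + sum(C[i] @ x[i]) for i in range(nb) if False) if False else \
           np.linalg.norm(D @ y + sum(C[i] @ x[i] for i in range(nb)) - d)
```

Only the small Schur system in y is solved serially; everything indexed by block runs in parallel, which is why speedup grows with the number of blocks (scenarios or periods).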
Here, we treat optimal design, optimal operation, dynamic optimization, and parameter estimation. Two case studies (air separation units and heat-integrated columns) are investigated to address design under uncertainty with rigorous models.
For optimal operation, this dissertation takes cryogenic air separation units as a primary case study and focuses on formulations for handling uncertain product demands, contractual constraints on customer satisfaction levels, and variable power pricing. Multiperiod formulations provide operating plans that consider inventory to meet customer demands and improve profits.
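A toy multiperiod plan conveys the idea: produce against time-varying power prices, carry inventory, and meet per-period demand at minimum cost. The greedy assignment below is only adequate for this simplified setting (uncapacitated, costless inventory, all numbers invented); the dissertation works with full multiperiod NLP formulations instead.

```python
# Toy multiperiod operating plan: a plant with fixed per-period capacity
# produces against time-varying power prices and carries inventory to meet
# per-period demand. Each unit of demand in period t is assigned to the
# cheapest period s <= t with spare capacity (greedy heuristic).
def plan(prices, demands, capacity):
    T = len(prices)
    production = [0] * T
    for t in range(T):                       # satisfy demand period by period
        remaining = demands[t]
        # candidate production periods: now or earlier, cheapest first
        for s in sorted(range(t + 1), key=lambda s: prices[s]):
            take = min(remaining, capacity - production[s])
            production[s] += take
            remaining -= take
            if remaining == 0:
                break
        if remaining:
            raise ValueError("infeasible: cumulative capacity too small")
    cost = sum(p * q for p, q in zip(prices, production))
    return production, cost

prod, cost = plan(prices=[3, 1, 4, 1, 5], demands=[2, 2, 2, 2, 2], capacity=4)
```

In this instance all production shifts into the two cheap periods, with inventory bridging the expensive ones, which is precisely the economic behavior multiperiod formulations capture.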
In the area of dynamic optimization, optimal reference trajectories are determined for load changes in an air separation process. A multiscenario programming formulation is again used, this time with large-scale discretized dynamic models.
Finally, to emphasize a different decomposition approach, we address a problem with significant spatial complexity. Unknown water demands within a large-scale, city-wide distribution network are estimated. This problem provides a different decomposition mechanism than the multiscenario or multiperiod problems; nevertheless, our parallel approach provides effective speedup.
Mixed optimization strategies in Process Engineering – Application to the design of batch plants
The design of batch plants generally involves solving mixed-variable nonlinear optimization problems. The objective of this work is to propose a suitable methodology for their treatment by evaluating the performance of two deterministic methods from the GAMS environment and a genetic algorithm (GA) on a set of examples of increasing complexity. With the chosen Mathematical Programming formulation, the numerical results confirm the efficiency of the Branch & Bound method. The optimal solutions provide a reference for setting appropriate encoding and constraint-handling procedures within the GA. The GA's performance is thus very satisfactory, and remains valid for the recurrent case of problems in which the criterion is computed by a simulator. The initial model is then modified to treat the same problems with purely discrete variables. The GA remains effective, while the deterministic methods are overwhelmed by the combinatorics of the problems. The strategy is finally validated on a bioprocess example formulated in a similar way.
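The encoding and constraint-handling choices discussed above presuppose a standard GA skeleton, which might look like the sketch below. Operators, parameters, and the toy discrete objective are illustrative assumptions, not those tuned in the thesis.

```python
import random

# Minimal genetic algorithm for a purely discrete problem: truncation
# selection, one-point crossover, per-gene mutation. All parameters here
# are illustrative, not those tuned in the work.
def ga(fitness, choices, length, pop_size=40, gens=100, pmut=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(choices) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness)          # best first (minimization)
        parents = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)         # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(length):                # per-gene mutation
                if rng.random() < pmut:
                    child[i] = rng.choice(choices)
            children.append(child)
        pop = parents + children                   # elitist replacement
    best = min(pop, key=fitness)
    return best, fitness(best)

# Example: choose discrete unit counts so their total hits a target of 10
best, val = ga(lambda g: abs(sum(g) - 10), choices=[0, 1, 2, 3], length=8)
```

Because the chromosome is a list of discrete choices, constraint handling can be done either in the decoder (as the thesis's encoding procedures suggest) or by penalizing infeasibility inside the fitness function.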
Nonlinear Optimization and Parallel Computing
New computational technologies are having a very strong influence on numerical optimization, in several different ways. Many researchers have been stimulated by the need either to adapt existing numerical techniques to the new parallel architectures or to devise completely new parallel solution approaches. A mini-symposium on Parallel Computing in Nonlinear Optimization was held in Naples, Italy, in September 2001, during the International Conference ParCo2001, to bring together researchers active in this field and to discuss and share their findings. Some of the papers presented during the mini-symposium, as well as additional contributions from other researchers, are collected in this special issue. Two different trends, well representative of most current research activity, can be identified. First, there is an attempt to encapsulate parallel linear algebra software and algorithms into optimization codes, particularly codes implementing interior point strategies, for which the linear algebra issues are very critical; second, there is an effort to devise new parallel solution strategies in global optimization, for either specific or general-purpose problems, motivated by their large size and combinatorial nature. In the present paper we review the literature on these trends and classify the contributed papers within this framework.