
    The True Destination of EGO is Multi-local Optimization

    Efficient global optimization (EGO) is a popular algorithm for the optimization of expensive multimodal black-box functions. One important reason for its popularity is its theoretical guarantee of global convergence. However, because evaluation budgets in expensive optimization are very small, these asymptotic properties play only a minor role, and the algorithm sometimes comes off badly in experimental comparisons. Many alternative variants have therefore been proposed over the years. In this work, we show experimentally that the algorithm instead has its strength in a setting where multiple optima are to be identified.
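    EGO's sampling behaviour can be illustrated with the expected-improvement criterion it is built around. The sketch below (plain Python, minimization convention; the numbers are illustrative, not from the paper) shows how a candidate with high surrogate uncertainty can out-score one with a slightly better predicted mean, the same exploratory behaviour that makes the method visit several basins.

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Expected improvement (minimization) of a candidate whose surrogate
    model predicts mean mu and standard deviation sigma, given the best
    objective value f_min observed so far."""
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)          # no uncertainty left
    z = (f_min - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_min - mu) * cdf + sigma * pdf

# An uncertain candidate can out-score one with a better predicted mean:
ei_uncertain = expected_improvement(mu=0.9, sigma=0.5, f_min=1.0)
ei_certain = expected_improvement(mu=0.8, sigma=0.0, f_min=1.0)
print(ei_uncertain > ei_certain)  # True: uncertainty is rewarded
```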

    Scalarized Preferences in Multi-objective Optimization

    Multi-objective optimization problems have no solution that is optimal in every objective function. The difficulty of such problems lies in finding a compromise solution that satisfies the preferences of the decision maker who implements the compromise. Scalarization, the mapping of the vector of objective values to a single real number, solves these problems by identifying a single solution as the global preference optimum. However, scalarization methods generate no additional information about other compromise solutions that might change the decision maker's preferences with respect to the global optimum. To address this problem, this dissertation provides a theoretical and an algorithmic analysis of scalarized preferences. The theoretical analysis develops an order framework that characterizes preferences as problem transformations defining preferred subsets of the Pareto front; scalarization is represented in this framework as a transformation of the objective set. Furthermore, axioms are proposed that capture desirable properties of scalarization functions, and it is shown under which conditions existing scalarization functions satisfy them. The algorithmic analysis characterizes preferences by the result an optimization algorithm generates. Two new paradigms are identified within this analysis, and for each an algorithm that uses scalarized preference information is designed: preference-biased Pareto front approximations distribute points over the entire Pareto front but concentrate more points in regions with better scalarization values; multimodal preference optima are points that constitute local scalarization optima in objective space.
A three-stage algorithm is developed that approximates local scalarization optima, and different methods are evaluated for the individual stages. Two real-world problems illustrate the usefulness of the two algorithms. The first is finding schedules for a combined heat and power plant that maximize the generated electricity and heat while minimizing fuel consumption; preference-biased approximations generate more energy-efficient solutions, among which the decision maker can pick a favorite by weighing the conflicts between the three objectives. The second concerns scheduling appliances in a residential building so that energy costs, carbon dioxide emissions, and thermal discomfort are minimized; local scalarization optima are shown to represent schedules that balance the three objectives well. The analysis and experiments presented in this work enable decision makers to make better decisions by applying methods that generate more options consistent with their preferences.
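    What scalarization does can be sketched on a toy two-objective front (the values and weights below are illustrative assumptions, not from the dissertation): both the weighted sum and the weighted Chebyshev function map an objective vector to one real number, and minimizing that number singles out a single compromise solution.

```python
def weighted_sum(f, w):
    """Scalarize an objective vector f with weights w (minimization)."""
    return sum(wi * fi for wi, fi in zip(w, f))

def chebyshev(f, w, z_star):
    """Weighted Chebyshev distance to an ideal point z_star; unlike the
    weighted sum, it can also single out solutions on non-convex parts
    of a Pareto front."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z_star))

# Toy Pareto front of three trade-off solutions (illustrative values):
front = [(0.0, 1.0), (0.4, 0.4), (1.0, 0.0)]
w = (0.5, 0.5)
best = min(front, key=lambda f: chebyshev(f, w, z_star=(0.0, 0.0)))
print(best)  # the balanced compromise (0.4, 0.4)
```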

    A Shuffled Complex Evolution Metropolis algorithm for optimization and uncertainty assessment of hydrologic model parameters

    Markov chain Monte Carlo (MCMC) methods have become increasingly popular for estimating the posterior probability distribution of parameters in hydrologic models. However, MCMC methods require the a priori definition of a proposal or sampling distribution, which determines the explorative capabilities and efficiency of the sampler and therefore the statistical properties of the Markov chain and its rate of convergence. In this paper we present an MCMC sampler, the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), which is well suited to inferring the posterior distribution of hydrologic model parameters. The SCEM-UA algorithm is a modified version of the original SCE-UA global optimization algorithm developed by Duan et al. [1992]. It merges the strengths of the Metropolis algorithm, controlled random search, competitive evolution, and complex shuffling to continuously update the proposal distribution and evolve the sampler toward the posterior target distribution. Three case studies demonstrate that the adaptive capability of the SCEM-UA algorithm significantly reduces the number of model simulations needed to infer the posterior distribution of the parameters compared with traditional Metropolis-Hastings samplers.
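    For context, the fixed-proposal baseline that SCEM-UA improves on can be sketched as a minimal random-walk Metropolis sampler; the target density, step size, and run length below are illustrative assumptions, not the paper's hydrologic setup.

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=1):
    """Minimal random-walk Metropolis sampler with a FIXED Gaussian
    proposal. SCEM-UA's contribution is precisely to adapt this proposal
    by evolving parallel complexes of chains and shuffling them."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_post(x_new)
        if math.log(rng.random()) < lp_new - lp:  # Metropolis acceptance
            x, lp = x_new, lp_new
        chain.append(x)
    return chain

# Illustrative target: a standard normal log-posterior (up to a constant).
chain = metropolis(lambda t: -0.5 * t * t, x0=3.0, n_steps=20000)
print(round(sum(chain) / len(chain), 2))  # sample mean near 0
```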

    A Methodology for Public-Planner Interaction in Multiobjective Project Planning and Evaluation

    A review of current multiple objective planning techniques is presented. A critique of certain classes of these techniques is offered, especially in terms of the degree to which they serve certain information needs of the planning process. Various tools from operations research are used to construct a new multiple objective planning methodology, called the Vector Optimization Decision Convergence Algorithm (VODCA). An application of the methodology to water resources development in Utah is documented.

    An exploration of evolutionary computation applied to frequency modulation audio synthesis parameter optimisation

    With the ever-increasing complexity of sound synthesisers, there is a growing demand for automated parameter estimation and sound space navigation techniques. This thesis explores the potential for evolutionary computation to automatically map known sound qualities onto the parameters of frequency modulation synthesis. Within this exploration are original contributions in the domain of synthesis parameter estimation and, within the developed system, in evolutionary computation, in the form of the evolutionary algorithms that drive the underlying optimisation process. Based upon the requirement for the parameter estimation system to deliver multiple search space solutions, existing evolutionary algorithmic architectures are augmented to enable niching while maintaining the strengths of the original algorithms. Two novel evolutionary algorithms are proposed in which cluster analysis is used to identify and maintain species within the evolving populations. A conventional evolution strategy and a cooperative coevolution strategy are defined, with cluster-orientated operators that enable the simultaneous optimisation of multiple search space solutions at distinct optima. A test methodology is developed that enables components of the synthesis matching problem to be identified and isolated, so that the performance of different optimisation techniques can be compared quantitatively. A system is consequently developed that evolves sound matches using conventional frequency modulation synthesis models, and the effectiveness of different evolutionary algorithms is assessed and compared in application to both static and time-varying sound matching problems. Performance of the system is then evaluated by interview with expert listeners. The thesis closes with a reflection on the algorithms and systems that have been developed, discussing possibilities for the future of automated synthesis parameter estimation techniques and how they might be employed.
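    The underlying matching problem can be sketched with the classic two-operator FM model; the parameter values and the sample-domain error below are illustrative stand-ins (the thesis uses its own synthesis models and fitness measures).

```python
import math

def fm_tone(fc, fm, index, n=64, sr=8000):
    """Two-operator FM synthesis (Chowning): a carrier at fc Hz
    phase-modulated by a sinusoid at fm Hz with modulation index `index`."""
    return [math.sin(2 * math.pi * fc * t / sr
                     + index * math.sin(2 * math.pi * fm * t / sr))
            for t in range(n)]

# The matching task: recover parameters that reproduce a target sound.
target = fm_tone(440.0, 110.0, 2.0)

def mismatch(params):
    """Sample-domain squared error, an illustrative stand-in for the
    spectral/perceptual fitness an evolutionary search would minimize."""
    return sum((a - b) ** 2 for a, b in zip(fm_tone(*params), target))

print(mismatch((440.0, 110.0, 2.0)))   # 0.0 at the true parameters
print(mismatch((440.0, 220.0, 2.0)) > 0.0)
```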

    Methodological review of multicriteria optimization techniques: applications in water resources

    Multi-criteria decision analysis (MCDA) is an umbrella approach that has been applied to a wide range of natural resource management situations. This report has two purposes. First, it aims to provide an overview of advanced multicriteria approaches, methods, and tools. The review seeks to lay out the nature of the models and their inherent strengths and limitations. Their applicability in supporting real-life decision-making processes is analysed in relation to the requirements imposed by organizationally decentralized and economically specific spatial and temporal frameworks. Models are categorized based on different classification schemes and are reviewed by describing their general characteristics, approaches, and fundamental properties. The necessity of carefully structuring decision problems is discussed with regard to planning, staging, and control aspects within the broader agricultural context, and in water management in particular. Special emphasis is given to the importance of manipulating decision elements by means of hierarchization and clustering. The review goes beyond traditional MCDA techniques; it describes new modelling approaches. The second purpose is to describe new MCDA paradigms aimed at addressing the inherent complexity of managing water ecosystems, particularly with respect to multiple criteria integrated with biophysical models, multiple stakeholders, and lack of information. Comments about, and critical analysis of, the limitations of traditional models are made to point out the need for, and propose a call to, a new way of thinking about MCDA as applied to water and natural resources management planning. These new perspectives do not undermine the value of traditional methods; rather, they point to a shift in emphasis from methods for problem solving to methods for problem structuring.
The literature review shows successful integrations of watershed management optimization models that efficiently screen a broad range of technical, economic, and policy management options within a watershed system framework and select the optimal combination of management strategies and associated water allocations for designing a sustainable watershed management plan at least cost. Papers show applications of watershed management models that integrate both the natural and human elements of a watershed system, including the management of ground and surface water sources, water treatment and distribution systems, human demands, wastewater treatment and collection systems, water reuse facilities, non-potable water distribution infrastructure, aquifer storage and recharge facilities, storm water, and land use.

    Environmental Indicators for the Coastal Region of the U.S. Great Lakes

    The goal of this research collaboration was to develop indicators that both estimate environmental condition and suggest plausible causes of ecosystem degradation in the coastal region of the U.S. Great Lakes. The collaboration consisted of 8 broad components, each of which generated different types of environmental responses and characteristics of the coastal region. These indicators included biotic communities of amphibians, birds, diatoms, fish, macroinvertebrates, and wetland plants as well as indicators of polycyclic aromatic hydrocarbon (PAH) photo-induced toxicity and landscape characterization. These components are summarized below and discussed in more detail in 5 separate reports (Section II). Stress gradients within the U.S. Great Lakes coastal region were defined from 207 variables (e.g., agriculture, atmospheric deposition, land use/land cover, human populations, point source pollution, and shoreline modification) from 19 different data sources that were publicly available for the coastal region. Biotic communities along these gradients were sampled with a stratified, random design among representative ecosystems within the coastal zone. To achieve sampling across this massive area, the coastal region was subdivided into 2 major ecological provinces and further subdivided into 762 segment sheds. Stress gradients were defined for the major categories of human-induced disturbance in the coastal region, and an overall stress index was calculated that represented a combination of all the stress gradients. Investigators of this collaboration have had extensive interactions with the Great Lakes community. For instance, the Lake Erie Lakewide Area Management Plan (LAMP) has adopted many of the stressor measures as integral indicators of the condition of watersheds tributary to Lake Erie.
Furthermore, the conceptual approach and applications for development of a generalized stressor gradient have been incorporated into a document defining the tiered aquatic life criteria for defining the biological integrity of the nation’s waters. A total of 14 indicators of the U.S. Great Lakes coastal region are presented for potential application. Each indicator is summarized with respect to its use, methodology, spatial context, and diagnostic capability. In general, the results indicate that stress related to agricultural activity and human population density/development had the largest impacts on the biotic community indicators. In contrast, the photo-induced PAH indicator was primarily related to industrial activity in the U.S. Great Lakes, and over half of the sites sampled were potentially at risk of PAH toxicity to larval fish. One of the indicators, for land use/land change, was developed from Landsat imagery for the entire U.S. Great Lakes basin for the period from 1992 to 2001. This indicator quantified the extensive conversion of both agricultural and forest land to residential area that occurred during this short 9-year period. Considerable variation in the responses was manifest at different spatial scales, many of them surprisingly large. Significant advances were made with respect to development of methods for identifying and testing environmental indicators. In addition, many indicators and concepts developed from this project are being incorporated into management plans and U.S. EPA methods documents. Further details, downloadable documents, and updates on these indicators can be found at the GLEI website - http://glei.nrri.umn.edu

    Reliability-Based Design Optimization - Accounting for Future Tests and a Multi-Agent Approach

    The initial stages of reliability-based design optimization involve formulating performance criteria and reliability constraints on the one hand, and choosing a representation of the uncertainties on the other. In practice, performance or reliability aspects that condition the optimal solution are often unknown or neglected during the early design phases, and uncertainty reduction measures such as additional tests and redesign are not accounted for in the initial reliability calculations. This work addresses the optimal design of systems from two angles: 1) the trade-off between performance and the cost generated by additional tests and redesign, and 2) the identification of multiple optimal solutions (some of them local) as insurance against early design errors. In the first part, a methodology is proposed to estimate the effect of a single future test and possible redesign on the performance and cost of a product. The approach is based on probability distributions of computational and experimental errors together with an a priori redesign rule, which yields a probabilistic estimate of the product's reliability and cost. We show how, through the choice of test and redesign policies, a company can control the trade-off between performance and development cost.
The second part proposes a method for finding several candidate solutions to a design problem whose objective function and/or constraints are expensive to evaluate. Such problems are commonly approached with a surrogate model (metamodel), which requires evaluating points in diverse regions of the search space; it is wasteful to use this knowledge only to estimate a single global optimum. We propose a new surrogate-based sampling approach, using dynamic local surrogates (surrogate-based agents), that locates several local optima by adaptively partitioning the search space and building a metamodel within each partition. The method is tested and compared with other surrogate-based global optimization approaches on analytical examples in two to six dimensions, as well as on a five-dimensional heat shield design problem.
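    The partitioning idea can be caricatured in one dimension: split the search domain and keep one candidate design per partition, so several basins survive instead of only the global best. The objective, partition count, and grid search below are illustrative assumptions standing in for the per-partition metamodels of the actual method.

```python
import math

def multi_local_candidates(f, lo, hi, n_parts=4, grid=200):
    """Search each partition of [lo, hi] separately and return one
    (x, f(x)) candidate per partition. The real method builds a local
    surrogate per partition; here each partition just gets its own
    fine grid search."""
    width = (hi - lo) / n_parts
    candidates = []
    for k in range(n_parts):
        a = lo + k * width
        xs = [a + width * i / grid for i in range(grid + 1)]
        x_best = min(xs, key=f)
        candidates.append((x_best, f(x_best)))
    return candidates

# Illustrative objective with several local minima on [-3, 3]:
f = lambda x: math.sin(3.0 * x) + 0.1 * x * x
cands = multi_local_candidates(f, -3.0, 3.0)
print(len(cands))  # 4 candidate designs, one per partition
```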