375 research outputs found

    Iterative restricted space search : a solving approach based on hybridization

    Given the complexity that characterizes large-scale optimization problems, complete exploration of the solution space quickly becomes an unattainable goal. Indeed, as problem size grows, increasingly sophisticated solution methods are required to maintain a given level of efficiency. This has led a large part of the scientific community to develop tools specifically aimed at solving large-scale problems, such as hybrid methods. However, despite the effort invested in developing hybrid approaches, most work has focused on adapting two or more specific methods, compensating for the weaknesses of some with the strengths of others, or adapting them to collaborate. To the best of our knowledge, no work to date has developed a conceptual framework for efficiently solving large-scale optimization problems that is simultaneously flexible, based on information exchange, and independent of the methods that compose it. The objective of this thesis is to explore this research avenue by proposing a conceptual framework for hybrid methods, called Iterative Restricted Space Search (IRSS), whose main idea is the successive definition and exploration of restricted regions of the solution space. These regions, which contain good solutions and are small enough to be explored completely, are called Restricted Spaces (RS). IRSS is thus a generic solution approach based on the interaction of two algorithmic phases with complementary objectives: the first identifies a promising restricted region, and the second explores it. The hybrid scheme alternates between the two phases for a fixed number of iterations or until a time limit is reached. The key concepts behind this framework are introduced and validated gradually throughout the thesis, presented so that the reader can understand the problems we encountered during development and how the solutions were designed and implemented. To this end, the thesis is divided into four parts. The first is devoted to a survey of the state of the art on hybrid methods; it presents the main hybrid approaches developed and their applications, along with a brief description of approaches that use the notion of space restriction. The second part presents the key concepts of the framework: the process of identifying restricted regions and the two search phases. These concepts are implemented in a hybrid heuristic-exact scheme and applied to a scheduling problem with two decision levels from the pulp and paper industry, the Pulp Production Scheduling Problem. The third part deepens these concepts and addresses the limitations identified in the second part by proposing an iterative search for exploring large Restricted Spaces and a binary tree structure for exploring several Restricted Spaces. This structure has the advantage of avoiding the exploration of a space already explored, while providing the method with natural diversification. This extension of the method was tested on a location-allocation problem using an iterative heuristic-exact hybridization scheme. The fourth part generalizes the previously developed concepts into a general framework that is flexible, independent of the methods used, and based on information exchange between the phases. This framework has the advantage of being general and could be applied to a wide range of problems.
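    A minimal sketch of the alternating two-phase loop described above may help fix the idea. The `identify_restricted_space` and `explore` callables, the solution and restricted-space objects (with `cost` and `key()`), and all parameter names are hypothetical placeholders, not components taken from the thesis:

```python
# Minimal sketch of the IRSS two-phase loop: alternate between identifying
# a promising restricted space (phase 1) and exploring it (phase 2), for a
# fixed number of iterations or until a time limit is reached. All callables
# and attribute names are hypothetical placeholders.
import time

def iterative_restricted_space_search(identify_restricted_space, explore,
                                      initial_solution, max_iterations=50,
                                      time_limit_s=60.0):
    best = initial_solution
    start = time.monotonic()
    explored = set()  # stands in for the binary-tree bookkeeping that
                      # prevents revisiting already-explored spaces
    for _ in range(max_iterations):
        if time.monotonic() - start > time_limit_s:
            break
        rs = identify_restricted_space(best, explored)  # phase 1 (e.g. heuristic)
        if rs is None:
            break
        explored.add(rs.key())
        candidate = explore(rs)                         # phase 2 (e.g. exact solver)
        if candidate is not None and candidate.cost < best.cost:
            best = candidate                            # information exchange
    return best
```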

    On multiobjective optimization from the nonsmooth perspective

    Practical applications usually have a multiobjective nature rather than a single objective to optimize. A multiobjective problem cannot be solved as such with a single-objective solver; on the other hand, optimizing only one objective may lead to arbitrarily bad solutions with respect to the other objectives. Therefore, special techniques for multiobjective optimization are vital. In addition to their multiobjective nature, many real-life problems have a nonsmooth (i.e. not continuously differentiable) structure. Unfortunately, many smooth (i.e. continuously differentiable) methods rely on gradient-based information, which cannot be used for nonsmooth problems. Since both of these characteristics are relevant for applications, we focus here on nonsmooth multiobjective optimization. As a research topic, nonsmooth multiobjective optimization has attracted only limited attention, while the fields of nonsmooth single-objective and smooth multiobjective optimization have each attracted considerably more interest. This dissertation covers parts of nonsmooth multiobjective optimization in terms of theory, methodology, and application. Bundle methods are widely considered effective and reliable solvers for single-objective nonsmooth optimization. Therefore, we investigate the use of the bundle idea in the multiobjective framework with three different methods. The first generalizes the single-objective proximal bundle method to the nonconvex multiobjective constrained problem. The second adopts ideas from the classical steepest descent method for the convex unconstrained multiobjective case. The third is designed for multiobjective constrained problems where both the objectives and the constraints can be represented as differences of convex (DC) functions. Besides the bundle idea, all three methods are descent methods, meaning that they produce better values for each objective at every iteration. Furthermore, all of them utilize the improvement function, either directly or indirectly. Notably, none of these methods uses scalarization in the traditional sense, by which we mean techniques that transform a multiobjective problem into a single-objective one. As scalarization plays an important role in multiobjective optimization, we present one special family of achievement scalarizing functions as a representative of this category. In general, achievement scalarizing functions are well suited to the interactive framework. Thus, we propose an interactive method using our special family of achievement scalarizing functions. In addition, this method utilizes the above-mentioned descent methods as tools to illustrate the range of optimal solutions. Finally, the interactive method is used to solve a practical case study on scheduling the final disposal of spent nuclear fuel in Finland.
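    For reference, this is the standard improvement function from the nonsmooth multiobjective literature that the abstract refers to; the exact variant used in the dissertation may differ:

```latex
% Standard improvement function for a multiobjective problem
%   min { f_1(x), ..., f_k(x) }  s.t.  g_j(x) <= 0, j = 1, ..., m,
% as commonly defined in the nonsmooth multiobjective literature:
\[
  H(x, y) \;=\; \max \bigl\{\, f_i(x) - f_i(y),\; g_j(x)
      \;:\; i = 1,\dots,k,\; j = 1,\dots,m \,\bigr\}.
\]
% Descent methods decrease every objective simultaneously by finding x
% with H(x, y) < 0; if x = y minimizes H(., y), no such point exists and
% y satisfies the (weak Pareto) stationarity condition.
```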

    Optimization of stochastic-dynamic decision problems with applications in energy and production systems

    This thesis studies mathematical optimization methods for stochastic-dynamic decision problems. This problem class is particularly challenging, as there still exists no algorithm that converges to an exact solution in polynomial time. Existing generic solution methods are all subject to the "curse of dimensionality", which means that problem complexity increases exponentially in the number of state variables. Since problems of realistic size typically come with a large number of state variables, applying exact solution methods is impractical. A promising methodology to break the curse of dimensionality is "approximate dynamic programming". To avoid a complete enumeration of the state space, solution techniques based on this methodology use Monte Carlo simulation to sample states that are relevant to the decision process and then approximate the value function of the dynamic program by a function of much lower complexity. This thesis applies approximate dynamic programming techniques to different resource management problems that arise in production and energy settings and studies whether these techniques are capable of solving the underlying optimization problems.
The thesis concludes that stochastic-dynamic resource management problems can be solved efficiently if the underlying optimization problem is convex and the randomness is independent of the resource states. If the optimization problem is discrete, however, it remains hard to solve, even for approximate dynamic programming techniques. In this case, simple but well-adjusted decision policies may be the better choice.
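    A generic sketch of the approximate dynamic programming loop described above, assuming a hypothetical `model` object (with `initial_state`, `actions`, `transition`, `reward`, `sample_exogenous`) and a linear value-function approximation; none of these names come from the thesis:

```python
# Generic ADP sketch: sample trajectories by Monte Carlo so that only
# states reachable under the current policy are visited, and update a
# low-complexity (here: linear) value-function approximation on the fly.
def adp_forward_pass(model, features, weights, horizon,
                     n_iterations=100, stepsize=0.05, discount=1.0):
    def v(state):
        # linear value-function approximation over a feature map,
        # far cheaper than tabulating the exact value function
        return sum(w * phi for w, phi in zip(weights, features(state)))

    for _ in range(n_iterations):
        state = model.initial_state()
        for t in range(horizon):
            noise = model.sample_exogenous(t)            # Monte Carlo sample
            # greedy action against the current approximation
            action = max(model.actions(state),
                         key=lambda a: model.reward(state, a)
                         + discount * v(model.transition(state, a, noise)))
            target = (model.reward(state, action)
                      + discount * v(model.transition(state, action, noise)))
            # stochastic-gradient step toward the sampled Bellman target
            error = target - v(state)
            for i, phi in enumerate(features(state)):
                weights[i] += stepsize * error * phi
            # continue along the sampled trajectory
            state = model.transition(state, action, noise)
    return weights
```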

    Retention Prediction and Policy Optimization for United States Air Force Personnel Management

    Effective personnel management policies in the United States Air Force (USAF) require methods to predict how many personnel will remain in the USAF, as well as to replenish personnel with different skillsets over time as they depart. To improve retention predictions, we develop and test traditional random forest models and feedforward neural networks, as well as partially autoregressive forms of both, outperforming the benchmark on a test dataset by 62.8% and 34.8% for the neural network and the partially autoregressive neural network, respectively. We formulate the workforce replenishment problem as a Markov decision process for active duty enlisted personnel, then extend this formulation to include the Air Force Reserve and Air National Guard. We develop and test an adaptation of the Concave Adaptive Value Estimation (CAVE) algorithm and a parameterized Deep Q-Network on the active duty problem instance with 7,050 dimensions, finding that CAVE reduces costs from the benchmark policy by 29.76% and 17.38% for the two cost functions tested. We test CAVE across a range of hyperparameters for the larger intercomponent problem instance with 21,240 dimensions, reducing costs by 23.06% from the benchmark, then develop the Stochastic Use of Perturbations to Enhance Robustness of CAVE (SUPERCAVE) algorithm, reducing costs by another 0.67%. The resulting algorithms and methods are directly applicable to contemporary USAF personnel business practices and enable more accurate, less time-intensive, cogent, and data-informed policy targets for current processes.
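    As an illustration of the "partially autoregressive" idea, one common construction is to feed lagged values of the target series to an ordinary random forest alongside the other covariates; the column names, lag depth, and input file below are hypothetical, not taken from the thesis:

```python
# Sketch of a partially autoregressive retention model: lagged retention
# rates enter the feature set, letting the forest exploit serial
# correlation in retention behaviour. All names here are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def add_lagged_retention(df, target="retention_rate", lags=(1, 2, 3)):
    """Append lagged copies of the target within each cohort."""
    out = df.sort_values(["cohort", "year"]).copy()
    for k in lags:
        out[f"{target}_lag{k}"] = out.groupby("cohort")[target].shift(k)
    return out.dropna()

df = add_lagged_retention(pd.read_csv("retention_history.csv"))  # hypothetical file
feature_cols = ["year"] + [c for c in df.columns if "_lag" in c]
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(df[feature_cols], df["retention_rate"])
```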

    Strategic algorithms

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 193-201). Classical algorithms from theoretical computer science arise time and again in practice. However, practical situations typically do not fit precisely into the traditional theoretical models; additional necessary components are, for example, uncertainty and economic incentives. Therefore, modern algorithm design calls for more interdisciplinary approaches, as well as for deeper theoretical understanding, so that algorithms can apply to more realistic settings and complex systems. Consider, for instance, the classical shortest path algorithm, which, given a graph with specified edge weights, seeks the path minimizing the total weight from a source to a destination. In practice, the edge weights are often uncertain, and it is not even clear what we mean by a shortest path anymore: is it the path that minimizes the expected weight? Or its variance, or some other metric? With a risk-averse objective function that takes into account both mean and standard deviation, we run into nonconvex optimization challenges that require new theory beyond classical shortest path algorithm design. Yet another shortest path application, routing of packets in the Internet, needs to further incorporate economic incentives to reflect the various business relationships among the Internet Service Providers that affect the choice of packet routes. Strategic algorithms are algorithms that integrate optimization, uncertainty, and economic modeling into algorithm design, with the goal of bringing about new theoretical developments and solving practical applications arising in complex computational-economic systems. In short, this thesis contributes new algorithms and their underlying theory at the interface of optimization, uncertainty, and economics. Although the interplay of these disciplines is present in various forms in our work, for the sake of presentation we have divided the material into three categories: 1. In Part I we investigate algorithms at the intersection of optimization and uncertainty. The key conceptual contribution in this part is the discovery of a novel connection between stochastic and nonconvex optimization. Traditional algorithm design has not taken into account the risk inherent in stochastic optimization problems. We consider natural objectives that incorporate risk, which turn out to be equivalent to certain nonconvex problems from the realm of continuous optimization. As a result, our work advances the state of the art in both stochastic and nonconvex optimization, presenting new complexity results and proposing general-purpose efficient approximation algorithms, some of which have shown promising practical performance and have been implemented in a real traffic prediction and navigation system. 2. Part II proposes new algorithm and mechanism designs at the intersection of uncertainty and economics. In Part I we postulate that the random variables in our models come from given distributions. However, determining those distributions or their parameters is a challenging and fundamental problem in itself. A tool from economics that has recently gained momentum for measuring the probability distribution of a random variable is an information or prediction market.
Such markets, most popularly known for predicting the outcomes of political elections or other events of interest, have shown remarkable accuracy in practice, though they leave open the theoretical and strategic analysis of current implementations, as well as the need for new and improved designs that handle more complex outcome spaces (probability distribution functions) as opposed to binary or n-ary valued distributions. The contributions of this part include a unified strategic analysis of different prediction market designs that have been implemented in practice. We also offer new market designs for handling exponentially large outcome spaces stemming from ranking or permutation-type outcomes, together with algorithmic and complexity analysis. 3. In Part III we consider the interplay of optimization and economics in the context of network routing. This part is motivated by the network of autonomous systems in the Internet, where each portion of the network is controlled by an Internet service provider, namely by a self-interested economic agent. Business incentives do not exist merely in addition to the computer protocols governing the network; although they are not currently integrated in those protocols and are decided largely via private contracting and negotiations, these economic considerations are a principal factor in determining how packets are routed. And vice versa, the demand and flow of network traffic fundamentally affect provider contracts and prices. The contributions of this part are the design and analysis of economic mechanisms for network routing, based on first- and second-price auctions (the so-called Vickrey-Clarke-Groves, or VCG, mechanisms). We first analyze the equilibria and prices resulting from these mechanisms. We then investigate the compatibility of the better-understood VCG mechanisms with current inter-domain routing protocols, and we demonstrate the critical importance of correct modeling and how it affects the complexity and algorithms necessary to implement the economic mechanisms. By Evdokia Velinova Nikolova.
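    The risk-averse shortest-path objective mentioned above is commonly written as follows (notation assumed here, not quoted from the thesis): with independent edge weights of mean $\mu_e$ and variance $\sigma_e^2$, one seeks a path $P$ minimizing the mean plus $c$ standard deviations:

```latex
% Mean-risk shortest path over the set of s-t paths \mathcal{P},
% with risk-aversion coefficient c >= 0:
\[
  \min_{P \in \mathcal{P}} \;\; \sum_{e \in P} \mu_e
      \;+\; c \, \sqrt{\sum_{e \in P} \sigma_e^2}.
\]
```

    The square-root term is concave in the 0-1 edge-selection variables, so minimizing this objective over the path polytope is a nonconvex problem, which is one way to see the stochastic/nonconvex connection the abstract highlights.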