    Distributed Domain Propagation

    Portfolio parallelization is an approach that runs several solver instances in parallel and terminates when one of them succeeds in solving the problem. Despite its simplicity, portfolio parallelization has been shown to perform well for modern mixed-integer programming (MIP) and Boolean satisfiability (SAT) solvers. Domain propagation is likewise a simple technique in modern MIP and SAT solvers that effectively finds additional domain reductions after the domain of a variable has been reduced. In this paper we introduce distributed domain propagation, a technique that shares bound tightenings across solvers to trigger further domain propagations. We investigate its impact in modern MIP solvers that employ portfolio parallelization. Computational experiments were conducted for two implementations of this parallelization approach. While both share global variable bounds and solutions, they communicate differently: in one implementation the communication is performed only at designated points in the solving process, and in the other it is performed completely asynchronously. The experiments show a positive performance impact of communicating global variable bounds and provide valuable insights into communication strategies for parallel solvers.
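
    The sharing mechanism can be sketched in a few lines. Below is a minimal, hypothetical Python illustration (not code from the paper): portfolio workers publish bound tightenings to a shared store and merge the globally tightest bounds before each local propagation round; the propagate callback stands in for a solver's actual propagation routine.

        import threading

        class SharedBounds:
            """Global variable bounds shared by all portfolio solver instances."""
            def __init__(self, lbs, ubs):
                self.lock = threading.Lock()
                self.lbs, self.ubs = list(lbs), list(ubs)

            def publish(self, var, lb, ub):
                # Keep only the tightest bounds seen by any solver instance.
                with self.lock:
                    self.lbs[var] = max(self.lbs[var], lb)
                    self.ubs[var] = min(self.ubs[var], ub)

            def snapshot(self):
                with self.lock:
                    return list(self.lbs), list(self.ubs)

        def propagation_round(shared, local_bounds, propagate):
            # Merge globally tightened bounds into the local domains, run one
            # round of local domain propagation, then publish new tightenings
            # so they can trigger further propagation in the other instances.
            lbs, ubs = shared.snapshot()
            merged = [(max(lo, gl), min(hi, gu))
                      for (lo, hi), gl, gu in zip(local_bounds, lbs, ubs)]
            for var, (lb, ub) in propagate(merged).items():
                shared.publish(var, lb, ub)
            return merged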

    Human Control of Air Traffic Trajectory Optimizer


    Large-scale parallelism for constraint-based local search: the Costas Array case study

    We present the parallel implementation of a constraint-based local search algorithm and investigate its performance on several hardware platforms with several hundreds or thousands of cores. We chose as the basis for these experiments the Adaptive Search method, an efficient sequential local search method for constraint satisfaction problems (CSPs). After preliminary experiments on some CSPLib benchmarks, we detail the modeling and solving of a hard combinatorial problem related to radar and sonar applications: the Costas Array Problem. Performance evaluation on some classical CSP benchmarks shows that speedups are very good for a few tens of cores, and good up to a few hundreds of cores. For a hard combinatorial search problem such as the Costas Array Problem, the sequential version outperforms previous local search implementations, while the parallel version shows nearly linear speedups up to 8,192 cores. The proposed parallel scheme is simple and based on independent multi-walks with no communication between processes during search. We also investigated a cooperative multi-walk scheme in which processes share simple information, but this scheme does not seem to improve performance.
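
    The independent multi-walk scheme is simple enough to sketch. The hedged Python sketch below (all names invented for illustration; local_search is a stand-in, not the actual Adaptive Search code) runs several randomized walks in separate processes with no communication during search; the first walk to finish determines the overall result.

        import multiprocessing as mp
        import random

        def local_search(seed):
            # Stand-in for a randomized local-search solver such as Adaptive
            # Search: each walk explores independently from its own seed.
            rng = random.Random(seed)
            best = None
            for _ in range(100_000):
                candidate = rng.random()      # placeholder for a CSP move
                best = candidate if best is None or candidate < best else best
            return seed, best

        def multi_walk(num_walks=8):
            with mp.Pool(num_walks) as pool:
                # Results arrive as walks finish; returning on the first one
                # terminates the pool, and with it the remaining walks.
                for result in pool.imap_unordered(local_search, range(num_walks)):
                    return result

        if __name__ == "__main__":
            print(multi_walk())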

    On High-Performance Benders-Decomposition-Based Exact Methods with Application to Mixed-Integer and Stochastic Problems

    Stochastic integer programming (SIP) combines the difficulty of uncertainty and non-convexity, and constitutes a class of extremely challenging problems to solve. Efficiently solving SIP problems is of high importance due to their vast applicability. Therefore, the primary focus of this dissertation is on solution methods for SIPs. We consider two-stage SIPs and present several enhanced decomposition algorithms for solving them. Our main goal is to develop new decomposition schemes and several acceleration techniques to enhance the classical decomposition methods, which can lead to efficiently solving various SIP problems to optimality. In the first essay of this dissertation, we present a state-of-the-art survey of the Benders decomposition algorithm. We provide a taxonomy of the algorithmic enhancements and the acceleration strategies of this algorithm to synthesize the literature, and to identify shortcomings, trends and potential research directions. In addition, we discuss the use of Benders decomposition to develop efficient (meta-)heuristics, describe the limitations of the classical algorithm, and present extensions enabling its application to a broader range of problems. Next, we develop various techniques to overcome some of the main shortfalls of the Benders decomposition algorithm. We propose the use of cutting planes, partial decomposition, heuristics, stronger cuts, and warm-start strategies to alleviate the numerical challenges arising from instabilities, primal inefficiencies, weak optimality/feasibility cuts, and weak linear relaxation. We test the proposed strategies on benchmark instances from stochastic network design problems. Numerical experiments illustrate the computational efficiency of the proposed techniques. In the third essay of this dissertation, we propose a new and high-performance decomposition approach, called the Benders dual decomposition method. The development of this method is based on a specific reformulation of the Benders subproblems, where local copies of the master variables are introduced and then priced out into the objective function. We show that the proposed method significantly alleviates the primal and dual shortfalls of the Benders decomposition method and that it is closely related to the Lagrangian dual decomposition method. Computational results on various SIP problems show the superiority of this method compared to the classical decomposition methods as well as CPLEX 12.7. Finally, we study the parallelization of the Benders decomposition method. The available parallel variants of this method implement a rigid synchronization among the master and slave processors, and thus suffer from significant load imbalance when applied to SIP problems. This is mainly due to a hard mixed-integer master problem that can take hours to optimize. We therefore propose an asynchronous parallel Benders method in a branch-and-cut framework. Relaxing the synchronization requirements entails various convergence and efficiency problems, which we address by introducing several acceleration techniques and search strategies. In particular, we propose the use of artificial subproblems, cut generation, cut aggregation, cut management, and cut propagation. The results indicate that our algorithm reaches higher speedup rates than the conventional synchronized methods and is several orders of magnitude faster than CPLEX 12.7.
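
    For orientation, here is a minimal, solver-agnostic Python sketch of the classical Benders loop that these enhancements build on; solve_master and solve_subproblem are hypothetical callables, not any real solver API, and the asynchronous variant proposed in the thesis would relax the strict master/subproblem alternation shown here.

        # min c'y + Q(y), where Q(y) = min { q'x : Wx >= h - Ty, x >= 0 }
        def benders(solve_master, solve_subproblem, tol=1e-6, max_iters=100):
            cuts = []                                  # accumulated Benders cuts
            lower, upper = float("-inf"), float("inf")
            for _ in range(max_iters):
                # Master: choose first-stage y and an estimate theta of Q(y);
                # its objective value is a valid lower bound.
                y, theta, master_obj = solve_master(cuts)
                lower = max(lower, master_obj)
                # Subproblem: evaluate the recourse cost at y; the dual
                # solution yields a cut  theta >= dual_const + dual_coefs @ y.
                sub_cost, dual_const, dual_coefs = solve_subproblem(y)
                upper = min(upper, master_obj - theta + sub_cost)
                if upper - lower <= tol:               # gap closed: y is optimal
                    break
                cuts.append((dual_const, dual_coefs))
            return y, upper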

    Heuristic Cooperative Search Using Software Agents

    Parallel algorithms extend the notion of sequential algorithms by permitting the simultaneous execution of independent computational steps. When the independence constraint is lifted and executions can freely interact and intertwine, parallel algorithms become concurrent and may behave in a nondeterministic way. Parallelism has over the years slowly risen to be a standard feature of high-performance computing, but concurrency, being even harder to reason about, is still considered somewhat notorious and undesirable. As such, the implicit randomness available in concurrency is rarely made use of in algorithms. This thesis explores concurrency as a means to facilitate algorithmic cooperation in a heuristic search setting. We use agents, cooperating software entities, to build a single-source shortest path (SSSP) search algorithm based on parallelized A∗, dubbed A!. We show how asynchronous information sharing gives rise to implicit randomness, which cooperating agents use in A! to maintain a collective secondary ranking heuristic and focus search space exploration. We experimentally show that A! consistently outperforms both vanilla A∗ and a non-cooperative, explicitly randomized A∗ variant in the standard n-puzzle sliding tile problem context. The results indicate that A! performance increases with the addition of more agents, but with diminishing returns. A! is observed to be sensitive to heuristic improvement, but also constrained by search overhead from limited path diversity. A hybrid approach combining both implicit and explicit randomness is also evaluated and found not to be an improvement over A! alone. The studied A! implementation based on vanilla A∗ is not as such competitive against state-of-the-art parallel A∗ algorithms, but rather a first step in applying concurrency to speed up heuristic SSSP search. The empirical results imply that concurrency and nondeterministic cooperation can successfully be harnessed in algorithm design, inviting further inquiry into algorithms of this kind.
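
    A hedged Python sketch of the kind of cooperation described, not the thesis's exact algorithm: several A∗-style agents run as threads over a single shared, unlocked table of best-known g-costs, and the nondeterministic interleaving of their reads and writes supplies the implicit randomness that diversifies exploration. All names are illustrative.

        import heapq, itertools, threading

        shared_best = {}   # state -> best g-cost seen by any agent; deliberately
                           # unlocked: racy, asynchronous reads and writes are
                           # the source of the implicit randomness

        def agent(start, goal, neighbors, h):
            # One A*-style agent; several of these run as threads on one graph.
            tie = itertools.count()
            frontier = [(h(start), next(tie), 0, start, [start])]
            while frontier:
                _, _, g, state, path = heapq.heappop(frontier)
                if state == goal:
                    return path
                if shared_best.get(state, float("inf")) < g:
                    continue           # another agent already reached it cheaper
                shared_best[state] = g                 # publish asynchronously
                for nxt, cost in neighbors(state):
                    heapq.heappush(frontier, (g + cost + h(nxt), next(tie),
                                              g + cost, nxt, path + [nxt]))

        # e.g. threads = [threading.Thread(target=agent, args=(s0, goal, nbrs, h))
        #                 for _ in range(4)]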

    Massively parallel declarative computational models

    Current computer architectures are parallel, with an increasing number of processors. Parallel programming is an error-prone task, and declarative models such as those based on constraints relieve the programmer from some of its difficult aspects, because they abstract control away. In this work we study and develop techniques for declarative computational models based on constraints using GPI, a recent tool and programming model, aiming at large-scale parallel execution. The main contributions of this work are: a GPI implementation of a scalable dynamic load-balancing scheme based on work stealing, suitable for tree-shaped computations and effective for systems with thousands of threads; a parallel constraint solver, MaCS, implemented to take advantage of the GPI programming model, whose experimental evaluation shows very good scalability results on systems with hundreds of cores; and a GPI parallel version of the Adaptive Search algorithm, including different variants, whose study on different problems advances the understanding of scalability issues known to exist with large numbers of cores.
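
    The work-stealing scheme can be illustrated with a small thread-based sketch (the actual implementation uses GPI rather than Python threads, and all names here are invented): each worker pops its newest local task for depth-first descent, while an idle worker steals the oldest task from a random victim, which in tree-shaped computations tends to be the largest unexplored subtree.

        import collections, random, threading

        class Worker:
            def __init__(self, wid, workers):
                self.deque = collections.deque()   # local task pool
                self.lock = threading.Lock()
                self.wid, self.workers = wid, workers

            def push(self, task):
                with self.lock:
                    self.deque.append(task)

            def next_task(self):
                # Prefer the newest local task (depth-first descent) ...
                with self.lock:
                    if self.deque:
                        return self.deque.pop()
                # ... otherwise steal the oldest task from a random victim,
                # which in a tree search tends to be the largest subtree.
                victim = random.choice(self.workers)
                if victim is not self:
                    with victim.lock:
                        if victim.deque:
                            return victim.deque.popleft()
                return None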

    Optimization of heterogeneous employee scheduling problems

    RÉSUMÉ: Le problĂšme de planification d’horaires du personnel consiste Ă  crĂ©er les horaires de travail des employĂ©s d’une organisation. Le nombre d’employĂ©s requis par unitĂ© de temps, appelĂ© la demande en employĂ©s par pĂ©riode, est donnĂ© pour un horizon de planification. DiffĂ©rentes rĂšgles et contraintes rĂ©gissent l’élaboration des horaires des employĂ©s. Ces rĂšgles dĂ©pendent des besoins de l’organisation, des contrats des employĂ©s et de la convention collective de travail. Le problĂšme de planification est dit hĂ©tĂ©rogĂšne quand il concerne des employĂ©s ayant des qualifications diffĂ©rentes, habituellement, dans le cadre d’un problĂšme de planification d’employĂ©s multi-tĂąches ou multi-dĂ©partements. Dans un contexte multi-dĂ©partements avec transferts entre dĂ©partements, un quart de travail peut ĂȘtre effectuĂ© dans son ensemble dans un dĂ©partement, ou un transfert de dĂ©partement peut avoir lieu au sein du quart de travail lorsque l’employĂ© a les qualifications requises. Lorsque les transferts sont autorisĂ©s, le nombre de quarts de travail possibles par employĂ© devient Ă©norme. L’optimisation d’un tel problĂšme est souvent essentielle pour le succĂšs de l’organisation. Par contre, sa rĂ©solution directe comme un programme linĂ©aire en nombres entiers s’avĂšre impossible pour les grandes instances. Dans la premiĂšre partie de cette thĂšse, nous proposons une heuristique de dĂ©composition en plusieurs phases (MP-DH) pour le problĂšme de planification des employĂ©s avec transferts. Dans ce problĂšme, la sous-couverture et la sur-couverture sont acceptĂ©es mais pĂ©nalisĂ©s dans la fonction objectif. Un dĂ©partement d’origine est introduit pour chaque employĂ© oĂč l’employĂ© doit travailler la majoritĂ© de son temps. En plus, il/elle peut ĂȘtre qualifiĂ©(e) pour travailler dans plusieurs autres dĂ©partements. La premiĂšre phase commence par dĂ©duire la demande en employĂ©s qui ne peut pas ĂȘtre couverte par les employĂ©s du dĂ©partement. Ces piĂšces de demandes extraites sont appelĂ©es intervalles critiques, car elles nĂ©cessitent des employĂ©s transfĂ©rĂ©s d’autres dĂ©partements pour y travailler. Cela se fait en rĂ©solvant un programme en nombres entiers de planification d’horaires de personnel anonyme pour chaque dĂ©partement sĂ©parĂ©ment, puis en extrayant la demande non-couverte qui dĂ©finit un ensemble d’intervalles critiques. Pour chacun des intervalles critiques, la deuxiĂšme phase choisit un dĂ©partement qui lui attribue la responsabilitĂ© de transfĂ©rer un de ses employĂ©s pour travailler pendant cet intervalle critique. Ceci est accompli en rĂ©solvant un autre programme en nombres entiers de planification d’horaires de personnel anonyme avec transferts, pour un seul jour, pour chacun des jours de l’horizon. La dĂ©composition journaliĂšre rend la taille du problĂšme gĂ©rable spĂ©cialement pour les grandes instances. Cette phase se termine par la migration de toute demande d’un dĂ©partement d1 couverte par un employĂ© d’un dĂ©partement d2, formant la demande de transfert d’employĂ© de d2 vers d1. Finalement, pour chaque dĂ©partement, la troisiĂšme phase rĂ©sout un programme en nombres entiers de planification d’horaires de personnel mono-dĂ©partemental avec transfert. La demande utilisĂ©e est la nouvelle demande rĂ©sultant de la migration de toutes les demandes de transfert pendant la deuxiĂšme phase. 
L’heuristique MP-DH a rĂ©ussi Ă  dĂ©composer le problĂšme de planification d’employĂ©s multidĂ©partements en plusieurs, plus petits, problĂšmes de planification d’employĂ©s mono dĂ©partementaux, ce qui a permis de rĂ©duire de beaucoup les temps de calcul et de transformer les grandes instances non rĂ©solubles en instances rĂ©solubles avec une lĂ©gĂšre baisse dans la qualitĂ© des solutions obtenues. Chacune des trois phases de MP-DH utilise le parallĂ©lisme. Dans la premiĂšre phase, les programmes en nombres entiers des dĂ©partements s’optimisent en parallĂšle. La deuxiĂšme phase exĂ©cute chaque problĂšme journalier en parallĂšle. Enfin, la troisiĂšme phase optimise chaque dĂ©partement en parallĂšle. À la fin de chaque phase, tous les rĂ©sultats des problĂšmes parallĂšles sont fusionnĂ©s pour former la solution finale. Dans les tests rĂ©alisĂ©s pour l’heuristique MP-DH, les deux premiĂšres phases sont extrĂȘmement rapides, tandis que la troisiĂšme phase peut atteindre deux heures de temps de rĂ©solution pour les grandes instances. Pour pallier Ă  cet inconvĂ©nient, nous prĂ©sentons une heuristique hybride dans la deuxiĂšme partie de la thĂšse, visant Ă  rĂ©duire fortement le temps d’exĂ©cution de la troisiĂšme phase tout en conservant la qualitĂ© de la solution. L’heuristique hybride utilise deux modĂšles de maniĂšre interchangeable afin de rĂ©soudre la troisiĂšme phase le plus prĂ©cisĂ©ment et rapidement possible. Le premier modĂšle est celui dĂ©jĂ  presentĂ© pour la troisiĂšme phase du MP-DH que nous appelons le modĂšle de base. Le second est un problĂšme de planification d’horaires du personnel mono-dĂ©partement avec transfert semi-anonyme que nous appelons le modĂšle semi-anonyme. La version semi-anonyme rĂ©duit le nombre d’employĂ©s pour lesquels les horaires sont optimisĂ©s et remplace les quarts des employĂ©s restants par un ensemble de quarts anonymes agrĂ©gĂ©s, puis rĂ©sout le problĂšme pour les employĂ©s restants par la suite. L’heuristique hybride commence par rĂ©soudre le modĂšle de base. Si aprĂšs un dĂ©lai donnĂ©, l’écart d’optimalitĂ© est supĂ©rieur Ă  un seuil donnĂ©, la rĂ©solution de modĂšle de base est annulĂ© et une version semi-anonyme est rĂ©solue. Cette opĂ©ration est rĂ©pĂ©tĂ©e jusqu’à ce que tous les horaires des employĂ©s soient optimisĂ©s. L’heuristique hybride a rĂ©ussi Ă  rĂ©duire le temps d’exĂ©cution de la troisiĂšme phase jusqu’à 87% en moyenne, tout en perdant seulement 4 % dans le coĂ»t de la solution en moyenne. Dans la troisiĂšme partie de la thĂšse, nous abordons une version diffĂ©rente du problĂšme de planification d’horaires de personnel, soit le problĂšme de planification d’horaires de personnel multi-tĂąches, oĂč ni les transferts ni la sous-couverture ne sont autorisĂ©s. À la place de la sous-couverture, des quarts anonymes appelĂ©s open-shifts sont utilisĂ©s pour couvrir la demande incouvrable par aucun employĂ©. Nous dĂ©veloppons une mĂ©taheuristique parallĂšle de recherche Ă  grands voisinage (LNS) pour ce problĂšme. Le concept de sub-scope est utilisĂ© comme unitĂ© de dĂ©composition dans l’algorithme LNS. Un sub-scope est dĂ©fini comme: un sous-ensemble d’employĂ©s, un sous-ensemble de tĂąches et un sous-ensemble continu de l’horizon du problĂšme. L’heuristique LNS est dĂ©finie par des procĂ©dures de destruction et de rĂ©paration. Notre procĂ©dure de destruction choisit des sub-scopes, entraĂźnant un coĂ»t Ă©levĂ©, Ă  dĂ©truire. 
Lorsqu’un sub-scope d’une solution est dĂ©truit, tous les quarts travaillĂ©s pendant l’horizon du sub-scope par un employĂ© appartenant au sub-scope, pour l’une des tĂąches du sub-scope, sont supprimĂ©s de la solution. Les coĂ»ts principaux affectant la fonction objectif sont les suivants: le coĂ»t de la surcouverture, le coĂ»t d’utilisation des open-shifts et la pĂ©nalitĂ© pour la violation des heures de travail minimales des employĂ©s. La procĂ©dure de destruction se concentre donc sur la destruction des sub-scopes entraĂźnant de tels coĂ»ts dans une solution donnĂ©e. AprĂšs la destruction des sub-scopes d’une solution, la procĂ©dure de rĂ©paration reconstruit une nouvelle solution amĂ©liorĂ©e. La procĂ©dure de rĂ©paration que nous proposons rĂ©sout un programme en nombres entiers de planification d’horaires de personnel multi-tĂąches pour les sub-scopes dĂ©jĂ  dĂ©truits. Les procĂ©dures de destruction et de rĂ©paration sont rĂ©pĂ©tĂ©es sĂ©quentiellement jusqu’à ce que la condition d’arrĂȘt soit atteinte. La procĂ©dure de destruction parallĂšle dĂ©truit plusieurs sub-scopes disjoints, puis chaque subscope est rĂ©parĂ© dans un fil (thread) parallĂšle diffĂ©rent. Nous comparons l’heuristique prĂ©sentĂ©e avec le modĂšle exact rĂ©solu dans le systĂšme commercial WFC par Kronos Inc. Les rĂ©sultats expĂ©rimentaux montrent qu’en moyenne, l’algorithme LNS parallĂšle peut rĂ©duire les temps d’exĂ©cution jusqu’à 80% et amĂ©liorer les coĂ»ts des solutions jusqu’à 1, 8%.----------ABSTRACT: The employee scheduling problem consists of creating working schedules for an organization staff. The number of required employees per time unit, called employee requirement per period, is given for the full problem horizon. Different rules and constraints govern an employee scheduling problem, these rules depends on the organization needs, employees contracts and the collective labor agreement. A heterogeneous employee scheduling problem deals with employees having different working skills, usually within a multi-job or multi-department employee scheduling context, where one employee can be qualified for several of the organization activities, and can work for any activity he/she is qualified for. One working shift can be accomplished in a single department, or a department transfer can take place within a shift when the employee has the required skills. When a department transfer within a shift is allowed, the number of possible working shifts per employee becomes huge. Optimizing such heterogeneous employee scheduling problem is often essential for organizational success. However, solving such problems directly as a mixed integer linear program (MILP) is intractable for large instances. In the first part of this thesis, we propose a multi-phase decomposition heuristic (MP-DH) for the employee scheduling problem with inter-department transfers. In this problem, the concept of department of origin is introduced, where each employee is qualified to work in several departments, but he/she has exactly one department of origin, where the employee should work the majority of his/her time. The first phase starts by extracting from each department employee requirement, the uncoverable requirement parts by internal employees,i.e. if only the department internal employees can work. These extracted requirement parts are called critical intervals, because they need transferred employees from other departments to fulfill them. 
This is done by solving an anonymous employee scheduling problem modeled as a MILP for each department apart, before extracting the uncovered requirement parts that form the set of critical intervals. For each of the critical intervals, in the second phase, one department is chosen to assign it the responsibility of fulfilling this critical interval requirement, i.e. to transfer one of its employees to work during the critical interval. This is accomplished by solving a one-day anonymous employee scheduling problem with inter-department transfers for the critical intervals modeled as a MILP, for each of the problem horizon days. The day decomposition renders the problem size manageable in computer memory, especially for large instances (up to 25 departments). This phase ends by migrating any department d1 requirement covered by an employee from department d2, building a new employee transfer requirement from d2 to d1. The third phase solves, for each department, a mono-department employee scheduling problem with derived inter-department transfers as a MILP. The input to the third phase is the new final requirement resulting from the requirement migration of phase two. The MP-DH heuristic succeeds to decompose the multi-department employee scheduling problem into several mono-department employee scheduling problems to save substantial computational time. This allows to solve large instances while not deteriorating much the solution quality. Each phase of the MP-DH algorithm uses parallelism. In the first phase, all department MILPs and post-processing are accomplished in parallel. The second phase runs all singleday problems in parallel. Finally the third phase optimizes all department problems in parallel. At the end of each phase, all parallel problem solutions are merged to form the final solution. In the reported computational experiments, we observe that the first two phases are solved extremely fast compared to the third phase. The size of the solved MILPs in the first two phases is not proportional with the size of the optimized instance, while the third phase MILP size is. To overcome the computational issues of the third phase we present the hybrid heuristic in the second part of the thesis. The hybrid heuristic aims at greatly reducing the MP-DH third phase computational time, while maintaining the solution quality. The hybrid heuristic uses two models interchangeably in order to solve the third phase as accurate and as fast as possible. The first is the third phase MILP of the MP-DH algorithm, which we call the basic model. The second is a semi-anonymous employee scheduling problem with derived inter-department transfers modeled as a MILP that we call the semi-anonymous model. The semi-anonymous version reduces the number of employees for whom the schedules are optimized, and replaces the remaining employee shifts by a set of aggregated anonymous shifts. Once such a model is solved, the schedules of the selected employees are fixed and the algorithm moves on to solving another MILP where another set of employees must be scheduled. The hybrid heuristic starts by solving the basic model, then if, after a given time limit, the MILP optimality gap is higher than a given threshold, the resolution of the basic model is stopped and a semi-anonymous version is solved. This is done repeatedly until all employee schedules are optimized. The hybrid heuristic succeeded in reducing on average up to 87% of the third phase computational time while only loosing 4% in the solution quality. 
In the third part of the thesis, we tackle a different employee scheduling problem variant: the multi-job employee scheduling problem, where neither transfers nor under-coverage is allowed. Instead, anonymous shifts called open-shifts are used to cover any unavoidable under-coverage. The three main costs composing the objective function are: Over-coverage cost, open-shift usage cost, and minimum employees working hours violation penalty. A parallel large neighborhood search (LNS) metaheuristic for the multi-job employee scheduling problem is developed. Where a sub-scope denotes: a subset of the employees, a subset of the jobs and a continuous subset of the problem horizon. The LNS heuristic is defined by destroy and repair procedures. Our destroy procedure chooses sub-scopes coupled with a high cost in the objective function to be destroyed. When a solution sub-scope is destroyed, all shifts, occurring within the sub-scope horizon and worked by an employee belonging to the sub-scope, for one of the sub-scope jobs, are removed from the current solution schedule.Once the solution sub-scopes are destroyed, the repair operator tries to build an enhanced solution. Our proposed repair operator solves a MILP restricted to the destroyed sub-scopes. The parallel LNS destroy operator creates several disjoint sub-scopes, then each sub-scope is repaired in a different parallel thread. We compare the presented heuristic with the formal MILP solved within the commercial system WFC for Kronos Inc. Experimental results show that the parallel LNS algorithm can save up to an average of 80% in the computational time and 1.8% in the solution cost
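
    The destroy-and-repair loop lends itself to a short, hypothetical Python sketch; the callables below stand in for the problem-specific MILP components and are not the dissertation's actual code.

        from concurrent.futures import ThreadPoolExecutor

        def parallel_lns(solution, cost, select_subscopes, repair_subscope,
                         merge, iterations=100, workers=4):
            best = solution
            for _ in range(iterations):
                # Destroy: pick disjoint sub-scopes carrying high over-coverage,
                # open-shift, or violation cost, and drop their shifts.
                subscopes = select_subscopes(best, workers)
                # Repair: re-optimize each destroyed sub-scope in its own thread
                # (a restricted MILP in the actual heuristic).
                with ThreadPoolExecutor(max_workers=workers) as pool:
                    repaired = list(pool.map(lambda s: repair_subscope(best, s),
                                             subscopes))
                candidate = merge(best, repaired)   # reassemble a full schedule
                if cost(candidate) < cost(best):
                    best = candidate
            return best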

    Energy Awareness and Scheduling in Mobile Devices and High End Computing

    As energy demands rise due to growing economies and populations, there will be greater emphasis on sustainable supply, conservation, and efficient usage of this vital resource. Even at a smaller scale, the need to minimize energy consumption continues to be compelling in embedded, mobile, and server systems such as handheld devices, robots, spaceships, laptops, cluster servers, and sensors. This is due to the direct impact of constrained energy sources, such as battery size and weight, as well as cooling expenses in cluster-based systems to reduce heat dissipation. Energy management therefore plays a paramount role not only in hardware design but also in user-application, middleware and operating system design. At a higher level, datacenters are sprouting everywhere due to the exponential growth of Big Data in every aspect of human life; the buzzword these days is cloud computing. This dissertation focuses on techniques, specifically algorithmic ones, to scale down energy needs whenever the system performance can be relaxed. We examine the significance and relevance of this research and develop a methodology to study this phenomenon. Specifically, the research studies energy-aware resource reservation algorithms that satisfy both performance needs and energy constraints. Many energy management schemes focus on a single resource that is dedicated to real-time or non-real-time processing. Unfortunately, in many practical systems the combination of hard and soft real-time periodic tasks, aperiodic real-time tasks, interactive tasks and batch tasks must be supported, and each task may require access to multiple resources. Therefore, this research tackles the NP-hard problem of providing timely and simultaneous access to multiple resources through practical abstractions and near-optimal heuristics aided by cooperative scheduling. We provide an elegant EAS model that works across the spectrum and uses a run-profile based approach to scheduling. We apply this model to significant applications such as BLAT and the assembly of gene sequences in the bioinformatics domain. We also provide a simulation extending this model to cloud computing, answering "what if" scenario questions for consumers and operators of cloud resources regarding deadlines, single vs. distributed cluster use, and impact analysis of energy index and availability against revenue and ROI.
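
    The central idea of scaling down energy whenever performance can be relaxed can be illustrated with a tiny hypothetical Python example: given a task's run profile (cycles needed) and a deadline, choose the lowest DVFS frequency that still meets the deadline, since dynamic power grows superlinearly with frequency. The numbers and names are illustrative only.

        def pick_frequency(cycles, deadline, freqs):
            """Return the lowest frequency (Hz) whose run time meets the deadline."""
            for f in sorted(freqs):                # slowest candidate first
                if cycles / f <= deadline:         # run profile fits the deadline
                    return f
            return max(freqs)                      # otherwise run at full speed

        # Example: a 2e9-cycle task with a 1.5 s deadline and three DVFS states
        print(pick_frequency(2e9, 1.5, [1.0e9, 1.6e9, 2.4e9]))   # -> 1.6e9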

    On the development of a stochastic optimisation algorithm with capabilities for distributed computing

    In this thesis, we devise a new stochastic optimisation method (the cascade optimisation algorithm) by incorporating concepts from Markov processes whilst eliminating the inherent sequential nature that is the major deficit preventing the exploitation of advances in distributed computing infrastructures. The method introduces partitions and pools to store intermediate solutions and their corresponding objectives. A Markov process increases the population of the partitions and pools, and the population is distributed periodically following a certain external criterion. With the use of partitions and pools, multiple Markov processes can be launched simultaneously for different partitions and pools, which makes the cascade optimisation algorithm suitable for parallel and distributed computing environments. In addition, the method has the potential to integrate knowledge acquisition techniques (e.g., data mining and ontologies) to achieve effective knowledge-based decision making. Several features are extracted and studied in this thesis. The application problems involve both small-scale and large-scale optimisation problems. Comparisons with other stochastic optimisation methods show that the cascade optimisation algorithm converges to the optimal solutions found by other methods more quickly. The cascade optimisation algorithm is also studied in parallel and distributed computing environments in terms of the reduction in computation time.
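
    Based only on the description above, here is a hedged toy Python sketch of the partition-and-pool idea: independent Markov processes deposit intermediate solutions into their pools, and the population is periodically redistributed across partitions so good candidates seed other pools. The proposal move, schedule, and all names are invented for illustration.

        import random

        def markov_step(x, objective, rng, step=0.1):
            # Random-walk proposal, kept only if it does not worsen the objective.
            y = [xi + rng.uniform(-step, step) for xi in x]
            return y if objective(y) <= objective(x) else x

        def cascade(objective, dim, partitions=4, rounds=10, steps=50, seed=0):
            rng = random.Random(seed)
            pools = [[[rng.uniform(-5, 5) for _ in range(dim)]]
                     for _ in range(partitions)]
            for _ in range(rounds):
                for pool in pools:              # independent Markov processes
                    x = pool[-1]
                    for _ in range(steps):
                        x = markov_step(x, objective, rng)
                    pool.append(x)
                best = min((p[-1] for p in pools), key=objective)
                for pool in pools:              # periodic redistribution
                    pool.append(list(best))
            return min((p[-1] for p in pools), key=objective)

        print(cascade(lambda v: sum(t * t for t in v), dim=3))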