    Model approximation for batch flow shop scheduling with fixed batch sizes

    Batch flow shops model systems that process a variety of job types using a fixed infrastructure. This model has applications in several areas including chemical manufacturing, building construction, and assembly lines. Since the throughput of such systems depends, often strongly, on the sequence in which they produce various products, scheduling these systems becomes a problem with very practical consequences. Nevertheless, optimally scheduling these systems is NP-hard. This paper demonstrates that batch flow shops can be represented as a particular kind of heap model in the max-plus algebra. These models are shown to belong to a special class of linear systems that are globally stable over finite input sequences, indicating that information about past states is forgotten in finite time. This fact motivates a new solution method to the scheduling problem by optimally solving scheduling problems on finite-memory approximations of the original system. Error in solutions for these “t-step” approximations is bounded and monotonically improving with increasing model complexity, eventually becoming zero when the complexity of the approximation reaches the complexity of the original system.

    Funding: United States. Department of Homeland Security. Science and Technology Directorate (Contract HSHQDC-13-C-B0052); United States. Air Force Research Laboratory (Contract FA8750-09-2-0219); ATK Thiokol Inc.
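
    The max-plus machinery the paper builds on can be sketched in a few lines: in the max-plus algebra, "addition" is max and "multiplication" is +, so composing the completion-time maps of successive batches is a matrix-vector product. The Python sketch below uses a hypothetical two-machine flow shop with invented processing times, not data from the paper:

        import numpy as np

        def maxplus_matvec(A, x):
            # Max-plus product: (A (x) x)_i = max_j (A[i, j] + x[j]);
            # -inf plays the role of the max-plus "zero".
            return np.max(A + x[None, :], axis=1)

        # Hypothetical 2-machine flow shop; the state vector holds each
        # machine's release time. M[t] maps the state before a type-t batch
        # to the state after it (processing times invented for illustration).
        M = {
            "a": np.array([[3.0, -np.inf],
                           [5.0, 2.0]]),   # type a: 3 units on m1, then 2 on m2
            "b": np.array([[1.0, -np.inf],
                           [4.0, 3.0]]),   # type b: 1 unit on m1, then 3 on m2
        }

        def makespan(sequence, x0=np.zeros(2)):
            x = x0
            for t in sequence:
                x = maxplus_matvec(M[t], x)
            return float(np.max(x))

        print(makespan(["a", "b", "a"]))   # 10.0; reordering changes the result

    The "t-step" approximation described above truncates how much of this product history can influence the current state, which is what makes the solution error bounded.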

    Theoretical and Computational Research in Various Scheduling Models

    Nine manuscripts were published in this Special Issue on “Theoretical and Computational Research in Various Scheduling Models, 2021” of the MDPI Mathematics journal, covering a wide range of topics connected to the theory and applications of various scheduling models and their extensions/generalizations. These topics include a road network maintenance project, cost reduction of subcontracted resources, a variant of the relocation problem, a network of activities with generally distributed durations modelled through a Markov chain, an idea on how to improve the return loading rate problem by integrating the sub-tour reversal approach with the method of the theory of constraints, an extended solution method for optimizing the bi-objective no-idle permutation flowshop scheduling problem, the burn-in (B/I) procedure, the Pareto-scheduling problem with two competing agents, and three preemptive Pareto-scheduling problems with two competing agents, among others. We hope that the book will be of interest to those working in the area of various scheduling problems and provide a bridge to facilitate the interaction between researchers and practitioners in scheduling questions. Although discrete mathematics is a common method for solving scheduling problems, its further development is limited by the lack of general principles, which poses a major challenge in this research field.

    Datacenter management for on-site intermittent and uncertain renewable energy sources

    In recent years, information and communication technologies (ICT) have become a major energy consumer, with the associated harmful ecological consequences. Indeed, the emergence of Cloud computing and massive Internet companies has increased the importance and number of datacenters around the world. In order to mitigate economical and ecological cost, powering datacenters with renewable energy sources (RES) has begun to appear as a sustainable solution. Some of the commonly used RES, such as solar and wind energies, directly depend on weather conditions. Hence they are both intermittent and partly uncertain. Batteries or other energy storage devices (ESD) are often considered to relieve these issues, but they result in additional energy losses and are too costly to be used alone without further integration. The power consumption of a datacenter is closely tied to the computing resource usage, which in turn depends on its workload and on the algorithms that schedule it. To use RES as efficiently as possible while preserving the quality of service of a datacenter, a coordinated management of computing resources, electrical sources and storage is required. A wide variety of datacenters exists, each with different hardware, workload and purpose. Similarly, each electrical infrastructure is modeled and managed uniquely, depending on the kind of RES used, ESD technologies and operating objectives (cost or environmental impact). Some existing works successfully address this problem by considering a specific pair of electrical and computing models. However, because of this combined diversity, the existing approaches cannot be extrapolated to other infrastructures. This thesis explores novel ways to deal with this coordination problem. A first contribution revisits the batch task scheduling problem by introducing an abstraction of the power sources. A scheduling algorithm is proposed that takes the preferences of the electrical sources into account while being designed to be independent of the type of sources and of the goal of the electrical infrastructure (cost, environmental impact, or a mix of both). A second contribution addresses the joint power planning coordination problem in a totally infrastructure-agnostic way. The datacenter computing resources and workload management is considered as a black box implementing an algorithm for scheduling under a variable power constraint. The same goes for the electrical sources and storage management system, which acts as a source commitment optimization algorithm. A cooperative multiobjective power planning optimization, based on a multi-objective evolutionary algorithm (MOEA), dialogues with the two black boxes to find the best trade-offs between electrical and computing internal objectives. Finally, a third contribution focuses on RES production uncertainties in a more specific infrastructure. Based on a Markov Decision Process (MDP) formulation, the structure of the underlying decision problem is studied. For several variants of the problem, tractable methods are proposed to find optimal policies or bounded approximations thereof.
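
    The third contribution's MDP viewpoint can be illustrated with a toy model (the structure and every number below are our assumptions, not the thesis's model): the state is the battery level, the action is how much workload to power in a slot, the random variable is renewable production, and backward induction yields the optimal value function.

        # Toy finite-horizon MDP: maximize work served minus grid energy bought.
        LEVELS = range(5)                  # battery state of charge (energy units)
        ACTIONS = range(3)                 # workload power drawn per slot
        PROD = {0: 0.3, 1: 0.4, 2: 0.3}    # P[renewable production = w]
        HORIZON = 24
        CAP = max(LEVELS)
        GRID_PRICE = 2.0                   # cost per unit bought from the grid

        def step(soc, action, prod):
            surplus = prod - action                 # renewable minus IT demand
            grid = max(-(soc + surplus), 0)         # bought when the battery runs dry
            new_soc = min(max(soc + surplus, 0), CAP)
            return new_soc, grid

        def solve():
            V = {s: 0.0 for s in LEVELS}            # terminal value
            for _ in range(HORIZON):                # backward induction over slots
                V = {s: max(sum(p * (a - GRID_PRICE * step(s, a, w)[1]
                                     + V[step(s, a, w)[0]])
                                for w, p in PROD.items())
                            for a in ACTIONS)
                     for s in LEVELS}
            return V

        print(solve())                              # expected value per start level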

    Useful Structures and How to Find Them: Inapproximability and Approximations for Various Variants of the Parallel Task Scheduling Problem

    In this thesis, we consider the Parallel Task Scheduling problem and several variants. This problem and its variations have diverse applications in theory and practice; for example, they appear as sub-problems in higher dimensional problems. In the Parallel Task Scheduling problem, we are given a set of jobs and a set of identical machines. Each job is a parallel task; i.e., it needs a fixed number of identical machines to be processed. A schedule assigns to each job a set of machines it is processed on and a starting time. It is feasible if at each point in time each machine processes at most one job. In a variant of this problem, called Strip Packing, the identical machines are arranged in a total order, and jobs can only allocate neighboring machines with regard to this total order; in this case, we also speak of Contiguous Parallel Task Scheduling. In another variant, called Single Resource Constraint Scheduling, we are given an additional constraint on how many jobs can be processed at the same time. For these variants of the Parallel Task Scheduling problem, we consider an extension where the set of machines is grouped into identical clusters; when scheduling a job, we are allowed to allocate machines from only one cluster to process it. For all these problems, we close gaps between inapproximability or hardness results and the best possible algorithms. For Parallel Task Scheduling, we prove that it is strongly NP-hard if we are given precisely 4 machines. Previously, it was known to be strongly NP-hard for at least 5 machines, while an (exact) pseudo-polynomial time algorithm exists for up to 3 machines. For Strip Packing, we present a pseudo-polynomial time algorithm with approximation ratio (5/4 + ε) and prove that there is no pseudo-polynomial time approximation with ratio less than 5/4 unless P = NP. Concerning Single Resource Constraint Scheduling, it is not possible to find an algorithm with ratio smaller than 3/2 unless P = NP, and we present an algorithm with ratio (3/2 + ε). For the extensions to identical clusters, there can be no approximation algorithm with a ratio smaller than 2 unless P = NP. For the extensions of Strip Packing and Parallel Task Scheduling, 2-approximations exist already, but they have a huge worst-case running time. We present 2-approximations with linear running time for the extensions of Strip Packing, Parallel Task Scheduling, and Single Resource Constraint Scheduling for the case that at least three clusters are present, and we greatly improve the running time for two clusters. Finally, we consider three variants of Scheduling on Identical Machines with setup times. We present an EPTAS for each of them, which is the best one can hope for unless P = NP, since these problems are strongly NP-complete.
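
    The feasibility notion defined above is easy to state in code. The sketch below (job data hypothetical) checks that no machine is assigned two jobs whose processing intervals overlap; the contiguity requirement of Strip Packing would additionally demand that each job's machine set is an interval in the total order.

        from collections import defaultdict

        jobs = {"j1": 3, "j2": 2, "j3": 4}           # job -> processing time
        schedule = {                                 # job -> (start, machines used)
            "j1": (0, {0, 1}),
            "j2": (3, {1}),
            "j3": (0, {2, 3}),
        }

        def is_feasible(jobs, schedule):
            intervals = defaultdict(list)            # machine -> [(start, end)]
            for job, (start, machines) in schedule.items():
                for m in machines:
                    intervals[m].append((start, start + jobs[job]))
            for ivs in intervals.values():
                ivs.sort()
                for (_, e1), (s2, _) in zip(ivs, ivs[1:]):
                    if s2 < e1:                      # two jobs overlap on a machine
                        return False
            return True

        print(is_feasible(jobs, schedule))           # True for this toy instance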

    A Polyhedral Study of Mixed 0-1 Set

    We consider a variant of the well-known single node fixed charge network flow set with constant capacities. This set arises from the relaxation of more general mixed integer sets such as lot-sizing problems with multiple suppliers. We provide a complete polyhedral characterization of the convex hull of the given set.
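
    For concreteness, one common statement of a constant-capacity single node fixed charge flow set is the following (our generic notation, hedged as an illustration; the exact variant studied may differ in details):

        % x_j: flow on arc j, allowed only when its fixed charge y_j is paid;
        % C: the constant arc capacity, b: the node's capacity bound.
        \[
          X = \Bigl\{ (x, y) \in \mathbb{R}_+^n \times \{0,1\}^n :
              \sum_{j=1}^{n} x_j \le b,\;
              x_j \le C\, y_j \ (j = 1, \dots, n) \Bigr\}
        \]

    Flow cover inequalities are the classical family of valid inequalities for sets of this kind, and a complete description of the convex hull strengthens relaxations of the lot-sizing models mentioned above.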

    Extending scheduling algorithms to minimise the impact of bounded uncertainty using interval programming

    This research proposes a new methodology for extending algorithms to accept interval-based uncertain parameters. The methodology is applied to scheduling algorithms, including heuristic and meta-heuristic algorithms, and produces optimal results with higher accuracy. The research outcomes are effective for decision-making processes using uncertain or predicted data.
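
    One way to read "extending an algorithm to accept interval parameters" is to lift the arithmetic the algorithm uses to intervals and fix an ordering rule. The sketch below (the representation and the midpoint ordering are our assumptions, not the paper's method) does this for a shortest-processing-time list heuristic:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Interval:
            lo: float
            hi: float
            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)
            @property
            def mid(self):
                return (self.lo + self.hi) / 2

        def spt_interval(durations):
            """SPT list scheduling with interval durations, ordered by midpoint;
            returns the (interval-valued) completion times in schedule order."""
            clock, completions = Interval(0.0, 0.0), []
            for p in sorted(durations, key=lambda d: d.mid):
                clock = clock + p
                completions.append(clock)
            return completions

        print(spt_interval([Interval(2, 4), Interval(1, 2), Interval(3, 5)]))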

    A new hybrid meta-heuristic algorithm for solving single machine scheduling problems

    A dissertation submitted in partial fulfilment of the degree of Master of Science in Engineering (Electrical) (50/50) in the Faculty of Engineering and the Built Environment, Department of Electrical and Information Engineering, May 2017. Numerous applications in a wide variety of fields have resulted in a rich history of research into optimisation for scheduling. Although it is a fundamental form of the problem, the single machine scheduling problem with two or more objectives is known to be NP-hard. For this reason we consider the single machine problem a good test bed for solution algorithms. While there is a plethora of research into various aspects of scheduling problems, little has been done in evaluating the performance of the Simulated Annealing algorithm for the fundamental problem, or using it in combination with other techniques. Specifically, this has not been done for minimising total weighted earliness and tardiness, which is the optimisation objective of this work. If we consider a mere ten jobs for scheduling, this results in over 3.6 million possible solution schedules. It is thus of definite practical necessity to reduce the search space in order to find an optimal or acceptable suboptimal solution in a shorter time, especially when scaling up the problem size. This is of particular importance in the application area of packet scheduling in wireless communications networks, where the tolerance for computational delays is very low. The main contribution of this work is to investigate the hypothesis that inserting a step of pre-sampling by Markov Chain Monte Carlo methods before running the Simulated Annealing algorithm on the pruned search space can result in overall reduced running times. The search space is divided into a number of sections and Metropolis-Hastings Markov Chain Monte Carlo is performed over the sections in order to reduce the search space for Simulated Annealing by a factor of 20 to 100. Trade-offs are found between the run time and number of sections of the pre-sampling algorithm, and the run time of Simulated Annealing, for minimising the percentage deviation of the final result from the optimal solution cost. Algorithm performance is determined both by computational complexity and by the quality of the solution (i.e., the percentage deviation from the optimal). We find that the running time can be reduced by a factor of 4.5 while keeping within a 2% deviation from the optimal, as compared to the basic Simulated Annealing algorithm on the full search space. More importantly, we are able to reduce the complexity of finding the optimal from O(n·n!) for a complete search, to O(n·N_S) for Simulated Annealing, to O(n(N_M·r + N_S) + m) for the input variables: n jobs, N_S SA iterations, N_M Metropolis-Hastings iterations, r inner samples and m sections.
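
    A compact sketch of the proposed pipeline (instance data and tuning constants below are invented for illustration): Metropolis-Hastings pre-sampling scores "sections" of the permutation space, here defined by which job is sequenced first, and Simulated Annealing then searches only the best-scoring sections.

        import math
        import random

        random.seed(0)
        N = 10
        proc = [random.randint(1, 9) for _ in range(N)]     # processing times
        due = [random.randint(5, 40) for _ in range(N)]     # due dates
        w_e, w_t = [1] * N, [2] * N                         # earliness/tardiness weights

        def cost(seq):
            """Total weighted earliness plus tardiness of a job sequence."""
            t, total = 0, 0
            for j in seq:
                t += proc[j]
                total += w_e[j] * max(due[j] - t, 0) + w_t[j] * max(t - due[j], 0)
            return total

        def swap(seq):
            i, k = random.sample(range(len(seq)), 2)
            s = list(seq)
            s[i], s[k] = s[k], s[i]
            return s

        def mh_score(first, iters=200, temp=5.0):
            """Metropolis-Hastings walk inside the section of schedules starting
            with job `first`; returns the best cost sampled in that section."""
            seq = [first] + [j for j in range(N) if j != first]
            cur = best = cost(seq)
            for _ in range(iters):
                cand = [first] + swap(seq[1:])
                d = cost(cand) - cur
                if d < 0 or random.random() < math.exp(-d / temp):
                    seq, cur = cand, cur + d
                    best = min(best, cur)
            return best

        def anneal(first, iters=2000, t0=10.0):
            """Simulated Annealing restricted to the section starting with `first`."""
            seq = [first] + [j for j in range(N) if j != first]
            cur = best = cost(seq)
            for it in range(iters):
                temp = t0 * (1 - it / iters) + 1e-9          # linear cooling
                cand = [first] + swap(seq[1:])
                d = cost(cand) - cur
                if d < 0 or random.random() < math.exp(-d / temp):
                    seq, cur = cand, cur + d
                    best = min(best, cur)
            return best

        sections = sorted(range(N), key=mh_score)[:2]   # keep the 2 best sections
        print(min(anneal(s) for s in sections))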

    Integrated Production-Inventory Models in Steel Mills Operating in a Fuzzy Environment

    Despite the paramount importance of the steel rolling industry and its vital contributions to a nation’s economic growth and pace of development, production planning in this industry has not received as much attention as in other industries. The work presented in this thesis tackles the master production scheduling (MPS) problem encountered frequently in steel rolling mills producing reinforced steel bars of different grades and dimensions. At first, the production planning problem is dealt with under static demand conditions and is formulated as a mixed integer bilinear program (MIBLP), where the objective of this deterministic model is to provide insights into the combined effect of several interrelated factors, such as batch production, scrap rate, complex setup time structure, overtime, backlogging and product substitution, on the planning decisions. Typically, MIBLPs are not readily solvable using off-the-shelf optimization packages, necessitating the development of specifically tailored solution algorithms that can efficiently handle this class of models. The classical linearization approaches are first discussed and applied to the model at hand, and then a hybrid linearization-Benders decomposition technique is developed in order to separate the complicating variables from the non-complicating ones. As a third alternative, a modified Branch-and-Bound (B&B) algorithm is proposed where the branching, bounding and fathoming criteria differ from those of classical B&B algorithms previously established in the literature. Numerical experiments have shown that the proposed B&B algorithm outperforms the other two approaches for larger problem instances, with savings in computational time amounting to 48%. The second part of this thesis extends the previous analysis to incorporate internal as well as external sources of uncertainty, associated with end customers’ demand and production capacity, into the planning decisions. In such situations, implementing the model on a rolling horizon basis is a common business practice, but it requires the repetitive solution of the model at the beginning of each time period. As such, viable approximations that result in a tractable number of binary and/or integer variables and generate only exact schedules are developed. Computational experiments suggest that a fair compromise between the quality of the solutions and substantial computational time savings is achieved via these approximate models. The dynamic nature of the operating environment can also be captured using fuzzy set theory (FST). The use of FST allows for the incorporation of the decision maker’s subjective judgment in mathematical models through the flexible mathematical programming (FMP) and possibilistic programming (PP) approaches. In this work, both approaches are combined: the volatility in demand is reflected by a flexible constraint expressed by a fuzzy set having a triangular membership function, and the production capacity is expressed as a triangular fuzzy number. Numerical analysis illustrates the economic benefits obtained from using the fuzzy approach as compared to its deterministic counterpart.
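
    The classical linearization approaches mentioned above can be illustrated on a single bilinear term (a generic textbook statement, not the thesis's exact model): a product w = x·y of a continuous variable x in [0, U] and a binary variable y is replaced by four linear constraints.

        % Exact linearization of w = x*y when y is binary and 0 <= x <= U:
        \begin{align*}
          w &\le U\,y,           \\
          w &\le x,              \\
          w &\ge x - U\,(1 - y), \\
          w &\ge 0.
        \end{align*}

    For binary y these constraints force w = x·y exactly; for two continuous variables, the same construction yields the McCormick relaxation rather than an exact reformulation.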

    Multiobjective Simulation Optimization Using Enhanced Evolutionary Algorithm Approaches

    In today's competitive business environment, a firm's ability to make the correct, critical decisions can be translated into a great competitive advantage. Most of these critical real-world decisions involve the optimization not only of multiple objectives simultaneously, but also of conflicting objectives, where improving one objective may degrade the performance of one or more of the others. Traditional approaches for solving multiobjective optimization problems typically try to scalarize the multiple objectives into a single objective. This transforms the original multiobjective problem formulation into a single objective optimization problem with a single solution. However, the drawbacks of these traditional approaches have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than only a single solution. The problem becomes much more complicated in stochastic environments, where the objectives take on uncertain (or "noisy") values due to random influences within the system being optimized, which is the case in real-world environments. Moreover, in stochastic environments, a solution approach should be sufficiently robust and/or capable of handling the uncertainty of the objective values. This makes the development of effective solution techniques that generate Pareto optimal solutions within these problem environments even more challenging than in their deterministic counterparts. Furthermore, many real-world problems involve complicated, black-box objective functions, making a large number of solution evaluations computationally and/or financially prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate possible solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial. This research proposes two new multiobjective evolutionary algorithms (MOEAs), called fast Pareto genetic algorithm (FPGA) and stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in terms of converging fast to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the Pareto optimal front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments. SPGA uses a solution ranking strategy based on these new concepts. Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. The results show that both FPGA and SPGA outperform the improved nondominated sorting genetic algorithm (NSGA-II), a widely considered benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally expensive simulation modeling applications.
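
    The primitive underneath this kind of Pareto ranking is the dominance test, shown below in its generic minimization form (the new stochastic dominance concepts of SPGA refine this test for noisy objective values):

        def dominates(a, b):
            """a dominates b iff a is no worse in every objective
            and strictly better in at least one (minimization)."""
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def nondominated_front(points):
            """Return the points not dominated by any other point."""
            return [p for p in points
                    if not any(dominates(q, p) for q in points if q is not p)]

        pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 3.0), (4.0, 1.0)]
        print(nondominated_front(pts))   # (3.0, 3.0) is dominated by (2.0, 3.0)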