A SURVEY ON MACHINE SCHEDULING TECHNIQUES
ABSTRACT This paper surveys the methodologies and techniques implemented for different classes of scheduling problems in single machine, job shop and flow shop environments. Each author describes a different scenario and approach to minimising makespan, tardiness and other scheduling parameters, and implements their own algorithms and strategies to obtain results, whether positive or negative. This paper gives a clear starting point for future research work.
Implementation of a GRASP and Tabu Search Hybrid for Solving the Job Shop Production Scheduling Problem Minimising Total Weighted Tardiness
This work addresses a job shop problem minimising total weighted tardiness, a performance measure that accounts not only for the level of compliance with customers' due dates but also for the relative importance of each customer. The proposed solution method is a hybrid algorithm combining the GRASP metaheuristic, which has received little study for this problem (and is helpful for constructing an initial solution), with tabu search (which has obtained very good results for job shop problems) for the local search phase of the algorithm. The results obtained are compared with the genetic local search algorithm proposed by Essafi, Mati and Dauzère-Pérès (2008).
This document first presents the problem statement and its justification, followed by an explanation of the problem, the metaheuristic used, and the related literature on the problem and its proposed solution methods. It then states the objectives and scope of the work, presents the development and analysis of results, and finally offers some recommendations for future work.
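The hybrid described above can be sketched on a simplified single-machine version of the weighted-tardiness problem: GRASP builds a starting sequence by repeatedly sampling from a restricted candidate list, and tabu search then improves it with adjacent swaps guarded by a short-term tabu list. The job data, list sizes and tenure below are illustrative assumptions, not values from the paper.

```python
import random

# Hypothetical job data: (processing_time, due_date, weight)
JOBS = [(4, 6, 2), (3, 7, 1), (7, 9, 3), (2, 4, 2), (5, 12, 1)]

def weighted_tardiness(order):
    """Total weighted tardiness of a job sequence on one machine."""
    t, total = 0, 0
    for j in order:
        p, d, w = JOBS[j]
        t += p
        total += w * max(0, t - d)
    return total

def grasp_construct(alpha=0.3, rng=random):
    """Greedy-randomised construction: pick each next job at random
    from the best alpha-fraction of candidates (the RCL)."""
    remaining, order, t = list(range(len(JOBS))), [], 0
    while remaining:
        # score = weighted tardiness the job would incur if scheduled next
        scored = sorted(remaining,
                        key=lambda j: JOBS[j][2] * max(0, t + JOBS[j][0] - JOBS[j][1]))
        rcl = scored[:max(1, int(alpha * len(scored)))]
        j = rng.choice(rcl)
        order.append(j)
        remaining.remove(j)
        t += JOBS[j][0]
    return order

def tabu_search(order, iters=100, tenure=5):
    """Local search over adjacent swaps with a short-term tabu list."""
    best, best_cost = order[:], weighted_tardiness(order)
    cur, tabu = order[:], []
    for _ in range(iters):
        moves = []
        for i in range(len(cur) - 1):
            cand = cur[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            pair, cost = (cur[i], cur[i + 1]), weighted_tardiness(cand)
            # aspiration: a tabu move is allowed if it beats the global best
            if pair not in tabu or cost < best_cost:
                moves.append((cost, cand, pair))
        if not moves:
            break
        cost, cur, pair = min(moves, key=lambda m: m[0])
        tabu = (tabu + [pair])[-tenure:]
        if cost < best_cost:
            best, best_cost = cur[:], cost
    return best, best_cost

random.seed(0)
sol, cost = tabu_search(grasp_construct())
print(sol, cost)
```

In the full job shop setting the neighbourhood operates on operations along critical paths rather than adjacent positions, but the GRASP-then-tabu division of labour is the same.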
Automatic Design of Dispatching Rules for Job Shop Scheduling with Genetic Programming
Scheduling is an important planning activity in manufacturing systems that helps optimise the usage of scarce resources and improve customer satisfaction. In the job shop manufacturing environment, scheduling problems are challenging due to the complexity of production flows and practical requirements such as dynamic changes, uncertainty, multiple objectives, and multiple scheduling decisions. Job shop scheduling (JSS) is also very common in small manufacturing businesses and is considered one of the most popular research topics in this domain due to its potential to dramatically decrease costs and increase throughput.
Practitioners and researchers have applied different computational techniques, from fields such as operations research and computer science, to deal with JSS problems. Although optimisation methods usually show their dominance in the literature, applying optimisation techniques in practical situations is not straightforward because of the practical constraints and conditions in the shop. Dispatching rules are a very useful approach to dealing with these environments because they are easy to implement (by computers and shop floor operators) and can cope with dynamic changes. However, designing an effective dispatching rule is not a trivial task and requires extensive knowledge about the scheduling problem.
The overall goal of this thesis is to develop a genetic programming based hyper-heuristic (GPHH) approach for the automatic design of reusable and competitive dispatching rules in job shop scheduling environments. The thesis focuses on incorporating special features of JSS in the representations and evolutionary search mechanisms of genetic programming (GP) to help enhance the quality of the dispatching rules obtained.
This thesis shows that representations and evaluation schemes are important factors that significantly influence the performance of GP for evolving dispatching rules. The thesis demonstrates that evolved rules which are trained to adapt their decisions based on changes in the shop are better than conventional rules. Moreover, by applying a new evaluation scheme, the evolved rules can effectively learn from the mistakes made in previously completed schedules to construct better scheduling decisions. The GP method using the newly proposed evaluation scheme shows better performance than the GP method using the conventional scheme.
This thesis proposes a new multi-objective GPHH to evolve a Pareto front of non-dominated dispatching rules. Instead of evolving a single rule with assumed preferences over different objectives, the advantage of this GPHH method is to allow GP to evolve rules to handle multiple conflicting objectives simultaneously. The Pareto fronts obtained by the GPHH method can be used as an effective tool to help decision makers select appropriate rules based on their knowledge regarding possible trade-offs. The thesis shows that evolved rules can dominate well-known dispatching rules when a single objective and multiple objectives are considered. Also, the obtained Pareto fronts show that many evolved rules can lead to favourable trade-offs, which have not been explored in the literature.
This thesis tackles one of the most challenging issues in job shop scheduling, the interactions between different scheduling decisions. New GPHH methods have been proposed to help evolve scheduling policies containing multiple scheduling rules for multiple scheduling decisions. The two decisions examined in this thesis are sequencing and due date assignment. The experimental results show that the evolved scheduling rules are significantly better than scheduling policies in the literature. A cooperative coevolution approach has also been developed to reduce the complexity of evolving sophisticated scheduling policies. New evolutionary search mechanisms and customised genetic operations are proposed in this approach to improve the diversity of the obtained Pareto fronts.
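A dispatching rule, whether hand-designed or GP-evolved, is essentially a priority function over job attributes that is evaluated at each dispatch decision. The sketch below contrasts conventional rules (SPT, EDD, WSPT) with a composite expression standing in for a GP-evolved rule; the job data and the "evolved" expression are illustrative assumptions, not results from the thesis.

```python
# Hypothetical waiting jobs: (processing_time, due_date, weight)
QUEUE = [(5, 20, 1), (2, 9, 3), (8, 30, 2), (3, 12, 1)]

# Conventional dispatching rules as priority functions (lower = first)
SPT  = lambda p, d, w, t: p        # shortest processing time
EDD  = lambda p, d, w, t: d        # earliest due date
WSPT = lambda p, d, w, t: p / w    # weighted shortest processing time

# A GP-evolved rule is just another expression tree over the same
# terminals (here including the current time t for slack); this
# composite is an illustrative stand-in, not an actual evolved rule.
EVOLVED = lambda p, d, w, t: p / w + max(0, d - t - p)

def dispatch(queue, rule):
    """Greedy non-delay dispatch: at each decision point, score every
    waiting job with the rule and pick the lowest-priority value."""
    waiting, schedule, t = list(queue), [], 0
    while waiting:
        job = min(waiting, key=lambda j: rule(*j, t))
        waiting.remove(job)
        schedule.append(job)
        t += job[0]
    return schedule

print(dispatch(QUEUE, SPT))
print(dispatch(QUEUE, EVOLVED))
```

Because every rule shares this interface, GP can search the space of priority expressions directly, and a hyper-heuristic can evaluate each candidate rule simply by simulating dispatches with it.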
Personal mobile grids with a honeybee inspired resource scheduler
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The overall aim of the thesis has been to introduce Personal Mobile Grids (PM-Grids) as a novel paradigm in grid computing that scales grid infrastructures to mobile devices and extends grid entities to individual personal users. In this thesis, architectural designs as well as simulation models for PM-Grids are developed.
The core of any grid system is its resource scheduler. However, virtually all current conventional grid schedulers do not address the non-clairvoyant scheduling problem, where job information is not available before the end of execution. Therefore, this thesis proposes a honeybee inspired resource scheduling heuristic for PM-Grids (HoPe), a radical approach to grid resource scheduling that tackles this problem. A detailed design and implementation of HoPe, with a decentralised, self-managing and adaptive policy, are presented.
Among the other main contributions are a comprehensive taxonomy of grid systems as well as a detailed analysis of the honeybee colony and its nectar acquisition process (NAP), from the resource scheduling perspective, which have not been presented in any previous work, to the best of our knowledge.
PM-Grid designs and HoPe implementation were evaluated thoroughly through a strictly controlled empirical evaluation framework with a well-established heuristic in high throughput computing, the opportunistic scheduling heuristic (OSH), as a benchmark algorithm. Comparisons with optimal values and worst bounds are conducted to gain a clear insight into HoPe behaviour, in terms of stability, throughput, turnaround time and speedup, under different running conditions of number of jobs and grid scales.
Experimental results demonstrate the superiority of HoPe, which successfully maintained optimum stability and throughput in more than 95% of the experiments and performed three times better than the OSH under extremely heavy loads. Regarding turnaround time and speedup, HoPe effectively achieved less than 50% of the turnaround time incurred by the OSH, while doubling its speedup in more than 60% of the experiments.
These results indicate the potential of both PM-Grids and HoPe in realising futuristic grid visions. Therefore, deploying PM-Grids in real-life scenarios and utilising HoPe in other parallel processing and high throughput computing systems are recommended.
Effective and efficient estimation of distribution algorithms for permutation and scheduling problems.
Estimation of Distribution Algorithms (EDAs) are a branch of evolutionary computation that learns a probabilistic model of good solutions. Probabilistic models are used to represent relationships between solution variables, which may give useful, human-understandable insights into real-world problems. Also, developing an effective probabilistic model has been shown to significantly reduce the function evaluations needed to reach good solutions. This is also useful for real-world problems because their representations are often complex, needing more computation to arrive at good solutions. In particular, many real-world problems are naturally represented as permutations and have expensive evaluation functions. EDAs can, however, be computationally expensive when models are too complex. There has therefore been much recent work on developing suitable EDAs for permutation representations. EDAs can now produce state-of-the-art performance on some permutation benchmark problems. However, models are still complex and computationally expensive, making them hard to apply to real-world problems. This study investigates some limitations of EDAs in solving permutation and scheduling problems. The focus of this thesis is on addressing redundancies in the random key representation, preserving diversity in EDAs, simplifying the complexity attributed to the use of multiple local improvement procedures, and transferring knowledge from solving a benchmark project scheduling problem to a similar real-world problem. In this thesis, we achieve state-of-the-art performance on the Permutation Flowshop Scheduling Problem benchmarks while significantly reducing both the computational effort required to build the probabilistic model and the number of function evaluations. We also achieve competitive results on project scheduling benchmarks. Methods adapted for solving a real-world project scheduling problem deliver significant improvements.
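The random key representation mentioned above encodes a permutation as a real vector whose argsort yields the permutation, which makes continuous probabilistic models applicable but introduces redundancy (many key vectors decode to the same permutation). The sketch below shows the decoding step and a minimal univariate Gaussian EDA over random keys; it is a toy illustration under assumed parameters, not the thesis's model (real permutation EDAs use more structured models such as Mallows or edge-histogram models).

```python
import random

def decode(keys):
    """Random-key decoding: the permutation is the argsort of the keys.
    Many key vectors map to the same permutation - the redundancy
    the thesis addresses."""
    return sorted(range(len(keys)), key=keys.__getitem__)

def eda(fitness, n, pop=50, elite=10, gens=30, rng=random):
    """Minimal univariate Gaussian EDA over random-key vectors."""
    mu, sigma = [0.5] * n, [0.3] * n
    best, best_f = None, float("inf")
    for _ in range(gens):
        # sample candidate key vectors from the current model
        cand = [[rng.gauss(mu[i], sigma[i]) for i in range(n)]
                for _ in range(pop)]
        cand.sort(key=lambda k: fitness(decode(k)))
        top_f = fitness(decode(cand[0]))
        if top_f < best_f:
            best, best_f = decode(cand[0]), top_f
        # refit the model to the elite samples
        for i in range(n):
            vals = [k[i] for k in cand[:elite]]
            m = sum(vals) / elite
            mu[i] = m
            sigma[i] = max(0.05, (sum((v - m) ** 2 for v in vals) / elite) ** 0.5)
    return best, best_f

# Toy fitness: displacement of the decoded permutation from the identity
random.seed(1)
perm, f = eda(lambda p: sum(abs(i - v) for i, v in enumerate(p)), n=6)
print(perm, f)
```

In a flowshop setting the fitness function would instead evaluate the makespan of the decoded job sequence; the model-building and sampling loop is unchanged.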
How to Make the Most Productive Intervention in a Complex Economic System
Information about supply and demand propagates through supply chains in a queueing network with people and computers as batch information processors. As each batch processor delays propagation of information whilst pursuing optimal local decisions, the effect is delay and distortion of the information that is used to commit resources to actions in the supply chain. This thesis investigates the effect of delay and imperfect information as a source of error, to establish the case for a change in research focus from optimal exploitation of physical constraints to optimal exploitation of information. In the context of real-world supply chains, the thesis asks "How does one make the most productive intervention in a complex economic system?" and pursues a meta-intervention which perpetually minimises the discovered error term. Evidence from the literature indicates that agent-based modelling permits real-time peer-to-peer communication and distributed optimisation. Based on the literature, the research project designs and develops an agent-based model which operates in real time without batch processes and can perform incremental multi-objective optimisation under realistic (chronologically progressive) conditions for decision making. The agent-based model is then used to investigate two real-world supply chains as case studies, which reveals a significant improvement in profitability and order fulfilment. The thesis concludes that agent-based modelling is a very promising direction for "making the most productive intervention" as it reduces delay to a minimum. Finally, it recommends that continuous improvement of decision-making methods is a role better suited to humans, rather than operational decision making, where computers cope much better with the high amount of detailed information.
Holistic, data-driven, service and supply chain optimisation: linked optimisation.
The intensity of competition and technological advancement in the business environment has made companies collaborate and cooperate as a means of survival. This creates a chain of companies and business components with unified business objectives. However, managing the decision-making process (such as scheduling, ordering, delivering and allocating) at the various business components while maintaining a holistic objective is a huge business challenge, as these operations are complex and dynamic. This is because the overall chain of business processes is widely distributed across all the supply chain participants; therefore, no individual collaborator has a complete overview of the processes. Increasingly, such decisions are automated and are strongly supported by optimisation algorithms - manufacturing optimisation, B2B ordering, financial trading, transportation scheduling and allocation. However, most of these algorithms do not incorporate the complexity associated with interacting decision-making systems like supply chains. It is well known that decisions made at one point in a supply chain can have significant consequences that ripple through linked production and transportation systems. Recently, global shocks to supply chains (COVID-19, climate change, the blockage of the Suez Canal) have demonstrated the importance of these interdependencies, and the need to create supply chains that are more resilient and have significantly reduced impact on the environment. Such interacting decision-making systems need to be considered through an optimisation process. However, the interactions between such decision-making systems are not modelled. We therefore believe that modelling such interactions is an opportunity to provide computational extensions to current optimisation paradigms. This research study aims to develop a general framework for formulating and solving holistic, data-driven optimisation problems in service and supply chains.
This research achieved this aim and contributes to scholarship by firstly considering the complexities of supply chain problems from a linked problem perspective. This leads to developing a formalism for characterising linked optimisation problems as a model for supply chains. Secondly, the research adopts a method for creating a linked optimisation problem benchmark by linking existing classical benchmark sets. This involves using a mix of classical optimisation problems, typically relating to supply chain decision problems, to describe different modes of linkage in linked optimisation problems. Thirdly, several techniques for linking fragmented supply chain data have been proposed in the literature to identify data relationships; this thesis explores some of these techniques and combines them in specific ways to improve the data discovery process. Lastly, many state-of-the-art algorithms have been explored in the literature and used to tackle supply chain problems. This research therefore investigates the resilient state-of-the-art optimisation algorithms presented in the literature, and then designs suitable algorithmic approaches, inspired by the existing algorithms and the nature of the problem linkages, to address different problem linkages in supply chains. Considering the research findings and future perspectives, the study demonstrates the suitability of algorithms to different linked structures involving two sub-problems, which suggests further investigation of issues such as the suitability of algorithms on more complex structures, benchmark methodologies, holistic goals and evaluation, process mining, game theory and dependency analysis.