Job sequencing in flexible production systems
Objectives and method of study: One objective of this work is to study a complex job sequencing problem in flexible production systems, known in the literature as the Flexible Job Shop Scheduling Problem (FJSP), and to propose an optimization algorithm to solve it. The algorithm is based on a hybrid ALNS (Adaptive Large Neighborhood Search) scheme which, in certain iterations, calls the CPLEX branch-and-bound to solve the FJSP, using a model proposed by Vahid Roshanaei [47]. The second objective is the study of a job sequencing problem present at a local brewing company. Since the characteristics and constraints of the beer production problem differ from those of the classic FJSP, an optimization algorithm of GRASP type (Greedy Randomized Adaptive Search Procedure) is developed, which will serve as the starting point for a future implementation at the company. Contributions and conclusions: The ALNS proposed for solving the FJSP proved efficient on a large number of instances taken from the literature, reaching optimal solutions for more than half of the instances whose optimum has been reported in the literature. To improve the quality of the (non-optimal) solutions generated by the proposed algorithm, one can vary the algorithm's parameters or add another kind of reactivity so that it adjusts the parameters automatically. As for the case study at the brewing company, two real instances were provided together with the solution implemented in the plant; this was compared with the solution reported by the proposed GRASP, and it was observed that the GRASP solution yields savings of up to 28% (6 days) with respect to the production time required by the solution implemented by the company.
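The adaptive large neighborhood search scheme described above can be sketched roughly as follows. This is an illustrative skeleton under assumed names (`destroy_ops`, `repair_ops`, `cost` are hypothetical), not the thesis's algorithm; in the hybrid variant, some iterations would hand the current solution to an exact solver such as CPLEX instead of a heuristic repair operator.

```python
import random

def alns(initial, destroy_ops, repair_ops, cost, iters=200, seed=0):
    """Minimal ALNS skeleton (illustrative sketch, not the thesis method).

    Operator weights are adapted by rewarding the destroy/repair pair
    that produced an improving solution; a hybrid variant could
    periodically call an exact solver (e.g. CPLEX) as a repair step.
    """
    rng = random.Random(seed)
    best = current = initial
    w_d = [1.0] * len(destroy_ops)   # adaptive destroy-operator weights
    w_r = [1.0] * len(repair_ops)    # adaptive repair-operator weights
    for _ in range(iters):
        i = rng.choices(range(len(destroy_ops)), weights=w_d)[0]
        j = rng.choices(range(len(repair_ops)), weights=w_r)[0]
        candidate = repair_ops[j](destroy_ops[i](current, rng), rng)
        if cost(candidate) < cost(current):  # accept only improving moves
            current = candidate
            w_d[i] += 1.0                    # reward the operators used
            w_r[j] += 1.0
            if cost(current) < cost(best):
                best = current
    return best
```

A toy usage on single-machine total completion time would supply one destroy operator that removes a random job and one repair operator that reinserts it at the cheapest position.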
Design and development of efficient planning structures through simulation and optimization techniques applicable to complex production environments
This dissertation focuses on non-permutation scheduling problems in flow shop production settings. These problems, which belong to Production Planning and Control systems, are central to the decision-making processes of organizations or firms producing manufactured goods. A first look into these problems requires a thorough review of the scientific literature on non-permutation flow shop (NPFS) problems. This review provides a background on the issue and defines precisely the contribution of this thesis to the literature. A novel approach to NPFS problems is to study the structure of the solutions, comparing it to the corresponding structure of permutation flow shop (PFS) problems. In this light, we study NPFS problems where makespan is minimized under a planning technique involving lot streaming. This technique modifies the regular scheduling problem by adding new decision variables related to production lot sizing. From the implementation of lot streaming on these problems we obtain new results. The main conclusion is that the makespans of NPFS and PFS problems are quite similar, although NPFS yields a slightly better makespan for larger instances. The computational effort required to solve NPFS problems is, however, much larger than that required for PFS ones.
Building on these results, we develop a new approach to the analysis of solutions to NPFS and PFS problems. We center on the two-job case and on the concept of the critical path (the set of activities that defines the makespan). This allows a non-parametric characterization of the solutions, freeing them from dependence on particular parameters. We analyze a family of properties that yield dominance criteria for the comparison between NPFS and PFS solutions, reducing, in general, the number of feasible solutions. In addition, this non-parametric method allows the design of novel computational experimental frameworks, yielding new insights on the relation between NPFS and PFS solutions in parametrically defined scenarios. To assess these hypotheses, we obtain, via mathematical programming, a set of parametric results tested in experiments that confirm the aforementioned hypotheses.
Fil: Rossit, Daniel Alejandro. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Bahía Blanca. Instituto de Matemática Bahía Blanca. Universidad Nacional del Sur. Departamento de Matemática. Instituto de Matemática Bahía Blanca; Argentina
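For illustration, the completion-time recursion that underlies this kind of critical-path analysis can be sketched as follows. This is a generic flow shop sketch, not the thesis's model: per-machine job orders allow non-permutation schedules, while identical orders on every machine give a permutation one.

```python
def makespan(proc, orders):
    """Makespan of a semi-active flow shop schedule.

    proc[j][m] : processing time of job j on machine m
    orders[m]  : job sequence on machine m; identical sequences on all
                 machines give a permutation (PFS) schedule, differing
                 sequences a non-permutation (NPFS) one.
    Recursion:  C[j][m] = max(C[j][m-1], C[prev job on m][m]) + proc[j][m]
    """
    n_machines = len(orders)
    C = {}
    for m in range(n_machines):
        prev_done = 0  # completion time of the previous job on machine m
        for j in orders[m]:
            ready = C.get((j, m - 1), 0)  # job j has left machine m-1
            C[(j, m)] = max(ready, prev_done) + proc[j][m]
            prev_done = C[(j, m)]
    return max(C[(j, n_machines - 1)] for j in orders[-1])
```

With two jobs and three machines, evaluating both permutation orders and a mixed per-machine order shows how the critical path, and hence the makespan, changes with the sequencing decision.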
A study of the permutation flow shop problem, with a proposed solution using the ant colony metaheuristic
Advisor: Profª. Drª. Neida Maria Patias Volpi. Dissertation (master's) - Universidade Federal do Paraná, Setor de Tecnologia, Programa de Pós-Graduação em Métodos Numéricos em Engenharia. Defense: Curitiba, 04/10/2016. Includes references: f. 101-103. Abstract: This work studies the permutation flow shop problem with sequence-dependent setup times. The problem is described as a linear programming model, which is tested with the CPLEX optimizer on a set of randomly generated instances. An algorithm based on the ant colony metaheuristic is proposed, incorporating the setup times into its transition rules. The efficiency of this algorithm, in terms of both solution quality and running time, is verified by comparison with the results obtained by solving the model with the optimizer. The metaheuristic's pheromone trails are initialized from initial solutions, which are generated with methods already proposed in the literature, extended with considerations regarding the setup times. The influence of accounting for these setup times on the quality of the initial solutions is tested. It was found that the model, solved with the CPLEX optimizer, requires more computational time than the ant colony algorithm and had difficulty solving larger instances within a 3-hour time limit. The algorithm based on ant colony optimization found good solutions in a short computational time. Keywords: Permutation flow shop. Sequence-dependent setup times. Ant colony metaheuristic.
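One way to fold setup times into an ant colony transition rule, as the abstract describes, is to include the sequence-dependent setup in the visibility term. The sketch below is an assumed formulation for illustration (the dissertation's exact rule is not given here); `tau`, `proc`, and `setup` are hypothetical inputs.

```python
import random

def next_job(current, unscheduled, tau, proc, setup, alpha=1.0, beta=2.0,
             rng=random):
    """Ant-colony transition rule with setup-aware visibility (a sketch).

    The usual visibility 1/p_j is replaced by 1/(p_j + s_ij), so a
    candidate needing a short setup after the current job becomes more
    attractive. tau[i][j] is the pheromone on arc (i, j).
    """
    scores = [(tau[current][j] ** alpha)
              * ((1.0 / (proc[j] + setup[current][j])) ** beta)
              for j in unscheduled]
    r = rng.random() * sum(scores)       # roulette-wheel selection
    acc = 0.0
    for j, s in zip(unscheduled, scores):
        acc += s
        if acc >= r:
            return j
    return unscheduled[-1]
```

An ant would call this rule repeatedly, starting from a dummy job, until every job is sequenced; pheromone update and trail initialization from heuristic seeds are separate steps.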
Bi-Criteria Batching and Scheduling in Hybrid Flow Shops
In this research, a bi-criteria batching and scheduling problem is investigated in hybrid flow shop environments where, in some stages, unrelated parallel machines with different capacities and processing eligibilities run simultaneously. The objective is to minimize a linear combination of the total weighted completion time and the total weighted tardiness. The first criterion favors the producer's interest by minimizing work-in-process inventory, inventory holding cost, and energy consumption as well as maximizing machine utilization, while the second favors the customers' interest by maximizing service level and delivery speed. In particular, the research disregards the group technology assumptions (GTAs) by allowing pre-determined groups of jobs to be split into inconsistent batches in order to improve operational efficiency. A comparison between the group scheduling and batch scheduling approaches reveals the outstanding performance of the batch scheduling approach. As a result, contrary to the GTAs, jobs belonging to a group might be processed on more than one machine as batches, although not all machines may be capable of processing all jobs. A sequence- and machine-dependent setup time is required between any two consecutively scheduled batches belonging to different groups. Based on manufacturing company policy, desired lower bounds on batch sizes are imposed on the number of jobs assigned to batches. Although the direction in which all jobs move through the production line is the same, some jobs may skip some stages. Furthermore, to reflect real industry requirements, the job release times and the machine availability times are considered to be dynamic, meaning that not all machines and jobs are available at the beginning of the planning horizon. The problem is formulated with the help of four mixed-integer linear programming (MILP) models.
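The linear combination of the two criteria can be written down directly. In the sketch below, the trade-off weight `alpha` and the per-job data are assumptions for illustration, not values from the dissertation.

```python
def bi_criteria(completion, due, weights, alpha=0.5):
    """alpha * (total weighted completion time)
       + (1 - alpha) * (total weighted tardiness).

    completion[j], due[j], weights[j] refer to job j; alpha is an
    assumed trade-off weight between producer and customer criteria.
    """
    twct = sum(w * c for w, c in zip(weights, completion))
    twt = sum(w * max(0, c - d)
              for w, c, d in zip(weights, completion, due))
    return alpha * twct + (1 - alpha) * twt
```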
Two of the four MILP models are formulated as two integrated phases, i.e., batching and scheduling phases, with respect to the precedence constraints between each pair of job batches and/or the position concept within batches. The optimal combination of batch compositions of groups is determined in the batching phase, while the optimal assignment and sequence of batches on machines, and the sequence of jobs within batches, are determined in the scheduling phase with respect to a set of operational constraints. A batch composition of a group corresponding to a particular stage, determined in the batching phase of the MILP model, specifies the number of batches assigned to the group as well as the number and type of jobs belonging to each batch of that group. Since the first and second MILP models lead to an unmanageable solution space, a relaxed MILP model, which allocates exactly one job to each batch of each group in each stage, can be developed to focus on the non-dominated solution space. The optimal solutions of the MILP models and the relaxed MILP model are equal if and only if the optimal solution of the relaxed MILP model does not violate the desired lower bounds on batch sizes. Since the relaxed MILP model cannot guarantee the optimal solution of the MILP models, a third MILP model is developed by integrating the batching and scheduling phases. This MILP model eliminates an exhaustive enumeration of combinations of batch compositions of all groups in all stages. Although the third MILP model converges to the optimal solution more slowly than the relaxed MILP model, it guarantees finding the optimal solution of the first and second MILP models. A comparison between the four MILP models shows the superior performance of the third MILP model.
However, since the problem is strongly NP-hard, it is not possible to find its optimal solution within a reasonable time as the problem size increases from small to medium to large, even with the relaxed MILP model or the fourth MILP model. Therefore, several meta-heuristic algorithms based upon basic local search, basic population-based search, and hybridizations of local and population-based searches are developed, which move back and forth between the batching and scheduling phases. Tabu Search (TS) is implemented as a basic local search algorithm, while Tabu Search Path-Relinking (TS PR) is implemented as a local search algorithm enhanced with a population-based structure. TS is incorporated into the framework of path-relinking to exploit the information in good solutions. The TS PR algorithm comprises several distinguishing features, including relinking procedures to effectively explore trajectories connecting elite solutions and methods for choosing the reference solution. Particle Swarm Optimization (PSO) is implemented as a basic population-based algorithm, while Particle Swarm Optimization enhanced with a local search algorithm (PSO LSA) is developed to realize the benefits of batching and, consequently, enhance the quality of solutions. Since the positions of a job in different stages of a hybrid flow shop are interdependent in batch scheduling, a meta-heuristic algorithm that cannot capture these interdependencies loses efficacy. To capture this interdependency, the non-, partial-, complete-, and stage-based interdependency strategies are developed. In the stage-based interdependency strategy, a complete sequence for all stages is determined gradually, stage by stage. An initial-solution-finding mechanism is developed to trigger the search into the solution space and generate an initial population.
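As a rough illustration of the tabu search component, a generic skeleton (not the TS PR algorithm developed in the dissertation, and with assumed `neighbors` and `cost` interfaces) looks like this:

```python
from collections import deque

def tabu_search(initial, neighbors, cost, tenure=5, iters=100):
    """Generic tabu search skeleton (illustrative sketch).

    The best admissible neighbor is accepted even if it worsens the
    current solution; recently used moves are tabu unless they beat the
    best solution found so far (the aspiration criterion).
    """
    best = current = initial
    tabu = deque(maxlen=tenure)          # short-term memory of moves
    for _ in range(iters):
        ranked = sorted(neighbors(current), key=lambda nm: cost(nm[0]))
        for cand, move in ranked:
            if move not in tabu or cost(cand) < cost(best):
                current = cand
                tabu.append(move)
                if cost(cand) < cost(best):
                    best = cand
                break
    return best
```

In a batching-and-scheduling setting, `neighbors` would alternate between moves that recompose batches and moves that resequence them, which is how such algorithms move back and forth between the two phases.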
The performances of these algorithms are compared to each other in order to identify which algorithm(s) outperform the others. The performance of the best algorithm(s) is then evaluated against a tight lower bound obtained from a branch-and-price (B&P) algorithm. The B&P algorithm uses Dantzig-Wolfe decomposition (DWD) to divide the original problem into a master problem and several sub-problems (SPs) corresponding to each stage. The original problem is decomposed into the SPs by three DWDs corresponding to the three MILP models. Although applying the DWD technique to the first and second MILP models excludes an exhaustive enumeration of combinations of batch compositions of all groups in all stages, and as a result makes the SPs easier to solve than the original problem, the SPs remain strongly NP-hard because of the enormous number of combinations of batch compositions of all groups in each stage. However, the DWD corresponding to the relaxed MILP model not only drastically reduces the number of variables and constraints in the SPs, but also eliminates the batching phase of the first and second MILP models. Decomposing the original problem based on the relaxed MILP model and implementing the B&P algorithm cannot guarantee optimal solutions or tight lower bounds unless the number of violations of the desired lower bounds on batch sizes is insignificant. Therefore, the third MILP model is decomposed by DWD so that the B&P algorithm is capable of finding tight lower bounds even for large-size instances of the problem. A comparison between the lower bounds obtained from the B&P algorithm and CPLEX reveals the impressive performance of the B&P algorithm, particularly for large-size problems.
The evaluation of the best algorithms against the tight lower bounds produced by the B&P algorithm uncovers the outstanding performance of the hybrid algorithms compared to the results obtained from CPLEX. Keywords: Dantzig-Wolfe Decomposition, Mixed-Integer Linear Programming Model, Branch-and-Price Optimization Algorithm, Sequence- and Machine-Dependent Setup Time, Column Generation, Group Scheduling, Particle Swarm Optimization, Batching and Scheduling, Hybrid Flow Shop, Tabu Search, Desired Lower Bounds on Batch Sizes, Bi-Criteria Objective, Path-Relinking