
    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we cover advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (on multicore, multiprocessor, GPU, and cloud computing platforms). On the other hand, we survey applications of PSO in the following nine fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.

    Particle Swarm Optimization

    Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking and fish schooling. PSO shares many similarities with evolutionary computation techniques such as genetic algorithms (GA): the system is initialized with a population of random solutions and searches for optima by updating generations. Unlike GA, however, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book collects the contributions of top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary area.
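
For readers unfamiliar with the mechanics described above, here is a minimal sketch of the canonical PSO update loop. The objective function, swarm size, and coefficient values (w, c1, c2) are illustrative choices, not taken from the book.

```python
# A minimal PSO sketch: each particle keeps a velocity, a personal best, and
# is pulled toward both its personal best and the swarm's global best.
import random

def sphere(x):  # toy objective: minimize the sum of squares
    return sum(xi * xi for xi in x)

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity = inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                 # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

print(pso(sphere))
```

Note that the three velocity terms correspond directly to the "following the current optimum particles" behavior described above: inertia keeps the particle moving, while the cognitive and social terms pull it toward the best positions found so far.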

    Distributed and Lightweight Meta-heuristic Optimization method for Complex Problems

    The world is becoming larger and more complex every day. Resources are limited, and using them efficiently is a key requirement. Finding an efficient, optimal solution to complex problems requires practical methods. Over the last decades, several optimization approaches have been presented that can be applied to different optimization problems, achieving different performance on each. Several factors can have a significant effect on the results, such as the type of search space. Of the two main categories of optimization methods (deterministic and stochastic), stochastic optimization methods work more efficiently on large, complex problems than deterministic methods. In highly complex problems, however, stochastic optimization methods also have issues, such as long execution times, convergence to local optima, incompatibility with distributed systems, and dependence on the type of search space. This thesis therefore presents a distributed and lightweight metaheuristic optimization method (MICGA) for complex problems, focusing on four main tracks. 1) The primary goal is to improve execution time with MICGA. 2) The proposed method increases the stability and reliability of the results by using a multi-population strategy. 3) MICGA is compatible with distributed systems. 4) Finally, MICGA is applied to different types of optimization problems with different kinds of search spaces (continuous, discrete, and order-based optimization problems). MICGA has been compared with other efficient optimization approaches. The results show that the proposed method achieves substantial improvements on the main issues of stochastic methods mentioned above.
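
The abstract does not detail MICGA's internals; the sketch below only illustrates the general multi-population (island) strategy it mentions, in which subpopulations evolve independently and periodically exchange their best individuals. All names, operators, and parameters here are assumptions for illustration, not the actual MICGA algorithm.

```python
# Island-model sketch: several populations evolve in isolation and migrate
# their best individuals around a ring every few generations. Independent
# islands are what makes such strategies natural to distribute.
import random

def evolve(pop, f, mutation=0.1):
    """One toy generation: each individual keeps the better of itself
    and a Gaussian-perturbed copy."""
    new = []
    for x in pop:
        y = [xi + random.gauss(0, mutation) for xi in x]
        new.append(y if f(y) < f(x) else x)
    return new

def island_model(f, dim=3, islands=4, pop_size=10, gens=50, migrate_every=10):
    pops = [[[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(islands)]
    for g in range(1, gens + 1):
        pops = [evolve(p, f) for p in pops]     # islands evolve independently
        if g % migrate_every == 0:              # ring migration of best individuals
            bests = [min(p, key=f) for p in pops]
            for i in range(islands):
                worst = max(range(pop_size), key=lambda j: f(pops[i][j]))
                pops[i][worst] = bests[(i - 1) % islands][:]
    return min((min(p, key=f) for p in pops), key=f)

best = island_model(lambda x: sum(xi * xi for xi in x))
print(best, sum(xi * xi for xi in best))
```

Keeping islands isolated between migrations also tends to preserve diversity, which is one way a multi-population strategy can improve the stability and reliability of results, as claimed above.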

    Balancing and lot-sizing mixed-model lines in the footwear industry

    This report describes the full research proposal for the project "Balancing and lot-sizing mixed-model lines in the footwear industry", to be developed as part of the master program in Engenharia Electrotécnica e de Computadores - Sistemas de Planeamento Industrial of the Instituto Superior de Engenharia do Porto. The Portuguese footwear industry is undergoing a period of great development and innovation. The numbers speak for themselves: Portuguese footwear exported 71 million pairs of shoes to over 130 countries in 2012. It is a diverse sector, covering different categories of women's, men's, and children's shoes, each with various models. New and technologically advanced mixed-model assembly lines are being designed and installed to replace traditional mass assembly lines, and there is an obvious need to manage them conveniently and to improve their operations. This work focuses on balancing and lot-sizing stitching mixed-model lines in a real-world environment. For that purpose, it will be fundamental to develop and evaluate adequate, effective solution methods. Different objectives relevant for the companies may be considered, such as minimizing the number of workstations and minimizing the makespan, while taking many practical restrictions into account. The solution approaches will be based on approximate methods, namely metaheuristics. To show the impact of having different lots in production, the initial maximum size of each lot is changed and a Tabu Search based procedure is used to improve the solutions. The developed approaches will be evaluated and tested, with special attention given to the solution of real applied problems. Future work may include the study of other neighbourhood structures related to Tabu Search and the development of ways to speed up the evaluation of neighbours, as well as improving the balancing solution method.
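
As a rough illustration of the Tabu Search based improvement procedure mentioned above, the generic skeleton below keeps a short-term memory of recent moves and accepts the best non-tabu neighbour. The neighbourhood (adjacent swaps) and the cost function are placeholders; the proposal's actual problem-specific moves for balancing and lot-sizing are not given in this summary.

```python
# Generic Tabu Search skeleton: move to the best neighbour that is not
# tabu (or that beats the best-known solution: aspiration), and forbid
# recently used moves for a fixed tenure to escape local optima.
def tabu_search(initial, cost, neighbors, iters=200, tenure=7):
    current = best = initial
    best_cost = cost(best)
    tabu = []                                   # recently used moves
    for _ in range(iters):
        candidates = [(move, sol) for move, sol in neighbors(current)
                      if move not in tabu or cost(sol) < best_cost]
        if not candidates:
            break
        move, current = min(candidates, key=lambda ms: cost(ms[1]))
        tabu.append(move)
        if len(tabu) > tenure:
            tabu.pop(0)                         # expire the oldest tabu move
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

# Toy usage: sequence five tasks to minimize total flow time.
times = [4, 2, 7, 1, 5]

def cost(seq):
    t = done = 0
    for j in seq:
        t += times[j]
        done += t                               # sum of completion times
    return done

def neighbors(seq):
    for i in range(len(seq) - 1):
        s = list(seq)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield (seq[i], seq[i + 1]), tuple(s)    # move = the swapped pair

print(tabu_search(tuple(range(len(times))), cost, neighbors))
```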

    A study on flexible flow shop and job shop scheduling using meta-heuristic approaches

    Scheduling aims at allocating resources to perform a group of tasks over a period of time in such a manner that performance measures such as flow time, tardiness, lateness, and makespan are minimized. Today, manufacturers face challenges in terms of shorter product life cycles, customized products, and changing customer demand patterns. Due to intense competition in the marketplace, effective scheduling has become an important issue for the growth and survival of manufacturing firms. To survive in the current competitive environment, it is essential for manufacturing firms to improve their schedules based on simultaneous optimization of performance measures such as makespan, flow time, and tardiness. Since all scheduling criteria are important from a business operations point of view, it is vital to optimize all objectives simultaneously instead of a single one. It is also essential for manufacturing firms to improve the performance of production scheduling systems so that they can address internal uncertainties such as machine breakdowns, tool failures, and changes in processing times. Schedules must meet the deadlines committed to customers, because failing to do so may result in a significant loss of goodwill. Often, it is necessary to reschedule an existing plan due to uncertain events such as machine breakdowns. The problem of finding robust schedules (whose performance does not deteriorate in disruption situations) or flexible schedules (expected to perform well after some degree of modification when uncertain conditions are encountered) is of utmost importance for real-world applications, as they operate in dynamic environments.
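
The performance measures named above can be made concrete with a small example. The sketch below evaluates makespan, total flow time, and total tardiness for a job sequence on a single machine, a deliberately simplified stand-in for the flow shop and job shop settings of the study; the job data are invented for illustration.

```python
# Toy evaluation of the scheduling criteria named above for one machine
# processing jobs in a given order. The job data are illustrative only.
jobs = [  # (processing_time, due_date)
    (3, 5),
    (2, 4),
    (4, 12),
]

def evaluate(sequence, jobs):
    t = 0
    flow_time = tardiness = 0
    for j in sequence:
        p, d = jobs[j]
        t += p                       # completion time of job j
        flow_time += t               # flow time accumulates completions
        tardiness += max(0, t - d)   # lateness counts only when positive
    return {"makespan": t, "flow_time": flow_time, "tardiness": tardiness}

print(evaluate([1, 0, 2], jobs))     # e.g. an earliest-due-date-like order
```

Multi-objective approaches of the kind studied in this thesis compare schedules on all such measures at once rather than collapsing them prematurely into a single number.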

    Application of nature-inspired optimization algorithms to improve the production efficiency of small and medium-sized bakeries

    Increasing production efficiency through schedule optimization is one of the most influential topics in operations research that contribute to the decision-making process. It is the concept of allocating tasks among available resources within the constraints of a manufacturing facility in order to minimize costs. It is carried out by a model that resembles real-world task distribution, with variables and relevant constraints, in order to complete a planned production. In addition to a model, an optimizer is required to assist in evaluating and improving the task allocation procedure in order to maximize overall production efficiency. The entire procedure is usually carried out on a computer, where these two distinct segments combine to form a solution framework for production planning and support decision-making in various manufacturing industries. Small and medium-sized bakeries lack access to cutting-edge tools, and most of their production schedules are based on personal experience. This makes a significant difference in production costs when compared to large bakeries, as evidenced by their market dominance. In this study, a hybrid no-wait flow shop model is proposed to produce a production schedule based on actual data, featuring the constraints of the production environment in small and medium-sized bakeries. Several single-objective and multi-objective nature-inspired optimization algorithms were implemented to find efficient production schedules. While makespan is the most widely used quality criterion of production efficiency because it dominates production costs, high oven idle time in bakeries also wastes energy. Combining these quality criteria allows for additional cost reduction due to energy savings as well as shorter production times. Therefore, to obtain an efficient production plan, makespan and oven idle time were both included as optimization objectives. To find the optimal production plan for an existing production line, particle swarm optimization, simulated annealing, and the Nawaz-Enscore-Ham algorithm were used. The weighting factor method was used to combine the two objectives into a single objective. The classical optimization algorithms were found to be good enough at finding optimal schedules in a reasonable amount of time, reducing makespan by 29 % and oven idle time by 8 % for one of the analyzed production datasets. Nonetheless, the algorithms' convergence was found to be poor, with a low probability of obtaining the best or nearly the best result. In contrast, a modified particle swarm optimization (MPSO) algorithm proposed in this study demonstrated a significant improvement in convergence, with a higher probability of obtaining better results. To obtain trade-offs between the two objectives, state-of-the-art multi-objective optimization algorithms were implemented: the non-dominated sorting genetic algorithm (NSGA-II), the strength Pareto evolutionary algorithm, generalized differential evolution, improved multi-objective particle swarm optimization (OMOPSO), and speed-constrained multi-objective particle swarm optimization (SMPSO). The optimization algorithms provided efficient production planning with up to a 12 % reduction in makespan and a 26 % reduction in oven idle time, based on data from different production days. The performance comparison revealed a significant difference between these multi-objective optimization algorithms, with NSGA-II performing best and OMOPSO and SMPSO performing worst.
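
As a rough sketch of the weighting factor method mentioned above, the two objectives can be normalized and combined into a single scalar. The weight and reference values below are illustrative assumptions, not those used in the study.

```python
# Sketch of the weighting factor method: makespan and oven idle time are
# scaled by reference values and blended with a weight w into one objective.
def weighted_objective(makespan, oven_idle, w=0.7,
                       makespan_ref=480.0, idle_ref=120.0):
    # Normalizing by reference values keeps the larger-magnitude objective
    # from dominating the weighted sum.
    return w * (makespan / makespan_ref) + (1 - w) * (oven_idle / idle_ref)

# A candidate schedule with 450 min makespan and 90 min oven idle time:
print(weighted_objective(450, 90))
```
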
Proofing is a key processing stage that contributes to the quality of the final product by developing flavor and a fluffy texture in bread. However, the duration of proofing is uncertain due to the complex interaction of multiple parameters: yeast condition, temperature in the proofing chamber, and the chemical composition of the flour. Because of this uncertainty, a production plan optimized for the shortest makespan can become significantly inefficient. The computational results show that schedules with the shortest and nearly shortest makespan suffer a significant (up to 18 %) increase in makespan when the proofing time deviates from its expected duration. In this thesis, a method for developing resilient production plans that take uncertain proofing times into account is proposed, so that even if the deviation in proofing time is extreme, the fluctuation in makespan remains minimal. The experimental results on a production dataset revealed a proactive production plan only 5 minutes longer than the shortest makespan, whose makespan fluctuates by only 21 minutes as the proofing time varies from -10 % to +10 % of the actual proofing time. This study proposes a common framework for small and medium-sized bakeries to improve their production efficiency in three steps: collecting production data, simulating production planning with the hybrid no-wait flow shop model, and running the optimization algorithm. The study suggests using MPSO for single-objective optimization problems and NSGA-II for multi-objective optimization problems. Based on real bakery production data, the results revealed that existing plans were significantly inefficient and could be optimized in a reasonable computational time using a robust optimization algorithm. Implementing such a framework in small and medium-sized bakery manufacturing operations could help to achieve an efficient and resilient production system.
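
The resilience criterion can be illustrated by re-evaluating a schedule while the proofing time deviates from -10 % to +10 % and measuring the resulting makespan fluctuation. The simulation function below is a crude placeholder, not the thesis's hybrid no-wait flow shop model.

```python
# Illustration of the resilience check described above: re-evaluate a
# schedule's makespan under perturbed proofing times and report the
# fluctuation (max - min). `simulate_makespan` is a placeholder model.
def simulate_makespan(schedule, proof_scale):
    # Placeholder: a schedule-dependent base time plus a proofing stage
    # of nominally 60 minutes that scales with the perturbation.
    base = 10 * len(schedule)
    return base + 60 * proof_scale

def makespan_fluctuation(schedule, deviations=(-0.10, -0.05, 0.0, 0.05, 0.10)):
    values = [simulate_makespan(schedule, 1 + d) for d in deviations]
    return max(values) - min(values)

print(makespan_fluctuation(list(range(30))))   # fluctuation in minutes
```

A proactive plan, in these terms, is one that trades a few minutes of nominal makespan for a much smaller fluctuation across the deviation range.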

    Parallel optimization algorithms for high performance computing : application to thermal systems

    The need for optimization is present in every field of engineering. Moreover, applications requiring a multidisciplinary approach in order to make a step forward are increasing. This leads to the need to solve complex optimization problems that exceed the capacity of the human brain or intuition. A standard way of proceeding is to use evolutionary algorithms, among which genetic algorithms hold a prominent place. These are characterized by their robustness and versatility, as well as their high computational cost and low convergence speed. Many optimization packages are available under free software licenses and are representative of the current state of the art in optimization technology. However, the ability of optimization algorithms to adapt to massively parallel computers while reaching satisfactory efficiency levels is still an open issue. Even packages suited for multilevel parallelism encounter difficulties when dealing with objective functions involving long and variable simulation times. This variability is common in Computational Fluid Dynamics and Heat Transfer (CFD & HT), nonlinear mechanics, etc., and is nowadays a dominant concern for large-scale applications. Current research on improving the performance of evolutionary algorithms is mainly focused on developing new search algorithms. Nevertheless, there is a vast body of well-performing sequential algorithms suitable for implementation on parallel computers; the gap to be covered is efficient parallelization. Moreover, advances in the research of new search algorithms and of efficient parallelization are additive, so the enhancement of current state-of-the-art optimization software can be accelerated if both fronts are tackled simultaneously. The motivation of this Doctoral Thesis is to make a step forward towards the successful integration of Optimization and High Performance Computing capabilities, which has the potential to boost technological development by providing better designs, shortening product development times, and minimizing the required resources. After conducting a thorough state-of-the-art study of the mathematical optimization techniques available to date, a generic mathematical optimization tool has been developed, with a special focus on the application of the library to the field of Computational Fluid Dynamics and Heat Transfer (CFD & HT). Then the main shortcomings of the standard parallelization strategies available for genetic algorithms and similar population-based optimization methods have been analyzed. Computational load imbalance has been identified as the key cause of the degradation of the optimization algorithm's scalability (i.e. parallel efficiency) when the average makespan of a batch of individuals is greater than the average time required by the optimizer for inter-processor communications. It occurs because processors are often unable to finish the evaluation of their queues of individuals simultaneously and need to be synchronized before the next batch of individuals is created. Consequently, the computational load imbalance translates into idle time on some processors. Several load balancing algorithms have been proposed and exhaustively tested, and they are extendable to any other population-based optimization method that needs to synchronize all processors after the evaluation of each batch of individuals.
Finally, a real-world engineering application consisting of optimizing the refrigeration system of a power electronic device has been presented as an illustrative example in which the use of the proposed load balancing algorithms is able to reduce the simulation time required by the optimization tool.
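
The thesis's own load balancing algorithms are not reproduced in this summary; as a stand-in, the classic longest-processing-time (LPT) heuristic below illustrates how individuals with variable estimated evaluation times can be distributed across processors to reduce the synchronization idle time described above.

```python
# Illustrative LPT assignment: sort individuals by estimated evaluation
# time (longest first) and always place the next one on the currently
# least-loaded processor, so queues finish at roughly the same time.
import heapq

def lpt_assign(est_times, n_procs):
    """Return per-processor queues of individual indices."""
    heap = [(0.0, p) for p in range(n_procs)]   # (accumulated load, processor)
    heapq.heapify(heap)
    queues = [[] for _ in range(n_procs)]
    for i in sorted(range(len(est_times)), key=lambda i: -est_times[i]):
        load, p = heapq.heappop(heap)           # least-loaded processor
        queues[p].append(i)
        heapq.heappush(heap, (load + est_times[i], p))
    return queues

times = [9.0, 3.5, 7.2, 1.1, 4.8, 6.3, 2.2, 5.0]   # estimated simulation times
for p, q in enumerate(lpt_assign(times, 3)):
    print(f"proc {p}: jobs {q}, load {sum(times[i] for i in q):.1f}")
```

Balanced per-processor loads directly shrink the idle time spent waiting at the synchronization point before the next batch of individuals is created.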

    Energy Efficient Policies, Scheduling, and Design for Sustainable Manufacturing Systems

    Climate mitigation, more stringent regulations, rising energy costs, and sustainable manufacturing are pushing researchers to focus on energy efficiency, energy flexibility, and the implementation of renewable energy sources in manufacturing systems. This thesis aims to analyze the main works proposed on these hot topics and to fill the gaps in the literature. First, a detailed literature review is presented, analyzing works on energy efficiency at different manufacturing levels and in the assembly line, energy saving policies, and the implementation of renewable energy sources. Then, to fill the gaps in the literature, several topics are analyzed in more depth. In the single-machine context, a mathematical model is developed that aims to align the manufacturing power demand with a renewable energy supply in order to obtain the maximum profit. The model is applied to a single work center powered by the electric grid and by a photovoltaic system; afterwards, energy storage is also added to the power system. In the job shop context, switch-off policies implementing a workload-based approach and scheduling that considers variable machine speeds and power constraints are proposed. The direct and indirect workloads of the machines are considered to support the switch on/off decisions. A simulation model is developed to test the proposed policies against others presented in the literature. Regarding job shop scheduling, both fixed and variable power constraints are considered, with the minimization of the makespan as the objective function. At the factory level, a mathematical model is developed to design a flow line considering the possibility of using switch-off policies. The design model for production lines includes a targeted imbalance among the workstations to allow for defined idle time. Finally, the main findings and results, as well as future directions and challenges, are presented.
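
As a minimal sketch of a workload-based switch on/off policy of the kind discussed above, the rule below switches a machine off when its direct and indirect workloads fall below a threshold and back on when enough work is routed toward it. The thresholds and workload definitions are assumptions for illustration, not the thesis's actual policies.

```python
# Threshold rule for switch on/off decisions driven by direct workload
# (jobs queued at the machine) and indirect workload (jobs upstream that
# will be routed to it). Thresholds here are illustrative assumptions.
def switch_decision(direct_wl, indirect_wl, is_on,
                    off_threshold=1.0, on_threshold=2.0):
    total = direct_wl + indirect_wl
    if is_on and total < off_threshold:
        return "switch_off"      # little work expected: save idle energy
    if not is_on and total >= on_threshold:
        return "switch_on"       # enough work arriving: warm the machine up
    return "keep_state"

print(switch_decision(direct_wl=0.0, indirect_wl=0.5, is_on=True))   # switch_off
print(switch_decision(direct_wl=1.5, indirect_wl=1.0, is_on=False))  # switch_on
```

The gap between the on and off thresholds provides hysteresis, preventing the rapid on/off cycling that would waste the energy such policies are meant to save.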