
    Genetic Algorithms For Flow-shop Scheduling Optimization Of An Automated Assembly Line

    A manufacturing process produces a product using technologies and machinery resources. Three dimensions are important in improving such a system: cost, quality and speed, which can be considered the basics of every process. In this thesis the speed of the manufacturing process is enhanced, which also leads to a reduction in cost. Assembly lines are the part of the manufacturing process that converts raw materials into finished products. For optimization problems in assembly lines, applying genetic algorithms to the established model can lead to more efficient manufacturing. A genetic algorithm is a population-based search technique used here for maximizing productivity, minimizing inefficiency and reducing production time. This work presents an approach for developing simulation models used for the optimization of production lines. The results are demonstrated on the assembly line located in the FAST-Lab. at Tampere University of Technology. A simulation of the line is created in MATLAB with the SimEvents library to assess cycle times and workstation utilization. Optimization, in the context of the presented work, is the process of locating and scheduling the products in the line so that the timing best fulfils production orders. The workstations can first be balanced for better performance, and the products are then scheduled to reduce the production time.
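    As a rough illustration of the approach (not the thesis's MATLAB/SimEvents model), the sketch below applies a permutation genetic algorithm to a tiny flow-shop instance, evolving job orders that minimise the makespan; the processing-time matrix and the GA parameters are invented for the example.

```python
import random

# Illustrative processing times: TIMES[job][machine] (arbitrary units), not taken from the thesis.
TIMES = [[5, 3, 4], [2, 6, 3], [4, 2, 5], [3, 4, 2], [6, 1, 3]]
JOBS, MACHINES = len(TIMES), len(TIMES[0])

def makespan(order):
    # Completion time of the last job on the last machine for a given job permutation.
    finish = [0] * MACHINES
    for job in order:
        for m in range(MACHINES):
            earliest = finish[m] if m == 0 else max(finish[m], finish[m - 1])
            finish[m] = earliest + TIMES[job][m]
    return finish[-1]

def crossover(p1, p2):
    # Order crossover: keep a slice of p1 and fill the remaining jobs in p2's order.
    a, b = sorted(random.sample(range(JOBS), 2))
    kept = p1[a:b]
    rest = [j for j in p2 if j not in kept]
    return rest[:a] + kept + rest[a:]

def mutate(perm, rate=0.2):
    # Swap two jobs with a small probability to preserve diversity.
    perm = perm[:]
    if random.random() < rate:
        i, j = random.sample(range(JOBS), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def evolve(pop_size=30, generations=200):
    pop = [random.sample(range(JOBS), JOBS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                      # lower makespan = fitter
        survivors = pop[:pop_size // 2]             # truncation selection
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(best, makespan(best))
```

    The permutation encoding, order crossover and swap mutation keep every chromosome a valid job sequence, which is the usual way genetic algorithms are applied to flow-shop scheduling.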

    Holistic, data-driven, service and supply chain optimisation: linked optimisation.

    The intensity of competition and technological advancement in the business environment has made companies collaborate and cooperate as a means of survival. This creates a chain of companies and business components with unified business objectives. However, managing the decision-making processes (such as scheduling, ordering, delivering and allocating) at the various business components while maintaining a holistic objective is a huge business challenge, as these operations are complex and dynamic. The overall chain of business processes is widely distributed across all the supply chain participants, so no individual collaborator has a complete overview of the processes. Increasingly, such decisions are automated and strongly supported by optimisation algorithms: manufacturing optimisation, B2B ordering, financial trading, transportation scheduling and allocation. However, most of these algorithms do not incorporate the complexity associated with interacting decision-making systems like supply chains. It is well known that decisions made at one point in a supply chain can have significant consequences that ripple through linked production and transportation systems. Recently, global shocks to supply chains (COVID-19, climate change, the blockage of the Suez Canal) have demonstrated the importance of these interdependencies and the need to create supply chains that are more resilient and have significantly reduced impact on the environment. Such interacting decision-making systems need to be considered through an optimisation process, yet the interactions between them are not modelled. We therefore believe that modelling such interactions is an opportunity to provide computational extensions to current optimisation paradigms. This research study aims to develop a general framework for formulating and solving holistic, data-driven optimisation problems in service and supply chains. The research achieves this aim and contributes to scholarship by, firstly, considering the complexities of supply chain problems from a linked-problem perspective, which leads to a formalism for characterising linked optimisation problems as a model for supply chains. Secondly, the research adopts a method for creating a linked optimisation problem benchmark by linking existing classical benchmark sets, using a mix of classical optimisation problems, typically relating to supply chain decision problems, to describe different modes of linkage in linked optimisation problems. Thirdly, several techniques for linking fragmented supply chain data have been proposed in the literature to identify data relationships; this thesis explores some of these techniques and combines them in specific ways to improve the data discovery process. Lastly, many state-of-the-art algorithms have been explored in the literature and used to tackle supply chain problems; this research therefore investigates resilient state-of-the-art optimisation algorithms from the literature and designs suitable algorithmic approaches, inspired by the existing algorithms and the nature of the problem linkages, to address different problem linkages in supply chains.
Considering the research findings and future perspectives, the study demonstrates the suitability of algorithms to different linked structures involving two sub-problems, which suggests further investigation of issues such as the suitability of algorithms on more complex structures, benchmark methodologies, holistic goals and evaluation, process mining, game theory and dependency analysis.
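    As a purely hypothetical illustration of one mode of linkage between two sub-problems (the problems, greedy solvers and cost figures below are invented, not taken from the thesis), the sketch shows an upstream ordering decision whose output becomes the input data of a downstream transport problem, with a single holistic objective spanning both.

```python
# Hypothetical linked-problem sketch: an upstream batch-ordering decision feeds
# a downstream truck-packing problem; one objective spans the whole link.

def solve_ordering(demands, batch):
    # Upstream: round each demand up to full production batches.
    return [((d + batch - 1) // batch) * batch for d in demands]

def solve_transport(quantities, truck_capacity):
    # Downstream: first-fit packing of the produced quantities into trucks.
    trucks = []
    for q in quantities:
        for t in trucks:
            if sum(t) + q <= truck_capacity:
                t.append(q)
                break
        else:
            trucks.append([q])
    return len(trucks)

def linked_cost(demands, batch, truck_capacity, holding=1.0, truck_cost=50.0):
    produced = solve_ordering(demands, batch)           # upstream decision
    excess = sum(p - d for p, d in zip(produced, demands))
    trucks = solve_transport(produced, truck_capacity)  # downstream consumes upstream output
    return holding * excess + truck_cost * trucks       # holistic objective across the link

print(linked_cost([7, 12, 5, 9], batch=5, truck_capacity=20))
```

    Optimising each stage in isolation can miss the better holistic trade-off; capturing that interaction explicitly is the kind of structure a linked optimisation formulation is meant to express.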

    Metaheuristics for single and multiple objectives production scheduling for the capital goods industry

    In the capital goods industry, companies produce plant and machinery used to make consumer products or commodities such as electricity or gas. Typical products include steam turbines, large boilers and oil rigs. Scheduling these products is difficult because of the complexity of the product structure, which involves many levels of assembly and long, complex routings of many operations performed on multiple machines. There are also many scheduling constraints, such as machine capacity and operation and assembly precedence relationships. Products manufactured in the capital goods industry are usually highly customised to meet specific customer requirements. Delivery performance is a particularly important aspect of customer service, and it is common for contracts to include severe penalties for late delivery. Holding costs are incurred if items are completed before the due date. Effective planning and inventory control are important to ensure that products are delivered on time and that inventory costs are minimised. Capital goods companies also give priority to resource utilisation to ensure production efficiency. In practice, there are trade-offs between achieving on-time delivery, minimising inventory costs and maximising resource utilisation. Most production scheduling research has focused on job shops or flow shops, ignoring assembly relationships, and only a limited literature has focused on assembly production. Production scheduling in the capital goods industry, however, combines component manufacturing (using jobbing, batch and flow processes), assembly and construction. Some components have complex operations and routings, and the product structures for major products are usually complex and deep. A practical scheduling tool not only needs to solve some extremely large scheduling problems, but also needs to solve them within a realistic time. Multiple objectives are usually encountered in production scheduling in the capital goods industry. Most of the literature has focused on minimising total flow time, makespan, or the earliness and tardiness of jobs. In the capital goods industry, inventory costs, delivery performance and machine utilisation are crucial competitive criteria. This research develops a scheduling tool that can optimise these criteria simultaneously within a realistic time. The aim of this research was firstly to develop the Enhanced Single-Objective Genetic Algorithm Scheduling Tool (ESOGAST), suitable for solving very large production scheduling problems in the capital goods industry within a realistic time. This tool aimed to minimise the combined earliness and lateness penalties caused by early or late completion of items. It was compared with previous approaches in the literature and proved superior in terms of solution quality and computational time. Secondly, this research developed a Multi-Objective Genetic Algorithm Scheduling Tool (MOGAST), based on ESOGAST but able to solve scheduling problems with multiple objectives: optimising delivery performance, minimising inventory costs and maximising resource utilisation simultaneously. Thirdly, this research developed an Artificial Immune System Scheduling Tool (AISST) that addressed the same objective as ESOGAST, and the performance of both tools was compared and analysed.
Results showed that the AISST performed better than ESOGAST on relatively small scheduling problems, but the computation time required by the AISST was several times longer; for larger problems ESOGAST performed better. Optimum configurations were identified in a series of experiments conducted for each tool. The most efficient configuration was then applied for each tool to solve the full-size problem, and all three tools achieved satisfactory results.
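    As a hedged illustration of the kind of objective ESOGAST is described as minimising, the sketch below combines earliness (holding) and lateness (contract penalty) costs over a set of items; the penalty weights and the completion/due dates are invented for the example.

```python
# Illustrative earliness/lateness objective of the kind described in the abstract;
# the weights and data are made up and are not taken from the thesis.

def earliness_tardiness_cost(completion_times, due_dates,
                             early_penalty=1.0, late_penalty=5.0):
    cost = 0.0
    for c, d in zip(completion_times, due_dates):
        if c < d:
            cost += early_penalty * (d - c)   # holding cost for finishing early
        else:
            cost += late_penalty * (c - d)    # contractual penalty for lateness
    return cost

# 2 days early on item 1, 1 and 2 days late on items 2 and 3 -> 1*2 + 5*1 + 5*2 = 17.0
print(earliness_tardiness_cost([8, 15, 22], [10, 14, 20]))
```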

    Evolutionary Computation

    This book presents several recent advances in Evolutionary Computation, especially evolution-based optimization methods and hybrid algorithms for applications ranging from optimization and learning to pattern recognition and bioinformatics. It also presents new algorithms based on several analogies and metaphors, one of which draws on philosophy, specifically the philosophy of praxis and dialectics. Interesting bioinformatics applications are also presented, in particular the use of particle swarms to discover gene expression patterns in DNA microarrays. The book therefore features representative work in the field of evolutionary computation and the applied sciences. The intended audience is graduate and undergraduate students, researchers, and anyone who wishes to become familiar with the latest research in this field.

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
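    The following is a minimal sketch of a Hadoop-streaming-style mapper/reducer pair that computes one generation of Conway's Life; the "x,y" record format and the neighbour-emission scheme are assumptions for illustration and do not reproduce the authors' optimized, strip-partitioned algorithms.

```python
import sys

def mapper():
    # Input: one live cell per line as "x,y". Emit an "alive" marker for the cell
    # itself and a count contribution of 1 to each of its eight neighbours.
    for line in sys.stdin:
        x, y = map(int, line.strip().split(","))
        print(f"{x},{y}\tA")
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    print(f"{x + dx},{y + dy}\t1")

def reducer():
    # Hadoop streaming delivers lines grouped and sorted by key. A cell is alive
    # in the next generation if it has exactly 3 live neighbours, or 2 and is alive now.
    current, count, alive = None, 0, False

    def flush():
        if current is not None and (count == 3 or (alive and count == 2)):
            print(current)

    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            flush()
            current, count, alive = key, 0, False
        if value == "A":
            alive = True
        else:
            count += int(value)
    flush()

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

    A full simulation would chain one such job per generation, the reducer's output feeding the next mapper; the strip-partitioning optimization mentioned in the abstract presumably assigns contiguous strips of the lattice to workers, which this sketch does not attempt.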

    Preventing premature convergence and proving the optimality in evolutionary algorithms

    Evolutionary Algorithms (EA) usually carry out an efficient exploration of the search space, but often get trapped in local minima and do not prove the optimality of the solution. Interval-based techniques, on the other hand, yield a numerical proof of optimality of the solution. However, they may fail to converge within a reasonable time due to their inability to quickly compute a good approximation of the global minimum and their exponential complexity. The contribution of this paper is a hybrid algorithm called Charibde, in which a particular EA, Differential Evolution, cooperates with a Branch and Bound algorithm endowed with interval propagation techniques. It prevents premature convergence toward local optima and outperforms both existing deterministic and stochastic approaches. We demonstrate its efficiency on a benchmark of highly multimodal problems, for which we provide previously unknown global minima and certification of optimality.
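    As a hedged sketch of the evolutionary half of such a hybrid, the code below implements a standard DE/rand/1/bin Differential Evolution loop on a multimodal test function; the interval Branch and Bound side of Charibde and the cooperation between the two are omitted, and the control parameters are common defaults rather than those of the paper.

```python
import math
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9, generations=300):
    # DE/rand/1/bin: for each target vector, build a mutant from three distinct
    # others, apply binomial crossover, and keep the better of target and trial.
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            j_rand = random.randrange(dim)
            trial = [mutant[d] if (d == j_rand or random.random() < CR) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            f_trial = f(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Rastrigin: highly multimodal, global minimum 0 at the origin.
rastrigin = lambda x: 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)
print(differential_evolution(rastrigin, [(-5.12, 5.12)] * 2))
```

    In the hybrid described in the paper, the two methods cooperate, and it is the interval Branch and Bound that supplies the proof of optimality that DE alone cannot.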

    Particle Swarm Optimization

    Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking or fish schooling. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA). The system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book represents the contributions of top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary field.
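    A minimal global-best PSO sketch of the velocity and position update described above; the inertia and acceleration coefficients are common textbook values rather than anything prescribed by the book, and the test function is illustrative.

```python
import random

def pso(f, bounds, swarm_size=30, iterations=200, w=0.7, c1=1.5, c2=1.5):
    # Each particle remembers its own best position; the swarm shares a global best.
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(swarm_size), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iterations):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # pull toward own best
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # pull toward swarm best
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)
print(pso(sphere, [(-10, 10)] * 3))
```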

    Cultural Algorithm based on Decomposition to solve Optimization Problems

    Decomposition solves an optimization problem by introducing many simple scalar optimization subproblems and optimizing them simultaneously. Dynamic Multi-Objective Optimization Problems (DMOPs) have several objective functions and constraints that vary over time. As a consequence of such dynamic changes, the optimal solutions may vary over time, affecting convergence performance. In this thesis, we propose a new Cultural Algorithm (CA) based on decomposition (CA/D). The objective of the CA/D algorithm is to decompose a DMOP into a number of subproblems that can be optimized using the information shared by neighboring problems. The proposed CA/D approach is evaluated on a number of CEC 2015 optimization benchmark functions. Compared with CA, Multi-population CA (MPCA), and MPCA incorporating game strategies (MPCA-GS), CA/D outperformed them on 7 out of the 15 benchmark functions.
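    The following is a hedged sketch of the general decomposition idea only: weighted scalar subproblems that share offspring with their neighbours, in the spirit of MOEA/D. It does not model the cultural belief space of CA/D, the dynamic changes, or the CEC 2015 benchmarks, and the bi-objective test problem is illustrative.

```python
import random

def tchebycheff(fvals, weights, ideal):
    # Scalarise a multi-objective vector into a single subproblem value.
    return max(w * abs(f - z) for f, w, z in zip(fvals, weights, ideal))

def decompose(objectives, n_sub=10, dim=5, n_neighbours=3, generations=200):
    # One weight vector and one current solution per scalar subproblem; offspring
    # are built from neighbouring subproblems and may replace their solutions.
    weights = [(i / (n_sub - 1), 1 - i / (n_sub - 1)) for i in range(n_sub)]
    sols = [[random.random() for _ in range(dim)] for _ in range(n_sub)]
    fvals = [objectives(s) for s in sols]
    ideal = [min(fv[k] for fv in fvals) for k in range(2)]
    for _ in range(generations):
        for i in range(n_sub):
            nbrs = [j for j in range(n_sub) if abs(j - i) <= n_neighbours]
            a, b = random.sample(nbrs, 2)
            child = [min(max((x + y) / 2 + random.gauss(0, 0.1), 0.0), 1.0)
                     for x, y in zip(sols[a], sols[b])]
            fc = objectives(child)
            ideal = [min(z, f) for z, f in zip(ideal, fc)]
            for j in nbrs:  # information sharing: a child may improve neighbouring subproblems
                if tchebycheff(fc, weights[j], ideal) < tchebycheff(fvals[j], weights[j], ideal):
                    sols[j], fvals[j] = child[:], fc
    return sols, fvals

# Illustrative bi-objective problem with a simple convex trade-off (ZDT1-like).
def zdt1(x):
    g = 1 + 9 * sum(x[1:]) / (len(x) - 1)
    return (x[0], g * (1 - (x[0] / g) ** 0.5))

solutions, front = decompose(zdt1)
print(sorted(front))
```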