
    The relevance of outsourcing and leagile strategies in performance optimization of an integrated process planning and scheduling

    Over the past few years, growing global competition has forced manufacturing industries to replace their old production strategies with modern approaches. As a result, interest has developed in finding a policy that would enable them to compete and emerge as market winners. With these facts in mind, this paper proposes an integrated process planning and scheduling model that inherits the salient features of outsourcing and leagile principles to compete in the current market. The paper also proposes a model based on leagile principles in which integrated planning management is practiced. A scheduling problem is considered with the aim of minimizing overall makespan, and the paper demonstrates the relevance of both strategies to performance enhancement in terms of reduced makespan. The authors also propose a new hybrid Enhanced Swift Converging Simulated Annealing (ESCSA) algorithm to solve complex real-time scheduling problems. The proposed algorithm inherits prominent features of the Genetic Algorithm (GA), Simulated Annealing (SA), and a Fuzzy Logic Controller (FLC). The ESCSA algorithm reduces the makespan significantly in less computational time and fewer iterations. The efficacy of the proposed algorithm is shown by comparing its results with GA, SA, Tabu Search, and hybrid Tabu-SA optimization methods.
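    The ESCSA algorithm itself is not published as code; as a point of reference, the following is a minimal simulated-annealing sketch for permutation flow shop makespan minimization. The processing-time data, cooling schedule, and swap neighbourhood are illustrative assumptions, not the authors' design; ESCSA additionally layers GA operators and a fuzzy logic controller on top of this basic loop.

```python
import math
import random

def makespan(perm, p):
    """Makespan of a permutation flow shop: p[job][machine] holds
    processing times; done[k] is the latest completion on machine k."""
    done = [0.0] * len(p[0])
    for job in perm:
        ready = 0.0  # completion of this job on the previous machine
        for k, t in enumerate(p[job]):
            done[k] = max(done[k], ready) + t
            ready = done[k]
    return done[-1]

def anneal(p, t0=50.0, cooling=0.995, iters=5000, seed=0):
    """Plain SA over job permutations with a random-swap neighbourhood."""
    rng = random.Random(seed)
    cur = list(range(len(p)))
    rng.shuffle(cur)
    cur_cost = makespan(cur, p)
    best, best_cost, temp = cur[:], cur_cost, t0
    for _ in range(iters):
        i, j = rng.sample(range(len(p)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        cost = makespan(cand, p)
        # Accept improvements always, deteriorations with Boltzmann probability.
        if cost <= cur_cost or rng.random() < math.exp((cur_cost - cost) / temp):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand[:], cost
        temp *= cooling
    return best, best_cost

# Hypothetical 8-job, 4-machine instance.
rng = random.Random(1)
p = [[rng.randint(1, 20) for _ in range(4)] for _ in range(8)]
print(anneal(p))
```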

    Spatial-temporal data modelling and processing for personalised decision support

    The purpose of this research is to model dynamic data without losing any of the temporal relationships, and to predict the likelihood of an outcome as far in advance of its actual occurrence as possible. To this end, a novel computational architecture for personalised (individualised) modelling of spatio-temporal data based on spiking neural network methods (PMeSNNr), with a three-dimensional visualisation of relationships between variables, is proposed. In brief, the architecture transfers spatio-temporal data patterns from a multidimensional input stream into internal patterns in a spiking neural network reservoir. These patterns are then analysed to produce a personalised model for either classification or prediction, depending on the specific needs of the situation. The architecture was implemented in MATLAB as several individual modules linked together to form NeuCube (M1). The methodology has been applied to two real-world case studies: first, to data for the prediction of stroke occurrences on an individual basis; second, to ecological data for aphid pest abundance prediction. The two main objectives when judging the outcomes of the modelling are accurate prediction and achieving it at the earliest possible time point. The implications of these findings are significant for health care management and environmental control. As the case studies represent vastly different application fields, they reveal the potential and usefulness of NeuCube (M1) for modelling data in an integrated manner. This in turn can identify previously unknown (or less understood) interactions, both increasing the reliance that can be placed on the model and enhancing our understanding of the complexities of the world around us without the need for oversimplification. Keywords: Personalised modelling; Spiking neural network; Spatial-temporal data modelling; Computational intelligence; Predictive modelling; Stroke risk prediction.
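    NeuCube (M1) itself is a MATLAB system whose reservoir is a three-dimensional, brain-mapped spiking network; the toy sketch below only illustrates the general idea the abstract describes: passing an input spike stream through a recurrent leaky integrate-and-fire reservoir and reading out the internal spike patterns for downstream personalised modelling. All sizes, weights, and constants are illustrative assumptions.

```python
import numpy as np

def lif_reservoir(spikes_in, n_neurons=50, leak=0.9, threshold=1.0, seed=0):
    """Drive a random recurrent leaky integrate-and-fire reservoir with a
    binary spike stream of shape (time, channels); return the internal
    spike patterns of shape (time, n_neurons)."""
    rng = np.random.default_rng(seed)
    n_in = spikes_in.shape[1]
    w_in = rng.normal(0.0, 0.5, (n_in, n_neurons))        # input projection
    w_rec = rng.normal(0.0, 0.1, (n_neurons, n_neurons))  # recurrent weights
    v = np.zeros(n_neurons)          # membrane potentials
    fired = np.zeros(n_neurons)      # spikes emitted at the previous step
    out = np.zeros((len(spikes_in), n_neurons))
    for t, x in enumerate(spikes_in):
        v = leak * v + x @ w_in + fired @ w_rec
        fired = (v >= threshold).astype(float)
        v = np.where(fired > 0, 0.0, v)   # reset neurons that fired
        out[t] = fired
    return out

# A personalised classifier or predictor could then be trained on, e.g.,
# per-neuron spike counts accumulated over a sample's time window:
sample = (np.random.default_rng(2).uniform(size=(100, 6)) < 0.1).astype(float)
features = lif_reservoir(sample).sum(axis=0)
```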


    Efficient heuristics for the parallel blocking flow shop scheduling problem

    We consider the NP-hard problem of scheduling n jobs in F identical parallel flow shops, each consisting of a series of m machines, under a blocking constraint. The criterion is to minimize the makespan, i.e., the maximum completion time over all jobs in the F flow shops (lines). The Parallel Flow Shop Scheduling Problem (PFSP) is conceptually similar to the Distributed Permutation Flow Shop Scheduling Problem (DPFSP), which models scheduling in companies with more than one factory, each with a flow shop configuration. The proposed methods can therefore solve the scheduling problem under the blocking constraint in both settings, which, to the best of our knowledge, has not been studied previously. In this paper, we propose a mathematical model along with constructive and improvement heuristics to solve the parallel blocking flow shop problem (PBFSP) and thus minimize the maximum completion time among lines. The proposed constructive procedures use two approaches that differ entirely from those in the literature. These methods serve as initial-solution procedures for an iterated local search (ILS) and an iterated greedy algorithm (IGA), both combined with a variable neighborhood search (VNS). The constructive procedure and the improvement methods take the characteristics of the problem into account. The computational evaluation demonstrates that both, especially the IGA, perform considerably better than algorithms adapted from the DPFSP literature.
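    The paper's heuristics are not reproduced here; as a hedged illustration of the problem itself, the sketch below computes the makespan of a single blocking line via the standard departure-time recursion (a finished job holds its machine until the next machine is free) and then assigns jobs to F lines with a naive greedy rule. The greedy rule is one plausible baseline, not the authors' constructive procedure; an IGA would repeatedly destroy and greedily rebuild such an assignment.

```python
def blocking_makespan(seq, p):
    """Makespan of one blocking flow shop line (no intermediate buffers).
    d[j] is the departure time of the previous job from machine j,
    with d[0] acting as the entry time onto machine 1."""
    m = len(p[0])
    d = [0.0] * (m + 1)
    for job in seq:
        nd = [0.0] * (m + 1)
        nd[0] = d[1]                       # enter machine 1 once it is vacated
        for j in range(1, m):
            # Done on machine j, but may be blocked until machine j+1 is free.
            nd[j] = max(nd[j - 1] + p[job][j - 1], d[j + 1])
        nd[m] = nd[m - 1] + p[job][m - 1]  # leave the last machine when done
        d = nd
    return d[m]

def greedy_assign(jobs, p, f):
    """Append each job to whichever of the f lines grows the least."""
    lines = [[] for _ in range(f)]
    for job in jobs:
        best = min(range(f),
                   key=lambda i: blocking_makespan(lines[i] + [job], p))
        lines[best].append(job)
    return lines, max(blocking_makespan(s, p) for s in lines)
```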

    A particle swarm optimisation for the no-wait flow shop problem with due date constraints.

    This paper considers the no-wait flow shop scheduling problem with due date constraints. In the no-wait flow shop problem, no waiting time is allowed between successive operations of a job. Moreover, a due date is associated with the completion of each job. The objective function is makespan. This problem is strongly NP-hard. In this paper, a particle swarm optimisation (PSO) algorithm is developed for the problem, and the effect of several dispatching rules for generating initial solutions is studied. A Taguchi-based design-of-experiments approach is followed to determine the effect of different parameter values on the performance of the algorithm. To evaluate the proposed PSO, a large number of benchmark problems are selected from the literature and solved with different due date and penalty settings. Computational results confirm that the proposed PSO is efficient and competitive; the developed framework improves many of the best-known solutions of the test problems available in the literature.
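    The paper's PSO is not reproduced here; the sketch below shows the two standard ingredients such an approach typically combines: the no-wait makespan recursion (each job is delayed just enough that it never waits between machines) and a continuous PSO whose particles are decoded into job permutations with the smallest-position-value (argsort) rule. All parameters and the decoding are illustrative assumptions; due dates and penalties are omitted.

```python
import numpy as np

def nowait_makespan(perm, p):
    """No-wait makespan: a job's operations run back-to-back, so each job
    starts with the minimum delay that keeps it behind its predecessor on
    every machine."""
    start, prev = 0.0, None
    for job in perm:
        if prev is not None:
            delay, cum_prev, cum_cur = 0.0, 0.0, 0.0
            for k in range(len(p[0])):
                cum_prev += p[prev][k]                  # predecessor through machine k
                delay = max(delay, cum_prev - cum_cur)  # avoid overtaking
                cum_cur += p[job][k]
            start += delay
        prev = job
    return start + sum(p[prev])

def pso(p, swarm=20, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Continuous PSO; a particle's permutation is the argsort of its position."""
    rng = np.random.default_rng(seed)
    n = len(p)
    cost = lambda xi: nowait_makespan(list(np.argsort(xi)), p)
    x = rng.uniform(0, 1, (swarm, n))
    v = np.zeros((swarm, n))
    pb, pb_cost = x.copy(), np.array([cost(xi) for xi in x])
    g = pb[pb_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, swarm, n))
        v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (g - x)
        x = x + v
        for i in range(swarm):
            c = cost(x[i])
            if c < pb_cost[i]:
                pb[i], pb_cost[i] = x[i].copy(), c
        g = pb[pb_cost.argmin()].copy()
    return list(np.argsort(g)), pb_cost.min()
```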

    Studying the effect of server side constraints on the makespan of the no-wait flow shop problem with sequence dependent setup times.

    This paper deals with scheduling in a no-wait flow shop system with sequence-dependent setup times and server side constraints. The no-wait constraint states that there should be no waiting time between consecutive operations of a job. In addition, sequence-dependent setup times are considered for each operation; that is, the setup time of an operation on its machine depends on the previous operation on the same machine. Moreover, the problem includes server side constraints, i.e., not all machines have a dedicated server to prepare them for an operation; instead, several machines share a common server. The performance measure is makespan. This problem is strongly NP-hard. To deal with it, two genetic algorithms are developed. To evaluate the developed frameworks, a large number of benchmark problems are selected and solved under different server-limitation scenarios. Computational results confirm that both proposed algorithms are efficient and competitive, improving many of the best-known solutions of the test problems from the literature. Moreover, the effect of the server side constraints on the makespan of the test problems is analysed using the computational results.
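    The two genetic algorithms developed in the paper are not public; below is a generic permutation-GA skeleton (tournament selection, order crossover, swap mutation) of the kind typically used for such sequencing problems. The fitness function is left pluggable because the paper's objective, no-wait makespan under sequence-dependent setups and shared setup servers, is instance-specific; everything here is an illustrative assumption rather than the authors' design.

```python
import random

def order_crossover(p1, p2, rng):
    """OX: copy a slice from parent 1, fill remaining slots in parent-2 order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child[a:b]]
    empty = [i for i in range(n) if child[i] is None]
    for i, g in zip(empty, fill):
        child[i] = g
    return child

def genetic_algorithm(fitness, n, pop_size=40, gens=200, pmut=0.2, seed=0):
    """Permutation GA skeleton; `fitness` maps a job sequence to a cost to
    minimise (here, it would encode the no-wait/setup/server makespan)."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        def tourney():  # size-3 tournament selection
            return min(rng.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            child = order_crossover(tourney(), tourney(), rng)
            if rng.random() < pmut:  # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)
```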

    A Linear Programming Model for Renewable Energy Aware Discrete Production Planning and Control

    Industrial production in the EU, like other sectors of the economy, is obliged to eliminate greenhouse gas emissions by 2050; with its Green Deal, the European Union set the corresponding framework in 2019. To achieve net zero in the remaining time without endangering competitiveness in a globalized market, the transformation of industrial value creation has to start today. In terms of energy supply, this means comprehensive electrification of processes and a switch to fully renewable power generation. Due to the growing share of renewable energy sources, increasing volatility can already be observed in the European electricity market. Companies have two main ways to deal with the accompanying rise in average electricity prices. The first is to reduce consumption by increasing efficiency, which naturally has physical limits. The second is to exploit the increasingly volatile electricity price by taking advantage of periods of relatively low prices. To do this, companies must identify their energy-intensive processes and design them so that these activities can be shifted in time. This article explains the necessary distinction between labor-intensive and energy-intensive processes. A general mathematical model for the holistic optimization of discrete industrial production is presented. Using this MILP model, it is shown that making energy-intensive processes flexible under volatile energy prices can reduce costs and thus secure competitiveness while bringing production in line with European climate goals. On the basis of real electricity market data, different production scenarios are compared, and the conditions under which the flexibilization of specific processes is worthwhile are investigated.
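    The paper's MILP is not reproduced here; the toy model below, written with the open-source PuLP library (assumed installed), only illustrates the core mechanism the abstract describes: shifting one energy-intensive task into the low-price slots of a volatile electricity tariff. The price series, task length, and load are invented illustrative data.

```python
import pulp

# Toy instance: place an energy-intensive task of `duration` consecutive
# slots within the horizon, paying the volatile electricity price per slot.
price = [42.0, 38.5, 12.1, 9.8, 11.3, 35.0, 55.2, 60.0]  # EUR/MWh, assumed
duration, load = 3, 2.0                                   # slots, MWh per slot
T = range(len(price))

prob = pulp.LpProblem("energy_aware_scheduling", pulp.LpMinimize)
start = pulp.LpVariable.dicts("start", T, cat=pulp.LpBinary)  # task starts at t
run = pulp.LpVariable.dicts("run", T, cat=pulp.LpBinary)      # task runs in t

prob += pulp.lpSum(price[t] * load * run[t] for t in T)       # energy cost
prob += pulp.lpSum(start[t] for t in T) == 1                  # exactly one start
for t in T:
    if t + duration > len(price):
        prob += start[t] == 0  # the task must fit inside the horizon
    # Running in slot t iff the task started within the last `duration` slots.
    prob += run[t] == pulp.lpSum(start[s] for s in T if s <= t < s + duration)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
best = [t for t in T if start[t].value() == 1][0]
print(f"start at slot {best}, cost {pulp.value(prob.objective):.1f} EUR")
# With these data the task shifts into slots 2-4, the cheapest window.
```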

    Deep Reinforcement Learning Techniques For Solving Hybrid Flow Shop Scheduling Problems: Proximal Policy Optimization (PPO) and Asynchronous Advantage Actor-Critic (A3C)

    Well-studied scheduling practices are fundamental to supporting core business processes in any manufacturing environment. In particular, Hybrid Flow Shop (HFS) scheduling problems arise in many manufacturing settings. Current advances in Deep Reinforcement Learning (DRL) have attracted the attention of both practitioners and academics to investigate its adoption beyond synthetic game-like applications. We therefore present an approach based on DRL techniques in conjunction with a discrete event simulation model to solve a real-world four-stage HFS scheduling problem. The main idea behind the presented concepts is to expose a DRL agent to a game-like environment using an indirect encoding. Two DRL techniques, namely Proximal Policy Optimization (PPO) and Asynchronous Advantage Actor-Critic (A3C), are evaluated on problems of different complexity. The computational results suggest that the DRL agents successfully learn appropriate policies for the investigated problem, and that the agents can adjust their policies when exposed to a different problem. We further evaluate the approach on problem instances published in the literature to establish a comparison.
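    The authors' simulation environment and agent code are not public; as a small reference point, the function below implements the clipped surrogate objective at the heart of PPO (A3C instead optimizes an unclipped advantage actor-critic objective across asynchronous workers). In the scheduling setting, a state would encode queue and machine status at each decision point and an action would dispatch the next job; the epsilon value is the common default, assumed here.

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """PPO's clipped surrogate objective (to be maximised): the policy
    probability ratio is clipped to [1-eps, 1+eps] so that a single update
    cannot move the policy far from the one that collected the data."""
    ratio = np.exp(logp_new - logp_old)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return float(np.mean(np.minimum(ratio * advantages, clipped * advantages)))
```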