
    Integrated approach of scheduling a flexible job shop using enhanced firefly and hybrid flower pollination algorithms

    Manufacturing industries are undergoing a tremendous transformation due to Industry 4.0. Flexibility, responsiveness to consumer demands, product customization, high product quality, and reduced delivery times are mandatory for the survival of a manufacturing plant, and scheduling plays a major role in achieving them. A job shop problem extended with routing flexibility is called the flexible job shop scheduling problem; it is an integral part of smart manufacturing. This study optimizes such schedules using an integrated approach in which machine assignment and operation routing are performed concurrently. Two hybrid methods are proposed: 1) the Hybrid Adaptive Firefly Algorithm (HAdFA) and 2) the Hybrid Flower Pollination Algorithm (HFPA). To address the premature convergence inherent in the classic firefly algorithm, the proposed HAdFA employs two novel adaptive strategies: an adaptive randomization parameter (α) that is dynamically adjusted at each step, and a Gray relational analysis that updates each firefly at every step, thereby maintaining a balance between diversification and intensification. HFPA is inspired by the pollination strategy of flowers. Additionally, both HAdFA and HFPA incorporate an enhanced simulated annealing local search to accelerate convergence and prevent entrapment in local optima. Tests on standard benchmark instances demonstrate the proposed algorithms' efficacy: HAdFA surpasses the performance of HFPA and of other metaheuristics reported in the literature. A case study further validates the efficiency of our algorithm, which significantly improves convergence speed and enables the exploration of a large and rich set of optimal solutions.
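    As an illustration of the adaptive randomization idea described above, the following is a minimal, generic firefly-algorithm sketch in Python in which the randomization parameter α shrinks at each iteration. It is not the authors' HAdFA (it omits the Gray relational analysis and simulated annealing stages), and the objective, bounds, and decay schedule are illustrative assumptions.

        import math
        import random

        def firefly_optimize(objective, dim, n_fireflies=20, iters=100,
                             beta0=1.0, gamma=1.0, alpha0=0.5, alpha_decay=0.97,
                             lower=-5.0, upper=5.0):
            # Generic firefly algorithm (minimisation) with an adaptive randomization parameter alpha.
            pop = [[random.uniform(lower, upper) for _ in range(dim)] for _ in range(n_fireflies)]
            fitness = [objective(x) for x in pop]
            alpha = alpha0
            for _ in range(iters):
                for i in range(n_fireflies):
                    for j in range(n_fireflies):
                        if fitness[j] < fitness[i]:  # firefly i moves towards the brighter firefly j
                            r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                            beta = beta0 * math.exp(-gamma * r2)
                            pop[i] = [min(max(a + beta * (b - a) + alpha * (random.random() - 0.5), lower), upper)
                                      for a, b in zip(pop[i], pop[j])]
                            fitness[i] = objective(pop[i])
                alpha *= alpha_decay  # adaptive step: shrink randomization as the search intensifies
            best = min(range(n_fireflies), key=lambda k: fitness[k])
            return pop[best], fitness[best]

        # Usage: minimise the sphere function in 5 dimensions.
        best_x, best_f = firefly_optimize(lambda x: sum(v * v for v in x), dim=5)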

    A comprehensive survey on cultural algorithms


    A framework for scheduling problems

    Scheduling problems form an important subclass of combinatorial optimisation problems with many applications in manufacturing and logistics. Predominantly, these problems are NP-complete (in their decision form) and NP-hard (in their optimisation form), hence the main line of research in solving them concentrates on the design of efficient heuristic algorithms. Two main categories of these algorithms exist: deterministic algorithms and evolutionary metaheuristics. The deterministic algorithms comprise local improvement techniques, such as the k-opt algorithm, which try to improve an existing feasible solution, and constructive heuristics, such as NEH, which build a solution from scratch, adding one job at a time. Evolutionary metaheuristics have prospered in the past decades owing to their efficiency and flexibility. Drawing inspiration from the theory of natural evolution or from swarm behaviour, the most popular of these algorithms in practice include, among others, Genetic Algorithms, Differential Evolution, and Particle Swarm Optimisation. However, even though these heuristics in most cases provide close-to-optimal solutions in reasonable execution time, that time is still impractically long for many applications, so much effort has been dedicated to accelerating these algorithms.
Since hardware development has turned away from increasing clock speeds towards parallel processing units, having reached technological limits imposed by power consumption and heat dissipation, this effort largely goes into parallelising existing algorithms so that they can exploit the computing power of multi-core and many-core platforms. This is the goal of the first part of the thesis: accelerating two of the deterministic algorithms, NEH and 2-opt, with interesting results. A different approach is taken in the second part, whose core premise is to explore the influence of stochasticity on the performance of an evolutionary algorithm, using the relatively recent and promising Discrete Artificial Bee Colony (DABC) algorithm. Its pseudo-random number generator is replaced with different types of dissipative chaos maps, some of which improve the algorithm significantly.
It has been shown that population-based evolutionary algorithms often form complex networks, viewed from the perspective of the information exchanged between individual solutions during the development of the population. The final part of the thesis puts this observation into practice by embedding a self-adaptive mechanism based on complex network analysis into the ABC algorithm, an evolutionary algorithm for continuous optimisation that is the basis of the aforementioned DABC algorithm, and demonstrates the effectiveness of some of the developed versions on standard continuous optimisation test functions. The possibility of extending this modification to combinatorial optimisation problems in the future is discussed in the conclusion.
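    The NEH constructive heuristic accelerated in the first part of the thesis can be summarised compactly. The sketch below is a generic, textbook-style Python version for permutation flow shop makespan minimisation (sort jobs by decreasing total processing time, then insert each job at its best position); it is not the accelerated implementation developed in the thesis, and the toy instance is an assumption.

        def makespan(sequence, proc):
            # proc[job][machine] = processing time; returns completion time of the last operation.
            n_machines = len(proc[0])
            completion = [0.0] * n_machines
            for job in sequence:
                for m in range(n_machines):
                    previous = completion[m - 1] if m > 0 else 0.0
                    completion[m] = max(completion[m], previous) + proc[job][m]
            return completion[-1]

        def neh(proc):
            # NEH: order jobs by decreasing total processing time, then insert each at its best position.
            jobs = sorted(range(len(proc)), key=lambda j: -sum(proc[j]))
            sequence = [jobs[0]]
            for job in jobs[1:]:
                candidates = (sequence[:i] + [job] + sequence[i:] for i in range(len(sequence) + 1))
                sequence = min(candidates, key=lambda seq: makespan(seq, proc))
            return sequence, makespan(sequence, proc)

        # Usage: 4 jobs on 3 machines (rows are jobs, columns are machines).
        times = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]]
        print(neh(times))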

    ACO-based feature selection algorithm for classification

    A dataset with a small number of records but a large number of attributes exhibits the phenomenon known as the "curse of dimensionality". Classifying this type of dataset requires Feature Selection (FS) methods to extract useful information. The modified graph clustering ant colony optimisation (MGCACO) algorithm is an effective FS method that groups highly correlated features. However, the MGCACO algorithm has three main drawbacks in producing a feature subset: its clustering method, its parameter sensitivity, and its final subset determination. An enhanced graph clustering ant colony optimisation (EGCACO) algorithm is proposed to address these three problems. The proposed improvements include: (i) an ACO feature clustering method to obtain clusters of highly correlated features; (ii) an adaptive selection technique for constructing subsets from the clusters of features; and (iii) a genetic-based method for producing the final subset of features. The ACO feature clustering method uses intensification and diversification mechanisms for local and global optimisation to obtain clusters of highly correlated features. The adaptive selection technique lets the selection parameter change based on feedback from the search space. The genetic-based method determines the final subset automatically, based on crossover and a subset quality calculation. The performance of the proposed algorithm was evaluated on 18 benchmark datasets from the University of California, Irvine (UCI) repository and nine DNA microarray datasets, against 15 benchmark metaheuristic algorithms. On the UCI datasets, EGCACO outperforms the benchmark optimisation algorithms in terms of the number of selected features on 16 of the 18 datasets (88.89%) and achieves the best classification accuracy on eight of them (44.44%). On the nine DNA microarray datasets, EGCACO ranks first in classification accuracy on seven datasets (77.78%) and selects the fewest features on six (66.67%). The proposed EGCACO algorithm can be used for FS in large DNA microarray classification tasks across various application domains.
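    To make the general idea concrete, the following is a minimal, generic sketch of ant-colony-based feature subset selection in Python, in which per-feature pheromone biases the features each ant picks and good subsets reinforce their features. It is not the EGCACO algorithm (it has no clustering, adaptive selection, or genetic stage), and the toy relevance-based scoring function is a stand-in for a real classifier-accuracy evaluation.

        import random

        def aco_feature_selection(n_features, score_subset, n_ants=10, iters=30,
                                  subset_size=5, evaporation=0.1, q=1.0):
            # Generic ACO sketch: per-feature pheromone guides probabilistic subset construction.
            pheromone = [1.0] * n_features
            best_subset, best_score = None, float("-inf")
            for _ in range(iters):
                for _ in range(n_ants):
                    weights = pheromone[:]
                    subset = []
                    for _ in range(subset_size):
                        f = random.choices(range(n_features), weights=weights, k=1)[0]
                        subset.append(f)
                        weights[f] = 0.0  # prevent picking the same feature twice
                    score = score_subset(subset)
                    if score > best_score:
                        best_subset, best_score = subset, score
                    for f in subset:  # deposit pheromone on features that appeared in this subset
                        pheromone[f] += q * score
                pheromone = [(1.0 - evaporation) * p for p in pheromone]  # evaporation keeps exploration alive
            return best_subset, best_score

        # Usage with a toy relevance-based score (a stand-in for classifier accuracy on a validation set).
        relevance = [random.random() for _ in range(20)]
        print(aco_feature_selection(20, lambda s: sum(relevance[f] for f in s)))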

    Holistic, data-driven, service and supply chain optimisation: linked optimisation.

    The intensity of competition and technological advancement in the business environment have made companies collaborate and cooperate as a means of survival. This creates a chain of companies and business components with unified business objectives. However, managing the decision-making processes (such as scheduling, ordering, delivering, and allocating) at the various business components while maintaining a holistic objective is a huge business challenge, because these operations are complex and dynamic. The overall chain of business processes is widely distributed across all the supply chain participants, so no individual collaborator has a complete overview of the processes. Increasingly, such decisions are automated and strongly supported by optimisation algorithms - manufacturing optimisation, B2B ordering, financial trading, transportation scheduling and allocation. However, most of these algorithms do not incorporate the complexity associated with interacting decision-making systems such as supply chains. It is well known that decisions made at one point in a supply chain can have significant consequences that ripple through linked production and transportation systems. Recently, global shocks to supply chains (COVID-19, climate change, the blockage of the Suez Canal) have demonstrated the importance of these interdependencies and the need to create supply chains that are more resilient and have a significantly reduced impact on the environment. Such interacting decision-making systems need to be considered through an optimisation process, yet the interactions between them are not modelled. We therefore believe that modelling such interactions is an opportunity to provide computational extensions to current optimisation paradigms. This research study aims to develop a general framework for formulating and solving holistic, data-driven optimisation problems in service and supply chains. The research achieves this aim and contributes to scholarship by, firstly, considering the complexities of supply chain problems from a linked-problem perspective, which leads to a formalism for characterising linked optimisation problems as a model for supply chains. Secondly, the research adopts a method for creating a linked optimisation problem benchmark by linking existing classical benchmark sets, using a mix of classical optimisation problems, typically relating to supply chain decision problems, to describe different modes of linkage in linked optimisation problems. Thirdly, several techniques for linking fragmented supply chain data have been proposed in the literature to identify data relationships; this thesis explores some of these techniques and combines them in specific ways to improve the data discovery process. Lastly, the research investigates resilient state-of-the-art optimisation algorithms presented in the literature and designs suitable algorithmic approaches, inspired by existing algorithms and the nature of the problem linkages, to address different problem linkages in supply chains.
Considering the research findings and future perspectives, the study demonstrates the suitability of the algorithms for different linked structures involving two sub-problems, and suggests further investigation of issues such as the suitability of algorithms for more complex structures, benchmark methodologies, holistic goals and evaluation, process mining, game theory, and dependency analysis.
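    As a toy illustration of what a linkage between two decision problems can look like (an assumption for exposition, not the formalism developed in the thesis), the Python sketch below couples a production decision with a delivery decision whose cost depends on what was produced, so the two sub-problems cannot be optimised in isolation and must be evaluated against one holistic objective.

        import itertools

        # Hypothetical toy instance: decide which products to produce, then pay a delivery cost
        # that depends on that production decision (the linkage between the two sub-problems).
        products = ["A", "B", "C"]
        production_cost = {"A": 4, "B": 6, "C": 5}
        revenue = {"A": 9, "B": 8, "C": 7}

        def delivery_cost(produced):
            # Linked sub-problem: shipping cost grows with the number of distinct products produced.
            return 3 * len(produced)

        best_plan, best_profit = None, float("-inf")
        for size in range(len(products) + 1):
            for produced in itertools.combinations(products, size):
                profit = sum(revenue[p] - production_cost[p] for p in produced) - delivery_cost(produced)
                if profit > best_profit:
                    best_plan, best_profit = produced, profit

        print(best_plan, best_profit)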

    From metaheuristics to learnheuristics: Applications to logistics, finance, and computing

    A large number of decision-making processes in strategic sectors such as transport and production involve NP-hard problems, which are frequently characterized by high levels of uncertainty and dynamism. Metaheuristics have become the predominant method for solving challenging optimization problems in reasonable computing times. However, they frequently assume that inputs, objective functions, and constraints are deterministic and known in advance. These strong assumptions lead to work on oversimplified problems, and the resulting solutions may perform poorly when implemented. Simheuristics integrate simulation into metaheuristics as a way to naturally solve stochastic problems; in a similar fashion, learnheuristics combine statistical learning and metaheuristics to tackle problems in dynamic environments, where inputs may depend on the structure of the solution. The main contributions of this thesis are: (i) a design for learnheuristics; (ii) a classification of works that hybridize statistical and machine learning with metaheuristics; and (iii) several applications in transport, production, finance, and computing.
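    The learnheuristic idea of coupling a learned model with a metaheuristic can be illustrated with a toy sketch. Below, a simple running-average model estimates how demand depends on the visiting position (a solution-dependent input), and a swap-based local search evaluates candidate routes with those learned estimates; all names and the hidden demand model are illustrative assumptions, not the designs from the thesis.

        import random

        random.seed(1)
        n = 6
        base_demand = [random.uniform(5, 10) for _ in range(n)]
        position_effect = [0.0] * n  # learned average demand adjustment for each visiting position

        def observe_demand(customer, position):
            # Hidden dynamic behaviour (unknown to the optimiser): later visits see slightly lower demand.
            return base_demand[customer] * (1 - 0.05 * position) + random.gauss(0, 0.2)

        def predicted_cost(route):
            # The objective uses the learned statistical model instead of the unknown true dynamics.
            return sum(base_demand[c] * (1 + position_effect[pos]) for pos, c in enumerate(route))

        route = list(range(n))
        for _ in range(200):
            # Learning step: observe one execution of the current route and update the position model.
            for pos, c in enumerate(route):
                adjustment = observe_demand(c, pos) / base_demand[c] - 1
                position_effect[pos] += 0.1 * (adjustment - position_effect[pos])
            # Metaheuristic step: try a swap move and keep it if the predicted cost improves.
            i, j = random.sample(range(n), 2)
            candidate = route[:]
            candidate[i], candidate[j] = candidate[j], candidate[i]
            if predicted_cost(candidate) < predicted_cost(route):
                route = candidate

        print(route, round(predicted_cost(route), 2))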

    Evolutionary Computation 2020

    Intelligent optimization is based on the mechanisms of computational intelligence: refining a suitable feature model, designing an effective optimization algorithm, and then obtaining an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency, and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, the whale optimization algorithm, the differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems, including the green shop scheduling problem, the severely nonlinear problem of one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvements and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.