
    Combinatorial Ant Optimization and the Flowshop Problem

    Researchers have developed efficient techniques and meta-heuristics for many Combinatorial Optimization (CO) problems, such as the Flow Shop Scheduling Problem and the Travelling Salesman Problem (TSP), since the early 1960s. Ant Colony Optimization (ACO), introduced by Dorigo et al. [DBS06] in the early 1990s, is one such technique for solving CO problems, and several variants have since been proposed. In this thesis, we use ACO to find solutions to the classic Flow Shop Scheduling Problem and propose a novel method for solution improvement. Our approach has two phases: in the first phase, we solve a TSP instance with ACO, which yields an initial permutation (tour). We then use this tour as an initial solution to the flow shop problem and improve it with 2-opt exchanges, which gives promising results. We further introduce another improvement technique that performs even better. We compare our results with the best (optimal) and worst solutions known to date. A comprehensive experimental study on an existing benchmark dataset shows that our approach gives remarkably good results.
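
    The two-phase idea in this abstract, taking a permutation produced by ACO and refining it with 2-opt exchanges, can be illustrated with a short sketch. The makespan function, the processing-time matrix and the "ACO-produced" permutation below are illustrative assumptions, not taken from the thesis; only the 2-opt mechanism, reversing segments of the job permutation and keeping any reversal that shortens the flow shop makespan, reflects the abstract.

        # Sketch: 2-opt refinement of a job permutation for a permutation flow shop.
        # Processing times and the initial permutation are made-up toy data.

        def makespan(perm, p):
            """Completion time of the last job on the last machine."""
            m = len(p[0])                      # number of machines
            c = [0.0] * m                      # running completion time per machine
            for j in perm:
                c[0] += p[j][0]
                for k in range(1, m):
                    c[k] = max(c[k], c[k - 1]) + p[j][k]
            return c[-1]

        def two_opt(perm, p):
            """Repeatedly reverse sub-sequences while the makespan improves."""
            best = list(perm)
            improved = True
            while improved:
                improved = False
                for i in range(len(best) - 1):
                    for j in range(i + 1, len(best)):
                        cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                        if makespan(cand, p) < makespan(best, p):
                            best, improved = cand, True
            return best

        # Toy instance: 4 jobs x 3 machines, plus a permutation standing in for the ACO tour.
        p = [[3, 2, 4], [1, 4, 2], [5, 1, 3], [2, 3, 1]]
        aco_perm = [0, 1, 2, 3]
        refined = two_opt(aco_perm, p)
        print(refined, makespan(refined, p))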

    Performance analysis for network coding using ant colony routing

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The aim of this thesis is to investigate the performance of a system that combines the Network Coding (NC) technique with an Ant Colony Optimization (ACO) routing protocol, analysing the impact of several workload characteristics on system performance. Network coding is a key development in information transmission and processing: it enhances multicast performance by allowing intermediate nodes to perform encoding operations, combining several packets and relaying them as a single packet. Using network coding in multicast communication requires two steps: determining appropriate transmission paths from the source to multiple receivers and choosing a suitable coding scheme. Although network coding allows a network to achieve the maximum multicast rate, it also introduces additional overhead, which should be minimised with an optimization technique. Ant Colony Optimization, in turn, imitates the behaviour of ants finding the shortest path to a destination by following the pheromone deposited by earlier ants; applying the same concept to a communication network environment allows shorter paths to be found. The simulation results show that combining Ant Colony Optimization with network coding considerably improves network performance: bandwidth consumption is reduced by 25% compared with conventional routing protocols, and the proposed algorithm decreases the system's computation time by 20%.
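
    A minimal sketch of the two ingredients described above, XOR-based packet combination at an intermediate node and pheromone-guided next-hop selection, is given below. The tiny topology, pheromone values and packet contents are illustrative assumptions only; they do not reproduce the thesis's protocol.

        # Sketch: XOR network coding at an intermediate node + pheromone-based routing choice.
        # Neighbour names, pheromone levels and packets are invented for illustration.
        import random

        def xor_code(pkt_a: bytes, pkt_b: bytes) -> bytes:
            """Combine two equal-length packets into one coded packet."""
            return bytes(a ^ b for a, b in zip(pkt_a, pkt_b))

        def choose_next_hop(neighbours, pheromone):
            """Pick a neighbour with probability proportional to its pheromone level."""
            total = sum(pheromone[n] for n in neighbours)
            r = random.uniform(0, total)
            acc = 0.0
            for n in neighbours:
                acc += pheromone[n]
                if r <= acc:
                    return n
            return neighbours[-1]

        pheromone = {"B": 0.7, "C": 0.2, "D": 0.1}   # assumed values learnt by earlier ants
        coded = xor_code(b"pkt-one!", b"pkt-two!")
        print(choose_next_hop(["B", "C", "D"], pheromone), coded)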

    An improved bees algorithm local search mechanism for numerical dataset

    The Bees Algorithm (BA), a heuristic optimization procedure, is one of the fundamental search techniques based on the food-foraging behaviour of bees. The algorithm combines an exploitative neighbourhood search with a random explorative search. However, BA requires long computation times and numerous computational steps to obtain a good solution, especially on more complicated problems, and it does not guarantee an optimal solution, mainly because of a lack of accuracy. To address this issue, the local search in BA has previously been investigated using simple swap, 2-Opt and 3-Opt neighbourhoods, proposed as the Massudi methods for Bees Algorithm Feature Selection (BAFS). This study presents an extension that adds a 4-Opt search neighbourhood. The proposal was implemented and its performance comprehensively compared and analysed against the existing methods with respect to accuracy and time. Furthermore, the feature selection algorithm was implemented and tested on popular datasets from the UCI Machine Learning Repository. The experimental results confirm that the proposed extension of the search neighbourhood, including the 4-Opt approach, provides better accuracy within acceptable time than the Massudi methods.
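
    The idea of plugging a stronger local-search operator into the Bees Algorithm's neighbourhood step can be sketched as follows. The toy fitness function, parameter values and the random k-position perturbation are illustrative assumptions standing in for the 2/3/4-Opt operators; they are not the Massudi or BAFS operators from the paper.

        # Sketch: Bees Algorithm skeleton with a pluggable neighbourhood operator.
        # Fitness, parameters and the k-swap move are illustrative assumptions.
        import random

        def fitness(sol):
            return -sum((x - 0.5) ** 2 for x in sol)          # toy objective to maximise

        def k_swap(sol, k):
            """Neighbourhood move: randomly perturb k positions (stand-in for 2/3/4-Opt)."""
            out = list(sol)
            for i in random.sample(range(len(out)), k):
                out[i] = random.random()
            return out

        def bees_algorithm(n_scouts=20, n_best=5, n_recruits=10, k=4, iters=100, dim=6):
            scouts = [[random.random() for _ in range(dim)] for _ in range(n_scouts)]
            for _ in range(iters):
                scouts.sort(key=fitness, reverse=True)
                new_pop = []
                for site in scouts[:n_best]:                   # exploitative neighbourhood search
                    recruits = [k_swap(site, k) for _ in range(n_recruits)]
                    new_pop.append(max(recruits + [site], key=fitness))
                while len(new_pop) < n_scouts:                 # random explorative search
                    new_pop.append([random.random() for _ in range(dim)])
                scouts = new_pop
            return max(scouts, key=fitness)

        print(fitness(bees_algorithm()))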

    Ant Colony Optimisation for Dynamic and Dynamic Multi-objective Railway Rescheduling Problems

    Recovering the timetable after a delay is essential to the smooth and efficient operation of the railways, for both passengers and railway operators. Most current railway rescheduling research concentrates on static problems where all delays are known in advance. However, due to the unpredictable nature of the railway system, further unforeseen incidents may occur while the trains are running to the new rescheduled timetable. This changes the problem, making it a dynamic problem that evolves over time. The aim of this work is to investigate the application of ant colony optimisation (ACO) to dynamic and dynamic multi-objective railway rescheduling problems. ACO is a promising approach for dynamic combinatorial optimisation problems because its inbuilt mechanisms allow it to adapt to a new environment while retaining potentially useful information from the previous one. In addition, ACO can handle multi-objective problems through the addition of multiple colonies and/or multiple pheromone and heuristic matrices. The contributions of this work are the development of a junction simulator to model unique dynamic and multi-objective railway rescheduling problems and an investigation into the application of ACO algorithms to solve those problems. A further contribution is a unique two-colony ACO framework that solves the separate problems of platform reallocation and train resequencing at a UK railway station under dynamic delay scenarios. Results showed that ACO can be effectively applied to the rescheduling of trains in both dynamic and dynamic multi-objective rescheduling problems. In the dynamic junction rescheduling problem ACO outperformed First Come First Served (FCFS), while in the dynamic multi-objective rescheduling problem ACO outperformed FCFS and Non-dominated Sorting Genetic Algorithm II (NSGA-II), a state-of-the-art multi-objective algorithm. When considering platform reallocation and rescheduling in dynamic environments, ACO outperformed Variable Neighbourhood Search (VNS), Tabu Search (TS) and running with no rescheduling algorithm. These results suggest that ACO shows promise for the rescheduling of trains in both dynamic and dynamic multi-objective environments. Engineering and Physical Sciences Research Council (EPSRC)
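
    The mechanism that lets ACO carry information across a dynamic change, evaporation and deposit on a pheromone matrix rather than a full restart, can be sketched in a few lines. The matrix size, parameter values and the "partial reset" rule below are illustrative assumptions; they are not the thesis's two-colony framework.

        # Sketch: pheromone update rules that let ACO adapt after a dynamic change.
        # Sizes, rho and the partial-reset rule are illustrative assumptions.
        import numpy as np

        def evaporate_and_deposit(tau, best_tour, rho=0.1, q=1.0):
            """Standard ACO update: evaporate everywhere, deposit on the current best tour."""
            tau *= (1.0 - rho)
            for i, j in zip(best_tour, best_tour[1:]):
                tau[i, j] += q
            return tau

        def on_dynamic_change(tau, reset_strength=0.5):
            """Blend the learnt matrix toward uniform: keep useful history, regain exploration."""
            uniform = np.full_like(tau, tau.mean())
            return (1.0 - reset_strength) * tau + reset_strength * uniform

        tau = np.ones((5, 5))
        tau = evaporate_and_deposit(tau, best_tour=[0, 2, 4, 1, 3])
        tau = on_dynamic_change(tau)                 # e.g. a new delay is reported
        print(tau.round(2))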

    Scheduling Problems

    Scheduling is defined as the process of assigning operations to resources over time to optimize a criterion. Scheduling problems comprise both a set of resources and a set of consumers, so managing them involves managing the use of resources by several consumers. This book presents some new applications and trends related to task and data scheduling. In particular, chapters focus on data science, big data, high-performance computing, and Cloud computing environments. In addition, the book presents novel algorithms and literature reviews that will guide current and new researchers who work with load balancing, scheduling, and allocation problems.
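
    As a concrete illustration of "assigning operations to resources over time to optimize a criterion", the sketch below performs greedy list scheduling onto identical resources to reduce the makespan. The task durations, the number of resources and the longest-processing-time rule are illustrative assumptions, not drawn from any chapter of the book.

        # Sketch: greedy list scheduling of tasks onto identical resources (makespan criterion).
        # Task durations and the number of resources are illustrative assumptions.
        import heapq

        def list_schedule(durations, n_resources):
            heap = [(0.0, r) for r in range(n_resources)]      # (finish time, resource id)
            heapq.heapify(heap)
            assignment = []
            for task, d in sorted(enumerate(durations), key=lambda t: -t[1]):  # longest first
                finish, r = heapq.heappop(heap)
                assignment.append((task, r, finish, finish + d))
                heapq.heappush(heap, (finish + d, r))
            return assignment, max(f for f, _ in heap)

        plan, makespan = list_schedule([4, 3, 3, 2, 2, 1], n_resources=2)
        print(plan, makespan)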

    Planning and Scheduling Optimization

    Although planning and scheduling optimization has been explored in the literature for many years, it remains a hot topic in current scientific research. Changing market trends, globalization, technological progress, and sustainability considerations make it necessary to deal with new optimization challenges in modern manufacturing, engineering, and healthcare systems. This book provides an overview of recent advances in different areas connected with operations research models and other applications of intelligent computing techniques used for planning and scheduling optimization. The wide range of theoretical and practical research findings reported in this book confirms that planning and scheduling is a complex problem present in different industrial sectors and organizations, and it opens promising and dynamic perspectives for research and development.

    Prognostics-Based Two-Operator Competition for Maintenance and Service Part Logistics

    Prognostics and timely maintenance of components are critical to the continuing operation of a system. By implementing prognostics, the operator can maintain the system in the right place at the right time. However, real-world complexity makes near-zero downtime difficult to achieve, partly because of possible shortages of required service parts; this is a realistic and important concern in maintenance practice. To coordinate with a prognostics-based maintenance schedule, the operator must decide when to order service parts and how to compete with other operators who need the same parts. This research addresses a joint decision-making approach that assists two operators in making proactive maintenance decisions and strategically competing for a service part that both rely on for their individual operations. To this end, a maintenance policy involving competition in service part procurement is developed based on a Stackelberg game-theoretic model. Variations of the policy are formulated for three different scenarios and solved via either backward induction or genetic algorithm methods. Unlike the first two scenarios, the third scenario considers the possibility of either operator being the leader in such competitions. A numerical study on wind turbine operation demonstrates the use of the joint decision-making approach in maintenance and service part logistics.
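
    The backward-induction solution of a Stackelberg competition like the one described here can be sketched in a few lines. The two operators' action sets ("order early" / "order late") and the payoff table are invented for illustration; they are not the wind-turbine case study or payoffs from the paper.

        # Sketch: backward induction for a two-operator Stackelberg game over a shared service part.
        # Actions and payoffs (leader_payoff, follower_payoff) are illustrative assumptions.
        ACTIONS = ["order_early", "order_late"]
        PAYOFFS = {
            ("order_early", "order_early"): (3, 3),
            ("order_early", "order_late"):  (5, 2),
            ("order_late",  "order_early"): (2, 5),
            ("order_late",  "order_late"):  (4, 4),
        }

        def follower_best_response(leader_action):
            return max(ACTIONS, key=lambda a: PAYOFFS[(leader_action, a)][1])

        def stackelberg_equilibrium():
            # The leader anticipates the follower's best response and optimises its own payoff.
            leader = max(ACTIONS, key=lambda a: PAYOFFS[(a, follower_best_response(a))][0])
            return leader, follower_best_response(leader)

        print(stackelberg_equilibrium())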

    Strategic Technology Maturation and Insertion (STMI): a requirements guided, technology development optimization process

    This research presents a Decision Support System (DSS) process solution to a problem faced by Program Managers (PMs) early in a system lifecycle, when potential technologies are evaluated for placement within a system design. The proposed process for evaluating and selecting technologies incorporates computer-based Operational Research techniques that automate and optimize key portions of the decision process. This computerized process allows the PM to rapidly form the basis of a Strategic Technology Plan (STP) designed to manage, mature and insert the technologies into the system design baseline and to identify potential follow-on incremental system improvements. This process is designated Strategic Technology Maturation and Insertion (STMI). Traditionally, to build this STP the PM must juggle system performance, schedule and cost issues and strike a balance of new and old technologies that can be fielded to meet the requirements of the customer. To complicate matters, the PM is typically confronted with a short time frame in which to evaluate hundreds of potential technology solutions with thousands of potentially interacting combinations within the system design. Picking the best combination of new and established technologies, and selecting the critical technologies needing maturation investment, is a significant challenge, and these early lifecycle decisions drive the entire system design, cost and schedule well into production. The STMI process provides a formalized and repeatable DSS that allows PMs to systematically tackle the problems of technology evaluation, selection and maturation. It gives PMs a tool to compare and evaluate the entire design space of candidate technology performance, to incorporate lifecycle costs as an optimizer for a best-value system design, and to generate input for a strategic plan to mature critical technologies. Four enabling concepts are described and brought together to form the basis of STMI: Requirements Engineering (RE), Value Engineering (VE), system optimization and Strategic Technology Planning (STP). STMI is then executed in three distinct stages: pre-process preparation, process operation and optimization, and post-process analysis. A demonstration case study prepares and implements the proposed STMI process in a multi-system (macro) concept down-select and a specific (micro) single-system design that ties into the macro-level design decision.
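
    One step the STMI process automates, choosing a best-value combination of candidate technologies against lifecycle cost, resembles a constrained selection problem. The sketch below solves a 0/1 knapsack-style selection by brute force; the technology names, value scores, costs and budget are illustrative assumptions, not the STMI optimizer itself.

        # Sketch: best-value selection of candidate technologies under a lifecycle-cost budget.
        # Names, values, costs and the budget are illustrative assumptions.
        from itertools import combinations

        techs = {"tech_A": (8, 5), "tech_B": (6, 4), "tech_C": (5, 3), "tech_D": (3, 2)}  # (value, cost)
        budget = 9

        def best_portfolio(techs, budget):
            best, best_value = (), 0
            names = list(techs)
            for r in range(1, len(names) + 1):
                for combo in combinations(names, r):
                    value = sum(techs[t][0] for t in combo)
                    cost = sum(techs[t][1] for t in combo)
                    if cost <= budget and value > best_value:
                        best, best_value = combo, value
            return best, best_value

        print(best_portfolio(techs, budget))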

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On the one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (on multicore, multiprocessor, GPU, and cloud computing platforms). On the other hand, we survey applications of PSO in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
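
    The canonical PSO update that the surveyed variants build on, with each particle's velocity pulled toward its personal best and the swarm's global best, can be written down in a few lines. The parameter values (w, c1, c2), swarm size and toy objective below are illustrative assumptions.

        # Sketch: canonical PSO velocity/position update on a toy objective.
        # w, c1, c2, swarm size and the objective are illustrative assumptions.
        import random

        def objective(x):                                # sphere function (to be minimised)
            return sum(v * v for v in x)

        def pso(dim=3, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
            pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
            vel = [[0.0] * dim for _ in range(n_particles)]
            pbest = [p[:] for p in pos]
            gbest = min(pbest, key=objective)
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                                     + c2 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                    if objective(pos[i]) < objective(pbest[i]):
                        pbest[i] = pos[i][:]
                gbest = min(pbest, key=objective)
            return gbest, objective(gbest)

        print(pso())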

    Population-based algorithms for improved history matching and uncertainty quantification of Petroleum reservoirs

    In modern field management practices, there are two important steps that shed light on a multimillion-dollar investment. The first step is history matching, where the simulation model is calibrated to reproduce the historical observations from the field. In this inverse problem, different geological and petrophysical properties may provide equally good history matches, and such diverse models are likely to show different production behaviour in the future. This ties history matching to the second step, uncertainty quantification of predictions: multiple history-matched models are essential for a realistic uncertainty estimate of future field behaviour. Together, these two steps facilitate decision making and have a direct impact on the technical and financial performance of oil and gas companies. Population-based optimization algorithms have recently enjoyed growing popularity for solving engineering problems. Population-based systems work with a group of individuals that cooperate and communicate to accomplish a task that is normally beyond the capabilities of each individual, and these individuals are deployed with the aim of solving the problem with maximum efficiency. This thesis introduces the application of two population-based algorithms to history matching and uncertainty quantification of petroleum reservoir models. Ant colony optimization and differential evolution algorithms are used to search the parameter space for multiple history-matched models and, within a Bayesian framework, the posterior probabilities of these models are evaluated for prediction of reservoir performance. It is demonstrated that by bringing in recent developments from computer science, such as ant colony optimization, differential evolution and multiobjective optimization, the history matching and uncertainty quantification frameworks can be improved. The thesis provides insights into the performance of these algorithms in history matching and prediction and develops an understanding of their tuning parameters. The research also includes a comparative study of these methods against a benchmark technique, the Neighbourhood Algorithm, which reveals the superiority of the proposed methodologies in areas such as computational efficiency and match quality.
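
    The Bayesian step described here, turning the misfits of multiple history-matched models into posterior weights for prediction, can be sketched as follows. The misfit values, the Gaussian-likelihood assumption and the forecast quantities are illustrative assumptions, not results from the thesis.

        # Sketch: posterior weighting of multiple history-matched models for prediction uncertainty.
        # Misfits, the likelihood form and the forecasts are illustrative assumptions.
        import math

        misfits = [12.0, 14.5, 13.2, 20.0]                 # data misfit of each matched model
        predictions = [520.0, 480.0, 505.0, 430.0]         # e.g. forecast cumulative production

        weights = [math.exp(-0.5 * m) for m in misfits]    # likelihood ~ exp(-misfit / 2)
        total = sum(weights)
        weights = [w / total for w in weights]             # normalised posterior weights

        mean_forecast = sum(w * p for w, p in zip(weights, predictions))
        var_forecast = sum(w * (p - mean_forecast) ** 2 for w, p in zip(weights, predictions))
        print(mean_forecast, var_forecast ** 0.5)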