186 research outputs found

    Dynamic salting route optimisation using evolutionary computation

    Get PDF
    On marginal winter nights, highway authorities face a difficult decision as to whether or not to salt the road network. The consequences of making a wrong decision are serious, as an untreated network is a major hazard. However, if salt is spread when it is not actually required, there are unnecessary financial and environmental consequences. In this paper, a new salting route optimisation system is proposed which combines evolutionary computation (EC) with the neXt generation Road Weather Information System (XRWIS). XRWIS is a new high resolution forecast system which predicts road surface temperature and condition across the road network over a 24 hour period. EC is used to optimise a series of salting routes for winter gritting by considering XRWIS temperature data along with treatment vehicle and road network constraints. This synergy realises daily dynamic routing and will yield considerable benefits for areas with a marginal ice problem.
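A minimal sketch of the idea under simplifying assumptions: sections carry invented lengths and forecast temperatures (standing in for XRWIS output), routes are fixed-size groups of sections, and a simple evolutionary loop minimises the distance that must actually be driven on a marginal night. A real system would add vehicle capacities, depots and road-network connectivity.

```python
import random

random.seed(1)

# Toy stand-ins: each road section has a length in km and a forecast minimum
# surface temperature in degrees C (real XRWIS output is far richer).
SECTIONS = [(random.uniform(0.5, 3.0), random.uniform(-4.0, 3.0))
            for _ in range(40)]
N_ROUTES = 4

def decode(perm):
    """Split a permutation of section indices into N_ROUTES equal routes."""
    size = len(perm) // N_ROUTES
    return [perm[i * size:(i + 1) * size] for i in range(N_ROUTES)]

def cost(perm):
    """A route must be driven if any of its sections is forecast at or below
    freezing; minimise the total distance actually driven that night."""
    total = 0.0
    for route in decode(perm):
        if any(SECTIONS[s][1] <= 0.0 for s in route):
            total += sum(SECTIONS[s][0] for s in route)
    return total

def mutate(perm):
    a, b = random.sample(range(len(perm)), 2)
    child = perm[:]
    child[a], child[b] = child[b], child[a]
    return child

def evolve(generations=300, pop_size=30):
    pop = [random.sample(range(len(SECTIONS)), len(SECTIONS))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)            # rank by treated distance
        pop = pop[:pop_size // 2]     # truncation selection
        pop += [mutate(random.choice(pop)) for _ in range(pop_size // 2)]
    return min(pop, key=cost)

print(f"treated distance: {cost(evolve()):.1f} km")
```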

    Robust Solution of Salting Route Optimisation Using Evolutionary Algorithms

    Get PDF
    The precautionary salting of the road network is an important maintenance issue for countries with a marginal winter climate. On many nights, not all of the road network will require treatment, as the local geography will mean some road sections are warmer than others. Hence, there is a logic to optimising salting routes based on known road surface temperature distributions. In this paper, a robust solution of salting route optimisation using a training dataset of daily predicted temperature distributions is proposed. Evolutionary algorithms are used to produce salting routes which group together the colder sections of the road network. Financial savings can then be made by not treating the warmer routes on the more marginal of nights. Experimental results on real data also reveal that the proposed methodology reduced the total distance travelled on the new routes by around 10% compared with conventional salting routes.
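A hedged sketch of the robustness idea: evaluate a candidate grouping of sections into routes against a whole training set of nightly temperature predictions, and minimise the expected distance driven. The data are invented and a simple hill climber stands in for the paper's evolutionary algorithm.

```python
import random

random.seed(2)

# Hypothetical training data: forecast minimum temperature per road section
# for 100 nights (values illustrative only).
N_SECTIONS, N_NIGHTS, N_ROUTES = 40, 100, 4
LENGTH = [random.uniform(0.5, 3.0) for _ in range(N_SECTIONS)]
TRAINING = [[random.gauss(-0.5, 2.0) for _ in range(N_SECTIONS)]
            for _ in range(N_NIGHTS)]

def expected_treated_distance(assignment):
    """assignment[s] = route index of section s. A route is driven on a
    night only if it contains at least one sub-zero section, so grouping
    the cold sections together lets the warm routes stay home."""
    groups = [[] for _ in range(N_ROUTES)]
    for s, r in enumerate(assignment):
        groups[r].append(s)
    total = 0.0
    for night in TRAINING:
        for g in groups:
            if any(night[s] <= 0.0 for s in g):
                total += sum(LENGTH[s] for s in g)
    return total / N_NIGHTS

def local_search(steps=2000):
    """Hill climber as a stand-in for the paper's evolutionary search."""
    sol = [random.randrange(N_ROUTES) for _ in range(N_SECTIONS)]
    best = expected_treated_distance(sol)
    for _ in range(steps):
        s, r = random.randrange(N_SECTIONS), random.randrange(N_ROUTES)
        old, sol[s] = sol[s], r
        cand = expected_treated_distance(sol)
        if cand <= best:
            best = cand
        else:
            sol[s] = old
    return sol, best

print(f"expected nightly distance: {local_search()[1]:.1f} km")
```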

    Population-based incremental learning with associative memory for dynamic environments

    Get PDF
    In recent years there has been a growing interest in studying evolutionary algorithms (EAs) for dynamic optimization problems (DOPs) due to their importance in real-world applications. Several approaches, such as the memory and multiple-population schemes, have been developed for EAs to address dynamic problems. This paper investigates the application of the memory scheme for population-based incremental learning (PBIL) algorithms, a class of EAs, for DOPs. A PBIL-specific associative memory scheme, which stores the best solutions as well as corresponding environmental information in the memory, is investigated to improve adaptability in dynamic environments. In this paper, the interactions between the memory scheme and the random immigrants, multi-population, and restart schemes for PBILs in dynamic environments are investigated. In order to better test the performance of memory schemes for PBILs and other EAs in dynamic environments, this paper also proposes a dynamic environment generator that can systematically generate dynamic environments of different difficulty with respect to memory schemes. Using this generator, a series of dynamic environments are generated and experiments are carried out to compare the performance of the investigated algorithms. The experimental results show that the proposed memory scheme is efficient for PBILs in dynamic environments and also indicate that different interactions exist between the memory scheme and the random immigrants and multi-population schemes for PBILs in different dynamic environments.
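A loose sketch of the associative memory idea for PBIL, not the paper's exact scheme: on a dynamic bit-matching problem, the algorithm periodically stores its best sample together with the current probability vector, and after an environmental change it re-evaluates the stored solutions and restarts from the probability vector associated with the best one. All parameter values are illustrative.

```python
import random

L, POP, LR = 20, 30, 0.1  # bit-string length, samples per step, learning rate

def sample(p):
    return [1 if random.random() < pi else 0 for pi in p]

def fitness(x, mask):
    # Dynamic bit-matching: the environment is the target mask, which changes.
    return sum(1 for xi, mi in zip(x, mask) if xi == mi)

def pbil(generations=200, change_every=50):
    p = [0.5] * L
    mask = [random.randint(0, 1) for _ in range(L)]
    memory = []  # associative memory: (best solution, probability vector)
    for g in range(generations):
        if g and g % change_every == 0:
            # environmental change: flip roughly 30% of the target bits
            mask = [b ^ (1 if random.random() < 0.3 else 0) for b in mask]
            if memory:
                # retrieval: re-evaluate stored solutions in the new
                # environment and reuse the probability vector of the best
                _, best_p = max(memory, key=lambda e: fitness(e[0], mask))
                p = best_p[:]
        pop = [sample(p) for _ in range(POP)]
        best = max(pop, key=lambda x: fitness(x, mask))
        p = [(1 - LR) * pi + LR * bi for pi, bi in zip(p, best)]
        if g % 10 == 0:
            memory.append((best, p[:]))  # store solution + environment info
    return p

print([round(pi, 2) for pi in pbil()])
```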

    Robust route optimization for gritting/salting trucks: a CERCIA experience

    Get PDF
    Highway authorities in marginal winter climates are responsible for the precautionary gritting/salting of the road network in order to prevent frozen roads. For efficient and effective road maintenance, accurate road surface temperature prediction is required. However, this information is useless if an effective means of utilizing it is unavailable. This is where gritting route optimization plays a crucial role. The decision whether to grit the road network on marginal nights is a difficult problem. The consequences of making a wrong decision are serious, as untreated roads are a major hazard. However, if grit/salt is spread when it is not actually required, there are unnecessary financial and environmental costs. The goal here is to minimize the financial and environmental costs while ensuring that roads that need treatment will be treated. In this article, a salting route optimization (SRO) system that combines evolutionary algorithms with the neXt generation Road Weather Information System (XRWIS) is introduced. The synergy of these methodologies means that salting route optimization can be done at a level previously not possible.

    Particle swarm optimization with composite particles in dynamic environments

    Get PDF
    In recent years, there has been a growing interest in the study of particle swarm optimization (PSO) in dynamic environments. This paper presents a new PSO model, called PSO with composite particles (PSO-CP), to address dynamic optimization problems. PSO-CP partitions the swarm into a set of composite particles based on their similarity using a "worst first" principle. Inspired by the composite particle phenomenon in physics, the elementary members in each composite particle interact via a velocity-anisotropic reflection scheme to integrate valuable information for effectively and rapidly finding the promising optima in the search space. Each composite particle maintains diversity via a scattering operator. In addition, an integral movement strategy is introduced to promote swarm diversity. Experiments on a typical dynamic test benchmark problem provide a guideline for setting the involved parameters and show that PSO-CP is efficient in comparison with several state-of-the-art PSO algorithms for dynamic optimization problems. This work was supported in part by the Key Program of the National Natural Science Foundation (NNSF) of China under Grants 70931001 and 70771021, the Science Fund for Creative Research Group of the NNSF of China under Grants 60821063 and 70721001, the Ph.D. Programs Foundation of the Ministry of Education of China under Grant 200801450008, and by the Engineering and Physical Sciences Research Council of the U.K. under Grant EP/E060722/1.
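PSO-CP's composite particles and velocity-anisotropic reflection are beyond a short sketch, but the following standard gbest PSO with a crude scatter step illustrates the base mechanics and the diversity-maintenance concern the paper addresses. The constants are conventional and the scatter rule is invented, not the paper's operator.

```python
import random

random.seed(3)
DIM, SWARM, STEPS = 2, 20, 200
W, C1, C2 = 0.72, 1.49, 1.49  # standard inertia / acceleration constants

def sphere(x):  # stand-in objective; a dynamic benchmark would move its optimum
    return sum(v * v for v in x)

particles = [{"x": [random.uniform(-5, 5) for _ in range(DIM)],
              "v": [0.0] * DIM} for _ in range(SWARM)]
for p in particles:
    p["best_x"], p["best_f"] = p["x"][:], sphere(p["x"])
gbest = min(particles, key=lambda p: p["best_f"])["best_x"][:]

for _ in range(STEPS):
    for p in particles:
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            p["v"][d] = (W * p["v"][d]
                         + C1 * r1 * (p["best_x"][d] - p["x"][d])
                         + C2 * r2 * (gbest[d] - p["x"][d]))
            p["x"][d] += p["v"][d]
        f = sphere(p["x"])
        if f < p["best_f"]:
            p["best_x"], p["best_f"] = p["x"][:], f
            if f < sphere(gbest):
                gbest = p["x"][:]
    # Crude diversity maintenance, loosely in the spirit of a scattering
    # operator: if the swarm has collapsed onto gbest, scatter a few members.
    spread = max(abs(p["x"][d] - gbest[d]) for p in particles for d in range(DIM))
    if spread < 1e-3:
        for p in random.sample(particles, SWARM // 4):
            p["x"] = [v + random.gauss(0, 1.0) for v in p["x"]]

print([round(v, 4) for v in gbest])
```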

    Arc routing problems: A review of the past, present, and future

    Full text link
    Arc routing problems (ARPs) are defined and introduced. Following a brief history of developments in this area of research, different types of ARPs are described that are currently relevant for study. In addition, particular features of ARPs that are important from a theoretical or practical point of view are discussed. A section on applications describes some of the changes that have occurred from early applications of ARP models to the present day and points the way to emerging topics for study. A final section provides information on libraries and instance repositories for ARPs. The review concludes with some perspectives on future research developments and opportunities for emerging applications. This research was supported by the Ministerio de Economía y Competitividad and Fondo Europeo de Desarrollo Regional, Grant/Award Number PGC2018-099428-B-I00, and the Research Council of Norway, Grant/Award Numbers 246825/O70 (DynamITe) and 263031/O70 (AXIOM). Corberán, Á.; Eglese, R.; Hasle, G.; Plana, I.; Sanchís Llopis, J.M. (2021). Arc routing problems: A review of the past, present, and future. Networks, 77(1), 88-115. https://doi.org/10.1002/net.21965
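For readers new to the area, a toy sketch of the core objects in an arc routing problem: edges with traversal costs, a subset of edges requiring service, and a route whose cost counts every traversal, whether servicing or deadheading. The graph and costs are invented for illustration.

```python
# Undirected toy graph: edge -> traversal cost; a subset requires service.
EDGES = {("a", "b"): 4, ("b", "c"): 3, ("c", "d"): 2, ("a", "c"): 5}
REQUIRED = {("a", "b"), ("c", "d")}

def edge_cost(u, v):
    return EDGES.get((u, v)) or EDGES[(v, u)]

def route_cost(route):
    """route is a node sequence; cost sums every edge traversed,
    serviced or deadheaded alike."""
    return sum(edge_cost(u, v) for u, v in zip(route, route[1:]))

def services_all(route, required):
    covered = {frozenset(e) for e in zip(route, route[1:])}
    return all(frozenset(e) in covered for e in required)

route = ["a", "b", "c", "d"]
assert services_all(route, REQUIRED)
print(route_cost(route))  # 4 + 3 + 2 = 9; edge b-c is deadheaded
```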

    Increasing the efficiency of antibody purification process by high throughput technology and intelligent design of experiment

    Get PDF
    Design of experiments (DoE) is used in process development to optimise the operating conditions of unit operations in a cost-effective and time-saving manner. Along with high throughput technologies, the modern high throughput process development lab can turn over a tremendous amount of data with minimal feedstock. These benefits are most useful when applied to the purification bottleneck, which accounts for up to 80% of the total process operating costs. However, due to the complexities of biochemical reactions and the large number of interacting factors in unit operations (which usually cross-interact with each other), even carefully planned DoE experiments on high throughput platforms can become difficult to manage and/or fail to provide useful information. This thesis examines the simplex search method and develops a set of protocols for using the search method in combination with traditional DoE experimental design protocols. These protocols are developed and demonstrated in chapter 3 whilst optimising an ammonium sulphate based precipitation step for an industrially relevant feedstock. Comparisons were drawn between a high resolution brute force study, a response surface DoE, the simplex method and then a combination of DoE and the simplex method. Various strategies were demonstrated that get the most out of the simplex method and mitigate potential pitfalls. The precipitation step was optimised for yield and purity over the three factors pH, ammonium sulphate concentration and initial MAb concentration, and the results showed the simplex method was capable of rapidly identifying the optimum conditions in a very large three-factor design space in an average of 18 experiments. The expansive study not only served as a testing ground for the methods comparison but demonstrated precipitation as a high throughput, low cost substitute for the expensive Protein A step. The DoE-simplex search protocols are then refined in two complex case studies in chapter 4: a PEG precipitation primary capture step and an ammonium sulphate precipitation and centrifugation sequence. The five-factor precipitation and centrifugation sequence was especially complicated and utilised ultra scale-down models to provide accurate scale-up data. This involved calibrating an acoustic device to provide shear treatment to the precipitate pre-centrifugation and using jet mixing equations to correlate precipitate conditioning between the Tecan robot's tips and an impeller in a stirred tank. The techniques developed were all applicable to microscale and high throughput. In both instances, the combined DoE-simplex approach returned superior results, both in terms of experimental savings and in generating information-rich data from the final local-region DoEs around the simplex-located optima. A microscale chromatography protocol was developed on the Tecan liquid handling robot and demonstrated on screening work with different Protein A and cation exchange media. The caveats encountered when creating the running methods, and the analytical methods supporting them for the Atoll robocolumns, were highlighted and mitigation solutions implemented. The automated microscale Protein A method was successfully scaled up 50x from a 200 ”L robocolumn to a conventional 10 mL labscale column. After selecting a cation exchange resin for developing an aggregate removal step, the DoE-simplex methodology was applied to an antibody product with an extremely high aggregate level and a comparison optimisation was made with a central composite design DoE. The difficult four-factor design space overwhelmed the DoE: despite using more experiments than the DoE-simplex methodology, it only went as far as showing the high levels of curvature in the system and offered a poor prediction of the surface. The DoE-simplex methodology was able to provide a general model of the whole surface from the DoE and locate the optimum with the simplex in fewer experiments. This subsequently allowed a local DoE to be applied to the optimum region to determine a robust operating range for the cation exchange step.
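As a hedged illustration of the simplex search at the heart of these protocols, the sketch below runs a Nelder-Mead simplex over the three precipitation factors named above. The quadratic response surface and its optimum are invented stand-ins; in the lab each function evaluation is a physical experiment, so the evaluation count `result.nfev` is roughly the quantity the thesis's average-of-18-experiments result measures.

```python
from scipy.optimize import minimize

# Invented stand-in response surface over (pH, ammonium sulphate
# concentration, initial MAb concentration); lower is better, i.e. this
# plays the role of negative yield x purity.
def neg_yield_purity(x):
    ph, salt, mab = x
    return (ph - 7.2) ** 2 + 0.5 * (salt - 2.0) ** 2 + 0.2 * (mab - 5.0) ** 2

result = minimize(neg_yield_purity, x0=[6.0, 1.0, 2.0], method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 1e-3})
print(result.x)    # located optimum conditions
print(result.nfev) # number of "experiments" the simplex needed
```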

    Routing problems in winter road maintenance using spreading volume prediction

    Get PDF
    This thesis combines two different fields applied to winter road maintenance: operational research and data science. Data science was used to develop a model that predicts the quantity of salt and abrasive with a machine learning methodology; this model is then taken into account when constructing vehicle routes. The route planning was developed using operational research tools, which optimise the routes subject to several constraints while integrating real data. The thesis is the fruit of a collaboration with two Quebec cities, Granby and Saint-Jean-sur-Richelieu, and deals with a real application in winter road maintenance: the spreading operation. Spreading is a necessary activity whose purpose is to ensure better road traffic. However, this road safety comes at a significant economic and environmental cost, so reducing that cost is a major concern. This thesis contributes significantly to spreading operations: first, by predicting the necessary quantity of salt and abrasive to spread in order to avoid over-spreading; second, by optimising the spreading routes while considering quantity variations.
The first contribution of this thesis is a model that predicts the quantities of salt and abrasive for each street segment and for each hour, using machine learning algorithms. The importance of this contribution lies in the integration of geomatic data with weather-road data, and in the feature engineering for the prediction model. Several machine learning algorithms were evaluated (random forests, extremely randomised trees, artificial neural networks, AdaBoost, Gradient Boosting Machine and XGBoost); the XGBoost model performed best. The prediction model not only predicts the amounts of salt and abrasive needed but also identifies the most important variables for the prediction, an interesting decision-making tool for managers. Identifying the important variables could improve snow removal operations: according to the results, the human factor (the driver) significantly influences the amount spread, so controlling this factor can considerably improve these operations.
The second contribution introduces a new problem in the literature: the mixed capacitated general routing problem with time-dependent demand. The problem is based on the assumption that the prediction model can provide the amount of spreading for each segment and for each hour with good accuracy. Having this information for each hour and for each street segment introduces the notion of time dependency. The new problem was modelled with a mathematical formulation on the original graph, which presents a modelling challenge since it is difficult to associate unique starting and ending times with an arc or an edge. A metaheuristic based on a destruction and construction strategy was developed to solve large instances. The metaheuristic is inspired by SISRs (Slack Induction by String Removals) and handles time-dependent demand and the presence of edges through an evaluation method based on dynamic programming. New instances were created from instances of the mixed capacitated general routing problem with fixed demand, generated from different types of time-dependent demand functions.
The third contribution proposes a new approach that represents the service hierarchy (the priority level of streets) as a time-dependent linear function. The problem addressed concerns hierarchical mixed capacitated general routing under demand uncertainty: when the collected data do not allow a good prediction model to be developed, the notion of time-dependent demand is no longer valid. The robust approach has demonstrated great success in dealing with problems under uncertainty. A robust metaheuristic was proposed to solve the two real cases of Granby and Saint-Jean-sur-Richelieu, and it was validated with a mathematical model on small instances generated from the real cases. Monte Carlo simulation was used to evaluate the different proposed solutions and to offer managers a decision tool for comparing the different robust solutions and understanding the trade-off between the desired level of robustness and other performance measures (cost, risk, level of service).
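A minimal sketch of the first contribution's prediction step, using scikit-learn's gradient boosting as a stand-in for XGBoost. The features and the synthetic target below are invented placeholders for the thesis's weather-road and geomatic data; only the workflow (train, score, inspect feature importances) is the point.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Invented stand-ins for weather-road and geomatic features.
X = np.column_stack([
    rng.uniform(-15, 5, n),   # air temperature (C)
    rng.uniform(0, 10, n),    # precipitation (mm/h)
    rng.integers(0, 24, n),   # hour of day
    rng.integers(0, 8, n),    # driver id (the human factor)
])
# Synthetic target: salt quantity per segment-hour (illustrative only).
y = (np.maximum(0, -X[:, 0]) * 2 + X[:, 1] * 3
     + X[:, 3] * 1.5 + rng.normal(0, 2, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X_tr, y_tr)
print("R^2:", round(model.score(X_te, y_te), 3))
# Feature importances flag the influential inputs; the thesis found the
# driver to be one of them.
print(dict(zip(["temp", "precip", "hour", "driver"],
               model.feature_importances_.round(3))))
```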

    Dynamic multi-objective optimization using evolutionary algorithms

    Get PDF
    Dynamic Multi-objective Optimization Problems (DMOPs) offer an opportunity to examine and solve challenging real-world scenarios where trade-off solutions between conflicting objectives change over time. The definition of benchmark problems allows modelling of industry scenarios across transport, power and communications networks, manufacturing and logistics. Recently, significant progress has been made in the variety and complexity of DMOP benchmarks and the incorporation of realistic dynamic characteristics. However, significant gaps still exist in standardised methodology for DMOPs, in specific problem domain examples and in the understanding of the impacts and explanations of dynamic characteristics. This thesis provides major contributions on these three topics within evolutionary dynamic multi-objective optimization. Firstly, experimental protocols for DMOPs are varied, which limits the applicability and relevance of results produced and conclusions made in the field. A major source of the inconsistency lies in the parameters used to define the specific problem instances being examined; their historically uninformed selection has held back understanding of their impacts and the standardisation of experimental approaches in the multi-objective problem domain. Using the frequency and severity (or magnitude) of change events, a more informed approach to DMOP experimentation is conceptualised, implemented and evaluated. A baseline performance expectation is established and analysed across a comprehensive range of dynamic instances for well-studied DMOP benchmarks. To maximise relevance, these profiles are composed from the performance of evolutionary algorithms commonly used for baseline comparisons and those with simple dynamic responses. Comparison and contrast with the coverage of parameter combinations in the sampled literature highlights the importance of these contributions. Secondly, the provision of useful and realistic DMOPs in the combinatorial domain is limited in previous literature. A novel dynamic benchmark problem is presented by extending the Travelling Thief Problem (TTP) to include a variety of realistic and contextually justified dynamic changes. Investigation of problem information exploitation and its potential application as a dynamic response is a key output of these results, and context is provided through comparison to results obtained by adapting existing TTP heuristics. Observation-driven iterative development prompted the investigation of multi-population island model strategies, together with improvements in the approaches used to accurately describe and compare the performance of algorithm models for DMOPs, a contribution which is applicable beyond the dynamic TTP. Thirdly, the purpose of DMOPs is to reconstruct realistic scenarios, or features from them, to allow for experimentation and the development of better optimization algorithms. However, numerous important characteristics from real systems still require implementation and will drive research and development of algorithms and mechanisms to handle these industrially relevant problem classes. The novel challenges associated with these implementations are significant and diverse, even for a simple development such as the consideration of DMOPs with multiple time dependencies. Real-world systems with dynamics are likely to contain multiple temporally changing aspects, particularly in the energy and transport domains. Problems with more than one dynamic component allow for asynchronous changes and differing severities between components, which leads to an explosion in the size of the possible dynamic instance space. Both continuous and combinatorial problem domains require structured investigation into best practices for experimental design, algorithm application and performance measurement, comparison and visualisation. The challenges, the key requirements for effective progress and recommendations on experimentation are explored here.
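To make the frequency/severity parametrisation concrete, here is a minimal sketch of a dynamic test function: the optimum of a simple one-dimensional landscape moves every `frequency` evaluations by a step scaled by `severity`. The class, landscape and step rule are invented for illustration and are not the thesis's benchmark suite.

```python
import random

class DynamicSphere:
    """1-D sphere whose optimum drifts; frequency and severity are the two
    parameters that define a dynamic problem instance."""

    def __init__(self, frequency=1000, severity=0.5):
        self.frequency, self.severity = frequency, severity
        self.evaluations, self.optimum = 0, 0.0

    def __call__(self, x):
        self.evaluations += 1
        if self.evaluations % self.frequency == 0:
            # severity scales the magnitude of each environmental change;
            # frequency sets how many evaluations pass between changes
            self.optimum += self.severity * random.uniform(-1, 1)
        return (x - self.optimum) ** 2

f = DynamicSphere(frequency=500, severity=2.0)
print([round(f(0.0), 3) for _ in range(3)])
```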

    Towards improving resilience of cities: an optimisation approach to minimising vulnerability to disruption due to natural disasters under budgetary constraints

    Get PDF
    In recent years, climate change has emerged as a dominant concern for many parts of the world, bringing huge economic losses and disturbing normal business and life. In particular, cities are suffering more frequently than ever from floods that significantly affect land-based transportation systems. Many local authorities facing funding cuts operate with limited budgets, and they come under even greater pressure when looking for resources to recover damaged networks. The agencies involved in post-disaster reconstruction likewise struggle to prioritise which network links to recover. This paper addresses the problem of road maintenance/development with the aim of improving the resilience of the network, formulating it as a mathematical model that minimises the vulnerability to disruption due to natural incidents under budgetary constraints. The paper extends critical link analysis from a single disrupted link to the case of multiple links and, for the first time, proposes an objective function involving a measure of vulnerability to minimise. A simulated annealing metaheuristic is used to reach a near-globally-optimal solution for a real-life network with large demand. A segment of the City of York in England has been used to illustrate the principles involved. Numerical experiments indicate that the simulated annealing based optimisation method outperforms the 'volume-priority' heuristic approach, returning higher value for the money spent. The proposed approach spreads the benefits across a wider population by including more links in the priority list while reducing the vulnerability to disruption.
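A minimal sketch of budget-constrained link selection by simulated annealing, under invented data: each candidate link has a strengthening cost and a vulnerability contribution if left unprotected, and the additive vulnerability measure here is a simple placeholder for the paper's multi-link disruption analysis.

```python
import math
import random

random.seed(4)

# Hypothetical candidate links (costs and vulnerabilities are illustrative).
LINKS = [{"cost": random.uniform(1, 5), "vuln": random.uniform(1, 10)}
         for _ in range(30)]
BUDGET = 25.0

def vulnerability(selected):
    # Residual vulnerability = sum over unprotected links (a placeholder
    # for a network-level disruption measure).
    return sum(l["vuln"] for i, l in enumerate(LINKS) if i not in selected)

def feasible(selected):
    return sum(LINKS[i]["cost"] for i in selected) <= BUDGET

def anneal(steps=5000, t0=10.0, alpha=0.999):
    current, t = set(), t0
    best, best_v = set(), vulnerability(set())
    for _ in range(steps):
        i = random.randrange(len(LINKS))
        cand = current ^ {i}  # flip one link in/out of the priority list
        if not feasible(cand):
            continue
        delta = vulnerability(cand) - vulnerability(current)
        # accept improvements always, worsenings with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = cand
            if vulnerability(current) < best_v:
                best, best_v = set(current), vulnerability(current)
        t *= alpha  # geometric cooling
    return best, best_v

selected, v = anneal()
print(f"links protected: {sorted(selected)}, residual vulnerability: {v:.1f}")
```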
    • 

    corecore