
    A Hybrid ant colony optimization algorithm for solving a highly constrained nurse rostering problem

    Distribution of work shifts and off days to nurses in a duty roster is a crucial task. In hospital wards, much effort is spent trying to produce workable, high-quality rosters for nurses. However, some requirements, such as mandatory working days per week and a balanced distribution of shift types, cannot be achieved in the manually generated rosters that are still in use. Hence, this study focused on solving these issues in nurse rostering problems (NRPs) using a hybrid of the Ant Colony Optimization (ACO) algorithm and a hill climbing technique. The hybridization with hill climbing aims to fine-tune the initial solution, or roster, generated by the ACO algorithm in order to achieve better rosters. The hybrid model is developed with the goal of satisfying the hard constraints while minimizing the violation of soft constraints, in a way that fulfills the hospital's rules and nurses' preferences. The real data used for this highly constrained NRP were obtained from a large Malaysian hospital. Three main phases were involved in developing the hybrid model: generating an initial roster, updating the roster through the ACO algorithm, and applying hill climbing to search for a refined solution. The results show that, at larger pheromone values, good solutions with only small penalty values were more likely to be found. The study shows that the hybrid ACO is able to solve NRPs with good solutions that fulfill all four important criteria: coverage, quality, flexibility, and cost. The hybrid model is also beneficial to the hospital's management, since nurses can be scheduled with a balanced distribution of shifts that fulfills their preferences as well.
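    The three-phase design described above (build an initial roster, improve it with ACO, refine it with hill climbing) can be illustrated with a minimal sketch of the final phase. The shift encoding, the toy penalty function, and the single-cell move below are illustrative assumptions, not the paper's actual constraint model.

```python
import random

SHIFTS = ["M", "E", "N", "O"]  # morning, evening, night, off (assumed encoding)

def penalty(roster):
    """Toy soft-constraint penalty: unbalanced shift counts per nurse and
    consecutive night shifts (stand-ins for the real hospital constraints)."""
    cost = 0
    for row in roster:
        counts = {s: row.count(s) for s in SHIFTS}
        cost += max(counts.values()) - min(counts.values())            # imbalance
        cost += sum(1 for a, b in zip(row, row[1:]) if a == b == "N")  # consecutive nights
    return cost

def hill_climb(roster, iterations=1000):
    """Phase 3: refine the ACO-built roster by single-cell shift changes,
    accepting only moves that lower the penalty."""
    best = [row[:] for row in roster]
    best_cost = penalty(best)
    for _ in range(iterations):
        cand = [row[:] for row in best]
        n = random.randrange(len(cand))        # pick a nurse
        d = random.randrange(len(cand[n]))     # pick a day
        cand[n][d] = random.choice(SHIFTS)     # try a different shift
        cost = penalty(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

# Example: a 4-nurse, 7-day roster standing in for the output of the ACO phase.
aco_roster = [[random.choice(SHIFTS) for _ in range(7)] for _ in range(4)]
refined, cost = hill_climb(aco_roster)
print("refined penalty:", cost)
```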

    Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures

    The protein-folding problem has been studied extensively over the last fifty years. Understanding the dynamics of a protein's global shape and its influence on biological function can help us discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed to predict the three-dimensional arrangement of a protein's atoms from its sequence. However, the computational complexity of this problem makes it mandatory to search for new models, novel algorithmic strategies, and hardware platforms that provide solutions in a reasonable time frame. In this review we present past and current trends in protein folding simulation from both perspectives, hardware and software. Of particular interest to us are the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used to run this kind of Soft Computing technique. This work is jointly supported by the Fundación Séneca (Agencia Regional de Ciencia y Tecnología, Región de Murcia) under grants 15290/PI/2010 and 18946/JLI/13, by the Spanish MEC and the European Commission FEDER under grants TEC2012-37945-C02-02 and TIN2012-31345, and by the Nils Coordinated Mobility under grant 012-ABEL-CM-2014A, in part financed by the European Regional Development Fund (ERDF). We also thank NVIDIA for hardware donations within the UCAM GPU educational and research centers.

    Methodological review of multicriteria optimization techniques: applications in water resources

    Multi-criteria decision analysis (MCDA) is an umbrella approach that has been applied to a wide range of natural resource management situations. This report has two purposes. First, it aims to provide an overview of advanced multicriteria approaches, methods and tools. The review seeks to lay out the nature of the models and their inherent strengths and limitations. Their applicability in supporting real-life decision-making processes is analysed in relation to the requirements imposed by organizationally decentralized and economically specific spatial and temporal frameworks. Models are categorized based on different classification schemes and are reviewed by describing their general characteristics, approaches, and fundamental properties. The necessity of carefully structuring decision problems is discussed with regard to planning, staging and control aspects within a broader agricultural context, and in water management in particular. Special emphasis is given to the importance of manipulating decision elements by means of hierarchical structuring and clustering. The review goes beyond traditional MCDA techniques; it describes new modelling approaches. The second purpose is to describe new MCDA paradigms aimed at addressing the inherent complexity of managing water ecosystems, particularly with respect to multiple criteria integrated with biophysical models, multiple stakeholders, and lack of information. Comments about, and critical analysis of, the limitations of traditional models are made to point out the need for, and propose a call to, a new way of thinking about MCDA as applied to water and natural resources management planning. These new perspectives do not undermine the value of traditional methods; rather, they point to a shift in emphasis from methods for problem solving to methods for problem structuring. The literature review shows successful integrations of watershed management optimization models to efficiently screen a broad range of technical, economic, and policy management options within a watershed system framework and to select the optimal combination of management strategies and associated water allocations for designing a sustainable watershed management plan at least cost. Papers show applications of watershed management models that integrate both the natural and human elements of a watershed system, including the management of ground and surface water sources, water treatment and distribution systems, human demands, wastewater treatment and collection systems, water reuse facilities, non-potable water distribution infrastructure, aquifer storage and recharge facilities, storm water, and land use.

    Solving Combinatorial Optimization Problems Using Genetic Algorithms and Ant Colony Optimization

    This dissertation presents metaheuristic approaches, in the areas of genetic algorithms and ant colony optimization, to combinatorial optimization problems.

    Ant colony optimization for the split delivery vehicle routing problem. An Ant Colony Optimization (ACO) based approach is presented to solve the Split Delivery Vehicle Routing Problem (SDVRP). The SDVRP is a relaxation of the Capacitated Vehicle Routing Problem (CVRP) in which a customer can be visited by more than one vehicle. The proposed ACO-based algorithm is tested on benchmark problems previously published in the literature. The results indicate that the ACO-based approach is competitive in both solution quality and solution time; in some instances it achieves the best known results to date for the benchmark problems.

    Hybrid genetic algorithm for the split delivery vehicle routing problem. The Vehicle Routing Problem (VRP) is a combinatorial optimization problem in the field of transportation and logistics. Various variants of the VRP have been developed over the years, one of which is the SDVRP, which allows customers to be assigned to multiple routes. A hybrid genetic algorithm combining ant colony optimization, a genetic algorithm, and heuristics is proposed and tested on benchmark SDVRP test problems.

    Genetic algorithm approach to the hospital physician scheduling problem. Emergency departments have repeating 24-hour cycles of non-stationary Poisson arrivals and high levels of service time variation. The problem is to find a shift schedule that considers queuing effects, minimizes average patient waiting time, and maximizes physicians' shift preferences, subject to constraints on shift start times, shift durations, and total physician hours available per day. An approach that combines a genetic algorithm and discrete event simulation is proposed to solve the physician scheduling problem and is tested on real-world physician schedule datasets.
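    As a rough illustration of the ACO route-construction step used for vehicle routing, the sketch below applies the standard pheromone-times-heuristic transition probability. The parameter names, the simple capacity check, and the fallback when no remaining customer fits (the point where a split delivery would occur) are assumptions, not the dissertation's exact design.

```python
import random

def choose_next(current, unvisited, pheromone, dist, load, capacity,
                demand, alpha=1.0, beta=2.0):
    """Pick the next customer with probability proportional to
    tau^alpha * (1/d)^beta, restricted to customers the vehicle can still
    fully serve; an SDVRP construction would instead allow a partial
    delivery when nothing fits."""
    feasible = [j for j in unvisited if load + demand[j] <= capacity] or list(unvisited)
    weights = [(pheromone[current][j] ** alpha) * ((1.0 / dist[current][j]) ** beta)
               for j in feasible]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for j, w in zip(feasible, weights):
        acc += w
        if acc >= r:
            return j
    return feasible[-1]

# Tiny usage example: depot 0 and customers 1..4 at unit distance from each other.
dist = [[1.0 if i != j else 1e-9 for j in range(5)] for i in range(5)]
pher = [[1.0] * 5 for _ in range(5)]
demand = [0, 3, 5, 2, 4]
print("next customer:", choose_next(0, {1, 2, 3, 4}, pher, dist,
                                    load=0, capacity=8, demand=demand))
```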

    Solving Challenging Real-World Scheduling Problems

    This work contains a series of studies on the optimization of three real-world scheduling problems: school timetabling, sports scheduling, and staff scheduling. These challenging problems are solved to customer satisfaction using the proposed PEAST algorithm; customer satisfaction here refers to the fact that implementations of the algorithm are in industry use. The PEAST algorithm is a product of long-term research and development, the first version of which was introduced in 1998. This thesis is the result of a five-year development of the algorithm. One of the most valuable characteristics of the algorithm has proven to be its ability to solve a wide range of scheduling problems, and it is likely that it can be tuned to tackle a range of other combinatorial problems as well. The algorithm uses features from numerous different metaheuristics, which is the main reason for its success. In addition, the implementation of the algorithm is fast enough for real-world use.

    Exact and heuristic approaches for multi-component optimisation problems

    Modern real-world applications are commonly complex, consisting of multiple subsystems that may interact with or depend on each other. Our case study on wave energy converters (WECs) for the renewable energy industry shows that, in such a multi-component system, optimising each individual component cannot yield global optimality for the entire system, owing to the influence of the interactions between components and their dependence on one another. Moreover, modelling a multi-component problem is rarely easy due to the complexity of the issues involved, which leads to a desire for existing models on which to base, and against which to test, calculations. Recently, the travelling thief problem (TTP) has attracted significant attention in the Evolutionary Computation community. It is intended to offer a better model for multi-component systems, with which researchers can push forward their understanding of the optimisation of such systems, especially of the interconnections between the components. The TTP interconnects two classic NP-hard problems, namely the travelling salesman problem and the 0-1 knapsack problem, via a transportation cost that depends non-linearly on the accumulated weight of the collected items. This non-linear setting introduces additional complexity. We study this non-linearity through a simplified version of the TTP, the packing while travelling (PWT) problem, which aims to maximise the total reward for a given travelling tour. Our theoretical and experimental investigations demonstrate that the difficulty of a given problem instance is significantly influenced by adjusting a single parameter, the renting rate, which prompted our method of creating relatively hard instances using simple evolutionary algorithms. Our further investigations into the PWT problem yield a dynamic programming (DP) approach that can solve the problem in pseudo-polynomial time, together with a corresponding approximation scheme. The experimental investigations show that the new approaches outperform the state-of-the-art ones. We furthermore propose three exact algorithms for the TTP, based on the DP for the PWT problem. By employing the exact DP for the underlying PWT problem as a subroutine, we create a novel indicator-based hybrid evolutionary approach for a new bi-criteria formulation of the TTP. This hybrid design takes advantage of the DP approach, along with a number of novel indicators and selection mechanisms, to achieve better solutions. The results of computational experiments show that the approach is capable of outperforming the state-of-the-art results.
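    The non-linear coupling between tour and packing that makes the TTP and PWT hard can be made concrete with a small evaluator: the value of a packing plan is the total item profit minus the renting rate times the travel time, where vehicle speed drops linearly as the accumulated weight grows. The formula follows the commonly used TTP/PWT formulation; the instance data and the simplification that the item at the i-th stop is indexed by i are made up for illustration.

```python
def pwt_objective(tour, plan, dist, profit, weight, capacity,
                  renting_rate, v_max=1.0, v_min=0.1):
    """Evaluate a packing-while-travelling plan on a fixed tour.

    tour: list of city indices visited in order (the vehicle returns to the start).
    plan: set of stop indices at which the item stored there is picked up.
    Speed decreases linearly from v_max (empty) to v_min (full knapsack).
    """
    total_profit = sum(profit[k] for k in plan)
    w = 0.0
    time = 0.0
    n = len(tour)
    for k in range(n):
        if k in plan:                        # pick up the item stored at this stop
            w += weight[k]
        if w > capacity:
            return float("-inf")             # infeasible packing
        speed = v_max - (v_max - v_min) * (w / capacity)
        nxt = tour[(k + 1) % n]              # wrap around to the starting city
        time += dist[tour[k]][nxt] / speed
    return total_profit - renting_rate * time

# Made-up 3-city instance: one item per stop, symmetric distances.
dist = [[0, 2, 3], [2, 0, 4], [3, 4, 0]]
profit = [10, 30, 20]
weight = [3, 5, 4]
print(pwt_objective([0, 1, 2], {1, 2}, dist, profit, weight,
                    capacity=10, renting_rate=1.5))
```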

    Search-based software engineering: Trends, techniques and applications

    In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive automated and semi-automated solutions in situations typified by large, complex problem spaces with multiple competing and conflicting objectives. This article provides a review and classification of the literature on SBSE. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied, and highlights gaps in the literature and avenues for further research.

    Optimization of bio-inspired algorithms on heterogeneous CPU-GPU systems

    The scientific challenges of the 21st century require the processing and analysis of an enormous amount of information in what is known as the Big Data era. Future advances in different sectors of society, such as medicine, engineering or efficient energy production, to mention just a few examples, depend on the continuous growth in the computational power of modern computers. However, this computational growth, traditionally guided by the well-known "Moore's Law", has been compromised in recent decades, mainly due to the physical limitations of silicon. Computer architects have developed numerous contributions (multicore, manycore, heterogeneity, dark silicon, etc.) to try to mitigate this computational slowdown, leaving in the background other factors that are fundamental to problem solving, such as programmability, reliability and precision. Software development, however, has followed the opposite path, where ease of programming through abstraction models, automatic code debugging to avoid undesired effects, and rapid deployment to production are key to the economic viability and efficiency of the digital business sector. This path often compromises the performance of the applications themselves, a consequence that is entirely unacceptable in a scientific context. The starting hypothesis of this doctoral thesis is to reduce the distance between the hardware and software fields in order to help address the scientific challenges of the 21st century. Hardware development is marked by the consolidation of processors oriented towards massive data parallelism, mainly GPUs (Graphics Processing Units) and vector processors, which are combined to build heterogeneous processors or computers (HSA). Specifically, we focus on the use of GPUs to accelerate scientific applications. GPUs have positioned themselves as one of the most promising platforms for implementing algorithms that simulate complex scientific problems. Since their inception, the trajectory and history of graphics cards has been shaped by the world of video games, reaching very high levels of popularity as more realism was achieved in that area. An important milestone occurred in 2006, when NVIDIA (the leading graphics card manufacturer) carved out a place in the world of high-performance computing and research with the development of CUDA (Compute Unified Device Architecture). This architecture makes it possible to use the GPU for the development of scientific applications in a versatile way. Despite the importance of the GPU, a significant improvement can be obtained by using it together with the CPU, which leads us to the heterogeneous systems referred to in the title of this work. It is in heterogeneous CPU-GPU environments that performance reaches its peak, since not only do GPUs support researchers' scientific computing, but it is in a heterogeneous system combining different types of processors that the highest performance can be achieved. In such an environment the processors do not compete with one another; rather, each architecture specializes in the part where it can best exploit its capabilities.

    The highest performance is reached in heterogeneous clusters, where multiple nodes are interconnected and may differ not only in their CPU-GPU architectures but also in the computational capabilities within those architectures. With this kind of scenario in mind, new challenges arise in making the chosen software run as efficiently as possible and obtain the best possible results. These new platforms require a redesign of the software to make the most of the available computational resources. Existing algorithms must therefore be redesigned and optimized for the contributions in this field to be relevant, and algorithms must be found that, by their very nature, are candidates for optimal execution on such high-performance platforms. At this point we find a family of algorithms called bio-inspired algorithms, which use collective intelligence as the core of their problem solving. It is precisely this collective intelligence that makes them perfect candidates for implementation on these platforms under the new parallel computing paradigm, since solutions can be built from individuals that, through some form of communication, are able to jointly construct a common solution. This thesis focuses in particular on one of these bio-inspired algorithms, which falls under the term metaheuristics within the Soft Computing paradigm: Ant Colony Optimization (ACO). The algorithm is contextualized, studied and analysed. Its most critical parts are identified and redesigned with a view to optimization and parallelization, while maintaining or improving the quality of its solutions. The resulting alternatives are then implemented and tested on various high-performance platforms. The knowledge acquired in this theoretical and practical study is applied to real cases, more specifically to protein folding. All of this analysis is carried over to a concrete application: in this work we bring together new high-performance hardware platforms and the software redesign and implementation of a bio-inspired algorithm applied to a scientific problem of great complexity, namely protein folding. When implementing a solution to a real problem, it is necessary to carry out a preliminary study that allows an in-depth understanding of the problem, since anyone new to the field will encounter new terminology and issues; in this case, amino acids, molecules or simulation models that are unfamiliar to those without a biomedical background.
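    A CPU-only sketch of why ACO maps naturally onto data-parallel hardware such as GPUs: every ant builds its solution independently from the same shared pheromone matrix, so the per-ant work can be parallelised. NumPy is used here only as a stand-in for a GPU kernel, and the TSP-style instance and parameter values are illustrative, not the thesis's actual protein-folding formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cities, n_ants = 20, 256

dist = rng.uniform(1.0, 10.0, (n_cities, n_cities))
np.fill_diagonal(dist, np.inf)               # a city is never its own successor
pheromone = np.ones((n_cities, n_cities))

def build_tours(pheromone, dist, alpha=1.0, beta=2.0):
    """Each ant constructs a tour from the shared pheromone matrix. The loop
    over ants is embarrassingly parallel, which is what a GPU implementation
    would exploit; NumPy here only vectorises the probability arithmetic."""
    attractiveness = (pheromone ** alpha) * ((1.0 / dist) ** beta)
    tours = np.empty((n_ants, n_cities), dtype=int)
    for a in range(n_ants):
        visited = np.zeros(n_cities, dtype=bool)
        current = rng.integers(n_cities)
        visited[current] = True
        tours[a, 0] = current
        for step in range(1, n_cities):
            probs = np.where(visited, 0.0, attractiveness[current])
            probs /= probs.sum()
            current = rng.choice(n_cities, p=probs)
            visited[current] = True
            tours[a, step] = current
    return tours

tours = build_tours(pheromone, dist)
lengths = dist[tours, np.roll(tours, -1, axis=1)].sum(axis=1)
print("best tour length in this colony:", lengths.min())
```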

    Ant colony meta-heuristics - Schemes and software framework

    Master of Science thesis.

    Hybridization of enhanced ant colony system and Tabu search algorithm for packet routing in wireless sensor network

    In a Wireless Sensor Network (WSN), high transmission time occurs when search agents focus on the same sensor nodes, while the local optima problem happens when an agent gets trapped in a blind alley during the search. Swarm intelligence algorithms have been applied to these problems, including the Ant Colony System (ACS), one of the ant colony optimization variants. However, ACS suffers from local optima and stagnation problems in medium- and large-sized environments due to an ineffective exploration mechanism. This research proposes a hybridization of an Enhanced ACS and Tabu Search (EACS(TS)) algorithm for packet routing in WSNs. EACS(TS) selects sensor nodes with high pheromone values, which are calculated from the residual energy and the current pheromone value of each sensor node. Local optima are avoided by marking a node that has no potential neighbour node as a Tabu node and storing it in the Tabu list. A local pheromone update is performed to encourage exploration of other potential sensor nodes, while a global pheromone update is applied to encourage the exploitation of optimal sensor nodes. Experiments were performed in a simulated WSN environment supported by the Routing Modelling Application Simulation Environment (RMASE) framework to evaluate the performance of EACS(TS). A total of six datasets were used to evaluate the effectiveness of the proposed algorithm. Results showed that EACS(TS) outperformed single swarm intelligence routing algorithms, namely Energy-Efficient Ant-Based Routing (EEABR), BeeSensor and Termite-hill, in terms of success rate, packet loss, latency, and energy efficiency. Better performance was also achieved for success rate, throughput, and latency when compared to other hybrid routing algorithms such as Fish Swarm Ant Colony Optimization (FSACO), the Cuckoo Search-based Clustering Algorithm (ICSCA), and BeeSensor-C. The outcome of this research is an optimized routing algorithm for WSNs, leading to better quality of service and minimal energy utilization.
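    A minimal sketch of the next-hop selection idea described above, assuming a simple pheromone-times-residual-energy desirability score, a plain set for the Tabu list, and the standard ACS local pheromone update; the actual EACS(TS) formulas and its global update are not reproduced here.

```python
def select_next_hop(current, neighbours, pheromone, energy, tabu,
                    rho_local=0.1, tau0=0.1):
    """Choose the next sensor node, favouring high pheromone and high residual
    energy, and skipping nodes already marked Tabu (known dead ends)."""
    candidates = [n for n in neighbours[current] if n not in tabu]
    if not candidates:
        tabu.add(current)          # dead end: mark this node Tabu for later ants
        return None
    scores = {n: pheromone[(current, n)] * energy[n] for n in candidates}
    nxt = max(scores, key=scores.get)
    # Local pheromone update (ACS style): evaporate a little on the chosen link
    # so that later ants are nudged towards unexplored links (exploration).
    pheromone[(current, nxt)] = (1 - rho_local) * pheromone[(current, nxt)] + rho_local * tau0
    return nxt

# Toy topology: node 0 can reach 1 or 2; node 2 has more residual energy.
neighbours = {0: [1, 2], 1: [], 2: [3], 3: []}
pheromone = {(0, 1): 1.0, (0, 2): 1.0, (2, 3): 1.0}
energy = {1: 0.2, 2: 0.9, 3: 0.8}
print(select_next_hop(0, neighbours, pheromone, energy, set()))  # -> 2
```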