
    A study of two evolutionary/tabu search approaches for the generalized max-mean dispersion problem

    Evolutionary computing is a general and powerful framework for solving difficult optimization problems, including those arising in expert and intelligent systems. In this work, we investigate for the first time two hybrid evolutionary algorithms incorporating tabu search for solving the generalized max-mean dispersion problem (GMaxMeanDP), which has a variety of practical applications such as web page ranking, community mining, and trust networks. The proposed algorithms integrate innovative strategies that help the search explore the solution space effectively. We report extensive computational results of the proposed algorithms on 160 benchmark instances of six types, demonstrating their effectiveness and usefulness. Beyond the GMaxMeanDP itself, the proposed algorithms can also help to solve other problems that can be formulated as the GMaxMeanDP.
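    To make the problem concrete, below is a minimal Python sketch of a dispersion objective and a one-flip tabu move. It assumes a common statement of the GMaxMeanDP (maximize the sum of pairwise distances within the selected subset divided by the sum of the selected vertices' weights); the paper's exact formulation and search strategies may differ.

```python
# Hedged sketch: assumes GMaxMeanDP maximizes the pairwise-distance sum
# within subset S divided by the total weight of S.
def gmaxmean_objective(S, d, w):
    """Dispersion of subset S: pairwise distance sum over total vertex weight."""
    items = sorted(S)
    num = sum(d[i][j] for a, i in enumerate(items) for j in items[a + 1:])
    den = sum(w[i] for i in items)
    return num / den if den > 0 else float("-inf")

def tabu_one_flip(S, d, w, tabu, iteration, tenure=10):
    """Apply the best non-tabu one-flip move (add or drop a vertex) to S."""
    best_v, best_val = None, float("-inf")
    for v in range(len(w)):
        if tabu.get(v, -1) >= iteration:      # move still forbidden
            continue
        cand = S ^ {v}                        # flip membership of v
        if len(cand) < 2:
            continue
        val = gmaxmean_objective(cand, d, w)
        if val > best_val:
            best_v, best_val = v, val
    if best_v is not None:
        S = S ^ {best_v}
        tabu[best_v] = iteration + tenure     # forbid reversing the move soon
    return S, best_val
```

    A full hybrid algorithm would embed such moves in an evolutionary loop, for example recombining elite subsets and improving offspring with the tabu procedure.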

    Workload Equity in Vehicle Routing Problems: A Survey and Analysis

    Over the past two decades, equity aspects have been considered in a growing number of models and methods for vehicle routing problems (VRPs). Equity concerns most often relate to fairly allocating workloads and to balancing the utilization of resources, and many practical applications have been reported in the literature. However, there has been only limited discussion about how workload equity should be modeled in VRPs, and various measures for optimizing such objectives have been proposed and implemented without a critical evaluation of their respective merits and consequences. This article addresses this gap with an analysis of classical and alternative equity functions for biobjective VRP models. In our survey, we review and categorize the existing literature on equitable VRPs. In the analysis, we identify a set of axiomatic properties that an ideal equity measure should satisfy, collect six common measures, and point out important connections between their properties and those of the resulting Pareto-optimal solutions. To gauge the extent of these implications, we also conduct a numerical study on small biobjective VRP instances solvable to optimality. Our study reveals two undesirable consequences when optimizing equity with nonmonotonic functions: Pareto-optimal solutions can consist of non-TSP-optimal tours, and even if all tours are TSP optimal, Pareto-optimal solutions can be workload inconsistent, i.e., composed of tours whose workloads are all equal to or longer than those of other Pareto-optimal solutions. We show that the extent of these phenomena should not be underestimated. The results of our biobjective analysis are also valid for weighted-sum, constraint-based, or single-objective models. Based on this analysis, we conclude that monotonic equity functions are more appropriate for certain types of VRP models, and we suggest promising avenues for further research.
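    The survey's central warning about nonmonotonic equity measures can be reproduced in a few lines. The sketch below compares three common workload-equity functions on two made-up workload vectors (the numbers are illustrative, not data from the paper):

```python
# Illustration of how nonmonotonic equity measures can reward padded tours.
import statistics

def equity_measures(workloads):
    return {
        "max":   max(workloads),                   # monotonic
        "range": max(workloads) - min(workloads),  # nonmonotonic
        "stdev": statistics.pstdev(workloads),     # nonmonotonic
    }

solution_a = [10, 10, 14]   # every tour at its TSP-optimal length
solution_b = [14, 14, 14]   # tours artificially lengthened to look "equal"

print(equity_measures(solution_a))  # max 14, range 4, stdev > 0
print(equity_measures(solution_b))  # max 14, range 0, stdev 0

# Under 'range' or 'stdev', solution_b looks perfectly equitable even though
# each of its workloads is >= the corresponding workload of solution_a --
# exactly the workload inconsistency the survey warns about. The monotonic
# 'max' measure never rewards padding tours this way.
```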

    First-principles molecular structure search with a genetic algorithm

    The identification of low-energy conformers for a given molecule is a fundamental problem in computational chemistry and cheminformatics. Here we assess a conformer search that employs a genetic algorithm to sample the low-energy segment of the conformation space of molecules. The algorithm is designed to work with first-principles methods, facilitated by the incorporation of local optimization and the blacklisting of conformers to prevent repeated evaluations of very similar solutions. The aim of the search is not only to find the global minimum, but to predict all conformers within an energy window above the global minimum. The performance of the search strategy is (i) evaluated on a reference data set extracted from a database of amino acid dipeptide conformers obtained by an extensive combined force-field and first-principles search, and (ii) compared to the performance of a systematic search and a random conformer generator for the example of a drug-like ligand with 43 atoms, 8 rotatable bonds, and 1 cis/trans bond.
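    As an illustration of the algorithm's main ingredients (sampling torsion angles, local optimization, blacklisting near-duplicates, and collecting every conformer within an energy window), here is a hedged Python sketch; the energy function, local optimizer, similarity threshold, and torsion representation are placeholders rather than the paper's first-principles setup:

```python
# Sketch of a GA conformer search with blacklisting, under the assumptions
# stated above. `energy` and `local_opt` are user-supplied callables.
import random

def similar(a, b, tol=15.0):
    """Treat two conformers as duplicates if all torsions differ < tol degrees."""
    return all(abs(x - y) < tol for x, y in zip(a, b))

def ga_conformer_search(n_torsions, energy, local_opt,
                        pop_size=20, generations=50, window=5.0):
    blacklist = []
    population = [[random.uniform(-180, 180) for _ in range(n_torsions)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            p1, p2 = random.sample(population, 2)
            child = [random.choice(g) for g in zip(p1, p2)]  # uniform crossover
            i = random.randrange(n_torsions)
            child[i] = random.uniform(-180, 180)             # mutation
            child = local_opt(child)                         # relax geometry
            if any(similar(child, b) for b in blacklist):
                continue                                     # skip duplicates
            blacklist.append(child)
            offspring.append(child)
        population = sorted(population + offspring, key=energy)[:pop_size]
    e_min = energy(min(blacklist, key=energy))
    # Return not just the global minimum but all conformers in the window.
    return [c for c in blacklist if energy(c) <= e_min + window]
```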

    Planning as Optimization: Dynamically Discovering Optimal Configurations for Runtime Situations

    The large number of possible configurations of modern software-based systems, combined with the large number of possible environmental situations of such systems, prohibits enumerating all adaptation options at design time and necessitates planning at runtime to dynamically identify an appropriate configuration for a situation. While numerous planning techniques exist, they typically assume a detailed state-based model of the system and that the situations that warrant adaptations are known. Both of these assumptions can be violated in complex, real-world systems. As a result, adaptation planning must rely on simple models that capture what can be changed (input parameters) and observed in the system and environment (output and context parameters). We therefore propose planning as optimization: the use of optimization strategies to discover optimal system configurations at runtime for each distinct situation, where the situations themselves are also identified dynamically at runtime. We apply our approach to CrowdNav, an open-source traffic routing system with the characteristics of a real-world system. We identify situations via clustering and conduct an empirical study that compares Bayesian optimization and two types of evolutionary optimization (NSGA-II and novelty search) in CrowdNav.
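    The overall idea can be sketched in a few lines of Python: cluster observed context vectors into situations, then run a black-box search over the input parameters separately for each situation. The parameter names and the random-search optimizer below are illustrative stand-ins for the Bayesian and evolutionary optimizers compared in the paper:

```python
# Hedged sketch of "planning as optimization". Parameter names and the
# system_response callable are hypothetical, not CrowdNav's actual API.
import random

def nearest_centroid(x, centroids):
    """Identify the current situation as the closest cluster centroid."""
    return min(range(len(centroids)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(x, centroids[k])))

def plan_per_situation(centroids, system_response, budget=100):
    """For each situation, search for the input configuration minimizing the
    observed system response (e.g. mean trip overhead)."""
    plans = {}
    for k, centroid in enumerate(centroids):
        best_cfg, best_obj = None, float("inf")
        for _ in range(budget):                    # stand-in for BO / NSGA-II
            cfg = {"route_randomization": random.uniform(0.0, 1.0),
                   "exploration_rate":    random.uniform(0.0, 0.3)}
            obj = system_response(cfg, centroid)
            if obj < best_obj:
                best_cfg, best_obj = cfg, obj
        plans[k] = best_cfg                        # best config for situation k
    return plans

# At runtime, look up the plan for the dynamically identified situation:
# k = nearest_centroid(current_context, centroids)
# apply_configuration(plans[k])    # apply_configuration is hypothetical
```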

    On green routing and scheduling problem

    The vehicle routing and scheduling problem has been studied with much interest over the last four decades. In this paper, some of the existing literature dealing with routing and scheduling problems that involve environmental issues is reviewed, and a description is provided of the problems that have been investigated and of how they are treated using combinatorial optimization tools.

    Multiobjective Simulation Optimization Using Enhanced Evolutionary Algorithm Approaches

    In today's competitive business environment, a firm's ability to make the correct, critical decisions can be translated into a great competitive advantage. Most of these critical real-world decisions involve the optimization not only of multiple objectives simultaneously, but also of conflicting objectives, where improving one objective may degrade the performance of one or more of the other objectives. Traditional approaches for solving multiobjective optimization problems typically try to scalarize the multiple objectives into a single objective. This transforms the original multiobjective problem formulation into a single-objective optimization problem with a single solution. However, the drawbacks of these traditional approaches have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than only a single solution. The problem becomes much more complicated in stochastic environments, where the objectives take on uncertain (or "noisy") values due to random influences within the system being optimized, which is the case in real-world environments. Moreover, in stochastic environments, a solution approach should be sufficiently robust and/or capable of handling the uncertainty of the objective values. This makes the development of effective solution techniques that generate Pareto optimal solutions within these problem environments even more challenging than in their deterministic counterparts. Furthermore, many real-world problems involve complicated, black-box objective functions, making a large number of solution evaluations computationally and/or financially prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate possible solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial. This research proposes two new multiobjective evolutionary algorithms (MOEAs), called the fast Pareto genetic algorithm (FPGA) and the stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in terms of converging quickly to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the Pareto optimal front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments, and SPGA uses a solution ranking strategy based on these new concepts. Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. The results show that both FPGA and SPGA outperform the improved nondominated sorting genetic algorithm (NSGA-II), a widely considered benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally expensive simulation modeling applications.
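    Since the dissertation's key ingredient is discriminating among solutions whose objective values are noisy, the following sketch contrasts standard Pareto dominance with a generic confidence-interval-based variant; the interval rule is an illustrative example of such a dominance concept, not SPGA's actual ranking strategy:

```python
# Pareto dominance (minimization) and a noise-aware variant built from
# repeated simulation replications. The CI-based rule is a generic example.
import statistics

def dominates(a, b):
    """a dominates b: no worse on every objective, strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def noisy_dominates(samples_a, samples_b, z=1.96):
    """Dominance on noisy objectives via per-objective confidence intervals.
    samples_a / samples_b: one list of replications per objective."""
    def ci(vals):
        m = statistics.mean(vals)
        h = z * statistics.stdev(vals) / len(vals) ** 0.5
        return m - h, m + h
    better = False
    for obj_a, obj_b in zip(samples_a, samples_b):
        lo_a, hi_a = ci(obj_a)
        lo_b, hi_b = ci(obj_b)
        if lo_a > hi_b:          # a clearly worse on this objective
            return False
        if hi_a < lo_b:          # a clearly better on this objective
            better = True
    return better
```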

    The design of effective and robust supply chain networks

    Tableau d'honneur de la Faculté des études supérieures et postdoctorales, 2009-2010. To cope with the risks associated with the random variations of normal operations and with the perils that threaten the resources of a logistics network, this thesis develops a generic methodology for the design of effective and robust supply chain networks under uncertainty. The objective of this methodology is to propose a network structure that durably ensures value creation for the firm, both in the face of everyday variability and as protection against the risk of catastrophic disruptions. The methodology builds on Schneeweiss's distributed decision-making framework, and the associated mathematical modeling approach integrates elements of stochastic programming, risk analysis, and robust optimization. Three types of events are defined to characterize the environment of logistics networks: random events (e.g., demand, costs, and exchange rates), hazardous events (e.g., strikes, supply discontinuities at suppliers, and natural disasters), and deeply uncertain events (e.g., acts of sabotage, terrorist attacks, and political instability). The methodology assumes that the firm's future environment is anticipated through scenarios, generated in part by a Monte Carlo method. This method is part of the solution approach; it makes it possible to generate replications of both small and large scenario samples, and it helps account for the decision maker's attitude toward risk. The generic solution approach relies on these scenario samples to generate alternative designs and on a multicriteria approach to evaluate those designs. To validate the methodological concepts introduced in this thesis, the hierarchical warehouse location and transportation problem is modeled as a stochastic program with recourse. First, a model with random demand is used to partially validate the mathematical modeling of the problem and to study, through several approximate anticipations, the solvability of the design model; a heuristic solution approach is proposed for this model in order to solve problems of realistic size. Second, a model including both random variations and perils is used to validate the risk analysis, the resilience strategies, and the generic solution approach. Several mathematical constructs are added to the base model to reflect different resilience strategies and to propose a decision model under risk that incorporates the decision maker's attitude toward extreme events. Extensive experiments with data from a realistic case allowed us to test the concepts proposed in this thesis and to develop a complexity-reduction method for the generic design model without compromising the quality of the associated solutions. The results of these experiments confirmed that the designs obtained by applying the proposed methodology are superior, in terms of effectiveness and robustness, to solutions produced by deterministic approaches or by the simplified models proposed in the literature.
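    The scenario-based evaluation at the core of the methodology can be illustrated with a small sample average approximation sketch: Monte Carlo sampling of random and hazardous events, and the expected cost of a candidate design over the sample. The cost model and distributions below are placeholders, not the thesis's actual location-transportation model:

```python
# Hedged sketch: two-stage evaluation of a network design over Monte Carlo
# scenarios (sample average approximation). All numbers are illustrative.
import random

def sample_scenarios(n, mean_demand=100, sd=20, disruption_p=0.05):
    """Random events (demand) plus rare hazardous events (site disruption)."""
    return [{"demand": max(0.0, random.gauss(mean_demand, sd)),
             "disrupted": random.random() < disruption_p}
            for _ in range(n)]

def design_cost(design, scenario):
    """First-stage fixed cost + second-stage recourse cost in one scenario."""
    fixed = sum(site["open_cost"] for site in design if site["open"])
    capacity = sum(site["capacity"] for site in design
                   if site["open"] and not scenario["disrupted"])
    shortfall = max(0.0, scenario["demand"] - capacity)
    return fixed + 50.0 * shortfall       # penalty per unit of unmet demand

def expected_cost(design, scenarios):
    return sum(design_cost(design, s) for s in scenarios) / len(scenarios)

# Compare alternative designs on a common scenario sample:
# scenarios = sample_scenarios(1000)
# best = min(candidate_designs, key=lambda d: expected_cost(d, scenarios))
```

    A risk-averse variant would replace the plain average with a measure weighting extreme scenarios more heavily, reflecting the decision maker's attitude toward catastrophic events.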

    Traveling Salesman Problem

    The idea behind the TSP was conceived by the Austrian mathematician Karl Menger in the mid-1930s, who invited the research community to consider a problem from everyday life from a mathematical point of view. A traveling salesman has to visit each city on a list of m cities exactly once and then return to the home city. He knows the cost of traveling from any city i to any other city j. Which tour of least possible cost, then, can the salesman take? This book considers the problem of finding algorithmic techniques that lead to good or optimal solutions for the TSP (or for some closely related problems). The TSP is a very attractive problem for the research community because it arises as a natural subproblem in many applications concerning everyday life. Indeed, every application in which an optimal ordering of a number of items has to be chosen, such that the total cost of a solution is obtained by adding up the costs arising from pairs of successive items, can be modelled as a TSP instance. Thus, studying the TSP can never be considered abstract research with no practical importance.
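    As a concrete illustration of the definition above, the sketch below solves a tiny instance exactly by enumerating all orderings and contrasts that with the nearest-neighbor heuristic; the cost matrix is an arbitrary example:

```python
# Exact TSP by brute force vs. the nearest-neighbor heuristic.
from itertools import permutations

def tour_cost(tour, cost):
    """Cost of the cyclic tour: visit each city in order, then return home."""
    return sum(cost[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def brute_force_tsp(cost):
    """Optimal tour by enumerating all (m-1)! orderings from home city 0;
    feasible only for small m, which is why heuristics matter."""
    cities = range(1, len(cost))
    return min(((0,) + p for p in permutations(cities)),
               key=lambda t: tour_cost(t, cost))

def nearest_neighbor(cost):
    """Greedy heuristic: always travel to the cheapest unvisited city."""
    unvisited = set(range(1, len(cost)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: cost[tour[-1]][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tuple(tour)

cost = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
best = brute_force_tsp(cost)
greedy = nearest_neighbor(cost)
print(best, tour_cost(best, cost))        # optimal tour: cost 21
print(greedy, tour_cost(greedy, cost))    # heuristic tour: cost 33
```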