
    GPU accelerated Nature Inspired Methods for Modelling Large Scale Bi-Directional Pedestrian Movement

    Pedestrian movement, although ubiquitous and well studied, is still not well understood because of the complicating social dynamics embedded in it. Interest among researchers in simulating pedestrian movement and interactions has grown significantly, in part due to the increased computational and visualization capabilities afforded by high-power computing. Different approaches have been adopted to simulate pedestrian movement under various circumstances and interactions. In the present work, bi-directional crowd movement is simulated, where equal numbers of individuals try to reach opposite sides of an environment. Two movement methods are considered. First, a Least Effort Model (LEM) is investigated, in which agents try to take an optimal path with as few deviations from their intended path as possible. Following this, a modified form of Ant Colony Optimization (ACO) is proposed, in which individuals are guided both by the goal of reaching the other side in a least-effort mode and by a pheromone trail left by their predecessors. The basic idea is to increase agent interaction, thereby more closely reflecting a real-world scenario. The methodology uses Graphics Processing Units (GPUs) for general-purpose computing on the CUDA platform. GPUs are well suited because of the inherently parallel properties of pedestrian movement, such as the proximate interactions of individuals on a 2D grid. The main feature of the implementation undertaken here is that the parallelism is data driven. The data-driven implementation leads to a speedup of up to 18x compared to its sequential counterpart running on a single-threaded CPU. The number of pedestrians considered in the model ranged from 2K to 100K, representative of mass gathering events. A detailed discussion addresses the implementation challenges faced and averted.
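    As a rough illustration of the per-agent, data-driven update described above, the following is a minimal C++ sketch (not the authors' CUDA code) of one simulation step in which each pedestrian scores its neighbouring cells by combining a least-effort bias toward its target side with the pheromone left by earlier agents. All names and weighting constants are hypothetical; in the paper this update runs as a CUDA kernel with one thread per pedestrian.

```cpp
// Minimal CPU sketch of one pheromone-guided, least-effort movement step.
// Hypothetical names and constants; the actual implementation is a GPU kernel.
#include <vector>
#include <cmath>

struct Agent { int x, y; int goalX; };  // goalX: the opposite side to reach

void stepAgents(std::vector<Agent>& agents,
                std::vector<std::vector<float>>& pheromone,  // per-cell trail
                std::vector<std::vector<int>>& occupied,     // 1 if cell taken
                float alpha /*trail weight*/, float beta /*goal weight*/)
{
    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};
    const int W = static_cast<int>(pheromone.size());
    const int H = static_cast<int>(pheromone[0].size());

    for (Agent& a : agents) {
        int bestX = a.x, bestY = a.y;
        float bestScore = -1e30f;
        for (int d = 0; d < 4; ++d) {
            int nx = a.x + dx[d], ny = a.y + dy[d];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H || occupied[nx][ny]) continue;
            // Least-effort term: progress toward the agent's target side.
            float progress = -std::abs(static_cast<float>(a.goalX - nx));
            float score = alpha * pheromone[nx][ny] + beta * progress;
            if (score > bestScore) { bestScore = score; bestX = nx; bestY = ny; }
        }
        occupied[a.x][a.y] = 0;
        occupied[bestX][bestY] = 1;
        a.x = bestX; a.y = bestY;
        pheromone[a.x][a.y] += 1.0f;   // deposit trail for successors
    }
}
```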

    Route Planning in Transportation Networks

    We survey recent advances in algorithms for route planning in transportation networks. For road networks, we show that one can compute driving directions in milliseconds or less, even at continental scale. A variety of techniques provide different trade-offs between preprocessing effort, space requirements, and query time. Some algorithms can answer queries in a fraction of a microsecond, while others can deal efficiently with real-time traffic. Journey planning on public transportation systems, although conceptually similar, is a significantly harder problem due to its inherent time-dependent and multicriteria nature. Although exact algorithms are fast enough for interactive queries on metropolitan transit systems, dealing with continent-sized instances requires simplifications or heavy preprocessing. The multimodal route planning problem, which seeks journeys combining schedule-based transportation (buses, trains) with unrestricted modes (walking, driving), is even harder, relying on approximate solutions even for metropolitan inputs. Comment: This is an updated version of the technical report MSR-TR-2014-4, previously published by Microsoft Research. This work was mostly done while the authors Daniel Delling, Andrew Goldberg, and Renato F. Werneck were at Microsoft Research Silicon Valley.
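    For context, the common baseline against which the surveyed speedup techniques (preprocessing-based methods such as contraction hierarchies or hub labels) are measured is a plain Dijkstra search over the road graph. Below is a minimal C++ sketch of that baseline; the graph types and names are hypothetical, not taken from the survey.

```cpp
// Minimal Dijkstra baseline for shortest paths on a road network.
// The surveyed techniques add preprocessing on top of (or replace) this search.
#include <vector>
#include <queue>
#include <limits>
#include <functional>
#include <utility>
#include <cstdint>

struct Arc { int to; uint32_t weight; };
using Graph = std::vector<std::vector<Arc>>;   // adjacency list (hypothetical)

std::vector<uint64_t> shortestPaths(const Graph& g, int source)
{
    const uint64_t INF = std::numeric_limits<uint64_t>::max();
    std::vector<uint64_t> dist(g.size(), INF);
    using Item = std::pair<uint64_t, int>;     // (distance, node)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[source] = 0;
    pq.push({0, source});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d != dist[u]) continue;            // skip stale queue entries
        for (const Arc& a : g[u]) {
            uint64_t nd = d + a.weight;
            if (nd < dist[a.to]) { dist[a.to] = nd; pq.push({nd, a.to}); }
        }
    }
    return dist;
}
```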

    Performance Optimization of Memory Intensive Applications on FPGA Accelerator

    The abstract is in the attachment.

    Optimization of storage and picking systems in warehouses

    The rise of e-commerce is demanding an increase in the performance of warehousing systems, which are being redesigned to deal with a massive volume of demands to be fulfilled as fast as possible. The manual system and the robotic mobile fulfillment system (RMFS) are among the most commonly used for these activities. The former is a human-centered system that handles complex operations that current robots cannot perform. However, newer generations of autonomous robots are leading to a gradual replacement by the latter to increase productivity. Regardless of the system used, several interdependent problems have to be solved to have efficient storage and picking processes. Storage problems concern decisions on where to store products within the warehouse. Picking problems include the batching of orders to be fulfilled together and the routes the pickers and robots should follow to retrieve the products demanded. In the manual system, these problems are traditionally solved using simple policies that pickers can easily follow. Despite the use of robots, the same solution strategy is being replicated for the equivalent problems found in the RMFS. In this research, we investigate storage and picking problems faced when designing manual and RMFS warehouses. We develop optimization tools to help in the decision-making process to set up their processes and improve typical performance measures considered in these systems. Some classic problems are solved with improved techniques, while others are integrated to be solved together instead of optimizing each subsystem sequentially. We first consider a manual system with a known set of orders and integrate storage and routing decisions. The integrated problem and some variants considering common routing policies are modeled mathematically. A general variable neighborhood search metaheuristic is presented to deal with real-size instances. Computational experiments attest to the effectiveness of the proposed metaheuristic compared to exact models and common storage policies. When future demands are uncertain, it is common to use a zoning strategy that divides the storage area into zones and assigns the most-demanded products to the best zones. Zone sizes are to be determined. Commonly, arbitrary sizes are chosen, which ignore the characteristics of the warehouse and the demands. We approach the zone sizing problem to determine which factors are relevant to choosing better zone sizes. Data generated from exhaustive simulations are used to train four machine learning regression models - ordinary least squares, regression tree, random forest, and multilayer perceptron - to predict the optimal zone sizes given the set of relevant factors identified. We show that all trained models suggest tailor-made zone sizes with better picking performance than the arbitrary ones commonly used. Another approach to solving storage problems, both in the manual system and in the RMFS, considers the correlations between products. The idea is that products frequently demanded together should be stored close together to reduce routing costs. This storage policy can be modeled as a quadratic assignment problem (QAP) variant. The QAP is a traditional combinatorial problem and one of the hardest to solve. We survey the best-known QAP variants and develop a powerful parallel memetic iterated tabu search metaheuristic capable of solving them. The proposed metaheuristic is shown to be among the best-performing ones for the QAP and significantly outperforms the state of the art for its variants. The RMFS allows easy repositioning of inventory pods during operations, which can lead to a more energy-efficient picking process. We integrate pod repositioning decisions with order assignment and pod selection using a wave picking strategy, such that pods are parked after being requested according to when and where they are expected to be requested next. We solve this integrated problem using stochastic programming, considering the uncertainty about future demands, and suggest a local search matheuristic to solve real-size instances. We show that our sample average approximation scheme is effective at simulating future demands, since our methods improve on the solutions found when waves are planned without considering future demands. This thesis is structured as follows. After an introductory chapter, we present a literature review on the manual system and the RMFS, and the common decisions made to set up their storage and picking processes. The next four chapters detail the studies for the integrated storage and routing problem, the zone sizing problem, the QAP, and the pod repositioning problem. Our findings are summarized in the last chapter.
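    As a minimal sketch of the correlation-based storage objective mentioned above, the cost of an assignment can be evaluated in QAP form: products that are frequently requested together (a high co-demand value) should end up in locations separated by a small travel distance. The function and variable names below are hypothetical and only illustrate the objective, not the thesis' metaheuristic.

```cpp
// Hypothetical sketch: evaluating a correlation-based storage assignment as a
// quadratic assignment problem (QAP) cost. f[i][j] counts how often products
// i and j are requested together; d[a][b] is the travel distance between slots.
#include <vector>
#include <cstddef>

double qapCost(const std::vector<std::vector<double>>& f,  // product co-demand
               const std::vector<std::vector<double>>& d,  // slot distances
               const std::vector<int>& slot)               // slot[i] = product i's slot
{
    double cost = 0.0;
    const std::size_t n = slot.size();
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            cost += f[i][j] * d[slot[i]][slot[j]];
    return cost;
}
```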

    Accelerating supply chains with Ant Colony Optimization across range of hardware solutions

    This pre-print, arXiv:2001.08102v1 [cs.NE], was subsequently published by Elsevier in Computers and Industrial Engineering, vol. 147, 106610, pp. 1-14, on 29 Jun 2020 and is available at https://doi.org/10.1016/j.cie.2020.106610. The Ant Colony algorithm has been applied to various optimization problems; however, most of the previous work on scaling and parallelism focuses on Travelling Salesman Problems (TSPs). Although useful for benchmarks and for comparing new ideas, their algorithmic dynamics do not always transfer to complex real-life problems, where additional meta-data is required during solution construction. This paper looks at a real-life outbound supply chain problem using Ant Colony Optimization (ACO) and its scaling dynamics with two parallel ACO architectures: Independent Ant Colonies (IAC) and Parallel Ants (PA). Results showed that PA was able to reach a higher solution quality in fewer iterations as the number of parallel instances increased. Furthermore, speed performance was measured across three different hardware solutions: a 16-core CPU, a 68-core Xeon Phi, and up to 4 GeForce GPUs. State-of-the-art ACO vectorization techniques such as SS-Roulette were implemented using C++ and CUDA. Although excellent for TSP, it was concluded that GPUs are not suitable for the given supply chain problem due to the meta-data access footprint required. Furthermore, compared to their sequential counterparts, the vectorized AVX2 implementation achieved a 25.4x speedup on the CPU, while the Xeon Phi with its AVX512 instruction set reached 148x with Parallel Ants with Vectorization (PAwV). PAwV is therefore able to scale to at least 1024 parallel instances on the supply chain network problem solved.
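    For readers unfamiliar with the selection step that SS-Roulette vectorizes, the scalar form is ordinary roulette-wheel selection during ACO solution construction: each candidate's weight combines its pheromone level and a heuristic desirability, and the next element is drawn proportionally to those weights. The C++ sketch below shows that scalar baseline with generic ACO names; it is not the paper's vectorized (AVX2/AVX512/CUDA) implementation.

```cpp
// Scalar sketch of roulette-wheel selection in ACO tour construction.
// Generic ACO probability rule; SS-Roulette is a vectorized variant of this idea.
#include <vector>
#include <random>
#include <cmath>
#include <cstddef>

int chooseNext(const std::vector<double>& pheromone,  // tau for each candidate
               const std::vector<double>& heuristic,  // eta, e.g. 1 / cost
               double alpha, double beta, std::mt19937& rng)
{
    std::vector<double> weight(pheromone.size());
    double total = 0.0;
    for (std::size_t i = 0; i < weight.size(); ++i) {
        weight[i] = std::pow(pheromone[i], alpha) * std::pow(heuristic[i], beta);
        total += weight[i];
    }
    std::uniform_real_distribution<double> u(0.0, total);
    double r = u(rng), acc = 0.0;
    for (std::size_t i = 0; i < weight.size(); ++i) {
        acc += weight[i];
        if (r <= acc) return static_cast<int>(i);
    }
    return static_cast<int>(weight.size()) - 1;  // numerical fallback (assumes non-empty input)
}
```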

    Computing Performance Benchmarks among CPU, GPU, and FPGA

    In recent years, the world of high-performance computing has been developing rapidly. The goal of this project was to conduct computing performance benchmarks on three major computing platforms: CPUs, GPUs, and FPGAs. A total of 66 benchmarks were evaluated. GPUs outperformed the other platforms in terms of execution time; CPUs outperformed in overall execution time combined with transfer time; and FPGAs outperformed for fixed algorithms using streaming. The team made several recommendations for further research in this area.

    Multi-train trajectory planning

    Although different parts of the rail industry may have different primary concerns, all are under increasing pressure to minimise their operational energy consumption. Advances in single-train trajectory optimisation have allowed punctuality and traction energy efficiency to be maximised for isolated trains. However, on a railway network safe separation of trains is ensured by signalling and interlocking systems, so the movement of one train will impact the movement of others. This thesis considers methodologies for multi-train trajectory planning. First, a genetic algorithm is implemented and two bespoke genetic operators are proposed to improve specific aspects of the optimisation. Compared with published results, the new optimisation is shown to increase the quality of solutions found by an average of 27.6% and to increase consistency by a factor of 28. This allows detailed investigation into the effect of the relative priority given to achieving time targets or increasing energy efficiency. Secondly, the performance of optimised control strategies is investigated in a system containing uncertainty. Solutions optimised for a system without uncertainty perform well in those conditions, but their performance quickly degrades as the level of uncertainty increases. To address this, a new genetic algorithm-based optimisation procedure is introduced and shown to find robust solutions in a system with multiple different types of uncertainty. Trade-offs are explored between highly optimised trajectories that are unlikely to be achieved, and slightly less optimal trajectories that are robust to real-world disturbances. Finally, a massively parallel multi-train simulator is developed to accelerate population-based heuristic optimisations using a graphics processing unit (GPU). Execution time is minimised by implementing all parts of the simulation and optimisation on the GPU, and by designing data structures and algorithms to work efficiently together. This yields a three-orders-of-magnitude increase in the rate at which candidate control strategies can be evaluated.
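    As a rough sketch of the population-based search described above, the C++ skeleton below evaluates and ranks candidate control strategies each generation; the fitness here is a stand-in that combines illustrative punctuality and energy weights. The thesis' two bespoke genetic operators and its GPU-parallel multi-train simulator are not reproduced, and all names and weights are hypothetical.

```cpp
// Generic population-loop sketch for searching train control strategies.
// Hypothetical names; the real fitness evaluation is a multi-train simulation
// that the thesis runs on the GPU for all candidates in parallel.
#include <vector>
#include <algorithm>

struct Candidate {
    std::vector<double> genes;   // e.g. coasting / braking switching points
    double fitness = 0.0;        // punctuality and energy combined
};

// Placeholder fitness so the sketch compiles; not the thesis' simulator.
double evaluate(const Candidate& c, double timeWeight, double energyWeight)
{
    double sum = 0.0;
    for (double g : c.genes) sum += g;
    return timeWeight * sum - energyWeight * sum * sum;
}

void evolve(std::vector<Candidate>& pop, int generations,
            double timeWeight, double energyWeight)
{
    for (int gen = 0; gen < generations; ++gen) {
        for (Candidate& c : pop)
            c.fitness = evaluate(c, timeWeight, energyWeight);
        std::sort(pop.begin(), pop.end(),
                  [](const Candidate& a, const Candidate& b) {
                      return a.fitness > b.fitness;   // best candidates first
                  });
        // Selection, crossover, and mutation of the lower-ranked candidates
        // would follow here; the thesis adds two bespoke operators at this step.
    }
}
```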