
    Speeding up Energy System Models - a Best Practice Guide

    Background: Energy system models (ESM) are widely used in research and industry to analyze today's and future energy systems and potential pathways for the European energy transition. Current studies address future policy design and the analysis of technology pathways and of future energy systems. To address these questions and support the transformation of today's energy systems, ESM have to increase in complexity to provide valuable quantitative insights for policy makers and industry. Especially when dealing with uncertainty and when integrating large shares of renewable energies, ESM require a detailed implementation of the underlying electricity system. This increased complexity makes the application of ESM more and more difficult, as the models are limited by the computational power of today's decentralized workstations. Severe simplifications of the models are a common strategy for solving problems in a reasonable amount of time, which naturally has a significant influence on the validity of the results and the reliability of the models in general.

    Solutions for energy-system modelling: Within BEAM-ME, a consortium of researchers from different research fields (systems analysis, mathematics, operations research and informatics) develops new strategies to increase the computational performance of energy system models and to adapt them for use on high-performance computing clusters. Within the project, an ESM will be applied on two of Germany's fastest supercomputers. To further demonstrate the general applicability of these techniques to ESM, a model experiment is implemented as part of the project, in which up to six energy system models will jointly develop, implement and benchmark speed-up methods. Finally, continually collecting all experience from the project and the experiment, the efficient strategies identified will be documented, and general standards for increasing computational performance and for applying ESM on high-performance computing will be set out in a best-practice guide.

    A parallel Branch-and-Fix Coordination based matheuristic algorithm for solving large sized multistage stochastic mixed 0-1 problems

    A parallel matheuristic algorithm is presented as a spin-off from the exact Branch-and-Fix Coordination (BFC) algorithm for solving multistage stochastic mixed 0-1 problems. Some steps that guarantee the solution's optimality are relaxed in the BFC algorithm, such that an incomplete backward branching scheme is considered for solving large-sized problems. Additionally, a new branching criterion is considered, based on dynamically-guided and stage-wise ordering schemes, such that fewer Twin Node Families are expected to be visited during the execution of the so-called H-DBFC algorithm. The inner parallelization of the new approach, IH-DBFC, allows scenario-cluster MIP submodels to be solved in parallel at different steps of the algorithm. The outer parallel version, OH-DBFC, considers independent paths and allows iterative exchange of incumbent solution values to obtain tighter bounds on the solution value of the original problem. A broad computational experience is reported to assess the quality of the matheuristic solution for large-sized instances. The instance dimensions considered are up to two orders of magnitude larger than in other works that we are aware of. The optimality gap of the H-DBFC solution value versus the one obtained by a state-of-the-art MIP solver is very small, if any. The new approach frequently outperforms it in terms of solution quality and computing time. A comparison with our Stochastic Dynamic Programming algorithm is also reported. The use of parallel computing provides, on one hand, a perspective for solving very large-sized instances and, on the other hand, an expected large reduction in elapsed time.
    Funding: MTM2015-65317-P, MTM2015-63710-P, IT928-16; UFI BETS 2011; IZO-SGI SGIker.
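    To make the outer-parallelization idea concrete, here is a minimal sketch of independent search paths exchanging incumbent solution values, in the spirit of OH-DBFC but not the algorithm itself: each path is a stand-in randomized search on a toy 0-1 knapsack (all data and names below are assumed for illustration), while the real algorithm coordinates BFC paths over scenario trees.

```python
# Sketch: several independent search paths run concurrently and share
# the best incumbent value found so far, so every path benefits from
# improvements made by the others (the "outer" parallel pattern).
import multiprocessing as mp
import random

VALUES   = [10, 7, 12, 9, 4, 15]   # toy item values (assumed data)
WEIGHTS  = [ 3, 2,  4, 3, 1,  5]
CAPACITY = 9

def search_path(seed, best_value, iters=20000):
    """One sequential search path: samples candidate solutions and
    publishes any improvement to the shared incumbent value."""
    rng = random.Random(seed)          # divergent initial conditions
    for _ in range(iters):
        x = [rng.random() < 0.5 for _ in range(len(VALUES))]
        if sum(w for w, keep in zip(WEIGHTS, x) if keep) > CAPACITY:
            continue                   # infeasible candidate
        value = sum(v for v, keep in zip(VALUES, x) if keep)
        if value > best_value.value:   # cheap read before locking
            with best_value.get_lock():
                if value > best_value.value:
                    best_value.value = value   # exchanged incumbent

if __name__ == "__main__":
    incumbent = mp.Value("i", 0)       # shared incumbent objective value
    procs = [mp.Process(target=search_path, args=(s, incumbent))
             for s in range(4)]
    for p in procs: p.start()
    for p in procs: p.join()
    print("best value found across paths:", incumbent.value)
```

    The divergent seeds mirror the paper's use of divergent initial conditions per path; in the full algorithm, the exchanged incumbents additionally tighten bounds used for pruning.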

    Parallel algorithms for two-stage stochastic optimization

    We develop scalable algorithms for two-stage stochastic program optimization. We propose performance optimizations such as a cut-window mechanism in Stage 1 and scenario clustering in Stage 2 of Benders method for solving two-stage stochastic programs. A naive implementation of Benders method has a slow convergence rate and does not scale well to a large number of processors, especially when the problem size is large and/or there are integer variables in Stage 1. Stochastic integer programs pose unique characteristics that make them very challenging to parallelize. We develop a Parallel Stochastic Integer Program Solver (PSIPS) that exploits nested parallelism by exploring the branch-and-bound tree vertices in parallel alongside scenario parallelization. PSIPS has been shown to achieve a parallel efficiency greater than 40% at 120 cores, significantly higher than the parallel efficiency of state-of-the-art mixed-integer program solvers. A significant portion of the time in this branch-and-bound solver is spent optimizing the stochastic linear program at the root vertex. Stochastic linear programs at other vertices of the branch-and-bound tree require far fewer iterations to converge because they can inherit Benders cuts from their parent vertices and/or the root. Therefore, it is important to reduce the optimization time of the stochastic linear program at the root vertex. We propose two decomposition schemes, namely the Split-and-Merge (SAM) method and the Lagrangian Decomposition and Merge (LDAM) method, that significantly increase the convergence rate of Benders decomposition. The SAM method gives up to a 64% reduction in solution time while also giving significantly higher parallel speedups compared to the naive Benders method. The LDAM method, on the other hand, has made it possible to solve otherwise intractable stochastic programs. We further provide a computational engine for many real-time and dynamic problems faced by the US Air Mobility Command. We first propose a stochastic programming solution to the military aircraft allocation problem with consideration for disaster management. Then, we study US AMC's dynamic mission re-planning problem and propose a mathematical formulation that is computationally feasible and leads to significant cost savings compared to myopic and deterministic optimization. It is expected that this work will provide the springboard for more robust problem solving with HPC in many logistics and planning problems.
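    For readers unfamiliar with the cut mechanism that all of these optimizations build on, here is a minimal sketch of the basic Benders (L-shaped) loop for a two-stage stochastic program, shown on a toy newsvendor instance with a one-dimensional first stage; PSIPS layers branch-and-bound, cut windows and scenario clustering on top of this loop. All numbers are assumed toy data.

```python
# Sketch of the L-shaped method: Stage 2 scenario subproblems produce
# subgradient-based optimality cuts; the Stage 1 master minimizes
# first-stage cost plus the cut approximation of expected recourse.
import numpy as np

COST, PRICE = 1.0, 1.5                   # order cost, selling price
DEMANDS = np.array([60, 80, 100, 120])   # equiprobable scenarios
PROB = np.full(len(DEMANDS), 0.25)

def subproblem(x, d):
    """Scenario recourse value Q(x,d) = -PRICE*min(x,d) and a
    subgradient in x (the dual information behind a Benders cut)."""
    q = -PRICE * min(x, d)
    g = -PRICE if x < d else 0.0
    return q, g

xs = np.linspace(0, 150, 1501)   # 1-D master solved by enumeration
cuts = []                        # list of (intercept, slope) pairs
x_k = 0.0
for _ in range(20):
    # Stage 2: solve all scenario subproblems at x_k, then aggregate
    # one expected-value optimality cut: theta >= a + b*x.
    vals = [subproblem(x_k, d) for d in DEMANDS]
    a = sum(p * (q - g * x_k) for p, (q, g) in zip(PROB, vals))
    b = sum(p * g for p, (q, g) in zip(PROB, vals))
    cuts.append((a, b))
    # Stage 1 master: min COST*x + theta  s.t.  theta >= a + b*x (all cuts).
    theta = np.max([a + b * xs for a, b in cuts], axis=0)
    obj = COST * xs + theta
    x_new = xs[np.argmin(obj)]
    if abs(x_new - x_k) < 1e-9:
        break                    # no movement: converged
    x_k = x_new
print("order quantity ~", x_k, "master objective ~", obj.min())
```

    On this instance the loop converges in a handful of iterations to the order quantity 80; the slow single-cut convergence on realistic instances is exactly what motivates the cut-window and clustering ideas above.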

    On parallel computing for stochastic optimization models and algorithms

    The main objective of this thesis is the solution of large-scale optimization problems under uncertainty through the interconnection of the disciplines of stochastic optimization and parallel computing. Decomposition algorithms are described from the perspectives of mathematical programming and of exploiting computational resources, with the aim of solving problems faster, solving larger problems, and/or obtaining better results than their serial counterparts. Two parallelization strategies have been developed, denoted inner and outer. The first performs tasks in parallel within a serial algorithmic scheme, while the second executes several sequential algorithms simultaneously and in a coordinated fashion. Keys to the effectiveness of the proposed algorithm designs are a deeper decomposition of the original problem, sharing the feasible region, creating synchronization and communication phases between parallel executions, and defining divergent initial conditions. As a result, both exact and matheuristic algorithms are presented, the latter combining metaheuristic methodologies with mathematical programming techniques. The scalability of each proposed algorithm is analyzed over several test beds of problems of different dimensions, up to a maximum of 58 million constraints and 54 million variables (of which 15 million are binary). The computational experiments were mainly carried out on the ARINA cluster of SGI/IZO-SGIker at the UPV/EHU.
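    As a complement to the outer-parallel sketch given earlier, here is a minimal sketch of the inner strategy: a single serial algorithm whose per-iteration scenario-cluster work is farmed out to worker processes. The cluster "solve" is a stand-in toy quadratic recourse, not the thesis' actual cluster submodels; all data below are assumed.

```python
# Sketch of "inner" parallelization: serial outer scheme, parallel
# evaluation of scenario clusters inside each iteration, followed by
# a serial synchronization step.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

CLUSTERS = [np.array(c) for c in ([60, 70], [80, 90], [100, 120])]

def solve_cluster(args):
    """Evaluate one scenario cluster at the current first-stage x:
    returns the cluster's recourse value and gradient contribution."""
    x, demands = args
    val = np.mean((x - demands) ** 2)    # toy quadratic recourse cost
    grad = np.mean(2 * (x - demands))
    return val, grad

if __name__ == "__main__":
    x, step = 0.0, 0.1
    with ProcessPoolExecutor() as pool:
        for _ in range(200):             # serial outer iterations
            # parallel inner phase: one task per scenario cluster
            results = list(pool.map(solve_cluster,
                                    [(x, c) for c in CLUSTERS]))
            grad = np.mean([g for _, g in results])
            x -= step * grad             # serial synchronization step
    print("first-stage x ~", round(x, 2))  # converges to the mean demand
```

    The synchronization point after each parallel phase is the communication phase the abstract refers to; its cost is what ultimately limits the scalability analyzed in the thesis.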

    Book of Abstracts of the Sixth SIAM Workshop on Combinatorial Scientific Computing

    Book of Abstracts of CSC14, edited by Bora Uçar. The Sixth SIAM Workshop on Combinatorial Scientific Computing, CSC14, was organized at the Ecole Normale Supérieure de Lyon, France, on 21-23 July 2014. This two-and-a-half-day event marked the sixth in a series that started ten years earlier in San Francisco, USA. The CSC14 workshop's focus was on combinatorial mathematics and algorithms in high-performance computing, broadly interpreted. The workshop featured three invited talks, 27 contributed talks and eight poster presentations. All three invited talks focused on two interesting fields of research: randomized algorithms for numerical linear algebra, and network analysis. The contributed talks and the posters targeted modeling, analysis, bisection, clustering, and partitioning of graphs, applied in the context of networks, sparse matrix factorizations, iterative solvers, fast multipole methods, automatic differentiation, high-performance computing, and linear programming. The workshop was held at the premises of the LIP laboratory of ENS Lyon and was generously supported by the LABEX MILYON (ANR-10-LABX-0070, Université de Lyon, within the program "Investissements d'Avenir" ANR-11-IDEX-0007 operated by the French National Research Agency) and by SIAM.

    Algorithms for Scheduling Problems

    This edited book presents new results in the area of algorithm development for different types of scheduling problems. Its eleven chapters give algorithms for single-machine problems, flow-shop and job-shop scheduling problems (including their hybrid (flexible) variants), the resource-constrained project scheduling problem, scheduling problems in complex manufacturing systems and supply chains, and workflow scheduling problems. The chapters address such subjects as insertion heuristics for energy-efficient scheduling (of which a generic sketch follows below), the re-scheduling of train traffic in real time, control algorithms for short-term scheduling in manufacturing systems, bi-objective optimization of tortilla production, scheduling problems with uncertain (interval) processing times, workflow scheduling for digital signal processor (DSP) clusters, and many more.
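    To illustrate the insertion idea underlying several of these heuristics, here is a compact NEH-style sketch for the permutation flow shop: order jobs by total processing time, then insert each job at the position that currently minimizes makespan. This is the generic textbook scheme, not any particular chapter's algorithm, and the processing times are assumed toy data.

```python
# NEH-style insertion heuristic for permutation flow-shop scheduling.
def makespan(seq, p):
    """Completion time of the last job on the last machine."""
    m = len(p[0])                       # number of machines
    c = [0] * m                         # per-machine completion times
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    # Seed order: jobs sorted by decreasing total processing time.
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in jobs:                      # insert each job at its best slot
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

# p[j][k] = processing time of job j on machine k (assumed toy data)
p = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8], [3, 5, 6]]
order = neh(p)
print("sequence:", order, "makespan:", makespan(order, p))
```

    Energy-efficient variants keep the same insertion skeleton but score candidate positions with an energy-aware objective instead of pure makespan.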

    Schedulability analysis and optimization of time-partitioned distributed real-time systems

    The increasing complexity of modern control systems leads many companies to resize or redesign their solutions to adapt them to new functionalities and requirements. A paradigmatic case of this situation has occurred in the railway sector, where signaling applications have been implemented using traditional techniques that, although they currently meet the basic requirements, leave substantial room for improvement in time performance and functional scalability. Besides contributing to the assessment of systems that require functional-safety certification, the solutions proposed in this thesis provide the base technology for schedulability analysis and optimization of general as well as time-partitioned distributed real-time systems, which can be applied in different environments where cyber-physical systems play a key role, for example in Industry 4.0 applications, where similar problems may arise in the future.
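    For context on what such schedulability analyses compute, here is a minimal sketch of the classical response-time test for fixed-priority preemptive tasks on one processor, iterating R_i = C_i + sum over higher-priority j of ceil(R_i/T_j)*C_j to a fixed point; the partitioned, distributed analysis in the thesis extends tests of this kind. The task set is assumed toy data.

```python
# Classical response-time analysis (RTA) fixed-point iteration.
import math

# (C, T, D): worst-case execution time, period, deadline;
# tasks listed highest priority first (assumed toy task set).
TASKS = [(1, 4, 4), (2, 6, 6), (3, 12, 12)]

def response_time(i, tasks):
    """Worst-case response time of task i, or None if it can exceed D."""
    C, _, D = tasks[i]
    r = C
    while True:
        interference = sum(math.ceil(r / Tj) * Cj
                           for Cj, Tj, _ in tasks[:i])
        r_new = C + interference
        if r_new == r:
            return r          # fixed point reached
        if r_new > D:
            return None       # response time exceeds the deadline
        r = r_new

for i, (_, _, D) in enumerate(TASKS):
    r = response_time(i, TASKS)
    print(f"task {i}: R = {r} (D = {D})",
          "OK" if r is not None and r <= D else "MISS")
```

    In a time-partitioned setting, the interference term must additionally account for the windows in which the task's partition is not scheduled, which is where the thesis' analysis departs from this basic test.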

    Inventory-Location Problems for Spare Parts with Time-Based Service Constraints

    This thesis studies an inventory-location problem faced by a large manufacturer and supplier of small to medium-sized aircraft and their spare parts. The sale of aftermarket spare parts is a major source of revenue for the company, but it is a complex industry with many unique challenges. The original problem is a multi-echelon network design problem, which is decomposed into a facility location problem with consolidated shipping challenges and a spare parts inventory problem. The facility location problem is solved a number of times under different scenarios to give the company's leadership team access to a wide range of feasible solutions. The model itself is an important contribution to industry, allowing the company to solve a spare parts network problem that will guide strategic decision-making for years. The chapter serves as a case study on how to accurately model a large and complicated service parts supply chain through the use of mathematical programming, part aggregation and scenarios. The company used the scenario results to redesign its spare parts distribution network, opening new hubs and consolidating existing service centres. The cost savings associated with this project are estimated at $4.4 million USD annually. The proposed solution does increase the burden of freight charges on the company's customers compared to the current network, but the operational savings are expected to more than outweigh the increase in customer shipping costs. The project team thus recommended that the company consider subsidizing customer freight costs to offset the expected cost increase, resulting in lower costs for both the company and its customers. This solution could set a new standard for aircraft spare parts suppliers to follow.

    Considered next is an integrated inventory-location problem with service requirements based on the first problem. Customer demand is Poisson distributed and the service levels are time-based, leading to highly non-linear, stochastic service constraints and a non-linear, mixed-integer optimization problem. Unlike previous works in the literature that propose approximations for the non-linear constraints, this thesis presents an exact solution methodology using logic-based Benders decomposition. The problem is decomposed to separate the location decisions in the master problem from the inventory decisions in the subproblem. A new family of valid cuts is proposed and the algorithm is shown to converge to optimality. This is the first attempt to solve this type of problem exactly. The thesis then presents a new restrict-and-decompose scheme to further decompose the Benders master problem by part. The approach is tested on industry instances as well as random instances. The second algorithm is able to solve industry instances with up to 60 parts within two hours of computation time, while the maximum number of parts attempted in the literature so far is five.

    Finally, this thesis studies a second integrated inventory-location problem under different assumptions. While the previous model uses the backorder assumption for unfilled demand and a strict time window, the third model uses the lost-sales assumption and a soft time window for satisfying time-sensitive customer demand. The restrict-and-decompose scheme is applied with little modification, the main difference being the calculation of the Benders cut coefficients. The algorithm is again guaranteed to converge to optimality. Compared against previous work under the same assumptions, it delivers better solutions and certificates of optimality on a large set of test problems.
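    To give a feel for the time-based service constraint at a single stocking location, here is a minimal sketch under standard spare-parts assumptions: a one-for-one (S-1, S) base-stock policy with Poisson demand and a deterministic replenishment lead time, where a demand is filled within the window w iff at most S-1 demands occurred in the preceding L-w time units. This is the textbook building block, not the thesis' full subproblem, and lam, L, w and the target below are assumed toy numbers.

```python
# Smallest base-stock level S meeting a time-based service target
# under Poisson demand and an (S-1, S) one-for-one policy.
import math

def poisson_cdf(k, mu):
    """P(N <= k) for N ~ Poisson(mu); 0 when k < 0."""
    return sum(math.exp(-mu) * mu**i / math.factorial(i)
               for i in range(k + 1))

def service_within_window(S, lam, L, w):
    """P(demand filled within w), for window w < lead time L."""
    return poisson_cdf(S - 1, lam * (L - w))

lam, L, w, target = 0.8, 10.0, 2.0, 0.95   # demands/day, days, days
S = 0
while service_within_window(S, lam, L, w) < target:
    S += 1
print("smallest base-stock level:", S,
      "service:", round(service_within_window(S, lam, L, w), 4))
```

    The integrality of S and the highly non-linear dependence of this probability on the stock level are exactly what make the joint location-inventory constraints non-linear and motivate the logic-based Benders cuts described above.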