438 research outputs found

    Network Target Coordination for Design Optimization of Decomposed Systems

    A complex engineered system is often decomposed into a number of subsystems that interact with one another and together produce results not obtainable by the subsystems alone. Effective coordination of the interdependencies shared among these subsystems is critical to fulfilling the stakeholder expectations and technical requirements of the original system. Past research has shown that different coordination methods achieve different solution accuracies and computational efficiencies when solving a decomposed system, so addressing these coordination decisions can lead to improved complex system design. This dissertation studies coordination methods for two types of decomposition structures, hierarchical and nonhierarchical. For coordinating hierarchically decomposed systems, linear and proximal cutting plane methods are applied based on augmented Lagrangian relaxation and analytical target cascading (ATC). Three nonconvex, nonlinear design problems are used to verify the numerical performance of the proposed coordination method, and the results are compared to traditional update schemes of subgradient-based algorithms. The results suggest that the cutting plane methods can significantly improve the solution accuracy and computational efficiency of hierarchically decomposed systems. In addition, a biobjective optimization method is used to capture optimality and feasibility; its numerical performance is verified by solving an analytical mass allocation problem. For coordinating nonhierarchically decomposed complex systems, network target coordination (NTC) is developed by modeling the distributed subsystems as agents in a network. To realize parallel computing of the subsystems, NTC via a consensus alternating direction method of multipliers (ADMM) is applied, which eliminates the master problem required by most distributed coordination methods. In NTC, the consensus is computed using a local update scheme, providing the potential for an asynchronous solution process. The numerical performance of NTC is verified using a geometric programming problem and two engineering problems.
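    The consensus-ADMM coordination described above can be sketched in a few lines. The toy setup below (quadratic subsystem objectives, three agents, a fixed penalty parameter, and a plain averaging consensus step) is an illustrative assumption, not the dissertation's formulation; it only shows how local subsystem solves and a master-free consensus update alternate.

```python
# Consensus-ADMM sketch: N "subsystem" agents agree on a shared variable
# without a central master problem. The quadratic objectives
# f_i(x) = 0.5 * (x - a_i)**2, penalty rho and agent count are hypothetical.
import numpy as np

a = np.array([1.0, 3.0, 8.0])      # per-subsystem targets (illustrative)
rho = 1.0                          # ADMM penalty parameter
x = np.zeros_like(a)               # local copies of the shared design variable
u = np.zeros_like(a)               # scaled dual variables
z = 0.0                            # consensus value

for _ in range(100):
    # local subsystem solves (parallelizable):
    # x_i = argmin f_i(x) + (rho/2) * (x - z + u_i)**2, closed form here
    x = (a + rho * (z - u)) / (1.0 + rho)
    # consensus update: a plain average of x_i + u_i; an asynchronous variant
    # would replace this global mean with a local/networked averaging scheme
    z = np.mean(x + u)
    # dual updates measure each agent's disagreement with the consensus
    u = u + x - z

print(z, a.mean())                 # z converges to the minimizer of sum f_i
```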

    On the Computational Cost and Complexity of Stochastic Inverse Solvers

    The goal of this paper is to provide a starting point for investigations into a largely underdeveloped area of research: the computational cost analysis of complex stochastic strategies for solving parametric inverse problems. This area has two main components: solving global optimization problems and solving forward problems (to evaluate the misfit function that we try to minimize). For the first component, we pay particular attention to genetic algorithms with heuristics and to multi-deme algorithms that can be modeled as ergodic Markov chains. We recall a simple method for evaluating the first hitting time for a single-deme algorithm and extend it to the case of HGS, a multi-deme hierarchic strategy, focusing on the case in which at least the demes in the leaves are well tuned. Finally, we express the problems of finding local and global optima in terms of classic complexity theory. We formulate the natural result that finding a local optimum of a function is an NP-complete task, and we argue that finding a global optimum is a much harder, DP-complete, task; finding all global optima is possibly an even harder, #P-hard, task. Regarding the second component of solving parametric inverse problems (i.e., the forward problem solvers), we discuss the computational cost of hp-adaptive finite element solvers and their rates of convergence with respect to the increasing number of degrees of freedom. The presented results provide a useful taxonomy of problems and methods for studying the computational cost and complexity of various strategies for solving inverse parametric problems. We stress, however, that our goal was not to deliver detailed evaluations for particular algorithms applied to particular inverse problems, but rather to identify possible ways of obtaining such results.
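    For an ergodic finite Markov chain model of a single-deme algorithm, the expected first hitting time of a success state satisfies a small linear system. The sketch below uses a hypothetical three-state transition matrix purely for illustration; it is not a model taken from the paper.

```python
# Expected first hitting time of a "success" state for a finite Markov chain:
# with Q the transition matrix restricted to the remaining states, the vector
# of expected hitting times solves (I - Q) t = 1. The 3-state chain below is
# a hypothetical illustration only.
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])    # state 2 plays the role of "optimum found"
target = 2
keep = [s for s in range(P.shape[0]) if s != target]
Q = P[np.ix_(keep, keep)]
t = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
print(dict(zip(keep, t)))          # expected number of steps to reach state 2
```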

    Parallel Lagrangian particle transport : application to respiratory system airways

    This thesis is focused on particle transport in the context of high-performance computing (HPC) in its widest sense, from the numerical modeling to the physics involved, including parallelization and post-processing. The main goal is to obtain a general framework for understanding all the requirements and characteristics of particle transport in the Lagrangian frame of reference. Although the idea is to provide a model suitable for any engineering application that involves particle transport simulation, this thesis uses the respiratory system as its framework. This means that all the simulations are focused on this topic, including the benchmarks for testing, verifying and optimizing the results. Other applications, such as combustion, ocean residuals, or automotive, have also been simulated by other researchers using the same numerical model proposed here; however, they have not been included, in the interest of allowing the project to advance in a specific direction and of keeping the structure and comprehension of this work clear. Human airways and respiratory system simulations are of special interest for medical purposes. Indeed, human airways can differ significantly between individuals, which complicates the study of drug delivery efficiency, deposition of polluted particles, etc., using classic in-vivo or in-vitro techniques. In other words, flow and deposition results may vary depending on the geometry of the patient, and simulations allow customized studies using specific geometries. With the help of new computational techniques, in the near future it may be possible to optimize nasal drug delivery, surgery or other medical studies for each individual patient through more personalized medicine. In summary, this thesis prioritizes numerical modeling, wide usability, performance, parallelization, and the study of the physics that affects particle transport. In addition, the simulation of the respiratory system should yield interesting biological and medical results; however, these results will only be interpreted from a purely numerical point of view.
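    The Lagrangian frame of reference means each particle's position and velocity are integrated individually within a carrier flow. The sketch below uses Stokes drag, an assumed analytic shear flow and explicit Euler time stepping; the flow field, particle properties and integrator are illustrative assumptions, not the thesis's solver, which couples particles to a CFD flow solution.

```python
# Lagrangian particle tracking sketch: integrate dx/dt = v and
# dv/dt = (u_f(x) - v) / tau (Stokes drag). Carrier flow, particle properties
# and the explicit Euler integrator are illustrative assumptions; a real
# simulation interpolates the fluid velocity from a CFD solution.
import numpy as np

def fluid_velocity(x):
    """Hypothetical 2D carrier flow: a simple shear in the x-direction."""
    return np.array([1.0 + 0.5 * x[1], 0.0])

rho_p, d_p, mu = 1000.0, 5e-6, 1.8e-5     # particle density, diameter, air viscosity
tau = rho_p * d_p**2 / (18.0 * mu)        # Stokes relaxation time (~8e-5 s here)

x = np.array([0.0, 0.01])                 # initial position [m]
v = np.zeros(2)                           # initial particle velocity [m/s]
dt, n_steps = 1e-5, 2000                  # dt kept well below 2*tau for stability

for _ in range(n_steps):
    v = v + dt * (fluid_velocity(x) - v) / tau
    x = x + dt * v

print(x, v)                               # final position and velocity
```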

    Python framework for HP adaptive discontinuous Galerkin methods for two phase flow in porous media

    In this paper we present a framework for solving two-phase flow problems in porous media. The discretization is based on a Discontinuous Galerkin method and includes local grid adaptivity and local choice of polynomial degree. The method is implemented using the new Python frontend Dune-FemPy for the open-source framework Dune. The code used for the simulations is made available as a Jupyter notebook and can be used through a Docker container. We present a number of time stepping approaches ranging from a classical IMPES method to a fully coupled implicit scheme. The implementation of the discretization is very flexible, allowing us to test different formulations of the two-phase flow model and different adaptation strategies.
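    The IMPES splitting mentioned above (implicit pressure, explicit saturation) can be illustrated independently of Dune-FemPy. The following minimal 1D finite-volume sketch uses quadratic relative permeabilities, Dirichlet pressure boundaries and illustrative parameters; it is not the paper's DG implementation, only the time-stepping structure.

```python
# Minimal 1D IMPES sketch (implicit pressure, explicit saturation) for
# incompressible two-phase flow with quadratic relative permeabilities.
# Illustrative only: finite volumes instead of DG, hypothetical parameters.
import numpy as np

n = 100
dx = 1.0 / n
mu_w, mu_n = 1.0, 5.0                  # wetting / non-wetting viscosities
p_in, p_out = 1.0, 0.0                 # Dirichlet pressures at x=0 and x=1
S = np.zeros(n)                        # water saturation (water injected at x=0)

lam_w = lambda s: s**2 / mu_w          # phase mobilities
lam_n = lambda s: (1.0 - s)**2 / mu_n
f_w = lambda s: lam_w(s) / (lam_w(s) + lam_n(s))     # fractional flow

dt, steps = 2e-4, 1000
for _ in range(steps):
    lam = lam_w(S) + lam_n(S)          # total mobility per cell
    T = np.empty(n + 1)                # face transmissibilities (include 1/dx)
    T[1:-1] = 2.0 * lam[:-1] * lam[1:] / (lam[:-1] + lam[1:]) / dx
    T[0], T[n] = 2.0 * lam[0] / dx, 2.0 * lam[-1] / dx  # half-cell boundary faces

    # implicit pressure step: assemble and solve the tridiagonal system
    A = np.zeros((n, n)); b = np.zeros(n)
    for i in range(n):
        A[i, i] = T[i] + T[i + 1]
        if i > 0:
            A[i, i - 1] = -T[i]
        else:
            b[i] += T[0] * p_in
        if i < n - 1:
            A[i, i + 1] = -T[i + 1]
        else:
            b[i] += T[n] * p_out
    p = np.linalg.solve(A, b)

    # total Darcy flux at faces (its sign drives the upwinding below)
    u = np.empty(n + 1)
    u[1:-1] = T[1:-1] * (p[:-1] - p[1:])
    u[0], u[n] = T[0] * (p_in - p[0]), T[n] * (p[-1] - p_out)

    # explicit saturation step: first-order upwind fractional flow
    S_left = np.concatenate(([1.0], S))        # pure water at the inflow face
    S_right = np.append(S, S[-1])              # zero-gradient outflow
    F = np.where(u > 0.0, f_w(S_left), f_w(S_right)) * u
    S = np.clip(S - dt / dx * (F[1:] - F[:-1]), 0.0, 1.0)
```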

    Python Framework for HP Adaptive Discontinuous Galerkin Method for Two Phase Flow in Porous Media

    In this paper we present a framework for solving two-phase flow problems in porous media. The discretization is based on a Discontinuous Galerkin method and includes local grid adaptivity and local choice of polynomial degree. The method is implemented using the new Python frontend Dune-FemPy for the open-source framework Dune. The code used for the simulations is made available as a Jupyter notebook and can be used through a Docker container. We present a number of time stepping approaches ranging from a classical IMPES method to a fully coupled implicit scheme. The implementation of the discretization is very flexible, allowing us to test different formulations of the two-phase flow model and adaptation strategies. Comment: Keywords: DG, hp-adaptivity, Two-phase flow, IMPES, Fully implicit, Dune, Python, Porous media. 28 pages, 9 figures, various code snippets.

    Schnelle Löser für partielle Differentialgleichungen (Fast Solvers for Partial Differential Equations)

    [No abstract available]

    Cloud-efficient modelling and simulation of magnetic nano materials

    Scientific simulations are rarely attempted in a cloud due to the substantial performance costs of virtualization. Considerable communication overheads, intolerable latencies, and inefficient hardware emulation are the main reasons why this emerging technology has not been fully exploited. On the other hand, progress in computing infrastructure nowadays depends strongly on the development of prospective storage media, for which efficient micromagnetic simulations play a vital role in future memory design. This thesis addresses both topics by merging micromagnetic simulations with the latest OpenStack cloud implementation, providing a time- and cost-effective alternative to expensive computing centers. However, many challenges have to be addressed before a high-performance cloud platform emerges as a solution for problems in the micromagnetic research community. First, the best solver candidate has to be selected and further improved, particularly in the parallelization and process communication domain. Second, a three-level cloud communication hierarchy needs to be recognized and each segment adequately addressed. The required steps include breaking the VM isolation to activate the host's shared memory, tuning and optimizing the cloud network stack, and integrating efficient communication hardware. The project concludes with practical measurements and confirmation of the successfully implemented simulation in an open-source cloud environment. As a result, the renewed Magpar solver runs for the first time in the OpenStack cloud, using ivshmem for shared-memory communication. Extensive measurements also proved the effectiveness of our solutions, yielding results from sixty percent to over ten times better than those achieved in the standard cloud.
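    A large part of the communication work above comes down to measuring point-to-point latency and bandwidth between VMs over different transports (standard virtualized networking versus ivshmem-backed shared memory). The mpi4py ping-pong below is a generic sketch of such a measurement; the message sizes and repetition counts are arbitrary choices, and this is not the thesis's benchmark suite.

```python
# Point-to-point ping-pong between two MPI ranks; run with:
#   mpirun -n 2 python pingpong.py
# Message sizes and repetition count are arbitrary, illustrative choices.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
reps = 1000

for size in (8, 1024, 64 * 1024, 1024 * 1024):      # message size in bytes
    buf = np.zeros(size, dtype=np.uint8)
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    t1 = MPI.Wtime()
    if rank == 0:
        print(f"{size:8d} B  round trip {(t1 - t0) / reps * 1e6:8.1f} us")
```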

    Teadusarvutuse algoritmide taandamine hajusarvutuse raamistikele

    Scientific computing uses computers and algorithms to solve problems in various sciences such as genetics, biology and chemistry. Often the goal is to model and simulate natural phenomena that would otherwise be very difficult to study in real environments. For example, it is possible to create a model of a solar storm or a meteor impact and run computer simulations to assess the impact of the disaster on the environment. The more sophisticated and accurate the simulations are, the more computing power is required. It is often necessary to use a large number of computers, all working simultaneously on a single problem; such computations are called parallel or distributed computing. However, creating distributed computing programs is complicated and requires much more time and resources, because it is necessary to synchronize the work done simultaneously on different computers. A number of software frameworks have been created to simplify this process by automating part of the distributed programming. The goal of this research was to assess the suitability of such distributed computing frameworks for complex scientific computing algorithms. The results showed that existing frameworks differ greatly from one another and that none of them is suitable for all types of algorithms: some frameworks are suitable only for simple algorithms, while others are not suitable when the data does not fit into the computers' memory. Choosing the most appropriate distributed computing framework for an algorithm can therefore be a very complex task, because it requires studying and applying the existing frameworks. To address this problem, a Dynamic Algorithms Modelling Application (DAMA) was created, which is able to simulate the implementation of an algorithm in different distributed computing frameworks. DAMA helps to estimate which distributed framework is the most appropriate for a given algorithm, without actually implementing the algorithm in any of the available frameworks. The main contribution of this study is simplifying the adoption of distributed computing frameworks for researchers who are not yet familiar with them, which should save significant time and resources, as it is no longer necessary to study each of the available distributed computing frameworks in detail.
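    A simulation of the kind DAMA performs ultimately rests on a cost model that combines an algorithm's compute profile with per-framework overheads. The sketch below is a deliberately simplified, hypothetical model: the framework profiles, overhead numbers and algorithm parameters are placeholders for illustration only and do not reproduce DAMA's actual model or measurements.

```python
# Toy cost model for comparing distributed frameworks for an iterative
# algorithm: runtime ~ startup + iterations * (compute / workers + sync).
# All framework profiles and numbers are hypothetical placeholders and do
# not reproduce DAMA's actual model or measurements.

frameworks = {
    "disk-based MapReduce-like": {"startup": 15.0, "per_iter_sync": 8.0},
    "in-memory iterative":       {"startup": 5.0,  "per_iter_sync": 0.5},
    "MPI-style message passing": {"startup": 1.0,  "per_iter_sync": 0.05},
}

def estimate_runtime(total_compute_s, iterations, workers, profile):
    """Estimated wall-clock time for one run of the modelled algorithm."""
    per_iter = total_compute_s / (iterations * workers) + profile["per_iter_sync"]
    return profile["startup"] + iterations * per_iter

# hypothetical algorithm: 200 iterations, 4000 s of sequential compute, 32 workers
for name, profile in frameworks.items():
    t = estimate_runtime(4000.0, 200, 32, profile)
    print(f"{name:28s} ~ {t:7.1f} s")
```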