
    Domänen parallele Maschinen

    A computational model is introduced that abstracts and idealizes computers with access to fragment shaders. While the set of functions computable by this model remains the same, running times can be drastically reduced through parallelization compared to conventional models. Some of the algorithms designed for the model can be approximated using fragment shaders. An automatic transcompilation scheme generates fragment shader programs from a description in a high-level language.
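
    As a rough illustration only (the thesis's model and transcompiler are not reproduced here), the per-element pattern that makes fragment-shader execution embarrassingly parallel can be sketched in CUDA as below; the kernel and array names are invented for this sketch.

        // Hypothetical illustration (not from the thesis): a fragment shader evaluates the
        // same small program independently for every output pixel. This CUDA kernel mimics
        // that pattern with one thread per output element.
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void per_element_map(const float *in, float *out, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;   // one "fragment" per thread
            if (i < n) {
                out[i] = in[i] * in[i] + 1.0f;               // stand-in for the shader body
            }
        }

        int main() {
            const int n = 1 << 20;
            float *in, *out;
            cudaMallocManaged(&in, n * sizeof(float));
            cudaMallocManaged(&out, n * sizeof(float));
            for (int i = 0; i < n; ++i) in[i] = (float)i;

            per_element_map<<<(n + 255) / 256, 256>>>(in, out, n);
            cudaDeviceSynchronize();

            printf("out[42] = %f\n", out[42]);
            cudaFree(in);
            cudaFree(out);
            return 0;
        }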

    FiCoS: A fine-grained and coarse-grained GPU-powered deterministic simulator for biochemical networks.

    Mathematical models of biochemical networks can greatly facilitate the comprehension of the mechanisms at the basis of cellular processes, as well as the formulation of hypotheses that can be tested by means of targeted laboratory experiments. However, two issues might hamper the achievement of fruitful outcomes. On the one hand, detailed mechanistic models can involve hundreds or thousands of molecular species and their intermediate complexes, as well as hundreds or thousands of chemical reactions, a situation generally occurring in rule-based modeling. On the other hand, the computational analysis of a model typically requires the execution of a large number of simulations for its calibration, or to test the effect of perturbations. As a consequence, the computational capabilities of modern Central Processing Units can easily be exceeded, potentially making the modeling of biochemical networks an ineffective effort. With the aim of overcoming the limitations of current state-of-the-art simulation approaches, in this paper we present FiCoS, a novel "black-box" deterministic simulator that effectively realizes both a fine-grained and a coarse-grained parallelization on Graphics Processing Units. In particular, FiCoS exploits two different integration methods, namely the Dormand-Prince and Radau IIA methods, to efficiently solve both non-stiff and stiff systems of coupled Ordinary Differential Equations. We tested the performance of FiCoS against different deterministic simulators, by considering models of increasing size and by running analyses with increasing computational demands. FiCoS was able to dramatically speed up the computations, by up to 855×, proving to be a promising solution for the simulation and analysis of large-scale models of complex biological processes.
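
    A minimal sketch of what "coarse-grained" GPU parallelization means here: one thread integrates one parameterization of the model, so a whole batch of simulations (e.g., for calibration) runs concurrently. This is not FiCoS code; a fixed-step explicit Euler scheme on a toy decay ODE dx/dt = -k*x stands in for the adaptive Dormand-Prince / Radau IIA integrators, and all names are invented.

        // Hedged sketch (not FiCoS): each thread integrates one parameter set of a toy ODE.
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void simulate_batch(const float *k, float *x_final,
                                       int n_sims, float x0, float dt, int steps) {
            int sim = blockIdx.x * blockDim.x + threadIdx.x;
            if (sim >= n_sims) return;

            float x = x0;
            for (int s = 0; s < steps; ++s) {
                x += dt * (-k[sim] * x);   // explicit Euler step of dx/dt = -k*x
            }
            x_final[sim] = x;
        }

        int main() {
            const int n_sims = 1024;       // e.g., one parameter set per calibration sample
            float *k, *x_final;
            cudaMallocManaged(&k, n_sims * sizeof(float));
            cudaMallocManaged(&x_final, n_sims * sizeof(float));
            for (int i = 0; i < n_sims; ++i) k[i] = 0.001f * (i + 1);

            simulate_batch<<<(n_sims + 127) / 128, 128>>>(k, x_final, n_sims, 1.0f, 0.01f, 1000);
            cudaDeviceSynchronize();

            printf("x_final[0] = %f\n", x_final[0]);
            cudaFree(k);
            cudaFree(x_final);
            return 0;
        }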

    Plasma Physics Computations on Emerging Hardware Architectures

    This thesis explores the potential of emerging hardware architectures to increase the impact of high performance computing in fusion plasma physics research. For next-generation tokamaks like ITER, realistic simulations and data-processing tasks will become significantly more demanding of computational resources than at current facilities. It is therefore essential to investigate how emerging hardware such as the graphics processing unit (GPU) and field-programmable gate array (FPGA) can provide the required computing power for large data-processing tasks and large-scale simulations in plasma-physics-specific computations. The use of emerging technology is investigated in three areas relevant to nuclear fusion: (i) a GPU is used to process the large amount of raw data produced by the synthetic aperture microwave imaging (SAMI) plasma diagnostic, (ii) a GPU is used to accelerate the solution of the Bateman equations, which model the evolution of nuclide number densities when subjected to neutron irradiation in tokamaks, and (iii) an FPGA-based dataflow engine is applied to compute massive matrix multiplications, a feature of many computational problems in fusion and more generally in scientific computing. The GPU data processing code for SAMI provides a 60x acceleration over the previous IDL-based code, enabling inter-shot analysis in future campaigns and the data-mining (and therefore analysis) of stored raw data from previous MAST campaigns. The feasibility of porting the whole Bateman solver to a GPU system is demonstrated and verified against the industry-standard FISPACT code. Finally, a dataflow approach to matrix multiplication is shown to provide a substantial acceleration compared to CPU-based approaches and, whilst not performing as well as a GPU for this particular problem, is shown to be much more energy efficient. Emerging hardware technologies will no doubt continue to provide a positive contribution in terms of performance to many areas of fusion research, and several exciting new developments are on the horizon with tighter integration of GPUs and FPGAs with their host central processing units. This should not only improve performance and reduce data transfer bottlenecks, but also allow more user-friendly programming tools to be developed. All of this has implications for ITER and beyond, where emerging hardware technologies will provide the key to delivering the computing power required to handle the large amounts of data and more realistic simulations demanded by these complex systems.
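
    For context, the Bateman-type activation/depletion equations referred to above take, in one common form (generic notation, not necessarily the thesis's):

        \frac{dN_i}{dt} \;=\; \sum_{j \neq i} \left(\lambda_{j\to i} + \sigma_{j\to i}\,\phi\right) N_j \;-\; \left(\lambda_i + \sigma_i\,\phi\right) N_i

    where N_i are the nuclide number densities, \lambda the decay constants, \sigma the reaction cross sections, and \phi the neutron flux; the resulting coupled linear ODE system is what the GPU-accelerated solver integrates.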

    Efficient and High-Quality Rendering of Higher-Order Geometric Data Representations

    In computer-aided design (CAD), industrial products are designed using a virtual 3D model. A CAD model typically consists of curves and surfaces in a parametric representation, in most cases non-uniform rational B-splines (NURBS). The same representation is also used for the analysis, optimization, and presentation of the model. In each phase of this process, different visualizations are required to provide appropriate user feedback. Designers work with illustrative and realistic renderings, engineers need a comprehensible visualization of the simulation results, and usability studies or product presentations benefit from using a 3D display. However, the interactive visualization of NURBS models and corresponding physical simulations is a challenging task because of the computational complexity and the limited graphics hardware support. This thesis proposes four novel rendering approaches that improve the interactive visualization of CAD models and their analysis. The presented algorithms exploit the latest graphics hardware capabilities to advance the state of the art in terms of quality, efficiency, and performance. In particular, two approaches describe the direct rendering of the parametric representation without precomputed approximations and time-consuming pre-processing steps. New data structures and algorithms are presented for the efficient partition, classification, tessellation, and rendering of trimmed NURBS surfaces, as well as the first direct isosurface ray-casting approach for NURBS-based isogeometric analysis. The other two approaches introduce the versatile concept of programmable order-independent semi-transparency for the illustrative and comprehensible visualization of depth-complex CAD models, and a novel method for the hybrid reprojection of opaque and semi-transparent image information to accelerate stereoscopic rendering. Both approaches are also applicable to standard polygonal geometry, which makes them a contribution to the computer graphics and virtual reality research communities as well. The evaluation is based on real-world NURBS-based models and simulation data. The results show that rendering can be performed directly on the underlying parametric representation at interactive frame rates and with subpixel-precise image results. The computational costs of additional visualization effects, such as semi-transparency and stereoscopic rendering, are reduced to maintain interactive frame rates. The benefit of this performance gain was confirmed by quantitative measurements and a pilot user study.
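
    For reference, a NURBS surface of degrees p and q, the representation that the direct-rendering approaches evaluate per pixel, is commonly written as (standard textbook form; the thesis's notation may differ):

        S(u,v) \;=\; \frac{\sum_{i=0}^{n}\sum_{j=0}^{m} N_{i,p}(u)\,N_{j,q}(v)\,w_{i,j}\,\mathbf{P}_{i,j}}{\sum_{i=0}^{n}\sum_{j=0}^{m} N_{i,p}(u)\,N_{j,q}(v)\,w_{i,j}}

    with control points P_{i,j}, weights w_{i,j}, and B-spline basis functions N_{i,p}, N_{j,q}; trimming restricts (u,v) to a subset of the parameter domain, which is what the partition and classification steps above have to resolve efficiently.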

    Transmission and Distribution Co-Simulation and Applications

    As the penetration of flexible loads and distributed energy resources (DERs) increases in distribution networks, demand dispatch schemes need to consider the effects of large-scale load control on distribution grid reliability. Thus, we need demand dispatch schemes that actively ensure network admissibility with respect to distribution grid operational constraints and still deliver valuable market services. In this context, this work develops and evaluates the performance of a new network-admissible version of the device-driven demand dispatch scheme called Packetized Energy Management (PEM). Specifically, this work develops and investigates the live grid constraint-based coordinator and metrics for performance evaluation. The effects of grid measurements on a practically sized, 2,522-bus, unbalanced distribution test feeder with 3,000 flexible kW-scale loads operating under the network-admissible PEM scheme are discussed. The results demonstrate the value of live grid measurements in managing distribution grid operational constraints while PEM effectively delivers frequency regulation services. Increased penetration of flexible loads and DERs on the distribution system (DS) will lead to increased interaction between transmission and distribution (T&D) system operators to ensure reliable operation of the interconnected power grids, and the aggregate control actions at the LV/MV grid level will have a significant impact on the transmission system (TS). A need therefore arises to study the coupling of transmission and distribution systems, and this work develops a co-simulation platform based on a decoupled approach to study integrated T&D systems collectively. Additionally, the results of the decoupled method applied to T&D power flow co-simulation are benchmarked against a collaborator-developed unified solution, which confirms the accuracy of the decoupled approach. The existing approaches in the literature for studying the steady-state interaction of TS and DS have several shortcomings, including concerns about scalability, solve time, and computational memory usage. In this regard, this work develops comprehensive mathematical models of T&D systems for integrated power flow analysis and brings advancements from the algorithmic perspective to efficiently solve large-scale T&D circuits. Further, the models are implemented on a low-cost CPU-GPU hybrid computing platform to further speed up the computations. The efficacy of the proposed models, solution algorithms, and their hardware implementation is demonstrated on more than 13,000 nodes using an integrated system that consists of the 2,383-bus Polish TS and multiple instances of the medium-voltage part of the IEEE 8,500-node DS. Case studies demonstrate that the proposed approach is scalable and can provide more than a tenfold speedup in the solve time of very large-scale integrated T&D systems. Overall, this work develops a practically applicable and efficient demand dispatch coordinator able to integrate DERs into the DS while ensuring that grid operational constraints are not violated. Additionally, the dynamics that such integration introduces in the DS and that propagate to the TS are studied collectively using integrated T&D co-simulation, and in a final step, a mathematically comprehensive model tackles the scalability, solve-time, and computational memory usage concerns for large-scale integrated T&D co-simulation and applications.
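
    One common way to realize a decoupled T&D power flow coupling (a generic sketch under that assumption; the exact formulation used in this work may differ) is a fixed-point exchange of boundary variables between a transmission solver f_T and a distribution solver f_D:

        V_b^{(k)} = f_T\!\left(S_b^{(k-1)}\right), \qquad S_b^{(k)} = f_D\!\left(V_b^{(k)}\right), \qquad \text{repeat until } \bigl\lVert V_b^{(k)} - V_b^{(k-1)} \bigr\rVert \le \varepsilon

    where V_b is the boundary-bus voltage passed to the distribution feeder and S_b is the aggregate complex power injection returned to the transmission model, with the feeder represented as an equivalent load at the boundary bus.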

    Plasma propulsion simulation using particles

    This perspective paper provides an overview of particle-in-cell / Monte Carlo collision models applied to different plasma-propulsion configurations and scenarios, from electrostatic (E x B and pulsed arc) devices to electromagnetic (RF inductive, helicon, electron cyclotron resonance) thrusters, with an emphasis on plasma plumes and their interaction with the satellite. The most important elements related to the modeling of plasma-wall interaction are also presented. Finally, the paper reports new progress in particle-in-cell computational methodology, in particular regarding computational techniques for accelerating multi-dimensional simulations and plasma chemistry Monte Carlo modules for molecular and alternative propellants.
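
    A minimal sketch of the particle-push stage of a PIC cycle, which is what GPU acceleration typically parallelizes over macro-particles. This is not code from the paper: the field gather, Monte Carlo collisions, and the field solve are omitted, a simple 1D leapfrog push under a fixed electric field stands in, and all names are invented.

        // Hedged sketch (not from the paper): one thread advances one macro-particle.
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void push_particles(float *x, float *v, const float *E,
                                       int n_particles, float qm, float dt) {
            int p = blockIdx.x * blockDim.x + threadIdx.x;
            if (p >= n_particles) return;

            v[p] += qm * E[p] * dt;   // accelerate: dv = (q/m) E dt
            x[p] += v[p] * dt;        // drift:      dx = v dt
        }

        int main() {
            const int n = 100000;
            float *x, *v, *E;
            cudaMallocManaged(&x, n * sizeof(float));
            cudaMallocManaged(&v, n * sizeof(float));
            cudaMallocManaged(&E, n * sizeof(float));
            for (int i = 0; i < n; ++i) { x[i] = 0.0f; v[i] = 0.0f; E[i] = 1.0e3f; }

            const float qm = -1.7588e11f;   // electron charge-to-mass ratio [C/kg]
            for (int step = 0; step < 100; ++step)
                push_particles<<<(n + 255) / 256, 256>>>(x, v, E, n, qm, 1.0e-12f);
            cudaDeviceSynchronize();

            printf("x[0] = %e m\n", x[0]);
            cudaFree(x); cudaFree(v); cudaFree(E);
            return 0;
        }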

    Parallelization of the ADI method exploring vector computing in GPUs

    Integrated master's dissertation in Informatics Engineering. The 2D convection-diffusion equation is a well-known problem in scientific simulation that often uses a direct method to solve a system of N linear equations, which requires N³ operations. This problem can be solved using a more efficient computational method, known as the alternating direction implicit (ADI) method. It solves systems of N linear equations 2N times, with N operations each, implemented in two steps: one solving row by row, the other column by column. Each of these N-operation solves is fully independent within its step, which opens an opportunity for an embarrassingly parallel solution. The method also exploits the way matrices are stored in computer memory, either row-major or column-major, by splitting each iteration in two. The major bottleneck of this method is solving the systems of linear equations. These systems can be described by tridiagonal matrices, since the non-zero elements always lie on the three main diagonals, and algorithms tailored for tridiagonal matrices can significantly improve performance. These can be sequential (e.g. the Thomas algorithm) or parallel (e.g. cyclic reduction, CR, and parallel cyclic reduction, PCR). Current vector extensions in conventional scalar processing units, such as x86-64 and ARM devices, require the vector elements to be in contiguous memory locations to avoid performance penalties. To overcome these limitations in dot products, several approaches are proposed and evaluated in this work, both on general-purpose processing units and on specific accelerators, namely NVidia GPUs. Profiling the code execution on a server based on x86-64 devices showed that the ADI method needs a combination of CPU computing power and memory transfer speed. This is best shown on a server based on the Intel manycore device, KNL, where the algorithm scales until the memory bandwidth is no longer enough to feed all 64 computing cores. A dual-socket server based on 16-core Xeon Skylake devices, with AVX-512 vector support, proved to be a better choice: the algorithm executes in less time and scales better. The introduction of GPU computing to further improve execution performance (together with other optimisation techniques, namely a different thread scheme and shared memory to speed up the process) showed better results for larger grid sizes (above 32Ki x 32Ki). The CUDA development environment also showed better performance than OpenCL in most cases. The largest difference was observed with a hybrid CR-PCR solver, where the OpenCL code displayed a major performance improvement compared to CUDA. Even with this speedup, the best average time for the ADI method across all tested configurations on an NVidia GPU was obtained using CUDA on the most recent GPU available (with a Pascal architecture) and CR as the auxiliary method.
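
    A hedged sketch of the parallelization idea described above (not the dissertation's code): in a GPU ADI half-step, each row (or column) yields an independent tridiagonal system, so one thread can run the sequential Thomas algorithm on its own system while many systems are solved concurrently. Arrays hold n_sys systems of size n, stored contiguously per system; all names are invented for this sketch.

        // Batched Thomas algorithm: one thread solves one tridiagonal system.
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void thomas_batch(const float *a, const float *b, const float *c,
                                     float *d, float *cp, int n_sys, int n) {
            int s = blockIdx.x * blockDim.x + threadIdx.x;
            if (s >= n_sys) return;

            const float *as = a + s * n, *bs = b + s * n, *cs = c + s * n;
            float *ds = d + s * n, *cps = cp + s * n;

            // forward elimination
            cps[0] = cs[0] / bs[0];
            ds[0] = ds[0] / bs[0];
            for (int i = 1; i < n; ++i) {
                float m = bs[i] - as[i] * cps[i - 1];
                cps[i] = cs[i] / m;
                ds[i] = (ds[i] - as[i] * ds[i - 1]) / m;
            }
            // back substitution: solution overwrites d
            for (int i = n - 2; i >= 0; --i)
                ds[i] -= cps[i] * ds[i + 1];
        }

        int main() {
            const int n_sys = 2, n = 4;
            float *a, *b, *c, *d, *cp;
            cudaMallocManaged(&a, n_sys * n * sizeof(float));
            cudaMallocManaged(&b, n_sys * n * sizeof(float));
            cudaMallocManaged(&c, n_sys * n * sizeof(float));
            cudaMallocManaged(&d, n_sys * n * sizeof(float));
            cudaMallocManaged(&cp, n_sys * n * sizeof(float));
            for (int i = 0; i < n_sys * n; ++i) {   // simple diagonally dominant systems
                a[i] = -1.0f; b[i] = 4.0f; c[i] = -1.0f; d[i] = 1.0f;
            }
            thomas_batch<<<1, 32>>>(a, b, c, d, cp, n_sys, n);
            cudaDeviceSynchronize();
            printf("solution of system 0: %f %f %f %f\n", d[0], d[1], d[2], d[3]);
            cudaFree(a); cudaFree(b); cudaFree(c); cudaFree(d); cudaFree(cp);
            return 0;
        }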