
    Multilevel communication optimal LU and QR factorizations for hierarchical platforms

    This study focuses on the performance of two classical dense linear algebra algorithms, the LU and QR factorizations, on multilevel hierarchical platforms. We first introduce a new model called Hierarchical Cluster Platform (HCP), which encapsulates the characteristics of such platforms. The focus is on reducing the communication requirements of the studied algorithms at each level of the hierarchy. Communication lower bounds are therefore extended to the HCP model. We then introduce multilevel LU and QR algorithms tailored for those platforms and provide a detailed performance analysis. We also provide a set of numerical experiments and performance predictions demonstrating the need for such algorithms on large platforms.

    Evaluation of Distributed Programming Models and Extensions to Task-based Runtime Systems

    High Performance Computing (HPC) has always been a key foundation for scientific simulation and discovery. More recently, the training of deep learning models has further accelerated the demand for computational power and lower-precision arithmetic. In this era following the end of Dennard scaling, when Moore's Law seemingly still holds true to a lesser extent, it is no coincidence that HPC systems are equipped with multi-core CPUs and a variety of hardware accelerators that are all massively parallel. Coupled with interconnect speed improvements that lag behind the growth in computational power, the current state of HPC systems is heterogeneous and extremely complex. This has been heralded as a great challenge to software stacks and their ability to extract performance from these systems, but also as a great opportunity to innovate at the programming-model level, to explore different approaches and propose new solutions. With usability, portability, and performance as the main factors to consider, this dissertation first evaluates the ability of some widely used parallel programming models (MPI, MPI+OpenMP, and task-based runtime systems) to manage the load imbalance among the processes computing the LU factorization of a large dense matrix stored in the Block Low-Rank (BLR) format. Next, I proposed a number of optimizations and implemented them in PaRSEC's Dynamic Task Discovery (DTD) model, including user-level graph trimming and direct Application Programming Interface (API) calls to perform data broadcast operations, to further extend the limits of the Sequential Task Flow (STF) model. On the other hand, the Parameterized Task Graph (PTG) approach in PaRSEC is the most scalable approach for many different applications, so I then explored the possibility of combining the algorithmic approach of Communication-Avoiding (CA) methods with the communication-computation overlap provided by runtime systems, using a 2D five-point stencil as the test case (a minimal serial reference for this stencil is sketched below). This broad evaluation and extension of programming models highlights the ability of task-based runtime systems to achieve scalable performance and portability on contemporary heterogeneous HPC systems. Finally, I summarized the profiling capability of the PaRSEC runtime system and demonstrated, with a use case, its important role in identifying performance bottlenecks that lead to optimizations.
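    For reference, the 2D five-point stencil test case mentioned above updates each grid point from its four neighbors. The following is a minimal serial sketch (illustrative only; the communication-avoiding and task-based runtime machinery discussed in the dissertation is deliberately omitted, and the grid size and boundary condition are our choices):

```python
# Minimal serial reference for a 2-D five-point stencil sweep
# (Jacobi-style update; illustrative only, no CA/runtime machinery).
import numpy as np

def five_point_sweep(u):
    """Return one sweep in which each interior point is replaced by the
    average of its four neighbors (homogeneous Laplace update)."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

u = np.zeros((64, 64))
u[0, :] = 1.0                 # simple Dirichlet boundary condition
for _ in range(100):          # iterate the sweep
    u = five_point_sweep(u)
```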

    Multi-dimensional gyrokinetic parameter studies based on eigenvalue computations

    Plasma microinstabilities, which can be described in the framework of the linear gyrokinetic equations, are routinely computed in the context of stability analyses and transport predictions for magnetic confinement fusion experiments. The GENE code, which solves the gyrokinetic equations, has been coupled to the SLEPc package for an efficient iterative, matrix-free, and parallel computation of rightmost eigenvalues. This setup is presented, including the preconditioner which is necessary for the newly implemented Jacobi-Davidson solver. The fast computation of instabilities at a single parameter set is exploited to make parameter scans viable, that is, to compute the solution at many points in the parameter space. Several issues related to parameter scans are discussed, such as an efficient parallelization over parameter sets and subspace recycling. © 2011 Elsevier B.V. All rights reserved. E. Romero and J.E. Roman were supported by the Spanish Ministry of Science and Innovation (MICINN) under project number TIN2009-07519.
    Merz, F.; Kowitz, C.; Romero Alcalde, E.; Román Moltó, J.E.; Jenko, F. (2012). Multi-dimensional gyrokinetic parameter studies based on eigenvalue computations. Computer Physics Communications 183(4):922-930. https://doi.org/10.1016/j.cpc.2011.12.018
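    The matrix-free coupling described above follows the standard SLEPc pattern of wrapping an application's matrix-vector product as a shell matrix. A minimal slepc4py sketch of that pattern (the diagonal stand-in operator, problem size, and disabled preconditioner are our assumptions for illustration; the actual GENE coupling supplies its own operator and the preconditioner mentioned above):

```python
# Matrix-free rightmost-eigenvalue computation with SLEPc, in the spirit
# of the GENE/SLEPc coupling (illustrative sketch, not the GENE code).
import numpy as np
from petsc4py import PETSc
from slepc4py import SLEPc

n = 1000

class ShellOp:
    """Shell (matrix-free) operator: only a mat-vec is provided."""
    def mult(self, mat, x, y):
        # Stand-in diagonal operator; a real coupling would evaluate the
        # application's linear operator here.
        y.setArray(-np.arange(1, n + 1) * x.getArray(readonly=True))

A = PETSc.Mat().createPython([n, n], ShellOp())
A.setUp()

eps = SLEPc.EPS().create()
eps.setOperators(A)
eps.setProblemType(SLEPc.EPS.ProblemType.NHEP)        # non-Hermitian problem
eps.setWhichEigenpairs(SLEPc.EPS.Which.LARGEST_REAL)  # rightmost eigenvalues
eps.setType(SLEPc.EPS.Type.JD)                        # Jacobi-Davidson solver
# A shell matrix exposes no entries, so disable the default preconditioner;
# GENE instead provides an application-specific one.
eps.getST().getKSP().getPC().setType(PETSc.PC.Type.NONE)
eps.setFromOptions()
eps.solve()

for i in range(eps.getConverged()):
    print(eps.getEigenvalue(i))
```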

    Task-based Runtime Optimizations Towards High Performance Computing Applications

    The last decades have witnessed a rapid improvement of computational capabilities in high-performance computing (HPC) platforms thanks to hardware technology scaling. HPC architectures benefit from mainstream hardware advances, with many-core systems, a deep hierarchical memory subsystem, non-uniform memory access, and an ever-increasing gap between computational power and memory bandwidth. This has necessitated continuous adaptations across the software stack to maintain high hardware utilization. In this HPC landscape of potentially million-way parallelism, task-based programming models associated with dynamic runtime systems are becoming more popular; they foster developer productivity at extreme scale by abstracting the underlying hardware complexity. In this context, this dissertation highlights how a software bundle powered by a task-based programming model can address the heterogeneous workloads engendered by HPC applications, here data redistribution, geostatistical modeling, and 3D unstructured mesh deformation. Data redistribution reshuffles data to optimize some objective for an algorithm; the objective can be multi-dimensional, such as improving computational load balance or decreasing communication volume or cost, with the ultimate goal of increasing efficiency and therefore reducing the time-to-solution of the algorithm. Geostatistical modeling, one of the prime motivating applications for exascale computing, is a technique for predicting desired quantities from geographically distributed data, based on statistical models and the optimization of parameters. Meshing the deformable contour of moving 3D bodies is an expensive operation that poses huge computational challenges in fluid-structure interaction (FSI) applications. Therefore, this dissertation proposes Redistribute-PaRSEC, ExaGeoStat-PaRSEC, and HiCMA-PaRSEC to efficiently tackle these HPC applications at extreme scale; they are evaluated on multiple HPC clusters, including AMD-based, Intel-based, and Arm-based CPU systems and an IBM-based multi-GPU system. This multidisciplinary work emphasizes the need for runtime systems to go beyond their primary responsibility of task scheduling on massively parallel hardware in order to serve the next generation of scientific applications.

    Modular Fast Direct Analysis Using Non-Radiating Local-Global Solution Modes

    This dissertation proposes a modular fast direct (MFD) analysis method for a class of problems involving a large fixed platform region and a smaller, variable design region. A modular solution algorithm is obtained by first decomposing the problem geometry into platform and design regions. The two regions are effectively detached from one another using basic equivalence concepts. Equivalence principles allow the total system model to be constructed in terms of independent interaction modules associated with the platform and design regions. These modules include interactions with the equivalent surface that bounds the design region. This dissertation discusses how to analyze (fill and factor) each of these modules separately and how to subsequently compose the solution to the original system from the separately analyzed modules. The focus of this effort is on surface integral equation formulations of electromagnetic scattering from conductors and dielectrics. In order to treat large problems, it is necessary to work with sparse representations of the underlying system matrix and other related matrices. Fortunately, a number of such representations are available. In the following, we will primarily use the adaptive cross approximation (ACA) to fill the multilevel simply sparse method (MLSSM) representation of the system matrix. The MLSSM provides a sparse representation that is similar to the multilevel fast multipole method. Solutions to the linear systems obtained using the modular analysis strategies described above are computed using direct methods based on the local-global solution (LOGOS) method. In particular, the LOGOS factorization provides a data-sparse factorization of the MLSSM representation of the system matrix. In addition, the LOGOS solver also provides an approximate sparse factorization of the inverse of the system matrix. The availability of the inverse eases the development of the MFD method. Because the behavior of the LOGOS factorization is critical to the development of the proposed MFD method, a significant part of this dissertation is devoted to providing additional analyses, improvements, and characterizations of LOGOS-based direct solution methods. These further developments of the LOGOS factorization algorithms and their application to the development of the MFD method comprise the most significant contributions of this dissertation.
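    Since the ACA plays a central role in filling the MLSSM representation, a generic sketch may help fix ideas. The following is a minimal partially pivoted ACA on an implicitly given dense block (an illustrative textbook variant, not the dissertation's MLSSM-integrated implementation; the function names and the stopping heuristic are our choices):

```python
# Generic partially pivoted adaptive cross approximation (ACA) sketch.
# Builds A ~= U @ V from O(rank) rows/columns of an implicit m-by-n block.
import numpy as np

def aca(get_row, get_col, m, n, tol=1e-6, max_rank=64):
    """get_row(i)/get_col(j) return dense 1-D slices of the block."""
    us, vs = [], []
    used, i, frob2 = {0}, 0, 0.0
    for _ in range(min(max_rank, m, n)):
        # Residual of row i with respect to the current approximation.
        r = get_row(i) - sum(u[i] * v for u, v in zip(us, vs))
        j = int(np.argmax(np.abs(r)))
        if abs(r[j]) < 1e-14:              # row already well approximated
            break
        v = r / r[j]
        # Residual of column j supplies the new left factor.
        u = get_col(j) - sum(v2[j] * u2 for u2, v2 in zip(us, vs))
        us.append(u)
        vs.append(v)
        # Heuristic Frobenius-norm estimate for the stopping test.
        frob2 += (u @ u) * (v @ v)
        if np.linalg.norm(u) * np.linalg.norm(v) <= tol * np.sqrt(frob2):
            break
        # Next row pivot: largest entry of u among unused rows.
        i = next((int(k) for k in np.argsort(-np.abs(u)) if k not in used), -1)
        if i < 0:
            break
        used.add(i)
    return np.array(us).T, np.array(vs)

# Usage on an explicit matrix A: U, V = aca(lambda i: A[i, :],
#                                           lambda j: A[:, j], *A.shape)
```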

    Resiliency in numerical algorithm design for extreme scale simulations

    This work is based on the seminar titled 'Resiliency in Numerical Algorithm Design for Extreme Scale Simulations', held March 1-6, 2020, at Schloss Dagstuhl, which was attended by all the authors. Advanced supercomputing is characterized by very high computation speeds at the cost of an enormous amount of resources. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation, and (2) how do we best design the algorithms and software to meet these requirements? While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in the case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications, and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically. Peer reviewed. Article signed by 36 authors: Emmanuel Agullo, Mirco Altenbernd, Hartwig Anzt, Leonardo Bautista-Gomez, Tommaso Benacchio, Luca Bonaventura, Hans-Joachim Bungartz, Sanjay Chatterjee, Florina M. Ciorba, Nathan DeBardeleben, Daniel Drzisga, Sebastian Eibl, Christian Engelmann, Wilfried N. Gansterer, Luc Giraud, Dominik Göddeke, Marco Heisig, Fabienne Jezequel, Nils Kohl, Xiaoye Sherry Li, Romain Lion, Miriam Mehl, Paul Mycek, Michael Obersteiner, Enrique S. Quintana-Ortí, Francesco Rizzi, Ulrich Rüde, Martin Schulz, Fred Fung, Robert Speck, Linda Stals, Keita Teranishi, Samuel Thibault, Dominik Thönnes, Andreas Wagner, and Barbara Wohlmuth.
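    The quoted figures are consistent with a short back-of-the-envelope check (assuming a sustained exascale rate of about 10^18 flop/s and an energy price of roughly 0.10 Euro per kWh; both rates are assumptions used only for this illustration):

```latex
% Back-of-the-envelope check of the figures quoted above.
% Assumed: 20 MW sustained for 48 h, ~0.10 Euro/kWh, ~10^{18} flop/s.
\begin{align*}
E &= 20\,\mathrm{MW} \times 48\,\mathrm{h} = 960\,\mathrm{MWh}
     \approx 10^{6}\,\mathrm{kWh},\\
\text{cost} &\approx 10^{6}\,\mathrm{kWh} \times 0.10\,\mathrm{EUR/kWh}
     = 10^{5}\,\mathrm{EUR} \approx 100\mathrm{k\ Euro},\\
N_{\text{flop}} &\approx 10^{18}\,\mathrm{flop/s} \times 48 \times 3600\,\mathrm{s}
     \approx 1.7 \times 10^{23}\,\mathrm{flop}.
\end{align*}
```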

    Performance and Energy Optimization of the Iterative Solution of Sparse Linear Systems on Multicore Processors

    In this dissertation we target the solution of large sparse systems of linear equations using preconditioned iterative methods based on Krylov subspaces. Specifically, we focus on ILUPACK, a library that offers multilevel ILU preconditioners for the effective solution of sparse linear systems. The increase in the number of equations and the introduction of new HPC architectures motivate the development of a parallel version of ILUPACK that optimizes both execution time and energy consumption on current multicore architectures and on clusters of nodes built from this type of technology. Thus, the main goal of this thesis is the design, implementation, and evaluation of parallel, energy-efficient iterative sparse linear system solvers for multicore processors as well as recent manycore accelerators such as the Intel Xeon Phi. To fulfill this objective, we optimize ILUPACK by exploiting task parallelism via OmpSs and MPI, and we develop an automatic framework to detect energy inefficiencies.
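    As a generic illustration of the ILU-preconditioned Krylov pattern that ILUPACK implements, the following sketch uses SciPy's single-level ILU as a stand-in for ILUPACK's multilevel preconditioner (this is not ILUPACK's API; the model Poisson system and tolerances are our assumptions):

```python
# ILU-preconditioned Krylov solve, sketched with SciPy (requires SciPy >= 1.12
# for the rtol keyword). SciPy's spilu stands in for ILUPACK's multilevel ILU.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Model sparse SPD system: 2-D Poisson matrix via Kronecker products.
n = 64
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU factorization wrapped as a preconditioner M ~= A^{-1}.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.cg(A, b, M=M, rtol=1e-8)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(b - A @ x))
```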

    Book of Abstracts of the Sixth SIAM Workshop on Combinatorial Scientific Computing

    Book of Abstracts of CSC14, edited by Bora Uçar. The Sixth SIAM Workshop on Combinatorial Scientific Computing, CSC14, was organized at the École Normale Supérieure de Lyon, France, on 21-23 July 2014. This two-and-a-half-day event marked the sixth in a series that started ten years earlier in San Francisco, USA. The focus of the CSC14 workshop was on combinatorial mathematics and algorithms in high-performance computing, broadly interpreted. The workshop featured three invited talks, 27 contributed talks, and eight poster presentations. The three invited talks focused on two fields of research: randomized algorithms for numerical linear algebra and network analysis. The contributed talks and posters targeted the modeling, analysis, bisection, clustering, and partitioning of graphs, applied in the context of networks, sparse matrix factorizations, iterative solvers, fast multipole methods, automatic differentiation, high-performance computing, and linear programming. The workshop was held at the premises of the LIP laboratory of ENS Lyon and was generously supported by the LABEX MILYON (ANR-10-LABX-0070, Université de Lyon, within the program ''Investissements d'Avenir'' ANR-11-IDEX-0007 operated by the French National Research Agency), and by SIAM.

    Accelerating advanced preconditioning methods on hybrid architectures

    A large number of problems in diverse areas of science and engineering involve the solution of large-scale sparse systems of linear equations. In many of these scenarios they are, moreover, a computational bottleneck, and their efficient implementation has therefore motivated an enormous number of scientific works. For many years, direct methods based on Gaussian elimination have been the reference tool for solving such systems, but the dimension of the problems addressed today poses serious challenges to most of these algorithms, considering their memory requirements, computation time, and implementation complexity. Propelled by advances in preconditioning techniques, iterative methods have become more reliable, and thus emerge as alternatives to direct methods, offering high-quality solutions at a lower computational cost. However, these advances are often specific to a particular problem, or endow the preconditioners with such complexity that their application to diverse problems becomes impractical in terms of runtime and memory consumption. A common response to this situation is the use of High Performance Computing strategies, since the sustained development of hardware platforms allows ever more operations to be executed simultaneously. A clear example of this evolution are platforms composed of multi-core processors and hardware accelerators such as Graphics Processing Units (GPUs). In particular, GPUs have become powerful parallel processors, capable of integrating thousands of cores at reasonable prices and energy consumption. For these reasons, GPUs are now a hardware platform of great importance for science and engineering, and their efficient use is crucial to achieving good performance in most applications. This thesis focuses on the use of GPUs to accelerate the solution of sparse systems of linear equations using iterative methods preconditioned with modern techniques. In particular, it works on ILUPACK, which offers implementations of the most important iterative methods and provides an interesting, modern multilevel ILU preconditioner. In this work, versions of the preconditioner and of the methods included in the package are developed that can exploit data parallelism through the use of GPUs without affecting the numerical properties of the preconditioner. Additionally, the use of GPUs is enabled and analyzed in existing parallel versions based on task parallelism for shared- and distributed-memory platforms. The results show a significant improvement in the execution time of the methods addressed, as well as the ability to solve large-scale problems efficiently.

    Parallel Jacobi-Type Algorithms for the Singular and the Generalized Singular Value Decomposition

    In this thesis, a hierarchically blocked one-sided Jacobi algorithm for the singular value decomposition (SVD) is presented. The algorithm targets both single and multiple graphics processing units (GPUs). The blocking structure reflects the levels of the GPU's memory hierarchy. To this end, a family of parallel pivot strategies on the GPU's shared address space has been developed, but the strategies are applicable to inter-node communication as well, with GPU nodes, CPU nodes, or, in general, any NUMA nodes. Unlike common hybrid approaches, the presented algorithm in a single-GPU setting needs a CPU for control purposes only, while utilizing the GPU's resources to the fullest extent permitted by the hardware. When required by the problem size, the algorithm, in principle, scales to an arbitrary number of GPU nodes. The scalability is demonstrated by a more than twofold speedup for sufficiently large matrices on a four-GPU system versus a single GPU. The subsequent part of the thesis describes how to modify the two-sided Hari–Zimmermann algorithm for the computation of the generalized eigendecomposition of a symmetric matrix pair $(A, B)$, where $B$ is positive definite, into an implicit algorithm that computes the generalized singular value decomposition (GSVD) of a pair $(F, G)$. In addition, blocking and parallelization techniques for accelerating both the CPU and the GPU computation are presented, with the GPU approach following the Jacobi SVD algorithm from the first part of the thesis. For triangular matrix pairs of moderate size, numerical tests show that the double-precision sequential pointwise algorithm is several times faster than the established DTGSJA algorithm in LAPACK, while the accuracy is slightly better, especially for the small generalized singular values. Cache-aware blocking increases the performance even further. As with the one-sided Jacobi-type (G)SVD algorithms in general, the presented algorithm is almost perfectly parallelizable and scalable on shared-memory machines, where the speedup depends almost solely on the number of cores used. A distributed-memory variant, intended for huge matrices that do not fit into a single NUMA node, as well as a GPU variant, are also sketched. The thesis concludes with the affirmative answer to the question whether one-sided Jacobi-type algorithms can be an efficient and scalable choice for computing the (G)SVD of dense matrices on massively parallel CPU and GPU architectures. Unless otherwise noted by the inline citations or implied by the context, this thesis is an overview of original research results, most of which have already been published in [55, 58]. The author's contributions are the one-sided Jacobi-type GPU algorithms for the ordinary and the generalized SVD, of which the latter has not yet been published, as well as the parallelization technique and some implementation details of the one-sided Hari–Zimmermann CPU algorithm for the GSVD. The rest is joint work with Sanja and Saša Singer.

    The singular value decomposition (SVD) is one of the most useful matrix decompositions, for both theoretical and practical purposes. Every matrix $G \in \mathbb{C}^{m \times n}$ (for notational simplicity one usually assumes $m \geq n$; otherwise the SVD of $G^\ast$ is sought) can be factored as a product of three matrices, $G = U \Sigma V^\ast$, where $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ are unitary and $\Sigma \in \mathbb{R}^{m \times n}$ is 'diagonal' with nonnegative diagonal elements. Besides this form of the decomposition, the reduced form $G = U' \Sigma' V^\ast$ is also used, in which $U' \in \mathbb{C}^{m \times n}$ has orthonormal columns and $\Sigma' = \operatorname{diag}(\sigma_1, \dots, \sigma_n)$, with $\sigma_i \geq 0$ for $i = 1, \dots, n$, is now truly diagonal. Outside mathematics, in 'real' life, the SVD is used in image processing (reconstruction, compression, sharpening) and signal processing, with applications in medicine (CT, i.e., computed tomography; MR, i.e., magnetic resonance), in the geosciences, materials science, crystallography, security (face recognition), information retrieval from large data collections (for example, LSI, i.e., latent semantic indexing), and elsewhere. Most applications exploit the property that the best approximation of a given matrix by a matrix of fixed (low) rank can be read off directly from the SVD. It seems easier to say where the SVD is not used than where it is, so it is often called the 'Swiss Army knife of matrix decompositions'¹.

    The first beginnings of the SVD reach back to the 19th century, when the well-known mathematicians Eugenio Beltrami, Camille Jordan, James Joseph Sylvester, Erhard Schmidt, and Hermann Weyl established its existence and basic properties (see [74] for details). The pioneers of numerical SVD computation are Ervand George Kogbetliantz, and Gene Golub and William Kahan, who developed the (bidiagonal QR) algorithm that dominated the scene of numerical SVD computation for twenty-five years. At that time, Stanford University (where Gene Golub worked) was the 'headquarters' of the development of SVD applications. In the early nineties, the 'SVD headquarters' moved to Europe, after the publication of the paper [21] on the relative accuracy of computing the eigenvalues of symmetric positive definite matrices by the Jacobi method. Indeed, the problem of computing the eigendecomposition of a positive definite matrix and the problem of computing the SVD are closely related. If the singular value decomposition of a full column rank matrix $G \in \mathbb{C}^{m \times n}$, $G = U \Sigma V^\ast$, is known, where $G$ is a factor of a matrix $A$, $A = G^\ast G$, then $A$ is Hermitian and positive definite, and $A = G^\ast G = V \Sigma^T U^\ast U \Sigma V^\ast = V \operatorname{diag}(\sigma_1^2, \dots, \sigma_n^2) V^\ast$. The matrix $V$ is the matrix of eigenvectors, and the eigenvalues are the squares of the singular values. Hence, algorithms for computing eigenvalues in which the transformation acts on the matrix $A$ two-sidedly (from both the left and the right) can be written implicitly, so that the transformation acts either from the right on the factor $G$ or from the left on $G^\ast$. In his doctoral dissertation, Drmač [24] carried out a further analysis, not only of the singular value decomposition computed by the Jacobi algorithm, but also of the generalized singular value decomposition (GSVD). Based on that research, the Jacobi-rotation-based SVD entered the LAPACK numerical library.

    In the meantime, almost all computers have become multi-core, and modern clusters for scientific computing consist of several thousand to several hundred thousand multi-core processors², so standard sequential algorithms are no longer appropriate for numerical computation. Parallel algorithms are therefore being rapidly developed that also respect the hierarchical memory structure of these machines, aiming to exploit the fast cache memory by processing subproblems in blocks, to which BLAS-3 operations can be applied. The idea of blocking is to apply as many numerical operations as possible (typically, cubic in the matrix dimension) to data held in fast memory. Furthermore, the advent of graphics processing units intended for scientific computing, as well as of other highly parallel numerical accelerators (e.g., the Intel Xeon Phi), has opened a new line of research that respects their massive parallelism, with each individual thread being feeble compared to a central processor.

    The generalized singular value decomposition (GSVD) was introduced by Van Loan [77] and by Paige and Saunders [62]. The definition of the GSVD is somewhat less well known. Given matrices $F \in \mathbb{C}^{m \times n}$ and $G \in \mathbb{C}^{p \times n}$, with $K = \begin{bmatrix} F \\ G \end{bmatrix}$ and $k = \operatorname{rank}(K)$, there exist unitary matrices $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{p \times p}$ and a matrix $X \in \mathbb{C}^{k \times n}$ such that $F = U \Sigma_F X$ and $G = V \Sigma_G X$, with $\Sigma_F \in \mathbb{R}^{m \times k}$ and $\Sigma_G \in \mathbb{R}^{p \times k}$. The elements of $\Sigma_F$ and $\Sigma_G$ are zero, except for the diagonal entries, which are real and nonnegative. Moreover, $\Sigma_F$ and $\Sigma_G$ satisfy $\Sigma_F^T \Sigma_F + \Sigma_G^T \Sigma_G = I$. The ratios $(\Sigma_F)_{ii} / (\Sigma_G)_{ii}$ are the generalized singular values of the pair $(F, G)$. If $G$ has full column rank, then $\operatorname{rank}(K) = n$ and the generalized singular values are finite numbers. If the pair $(F, G)$ is real, all matrices in the decomposition are real; from here on, for simplicity, the pair is assumed real. It can be shown that, if $k = n$, the relation between the GSVD and the reduced form of the CS (cosine-sine) decomposition (see, e.g., [26]) can be exploited for its computation (see, e.g., the papers by Stewart [72, 73] and Sutton [75]). Like the SVD, the generalized singular value decomposition has applications in many areas, such as the comparative analysis of genome-related data [1], the incomplete singular boundary element method [47], and ionospheric tomography [9], among many others. The GSVD of a matrix pair $(F, G)$ is closely related to the Hermitian generalized eigenvalue problem for the pair $(A, B) := (F^\ast F, G^\ast G)$, so methods for the simultaneous diagonalization of the pair $(A, B)$ can be modified to compute the GSVD of the pair $(F, G)$.

    In this thesis, a fast and efficient algorithm for computing the generalized singular value decomposition of a real pair $(F, G)$ is developed. The method is based on an algorithm for computing the generalized eigendecomposition, $Ax = \lambda Bx$, $x \neq 0$ (1), where $A$ and $B$ are symmetric matrices and the pair is definite, i.e., there exists a real constant $\mu$ such that the matrix $A - \mu B$ is positive definite. The method was published in 1960 by Falk and Langemeyer [31, 32] in a little-known handbook. When a parallel version of the method was tested, it turned out to suffer from the problem of the growing scale of the matrix columns during the orthogonalization process. Note also that positive definiteness of the matrix $B$ immediately implies that the pair $(A, B)$ is definite. Almost a decade after Falk and Langemeyer, Katharina Zimmermann, in her doctoral dissertation [81], roughly sketched a method for solving the generalized eigenvalue problem (1) when $B$ is positive definite. Gose [34] proposed an optimal non-cyclic pivot strategy and proved global convergence of the original method. Hari, in his dissertation [37], prompted by Zimmermann's sketch of the method, derived the algorithm and proved its global and quadratic convergence under cyclic pivot strategies. Quadratic convergence of the original Falk–Langemeyer method was proved in 1988 by Slapničar in his master's thesis, four years after the convergence proof for the Hari–Zimmermann method. In [37], Hari established the key connection between the Hari–Zimmermann and Falk–Langemeyer variants of the algorithm: if the matrix $B$ is scaled on both sides by a diagonal matrix $D$ so that its diagonal elements equal 1 before each annihilation step of the Falk–Langemeyer method, the Hari–Zimmermann method is obtained. The new method thus had the key property that the columns of at least one matrix are normalized, which turned out to be extremely important for the success of the algorithm (avoiding rescaling of the matrices during the orthogonalization process).

    The GSVD can also be computed in other ways. In [26], Drmač derived an algorithm for computing the GSVD of a pair $(F, G)$ when $G$ has full column rank. The algorithm reduces the problem to a single matrix and then applies the one-sided Jacobi SVD algorithm; it computes the generalized singular values with small relative error. The reduction to a single matrix consists of three steps: scaling the columns of $F$ and $G$, a QR factorization with column pivoting of the already scaled $G$, and, finally, the solution of a triangular linear system with $k$ right-hand sides. The last two steps are sequential and very hard to parallelize. The idea of using the implicit (i.e., one-sided) Falk–Langemeyer method for the GSVD of a pair $(F, G)$ with $G$ of full column rank appears in the dissertation of Annette Deichmöller [17]; however, no comparisons of that method with other methods are given there. On the other hand, the algorithm for computing the GSVD in the LAPACK library (the routine xGGSVD) is a modified Kogbetliantz algorithm (see Paige [61]) with mandatory preprocessing (see Bai and Demmel [5]). The preprocessing algorithm [6] transforms a given matrix pair $(F_0, G_0)$ into a pair $(F, G)$ such that $F$ and $G$ are upper triangular and $G$ is nonsingular. If it is known in advance that $G$ has full column rank, both the implicit Falk–Langemeyer and the implicit Hari–Zimmermann method will work without preprocessing. If $F$ and $G$ are tall and skinny, a QR factorization of both matrices will speed up the orthogonalization. If $G$ is not of full rank, the same preprocessing as in LAPACK should be used, since full column rank of $G$ guarantees positive definiteness of $B := G^T G$.

    In this thesis, a hierarchical, blocked one-sided algorithm for computing the SVD is also developed. The algorithm can run on a multiprocessor machine, on computing clusters, and on one or more graphics processing units; the principle of operation is similar on all architectures. The algorithm that runs on GPUs is described separately. The blocking structure reflects the levels of the GPU memory hierarchy. To achieve this, families of parallel pivot strategies for the shared memory of GPUs have been developed. With the addition of a mapping to processes, these strategies can also be used as communication strategies among computing nodes (be they GPUs, CPU cores, or NUMA nodes). The developed algorithm is not hybrid, i.e., it uses the central processing unit for control purposes only, while all computation takes place on the GPU. When the problem size requires it, the algorithm can be scaled to an arbitrary number of GPUs. For sufficiently large matrices, scalability is demonstrated by a speedup of more than two on four GPUs relative to one. The second part of the thesis describes a way of modifying the two-sided Hari–Zimmermann algorithm for computing the generalized eigendecomposition of a matrix pair $(A, B)$, where both matrices are symmetric and $B$ is positive definite, into an implicit algorithm that computes the GSVD of a pair $(F, G)$, where $(A, B) := (F^T F, G^T G)$. It is further shown how to block the algorithm and how to parallelize it, both for standard processors and for GPUs. For triangular matrix pairs of moderate dimension (about 5000), it is shown that the sequential, non-blocked double-precision algorithm proposed in the thesis is already several tens of times faster than the LAPACK routine DTGSJA, with somewhat better accuracy, especially for the small generalized singular values. Cache-aware blocking speeds the algorithm up considerably further. It is shown that this algorithm, like the one-sided Jacobi SVD algorithm, is almost perfectly parallelizable and scalable on shared-memory machines, and that its speedup depends almost exclusively on the number of cores used. At the time of testing, the parallelized and blocked Hari–Zimmermann algorithm was more than a hundred times faster than the LAPACK routine DTGSJA with multithreaded BLAS. A distributed-memory variant is intended for huge matrices that do not fit into a single NUMA node, and a GPU variant, very similar to the one-sided Jacobi SVD algorithm, is also sketched. The thesis concludes that these Jacobi-type algorithms are efficient and scalable and are an excellent choice for computing the (G)SVD of dense matrices on massively parallel standard architectures and on graphics processing units. This doctoral dissertation is based on the original scientific papers [55, 58], extended with some new results. The author's contributions in this dissertation are the new parallel (G)SVD algorithms for graphics processing units, the parallelization techniques, and the implementation details of the one-sided Hari–Zimmermann algorithm. The rest is joint work with Sanja Singer and Saša Singer.

    ¹ Diane O'Leary, 2006.
    ² https://www.top500.org
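    The implicit (one-sided) approach described above rests on the relation between the eigendecomposition of $A = G^\ast G$ and the SVD of $G$: the eigenvalues of $A$ are the squared singular values of $G$, with $V$ as the eigenvector matrix. A minimal numerical check of that relation (an illustrative NumPy sketch, unrelated to the thesis's GPU implementation; the matrix size and seed are our choices):

```python
# Check: the eigenvalues of A = G^T G equal the squared singular values
# of G (real case). Illustrative only, not the thesis's Jacobi algorithm.
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 5
G = rng.standard_normal((m, n))        # real, full column rank w.h.p.

U, sigma, Vt = np.linalg.svd(G, full_matrices=False)  # G = U diag(sigma) V^T
A = G.T @ G                            # symmetric positive definite
eigvals = np.linalg.eigvalsh(A)        # ascending order

print(np.allclose(np.sort(sigma**2), eigvals))        # True
```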