20 research outputs found

    Application of clustering analysis and sequence analysis on the performance analysis of parallel applications

    High Performance Computing and Supercomputing is the high-end area of computing science that studies and develops the most powerful computers available. Current supercomputers are extremely complex, and so are the applications that run on them. To take advantage of the huge amount of computing power available, it is strictly necessary to maximize the knowledge we have about how these applications behave and perform. This is the mission of (parallel) performance analysis. In general, performance analysis toolkits offer very simplistic manipulations of the performance data. First-order statistics such as the average or the standard deviation are used to summarize the values of a given performance metric, in some cases hiding interesting facts available in the raw performance data. For this reason, we require Performance Analytics, i.e. the application of Data Analytics techniques in the performance analysis area. This thesis contributes two new techniques to the Performance Analytics field. The first contribution is the application of cluster analysis to detect the computation structure of parallel applications. Cluster analysis is the unsupervised classification of patterns (observations, data items or feature vectors) into groups (clusters). In this thesis we use cluster analysis to group the CPU bursts of a parallel application, the regions of each process in between communication calls or calls to the parallel runtime. The resulting clusters are the different computational trends or phases that appear in the application. These clusters are useful to understand the behaviour of the computation part of the application and to focus the analysis on those phases that present performance issues. We demonstrate that our approach requires clustering algorithms different from those previously used in the area. The second contribution of the thesis is the application of multiple sequence alignment algorithms to evaluate the detected computation structure. Multiple sequence alignment (MSA) is a technique commonly used in bioinformatics to determine the similarities across two or more biological sequences: DNA or proteins. The Cluster Sequence Score we introduce applies an MSA algorithm to evaluate the SPMDiness of an application, i.e. how well its computation structure represents the Single Program Multiple Data (SPMD) paradigm structure. We also use this score in the Aggregative Cluster Refinement, a new clustering algorithm we designed that is able to detect the SPMD phases of an application at fine grain, surpassing the clustering algorithms we used initially. We demonstrate the usefulness of these techniques with three practical uses. The first is an extrapolation methodology able to maximize the performance metrics that characterize the detected application phases using a single application execution. The second is the use of the detected computation structure to speed up a multi-level simulation infrastructure. Finally, we analyse four production-class applications using the computation characterization to study the impact of possible application improvements and portings of the applications to different hardware configurations. In summary, this thesis proposes the use of cluster analysis and sequence analysis to automatically detect and characterize the different computation trends of a parallel application. These techniques provide the developer/analyst with useful insight into the application's performance and ease the understanding of its behaviour. The contributions of the thesis are not limited to the proposal and publication of the techniques themselves, but also include practical uses that demonstrate their usefulness in the analysis task. In addition, the research carried out during these years has produced a production tool for analysing applications' structure, now part of the BSC Tools suite.
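
    As a rough illustration of the clustering step described above, the following is a minimal sketch (not the thesis's actual tool or configuration) that groups CPU bursts with a density-based algorithm over two hypothetical features, duration and instructions per cycle; the feature choice, values and parameters are assumptions for illustration only. A density-based algorithm is a natural fit here because it does not require fixing the number of phases in advance and labels isolated bursts as noise.

```python
# Minimal sketch: grouping CPU bursts into computation phases with DBSCAN.
# The features (duration, IPC) and parameters are illustrative assumptions,
# not the exact configuration used by the thesis or the BSC Tools.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Each row is one CPU burst: [duration in microseconds, instructions per cycle].
bursts = np.array([
    [1200.0, 1.8], [1180.0, 1.7], [1220.0, 1.9],   # one computation trend
    [350.0, 0.6], [340.0, 0.7], [360.0, 0.6],      # another trend
    [5000.0, 0.2],                                  # an outlier burst
])

# Standardise the features so duration does not dominate the distance metric.
features = StandardScaler().fit_transform(bursts)

# Density-based clustering: bursts in dense regions form phases,
# sparse points are labelled -1 (noise / outliers).
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(features)

for burst, label in zip(bursts, labels):
    print(f"duration={burst[0]:7.1f} us  ipc={burst[1]:.2f}  cluster={label}")
```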

    Structural Performance Comparison of Parallel Software Applications

    With the rising complexity of high performance computing systems and their parallel software, performance analysis and optimization have become essential in the development of efficient applications. The comparison of performance data is a key operation required in performance analysis. An analyst may conduct different types of comparisons in order to understand the performance properties of an application. One use case is comparing performance data from multiple measurements. Typical examples of such comparisons are before/after comparisons when applying optimizations or changing code versions. Besides comparing performance between multiple runs, comparing performance characteristics across the parallel execution streams of an application is also essential to detect performance problems. This is typically useful to detect imbalances, outliers, or changing runtime behavior during the execution of an application. While such comparisons are straightforward for the aggregated data in performance profiles, only limited solutions exist for comparing event traces. Trace-based analysis, i.e., the collection of fine-grained information on individual application events with timestamps and application context, has proven to be a powerful technique. The detailed performance information included in event traces makes them very suitable for performance analysis. However, this level of detail also presents a challenge because it implies a large and overwhelming amount of data. Currently, users need to perform manual comparison of event traces, which is extremely challenging and time consuming because of the large volume of detailed data and the need to correctly line up trace events. To fill the gap of missing solutions for the automatic comparison of event traces, this work proposes a set of techniques that automatically align traces. The alignment allows their structural comparison and the highlighting of differences between them. A set of novel metrics provides the user with an objective measure of the differences between traces, both in terms of differences in the event stream and timing differences across events. An additional important aspect of trace-based analysis is the visualization of performance data in event timelines. This has proven to be a powerful approach for the detection of various types of performance problems. However, visualization of large numbers of event timelines quickly hits the limits of the available display resolution. Likewise, identifying performance problems is challenging in the large amount of visualized performance data. To alleviate these problems, this work proposes two new approaches for event timeline visualization. First, novel folding strategies for event timelines facilitate visual scalability while providing powerful overviews of the performance data at the same time. Second, this work presents an effective approach that automatically identifies and highlights several types of performance-critical sections in an application run. This approach identifies the time-dominant functions of an application and subsequently uses them to analyze runtime imbalances throughout the application run. Intuitive visualizations present the resulting runtime variations and guide the analyst to performance hot spots. Evaluations with benchmarks and real-world applications assess all introduced techniques.
The effectiveness of the comparison approaches is demonstrated by showing automatically detected performance issues and structural differences between different versions of applications and across parallel execution streams. Case studies showcase the capabilities of the event timeline visualization techniques by demonstrating scalable performance data visualizations and by detecting performance problems and code inefficiencies in real-world applications.
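
    As a rough illustration of the kind of alignment such a trace comparison builds on, the following sketch aligns two event streams with a simple Needleman-Wunsch-style global alignment; the event names and scoring scheme are illustrative assumptions and do not reproduce the alignment procedure or difference metrics proposed in this work. The gap symbols in the output mark events present in one run but missing in the other.

```python
# Minimal sketch: global alignment of two event sequences (Needleman-Wunsch style).
# Event names and scores are illustrative; real trace comparison uses richer
# event attributes and timing information than this toy scoring scheme.
def align(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    n, m = len(seq_a), len(seq_b)
    # score[i][j] = best alignment score of seq_a[:i] vs seq_b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if seq_a[i-1] == seq_b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    # Traceback to recover the aligned event pairs (gaps shown as '-').
    aligned, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + (match if seq_a[i-1] == seq_b[j-1] else mismatch):
            aligned.append((seq_a[i-1], seq_b[j-1])); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            aligned.append((seq_a[i-1], '-')); i -= 1
        else:
            aligned.append(('-', seq_b[j-1])); j -= 1
    return list(reversed(aligned))

# Two runs of the same code region, where the second run has an extra MPI_Wait.
run1 = ["compute", "MPI_Send", "compute", "MPI_Recv"]
run2 = ["compute", "MPI_Send", "MPI_Wait", "compute", "MPI_Recv"]
for a, b in align(run1, run2):
    print(f"{a:10s} | {b}")
```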

    Parallel computing 2011, ParCo 2011: book of abstracts

    This book contains the abstracts of the presentations at the conference Parallel Computing 2011, 30 August - 2 September 2011, Ghent, Belgium.

    Performance-aware energy optimizations in networks for HPC

    Energy efficiency is an important challenge in the field of High Performance Computing (HPC). High energy requirements not only limit the potential to realize next-generation machines but are also an increasing part of the total cost of ownership of an HPC system. While HPC systems at large are becoming increasingly energy proportional in an effort to reduce energy costs, interconnect links stand out for their inefficiency. Commodity interconnect links remain "always-on", consuming full power even when no data is being transmitted. Although various techniques have been proposed towards energy-proportional interconnects, they are often too conservative or are not focused toward HPC. Aggressive techniques for interconnect energy savings are often not applied to HPC, in particular, because they may incur excessive performance overheads. Any energy-saving technique will only be adopted in HPC if there is no significant impact on performance, which is still the primary design objective. This thesis explores interconnect energy proportionality from a performance perspective. First, a characterization of HPC applications is presented, making a case for the enormous potential for interconnect energy proportionality with HPC applications. Next, an HPC interconnect with on/off based links, modeled after the IEEE Energy Efficient Ethernet protocol, is evaluated. This evaluation, while presenting a relationship between performance impact and energy over HPC applications, also emphasizes the need for performance-focused designs in energy-efficient interconnects. Next, an adaptive mechanism, PerfBound, is presented that saves link energy subject to a bound on application performance overheads. Finally, this approach is applied to an intermediate link power state, in addition to the traditional on and off states. Results of this study, over 15 production HPC applications, show that, compared to current-day always-on HPC interconnects, link energy can be reduced by up to 70%, while application performance overhead is bounded to only 1%.
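
    The following sketch illustrates, under stated assumptions, the general idea of bounding the overhead of an on/off link policy: a link is only powered off when the worst-case wake-up penalty keeps the accumulated overhead within a fraction of the elapsed execution time. The threshold values and the idle-time predictor are hypothetical; this is not PerfBound's actual decision logic.

```python
# Minimal sketch of an on/off link policy bounded by a performance overhead target.
# All thresholds and the idle-time predictor are illustrative assumptions; they do
# not reproduce PerfBound's actual heuristic.
WAKE_UP_LATENCY_US = 5.0       # time penalty paid when a sleeping link must wake up
OVERHEAD_BOUND = 0.01          # allow at most 1% added execution time

def should_power_off(predicted_idle_us, elapsed_compute_us, overhead_so_far_us):
    """Power the link off only if, even in the worst case (a message arrives
    immediately), the accumulated wake-up overhead stays within the bound."""
    worst_case_overhead = overhead_so_far_us + WAKE_UP_LATENCY_US
    budget = OVERHEAD_BOUND * elapsed_compute_us
    # The idle gap must also be longer than the wake-up penalty to save net energy.
    return predicted_idle_us > WAKE_UP_LATENCY_US and worst_case_overhead <= budget

# Example: after 10 ms of execution with 20 us of accumulated overhead,
# a predicted 300 us idle gap is long enough and fits within the 1% budget.
print(should_power_off(predicted_idle_us=300.0,
                       elapsed_compute_us=10_000.0,
                       overhead_so_far_us=20.0))
```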

    Towards instantaneous performance analysis using coarse-grain sampled and instrumented data

    Nowadays, supercomputers deliver an enormous amount of computation power; however, it is well known that applications only reach a fraction of it. One limiting factor is single-processor performance, because it ultimately dictates the overall achieved performance. Performance analysis tools help to locate performance inefficiencies and their nature in order to ultimately improve the application performance. Performance tools rely on two collection techniques to invoke their performance monitors: instrumentation and sampling. Instrumentation refers to injecting performance monitors into concrete application locations, whereas sampling invokes the installed monitors in response to external events. Each technique has its advantages. The measurements obtained through instrumentation are directly associated with the application structure, while sampling provides a simple way to determine the volume of measurements captured. However, the granularity of the measurements that provides valuable insight cannot be determined a priori. Should analysts study the performance of an application for the first time, they may consider using a performance tool, instrumenting every routine or using high-frequency sampling rates to obtain the most detailed results. These approaches frequently lead to large overheads that impact the application performance, thus altering the measurements gathered and, therefore, misleading the analyst. This thesis introduces the folding mechanism, which takes advantage of the repetitiveness found in many applications. The mechanism smartly combines metrics captured through coarse-grain sampling and instrumentation to provide instantaneous metric reports within instrumented regions without perturbing the application execution. To produce these reports, the folding processes metrics from different types of sources: performance and energy counters, source code and memory references. The process depends on their nature: while performance and energy counters represent continuous metrics, the source code and memory references refer to discrete values that point to locations within the application code or address space. This thesis evaluates and validates two fitting algorithms used in different areas to report continuous metrics: a Gaussian interpolation process known as Kriging, and piece-wise linear regressions. The folding also takes advantage of analytical performance models to focus on a small set of performance metrics instead of exploring a myriad of performance counters. The folding also correlates the metrics with the source code using two alternatives: the outcome of the piece-wise linear regressions, and a mechanism inspired by Multiple Sequence Alignment techniques. Finally, this thesis explores the applicability of the folding mechanism to captured memory references to detail which data objects are accessed and how. This thesis proposes an analysis methodology for parallel applications that focuses on describing the most time-consuming computing regions. It is implemented on top of a framework that relies on a previously existing clustering tool and the folding mechanism. To show the usefulness of the methodology and the framework, this thesis includes the discussion of multiple in-production applications analysed for the first time. The discussions include a high level of detail regarding the applications' performance bottlenecks and the code responsible for them. Although many of the analyzed applications were compiled using aggressive compiler optimization flags, the insight obtained from the folding mechanism has led to small code transformations, based on widely known optimization techniques, that have improved performance in some cases. Additionally, this work also describes the power monitoring capabilities of recent processors and discusses the simultaneous performance and energy behavior of a selection of benchmarks and in-production applications.
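
    The following sketch illustrates the folding idea at its simplest: one coarse-grain sample per instance of a repeated, instrumented region is projected onto a normalized instance, so that many sparse samples together yield a dense view of the metric's evolution inside the region. The synthetic data and the binned average (standing in for Kriging or piece-wise linear fitting) are illustrative assumptions.

```python
# Minimal sketch of folding: coarse-grain samples scattered across many instances
# of the same instrumented region are projected onto one normalised instance,
# producing a dense "instantaneous" view. Data and the binned average are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Simulated region instances: (start, end) times of 50 iterations of 10 ms each.
instances = [(i * 10_000.0, (i + 1) * 10_000.0) for i in range(50)]

# One coarse sample per instance, taken at a random point, reading a normalised
# cumulative counter whose growth rate changes halfway through the region.
samples = []
for start, end in instances:
    t = rng.uniform(start, end)
    rel = (t - start) / (end - start)                              # position in the instance
    value = 0.3 * rel if rel < 0.5 else 0.15 + 0.9 * (rel - 0.5)   # synthetic counter value
    samples.append((rel, value))

# Fold: all samples now live in [0, 1]; bin them and average to approximate the
# instantaneous evolution of the metric inside a single region instance.
rel_pos, values = np.array(samples).T
bins = np.linspace(0.0, 1.0, 11)
which = np.digitize(rel_pos, bins) - 1
for b in range(10):
    sel = which == b
    if sel.any():
        print(f"[{bins[b]:.1f}, {bins[b+1]:.1f})  mean metric = {values[sel].mean():.3f}")
```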

    A high-order Boris integrator

    This work introduces the high-order Boris-SDC method for integrating the equations of motion of electrically charged particles in electric and magnetic fields. Boris-SDC relies on a combination of the Boris integrator with spectral deferred corrections (SDC). SDC can be considered as a preconditioned Picard iteration to compute the stages of a collocation method. In this interpretation, inverting the preconditioner corresponds to a sweep with a low-order method. In Boris-SDC, the Boris method, a second-order Lorentz force integrator based on velocity-Verlet, is used as the sweeper/preconditioner. The presented method provides a generic way to extend the classical Boris integrator, which is widely used in essentially all particle-based plasma physics simulations involving magnetic fields, to a high-order method. Stability, convergence order and conservation properties of the method are demonstrated for different simulation setups. Boris-SDC reproduces the expected high order of convergence for a single particle and for the centre of mass of a particle cloud in a Penning trap, and shows good long-term energy stability.
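
    For reference, the classical Boris push that serves as the low-order sweeper can be sketched as a half electric-field kick, a rotation around the magnetic field, and a second half kick followed by the position update. The field values, charge-to-mass ratio and time step below are illustrative, not taken from the paper's experiments.

```python
# Minimal sketch of the classical Boris push (the second-order Lorentz-force
# integrator that Boris-SDC uses as its sweeper). Fields, charge-to-mass ratio
# and time step are illustrative values.
import numpy as np

def boris_step(x, v, E, B, qm, dt):
    """Advance position x and velocity v of one particle by dt.
    qm is the charge-to-mass ratio q/m; E and B are the local fields."""
    # Half acceleration by the electric field.
    v_minus = v + 0.5 * qm * E * dt
    # Rotation around the magnetic field.
    t = 0.5 * qm * B * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    # Second half acceleration, then position drift.
    v_new = v_plus + 0.5 * qm * E * dt
    x_new = x + v_new * dt
    return x_new, v_new

# Example: gyration of a particle in a uniform magnetic field along z.
x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
E = np.zeros(3)
B = np.array([0.0, 0.0, 1.0])
for _ in range(100):
    x, v = boris_step(x, v, E, B, qm=1.0, dt=0.1)
print("position after 100 steps:", x)
```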

    Development of a pattern library and a decision support system for building applications in the domain of scientific workflows for e-Science

    Karastoyanova et al. created eScienceSWaT (eScience SoftWare Engineering Technique), which aims to provide a user-friendly and systematic approach for creating applications for scientific experiments in the domain of e-Science. Even when eScienceSWaT is used, many choices about the scientific experiment model, the IT experiment model and the infrastructure still have to be made. Therefore, a collection of best practices for building scientific experiments is required. Additionally, these best practices need to be connected and organized. Finally, a Decision Support System (DSS) that is based on the best practices and enables decisions about the various choices for e-Science solutions needs to be developed. Hence, various e-Science applications are examined in this thesis. Best practices are recognised by abstracting from the identified problem-solution pairs in the e-Science applications. Knowledge and best practices from natural science, computer science and software engineering are stored in patterns. Furthermore, relationship types among patterns are worked out. Afterwards, relationships among the patterns are defined and the patterns are organized in a pattern library. In addition, the concept for a DSS that provisions the patterns, and its prototypical implementation, are presented.
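
    As a rough illustration of how a pattern library with typed relationships might back a decision support step, the sketch below stores problem-solution pairs as patterns and suggests patterns by keyword matching; the pattern names, relationship types and matching rule are hypothetical and are not taken from eScienceSWaT or this thesis.

```python
# Minimal sketch of a pattern library with typed relationships and a keyword-based
# lookup, as a stand-in for a decision support step. All names and the matching
# rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Pattern:
    name: str
    problem: str
    solution: str
    # Relationship type -> names of related patterns (e.g. "refines").
    relations: dict = field(default_factory=dict)

library = {
    "parameter-sweep": Pattern(
        name="parameter-sweep",
        problem="run the same simulation over many input combinations",
        solution="model the experiment as independent workflow branches per combination",
        relations={"refines": ["scientific-workflow"]},
    ),
    "scientific-workflow": Pattern(
        name="scientific-workflow",
        problem="orchestrate multi-step computational experiments",
        solution="describe the experiment as a workflow of loosely coupled services",
    ),
}

def suggest(keywords):
    """Return patterns whose problem statement mentions any of the keywords."""
    return [p for p in library.values()
            if any(k.lower() in p.problem.lower() for k in keywords)]

for p in suggest(["simulation", "input"]):
    print(p.name, "->", p.solution)
```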