    Diagnosing performance variations by comparing multi-level execution traces

    Tracing allows the analysis of how tasks interact with each other and with the operating system. Locating performance problems in a trace is not trivial because of its large size. Furthermore, deep knowledge of all components of the observed system is required to decide whether the observed behavior is normal. We introduce TraceCompare, a framework that automatically identifies differences between groups of executions of the same task at the user-space and kernel levels. Many performance problems manifest themselves as variations that are easily identified by our framework. Our comparison algorithm takes into account all threads that affect the completion time of the analyzed executions. Differences are correlated with application code to facilitate the correction of identified problems. Performance characteristics of task executions are represented by a new data structure called the enhanced calling context tree (ECCT). We demonstrate the effectiveness of our approach by presenting four case studies in which TraceCompare was used to uncover serious performance problems in enterprise and open-source applications, without any prior knowledge of their codebase. We also show that the overhead of our tracing solution is between 0.2 and 9 percent, depending on the type of application.
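
    The enhanced calling context tree can be pictured as a call-path-indexed aggregation of latencies. The following is a minimal sketch, assuming each execution contributes (call stack, duration) samples; the node layout and the `add_sample` and `diff` helpers are illustrative assumptions, not TraceCompare's actual implementation.

```python
class ECCTNode:
    """One node per calling context; accumulates the time attributed to it."""
    def __init__(self, frame):
        self.frame = frame            # function or symbol name of this frame
        self.duration_ns = 0          # time charged to this calling context
        self.children = {}            # child frame name -> ECCTNode

    def add_sample(self, stack, duration_ns):
        """Charge duration_ns to the leaf of `stack` (outermost frame first)."""
        node = self
        for frame in stack:
            node = node.children.setdefault(frame, ECCTNode(frame))
        node.duration_ns += duration_ns


def diff(slow, fast, path=(), out=None):
    """List calling contexts whose accumulated time grew from `fast` to `slow`."""
    if out is None:
        out = []
    delta = slow.duration_ns - fast.duration_ns
    if delta > 0:
        out.append((path + (slow.frame,), delta))
    for frame, child in slow.children.items():
        ref = fast.children.get(frame, ECCTNode(frame))
        diff(child, ref, path + (slow.frame,), out)
    return out


# Example: aggregate samples from a slow and a fast group, then compare.
slow_root, fast_root = ECCTNode("root"), ECCTNode("root")
slow_root.add_sample(("main", "handle_request", "read_disk"), 9_000_000)
fast_root.add_sample(("main", "handle_request", "read_disk"), 1_000_000)
for context, delta_ns in diff(slow_root, fast_root):
    print("/".join(context), delta_ns)
```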

    Improving efficiency and resilience in large-scale computing systems through analytics and data-driven management

    Applications running in large-scale computing systems such as high performance computing (HPC) or cloud data centers are essential to many aspects of modern society, from weather forecasting to financial services. As the number and size of data centers increase with the growing computing demand, scalable and efficient management becomes crucial. However, data center management is a challenging task due to the complex interactions between applications, middleware, and hardware layers such as processors, network, and cooling units. This thesis claims that to improve robustness and efficiency of large-scale computing systems, significantly higher levels of automated support than what is available in today's systems are needed, and this automation should leverage the data continuously collected from various system layers. Towards this claim, we propose novel methodologies to automatically diagnose the root causes of performance and configuration problems and to improve efficiency through data-driven system management. We first propose a framework to diagnose software and hardware anomalies that cause undesired performance variations in large-scale computing systems. We show that by training machine learning models on resource usage and performance data collected from servers, our approach successfully diagnoses 98% of the injected anomalies at runtime in real-world HPC clusters with negligible computational overhead. We then introduce an analytics framework to address another major source of performance anomalies in cloud data centers: software misconfigurations. Our framework discovers and extracts configuration information from cloud instances such as containers or virtual machines. This is the first framework to provide comprehensive visibility into software configurations in multi-tenant cloud platforms, enabling systematic analysis for validating the correctness of software configurations. This thesis also contributes to the design of robust and efficient system management methods that leverage continuously monitored resource usage data. To improve performance under power constraints, we propose a workload- and cooling-aware power budgeting algorithm that distributes the available power among servers and cooling units in a data center, achieving up to 21% improvement in throughput per Watt compared to the state-of-the-art. Additionally, we design a network- and communication-aware HPC workload placement policy that reduces communication overhead by up to 30% in terms of hop-bytes compared to existing policies.
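
    The anomaly-diagnosis idea lends itself to a compact illustration: summarize each monitored metric's time series into a few statistics per node window, label windows with the injected anomaly type, and train a supervised classifier. The sketch below follows that idea; the thesis's actual feature set, model choice, and label set are not reproduced here, and the helper names (`summarize`, `featurize`, `train_diagnoser`) are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def summarize(series):
    """Collapse one metric's time series into a few simple statistics."""
    s = np.asarray(series, dtype=float)
    return [s.mean(), s.std(), s.min(), s.max(), np.percentile(s, 95)]

def featurize(node_metrics):
    """node_metrics: dict mapping metric name -> time series for one node window."""
    features = []
    for name in sorted(node_metrics):      # fixed ordering keeps vectors aligned
        features.extend(summarize(node_metrics[name]))
    return features

def train_diagnoser(windows, labels):
    """windows: list of node_metrics dicts; labels: anomaly type per window."""
    X = [featurize(w) for w in windows]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf
```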

    Logs and Models in Engineering Complex Embedded Production Software Systems

    Automated performance deviation detection across software versions releases

    Performance is an important aspect and a critical requirement in multi-process software architectures such as Google Chrome. While interacting closely with members of the Google Chrome engineering team, we observed that they face a major challenge in detecting performance deviations between releases, because their very high release frequency leaves only a limited amount of data for each release. This paper describes a deep analysis of the data distributions, followed by a comparative approach that uses a median-based confidence interval for software evaluation. This technique can detect performance-related deviations and differs substantially from the standard confidence interval in that it remains usable in the presence of outliers and random external influences, since the median is less affected by them. We conducted a bottom-up analysis, using stack traces from a very large pool of releases. The results show that our approach can accurately localize performance deviations at function-level granularity using a very small number of trace samples, around 5 runs.
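
    One way to realize a median-based interval with only a handful of runs is to bootstrap the median of a function's per-run duration in each release and flag a deviation when the two intervals are disjoint. The sketch below follows that idea; the bootstrap construction and the `deviated` helper are illustrative stand-ins, not the paper's exact method.

```python
import random

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def median_ci(samples, level=0.95, n_boot=2000, seed=0):
    """Bootstrap confidence interval for the median of `samples`."""
    rng = random.Random(seed)
    meds = sorted(
        median([rng.choice(samples) for _ in samples]) for _ in range(n_boot)
    )
    lo_idx = int((1 - level) / 2 * n_boot)
    hi_idx = int((1 + level) / 2 * n_boot) - 1
    return meds[lo_idx], meds[hi_idx]

def deviated(old_runs, new_runs, level=0.95):
    """True when the two releases' median intervals for a function are disjoint."""
    lo_old, hi_old = median_ci(old_runs, level)
    lo_new, hi_new = median_ci(new_runs, level)
    return hi_old < lo_new or hi_new < lo_old

# Example: per-run duration (ms) of one function in two releases.
print(deviated([102, 99, 101, 100, 103], [131, 128, 130, 129, 132]))
```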

    Analyse de variations de performance par comparaison de traces d'exécution

    RÉSUMÉ Performance is a very important requirement for many applications. Unfortunately, several factors affect this performance: contention for access to a resource, poor synchronization algorithms, waiting for data from disk, etc. Because of the size of the source code of some applications and the multiple levels of abstraction between application code and hardware, many developers are unaware of all these sources of latency. Tracing is a technique that consists of recording events that occur in a computer system. A large number of performance problems can be diagnosed from the information contained in an execution trace. Because of their low overhead, modern tracers can be enabled continuously on production systems to capture problems that occur rarely. Specialized tools ease navigation through the large number of events contained in an execution trace. Unfortunately, even with these tools, it can be difficult to judge whether the observed behavior is the expected one without exhaustive knowledge of the analyzed system. The objective of this research work is to verify whether the diagnosis of performance variations can be facilitated by an algorithm that automatically identifies the differences between two groups of execution traces. The algorithm must highlight the latencies present in a group of abnormally long executions, but not in a group of normal executions. A developer can then try to eliminate these latencies. To do so, we first introduce two new events for the LTTng tracer. The cpu_stack event periodically reports the call stack of the threads using the processor. The syscall_stack event reports the call stack of long system calls. These two events allow us to determine whether latencies present in different execution traces were caused by the same application code. Tracing the syscall_stack event has a lower overhead than traditional tracing of system calls. We also propose a new data structure called the enhanced calling context tree (ECCT) to represent the latencies affecting the completion time of a task. These latencies can be at the hardware, operating system, or application code level. When several threads interact, we use a critical path computation algorithm to include in the ECCT the latencies introduced by each of these threads. We use ECCTs both to store performance metrics compactly and as input to our comparison algorithm. We then present a graphical interface for visualizing the differences between groups of executions. The groups to compare are defined by the user with filters. Differences are shown with a visualization tool called a differential flame graph, along with histograms. The views are refreshed quickly when filters are modified, thanks to a map-reduce algorithm. The effectiveness of our solution for detecting, diagnosing, and fixing major performance problems is demonstrated through four case studies conducted on open-source and enterprise software. We also measure the overhead of tracing the events required by our analysis on different types of applications and conclude that it is always between 0.2% and 9%. Finally, we show that our solution, developed for Linux, can be adapted to run on the Windows and Mac OS X operating systems.

    ABSTRACT Performance is a critical requirement for many applications. Unfortunately, many factors can affect performance: resource contention, bad synchronization algorithms, slow disk operations, etc. Because of the large codebase of some applications and because of the multiple levels of abstraction between application code and hardware, many developers are not aware of the existence of these sources of latency. Tracing is a technique that consists of recording events that occur in a system. Many performance problems can be diagnosed using the information contained in an execution trace. Popular tracers achieve low overhead, which allows them to be enabled on production systems to capture bugs that occur infrequently. Specialized tools allow efficient navigation in the large number of events contained in execution traces. However, even with these tools, it is hard to determine whether the observed behavior is normal without a deep knowledge of the analyzed system. The objective of this research project is to verify whether we can facilitate the diagnosis of performance variations with an algorithm that automatically identifies differences between two groups of execution traces. The algorithm must highlight delays that appear in a group of slow executions, but not in a group of normal executions. A developer can then try to eliminate these delays. First, we introduce two new events for the LTTng tracer. The cpu_stack event periodically reports the call stack of threads running on the CPU. The syscall_stack event reports the call stack of long system calls. Call stacks allow us to determine whether delays that appear in different execution traces are caused by the same application code. The overhead of tracing the syscall_stack event is less than that of traditional tracing of system calls. Second, we propose a new data structure called the enhanced calling context tree (ECCT) to describe delays that affect the completion time of a task execution. These delays can be at the hardware, operating system, or application levels. When a task execution involves interactions between several threads, a critical path algorithm is used to include delays caused by each of them in the ECCT. We use ECCTs to store performance metrics compactly and as an input to our comparison algorithm. Third, we present a GUI that shows differences between groups of executions. The user defines two groups of executions to compare using filters. Differences are shown using a visualization tool called a differential flame graph, along with histograms. A map-reduce algorithm is used to quickly refresh the views when filters are modified. We present four case studies, carried out on open-source and enterprise software, to demonstrate that our solution helps to detect, diagnose, and fix major performance problems. We measure the overhead of tracing the events required by our analysis on different kinds of applications and conclude that it is always between 0.2% and 9%. Finally, we show that our solution, developed for Linux, can be adapted for the Windows and Mac OS X operating systems.
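
    The differential flame graph view mentioned above is fed by per-call-stack aggregates from the two groups being compared. Below is a minimal sketch, assuming each execution has already been reduced to (call stack, duration) pairs; the folding and output layout are illustrative rather than the thesis's actual pipeline, though the semicolon-joined "folded stack" convention comes from common flame graph tooling.

```python
from collections import Counter

def fold(executions):
    """executions: iterable of (call_stack_tuple, duration_ns) pairs."""
    totals = Counter()
    for stack, duration_ns in executions:
        totals[";".join(stack)] += duration_ns
    return totals

def differential_lines(slow_group, fast_group):
    """Yield one line per call stack with its total time in each group."""
    slow, fast = fold(slow_group), fold(fast_group)
    for stack in sorted(set(slow) | set(fast)):
        yield f"{stack} {fast.get(stack, 0)} {slow.get(stack, 0)}"

# Example: stacks whose time grew between the two groups are the ones a
# differential flame graph rendering would highlight.
slow = [(("main", "query", "wait_for_lock"), 8_000_000)]
fast = [(("main", "query", "wait_for_lock"), 500_000)]
for line in differential_lines(slow, fast):
    print(line)
```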