6 research outputs found

    A Trace Macroscopic Description based on Time Aggregation

    Keywords: trace visualization; trace analysis; trace overview; time aggregation; parallel systems; embedded systems; information theory; scientific computation; multimedia application; debugging; optimization

    Today, because of computing system complexity, tracing application executions is necessary to understand their behavior. Visualization techniques help represent trace content, but their scalability is limited both by human perception and by bounded screen resolution. To address this issue, we propose a visualization based on time aggregation that provides a concise overview of a trace whatever its size. The level of detail in this visualization is configurable: users can adjust the compromise between concision (the gain from aggregation) and information loss. They can then refine their analysis by zooming in on a part of interest and choosing a less aggregated overview of that part. This visualization is implemented in our tool, Ocelotl, which lets users interact with it by dynamically changing the selected time interval and the aggregation settings. The results presented in this paper show that the technique helps users correctly identify anomalies in very large trace files composed of up to forty million events.
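    To make the concision/information-loss compromise concrete, here is a minimal sketch of how a user-adjustable tradeoff parameter could select an aggregation level. This is not Ocelotl's actual algorithm: the uniform part widths, the mean-based display of each part, and the scoring function are all assumptions for illustration.

```python
import numpy as np

def choose_overview(counts, p):
    """Sketch of a concision/fidelity tradeoff for a trace overview.

    counts : events per elementary time slice (1-D array)
    p      : user setting in [0, 1]; 1 favors concision, 0 fidelity
    Returns the part width (number of slices merged per displayed part).
    """
    n = len(counts)
    best_width, best_score = 1, float("-inf")
    for width in (w for w in range(1, n + 1) if n % w == 0):
        parts = counts.reshape(-1, width)
        # Gain: fraction of display elements saved by aggregating.
        gain = 1 - parts.shape[0] / n
        # Loss: detail hidden by showing each part as its mean
        # (a stand-in for the paper's information-loss measure).
        loss = np.abs(parts - parts.mean(axis=1, keepdims=True)).sum() / counts.sum()
        score = p * gain - (1 - p) * loss
        if score > best_score:
            best_width, best_score = width, score
    return best_width

trace = np.array([9, 8, 9, 9, 1, 1, 9, 9, 8, 9, 9, 9], dtype=float)
print(choose_overview(trace, p=0.9))  # high p: coarse overview (width 12)
print(choose_overview(trace, p=0.1))  # low p: finer overview (width 2)
```

    Raising p collapses the whole trace into a few parts; lowering it preserves the burst around slices 5-6, which mirrors the zoom-and-refine workflow the abstract describes.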

    Evaluating Trace Aggregation Through Entropy Measures for Optimal Performance Visualization of Large Distributed Systems

    Large-scale distributed high-performance applications involve an ever-increasing number of threads to exploit the extreme concurrency of today's systems. Performance analysis through visualization techniques usually suffers severe semantic limitations due, on one side, to the size of parallel applications and, on the other, to the challenge of visualizing large-scale traces. Most performance visualization tools therefore rely on data aggregation in order to scale. Even though this technique is frequently used, to the best of our knowledge there has been no real attempt to evaluate the quality of aggregated data for visualization. This paper presents an approach that fills this gap. We propose to build optimized macroscopic visualizations using measures inherited from information theory, in particular the Kullback-Leibler divergence. These measures are used to estimate the complexity reduction and the information loss incurred by any given data aggregation. We first illustrate the applicability of our approach by exploiting these two measures in the analysis of work-stealing traces using squarified treemaps. We then report the effective scalability of our approach by visualizing known anomalies in a synthetic trace file with the behavior of one million processes, with encouraging results.
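    As a rough sketch of how such Kullback-Leibler-based measures could be computed for one candidate aggregation (the time-slice/part model and the within-part uniform approximation are assumptions for illustration, not the paper's exact definitions):

```python
import numpy as np

def aggregation_measures(counts, boundaries):
    """Entropy-style quality measures for one candidate time aggregation.

    counts     : events per elementary time slice (1-D array)
    boundaries : sorted cut indices, e.g. [0, 4, 8, 12] for three parts
    Returns (complexity_reduction, information_loss_bits).
    """
    p = counts / counts.sum()          # microscopic distribution
    q = np.empty_like(p)
    for lo, hi in zip(boundaries, boundaries[1:]):
        q[lo:hi] = p[lo:hi].mean()     # aggregated view: uniform within a part
    mask = p > 0
    # Information loss: KL divergence D(p || q), in bits.
    info_loss = float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))
    # Complexity reduction: relative drop in displayed elements.
    reduction = 1 - (len(boundaries) - 1) / len(counts)
    return reduction, info_loss

trace = np.array([9, 8, 9, 9, 1, 1, 9, 9, 8, 9, 9, 9], dtype=float)
print(aggregation_measures(trace, [0, 4, 8, 12]))
```

    The divergence is zero exactly when every part is internally homogeneous, so an aggregation can be scored by how much display complexity it removes against how many bits of structure it hides.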

    A spatiotemporal data aggregation technique for performance analysis of large-scale execution traces

    Detection and analysis of resource usage anomalies in large distributed systems through multi-scale visualization

    Understanding the behavior of large-scale distributed systems is generally extremely difficult, as it requires observing a very large number of components over long periods of time. Most analysis tools for distributed systems gather basic information such as individual processor or network utilization. Although scalable thanks to the data reduction techniques applied before the analysis, these tools are often insufficient to detect or fully understand anomalies in the dynamic behavior of resource utilization and their influence on application performance. In this paper, we propose a methodology for detecting resource usage anomalies in large-scale distributed systems. The methodology relies on four functionalities: characterized trace collection, multi-scale data aggregation, specifically tailored user interaction techniques, and visualization techniques. We show the efficiency of this approach through the analysis of simulations of the Berkeley Open Infrastructure for Network Computing (BOINC) volunteer computing architecture. Three scenarios are analyzed in this paper: analysis of the resource sharing mechanism, resource usage considering response time instead of throughput, and the evaluation of input file size on the BOINC architecture. The results show that our methodology makes it easy to identify resource usage anomalies such as unfair resource sharing, contention, moving network bottlenecks, and harmful short-term resource sharing.
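    For a sense of what the multi-scale aggregation component might look like, here is a minimal sketch that builds a pyramid of coarser views by pairwise-summing adjacent time slices. The pairwise-sum pyramid is an assumption chosen for brevity; the paper's methodology covers much more than this.

```python
def multiscale_views(counts):
    """Build coarser and coarser views of a trace by pairwise-summing
    adjacent slices, so an anomaly can be located at a coarse scale
    and then refined at progressively finer ones."""
    levels = [list(counts)]
    while len(levels[-1]) > 1:
        prev = levels[-1] + [0] * (len(levels[-1]) % 2)  # pad odd lengths
        levels.append([prev[i] + prev[i + 1] for i in range(0, len(prev), 2)])
    return levels

for level in multiscale_views([3, 1, 4, 1, 5, 9, 2, 6]):
    print(level)  # [3,1,4,1,5,9,2,6] -> [4,5,14,8] -> [9,22] -> [31]
```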

    Actes du 10ème Atelier en Évaluation de Performances

    The Atelier en Évaluation de Performances is a meeting intended to bring together young researchers (PhD students and recent PhDs) in the field of performance modeling and evaluation, a discipline devoted to the study and optimization of stochastic and/or timed dynamic systems arising in computer science, telecommunications, manufacturing, and robotics, among others. Informal presentation of work, including work in progress, is encouraged in order to strengthen interactions between young researchers and to prepare submissions of new scientific projects. Survey talks on current research areas, given by established researchers in the field, reinforce the training component of the workshop.