
    System Performance Anomaly Detection using Tracing Data Analysis

    ABSTRACT: Advances in technology and computing power have led to the emergence of complex and large-scale software architectures in recent years. Conventional central processing units are now supported by co-processing units that accelerate different tasks. The result of these improvements can be seen in distributed systems, microservices, IoT devices, and cloud environments, which have become increasingly complex as they grow in both scale and functionality. In such systems, a simple task involves many cores in parallel, possibly across multiple nodes, and a single operation can be served in different ways by different cores and physical nodes. Moreover, several factors, such as their distribution across the network, the use of different technologies, their short lifespans, software bugs, hardware failures, and resource contention, make these systems prone to anomalous behavior. The high degree of complexity and the inherent distribution of small services make understanding the performance of such environments challenging. In addition, the available performance monitoring and analysis tools have many shortcomings.

    A Framework for Detecting System Performance Anomalies Using Tracing Data Analysis

    Advances in technology and computing power have led to the emergence of complex and large-scale software architectures in recent years. However, these systems are prone to performance anomalies for various reasons, including software bugs, hardware failures, and resource contention. Performance metrics represent the average load on the system and do not help discover the cause of a problem when abnormal behavior occurs during software execution. Consequently, system experts have to examine a massive amount of low-level tracing data to determine the cause of a performance issue. In this work, we propose an anomaly detection framework that reduces troubleshooting time and guides developers to performance problems by highlighting anomalous parts of the trace data. Our framework collects streams of system calls during the execution of a process using the Linux Trace Toolkit Next Generation (LTTng) and sends them to a machine learning module that reveals anomalous subsequences of system calls based on their execution times and frequency. Extensive experiments on real datasets from two different applications (MySQL and Chrome), covering varying scenarios in terms of available labeled data, demonstrate the effectiveness of our approach in distinguishing normal sequences from abnormal ones.
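    The abstract above describes flagging anomalous subsequences of system calls by their execution times. As a minimal illustrative sketch (not the authors' implementation; the function names, window size, and z-score threshold are hypothetical choices), one could learn per-syscall duration statistics from normal traces and flag sliding windows that deviate strongly:

```python
from collections import defaultdict
from statistics import mean, stdev

def fit_baseline(traces):
    """Learn per-syscall (mean, std) of execution time from normal traces.
    Each trace is a list of (syscall_name, duration_us) pairs."""
    samples = defaultdict(list)
    for trace in traces:
        for name, dur in trace:
            samples[name].append(dur)
    return {name: (mean(ds), stdev(ds) if len(ds) > 1 else 0.0)
            for name, ds in samples.items()}

def anomalous_windows(trace, baseline, window=3, z_thresh=3.0):
    """Slide a window over a trace and flag windows containing a syscall
    whose duration z-score exceeds the threshold, or a syscall never
    seen in the baseline. Returns (start_index, score) pairs."""
    flagged = []
    for i in range(len(trace) - window + 1):
        zs = []
        for name, dur in trace[i:i + window]:
            if name not in baseline:
                zs.append(float("inf"))  # unseen syscall: maximally suspicious
                continue
            mu, sd = baseline[name]
            zs.append(abs(dur - mu) / sd if sd > 0 else 0.0)
        score = max(zs)
        if score > z_thresh:
            flagged.append((i, score))
    return flagged
```

    A real pipeline would read the syscall stream from LTTng trace output rather than in-memory lists, and the paper's machine learning module also accounts for subsequence frequency, which this sketch omits.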

    Anomaly detection in microservice environments using distributed tracing data analysis and NLP

    In recent years, DevOps and agile approaches such as microservice architectures and Continuous Integration have become extremely popular, given the increasing need for flexible and scalable solutions. However, several factors, such as their distribution across the network, the use of different technologies, and their short lifespans, make microservices prone to anomalous system behavior. In addition, because of the high degree of complexity of small services, it is difficult to adequately monitor the security and behavior of microservice environments. In this work, we propose an NLP (natural language processing) based approach to detect performance anomalies in the spans of a given trace and to locate release-over-release regressions. Notably, the whole system needs no prior knowledge, which facilitates the collection of training data. Our approach uses distributed tracing data to collect the sequences of events that occur during spans. Extensive experiments on real datasets demonstrate that the proposed method achieves an F-score of 0.9759. The results also reveal that, in addition to detecting anomalies and release-over-release regressions, our approach speeds up root cause analysis by means of visualization tools implemented in Trace Compass.
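    The NLP idea above treats the events inside a span like words in a sentence. A minimal sketch of this style of model (hypothetical names and parameters; not the paper's actual architecture, which the abstract does not detail) is an n-gram frequency model over event sequences, where spans containing rare event transitions receive high anomaly scores:

```python
import math
from collections import Counter

def train_ngram_model(sequences, n=2):
    """Count event n-grams across normal spans. Each sequence is the
    ordered list of event names recorded inside one span."""
    counts, total = Counter(), 0
    for seq in sequences:
        for i in range(len(seq) - n + 1):
            counts[tuple(seq[i:i + n])] += 1
            total += 1
    return counts, total

def span_anomaly_score(seq, model, n=2, floor=1e-6):
    """Score a span by the average negative log-probability of its event
    n-grams; unseen n-grams get a small floor probability, so rare
    transitions push the score up."""
    counts, total = model
    if len(seq) < n or total == 0:
        return 0.0
    logps = []
    for i in range(len(seq) - n + 1):
        p = counts.get(tuple(seq[i:i + n]), 0) / total
        logps.append(-math.log(max(p, floor)))
    return sum(logps) / len(logps)
```

    Comparing score distributions between two releases would also surface release-over-release regressions: a transition that was common in release N but rare in release N+1 raises the score of the newer spans.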