
    Slot-based Calling Context Encoding

    Calling context is widely used in software engineering areas such as profiling, debugging, and event logging. It can also enhance dynamic analyses such as data race detection. To obtain the calling context at runtime, current approaches either perform expensive stack walking to recover contexts or instrument the application and dynamically encode the context into an integer. Current encoding schemes are either not fully precise or suffer from high instrumentation and detection overhead and from scalability issues for large, highly recursive applications. We propose slot-based calling context encoding (SCCE), which consists of a scalable encoding for acyclic contexts and an efficient encoding for cyclic contexts. Evaluating with the CPU 2006 benchmark suite, we show that our acyclic encoding is scalable, has very low instrumentation overhead, and has acceptable detection overhead. We also show that our cyclic encoding has lower instrumentation and detection overhead than the state-of-the-art approach by significantly reducing the number of bytes pushed and checked for cyclic contexts.
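    The abstract does not spell out the encoding itself, so the sketch below only illustrates the general idea of instrumentation-based calling-context encoding: every call site gets a small numeric ID, and instrumentation folds that ID into a per-thread integer on call and removes it on return, so the current context is always available without walking the stack. The class name, slot width, and API are assumptions for illustration, not the SCCE scheme itself.

```java
// Minimal sketch of instrumentation-based calling-context encoding.
// This is NOT the SCCE algorithm from the paper; it only illustrates the
// general idea: each call site is assigned a small ID, and instrumentation
// folds that ID into a per-thread context value on call and removes it on
// return, so the current context is a single integer, no stack walk needed.
public final class ContextEncoder {
    // Bits reserved per call-site ID (assumption for this sketch); only the
    // most recent eight call sites fit into 64 bits in this toy version.
    private static final int BITS_PER_SITE = 8;

    // Current encoded context for this thread.
    private static final ThreadLocal<Long> CONTEXT =
            ThreadLocal.withInitial(() -> 0L);

    // Called by instrumentation just before a call at site `siteId`.
    public static void enter(int siteId) {
        long ctx = CONTEXT.get();
        CONTEXT.set((ctx << BITS_PER_SITE) | (siteId & 0xFF));
    }

    // Called by instrumentation just after the call returns.
    public static void exit() {
        CONTEXT.set(CONTEXT.get() >>> BITS_PER_SITE);
    }

    // Read the current encoded calling context (e.g., for a race report).
    public static long current() {
        return CONTEXT.get();
    }
}
```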

    Portable and Accurate Collection of Calling-Context-Sensitive Bytecode Metrics for the Java Virtual Machine

    Calling-context profiles and dynamic metrics at the bytecode level are important for profiling, workload characterization, program comprehension, and reverse engineering. Prevailing tools for collecting calling-context profiles or dynamic bytecode metrics often provide only incomplete information or suffer from limited compatibility with standard JVMs. However, completeness and accuracy of the profiles are essential for tasks such as workload characterization, and compatibility with standard JVMs is important to ensure that complex workloads can be executed. In this paper, we present the design and implementation of JP2, a new tool that profiles both the inter- and intra-procedural control flow of workloads on standard JVMs. JP2 produces calling-context profiles preserving call-site information, as well as execution statistics at the level of individual basic blocks of code. JP2 is complemented with scripts that compute various dynamic bytecode metrics from the profiles. As a case study and tutorial on the use of JP2, we use it for cross-profiling for an embedded Java processor.
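    As a rough illustration of what a calling-context profile that preserves call-site information can look like, the sketch below builds a calling-context-tree node keyed by callee and call-site bytecode index, with a per-node invocation counter. The class and field names are illustrative assumptions and not JP2's actual data structures.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a calling-context-tree (CCT) node of the kind a bytecode
// profiler might build. Each child is keyed by the callee plus the bytecode
// index of the call site, so contexts that differ only in the call site are
// kept apart, and per-node counters can hold invocation or basic-block stats.
final class CctNode {
    final String method;       // callee identifier, e.g. "Foo.bar(I)V"
    final int callSiteBci;     // bytecode index of the call site in the caller
    long invocations;          // times this context was entered
    final Map<String, CctNode> children = new HashMap<>();

    CctNode(String method, int callSiteBci) {
        this.method = method;
        this.callSiteBci = callSiteBci;
    }

    // Get or create the child for a call to `callee` made at bytecode index `bci`.
    CctNode enterCall(String callee, int bci) {
        String key = callee + "@" + bci;
        CctNode child = children.computeIfAbsent(key, k -> new CctNode(callee, bci));
        child.invocations++;
        return child;
    }
}
```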

    Profiling, extracting, and analyzing dynamic software metrics

    This thesis presents a methodology for the analysis of software executions aimed at profiling software, extracting dynamic software metrics, and then analyzing those metrics with the goal of assisting software quality researchers. The methodology is implemented in a toolkit, DynaMEAT, which consists of an event-based profiler that collects more accurate data than existing profilers and a program called MetricView that derives and extracts dynamic metrics from the generated profiles. The toolkit was designed to be modular and flexible, allowing analysts and developers to easily extend its functionality to derive new or custom dynamic software metrics. We demonstrate the effectiveness and usefulness of DynaMEAT by applying it to several open-source projects of varying sizes.
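    To make the metric-extraction step concrete, the sketch below derives one simple dynamic metric from an event-based profile: the number of distinct callee classes each class invokes at run time, a basic form of dynamic coupling. The event format and the chosen metric are assumptions for illustration and are not taken from DynaMEAT or MetricView.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal sketch of deriving a dynamic metric from an event-based profile.
// Each call event is assumed to be a pair {callerClass, calleeClass}; the
// metric is the number of distinct callee classes per caller class.
public final class DynamicCoupling {
    public static Map<String, Integer> couplingFromEvents(Iterable<String[]> callEvents) {
        // Collect the set of distinct callees observed for each caller.
        Map<String, Set<String>> callees = new HashMap<>();
        for (String[] e : callEvents) {
            callees.computeIfAbsent(e[0], k -> new HashSet<>()).add(e[1]);
        }
        // Reduce each set to its size, i.e. the dynamic coupling value.
        Map<String, Integer> metric = new HashMap<>();
        callees.forEach((caller, set) -> metric.put(caller, set.size()));
        return metric;
    }
}
```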

    Fully Reflective Execution Environments: Virtual Machines for More Flexible Software

    VMs are complex pieces of software that implement programming language semantics in an efficient, portable, and secure way. Unfortunately, mainstream VMs provide applications with few mechanisms to alter execution semantics or memory management at run time. We argue that this limits the evolvability and maintainability of running systems for both the application domain, e.g., to support unforeseen requirements, and the VM domain, e.g., to modify the organization of objects in memory. This work explores the idea of incorporating reflective capabilities into the VM domain and analyzes its impact in the context of software adaptation tasks. We characterize the notion of a fully reflective VM, a kind of VM that provides means for its own observability and modifiability at run time. This enables programming languages to adapt the underlying VM to changing requirements. We propose a reference architecture for such VMs and present TruffleMATE as a prototype for this architecture. We evaluate the mechanisms TruffleMATE provides to deal with unanticipated dynamic adaptation scenarios for security, optimization, and profiling aspects. In contrast to existing alternatives, we observe that TruffleMATE is able to handle all scenarios, using less than 50 lines of code for each, and without interfering with the application's logic.
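    The sketch below hints at what a run-time metaobject hook in a fully reflective VM could look like: a handler intercepts a VM-level operation (here, a field read) and can observe or alter it without touching the application's code. The interface and names are hypothetical and are not TruffleMATE's actual API.

```java
// Hypothetical metaobject hook; not TruffleMATE's real interface. A
// language-level metaobject intercepts a VM-level operation and can change
// its semantics at run time, e.g. to add a read barrier for profiling.
interface FieldReadHandler {
    Object onFieldRead(Object receiver, String fieldName, Object currentValue);
}

// Profiling aspect: count field reads without interfering with the
// application's logic, returning the value unchanged.
final class CountingReadHandler implements FieldReadHandler {
    long reads;

    @Override
    public Object onFieldRead(Object receiver, String fieldName, Object currentValue) {
        reads++;
        return currentValue; // observation only, semantics preserved
    }
}
```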

    Dynamic Analysis of Embedded Software

    Most embedded applications are constructed with multiple threads to handle concurrent events. For optimization and debugging of such programs, dynamic program analysis is widely used to collect execution information while the program is running. Unfortunately, the non-deterministic behavior of multithreaded embedded software makes dynamic analysis difficult. In addition, the instrumentation overhead incurred while gathering execution information may change the execution of the program and lead to distorted analysis results, i.e., the probe effect. This thesis presents a framework that tackles the non-determinism and probe effect incurred in dynamic analysis of embedded software. The thesis largely consists of three parts. First, we discuss a deterministic replay framework that provides reproducible execution: once a program execution is recorded, software instrumentation can be safely applied during replay without the probe effect. Second, a discussion of the probe effect is presented and a simulation-based analysis is proposed to detect execution changes of a program caused by instrumentation overhead; the simulation-based analysis examines whether the recording instrumentation changes the original program execution. Lastly, the thesis discusses data race detection algorithms that help remove data races to ensure the correctness of the replay and the simulation-based analysis. The focus is on making the detection efficient for C/C++ programs and on increasing the scalability of the detection on multi-core machines.
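    The record/replay idea can be illustrated with a small sketch: every nondeterministic input is logged during the recording run and fed back verbatim during replay, so instrumentation added at replay time cannot perturb the values the program observes. The class name and log format below are assumptions, not the thesis' actual framework.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.LongSupplier;

// Minimal sketch of deterministic record/replay of nondeterministic inputs
// (e.g. sensor reads, timestamps). During RECORD the real value is read and
// logged; during REPLAY the logged value is returned instead, so the replayed
// run sees exactly the recorded inputs.
public final class ReplayLog {
    public enum Mode { RECORD, REPLAY }

    private final Mode mode;
    private final Deque<Long> log = new ArrayDeque<>();

    public ReplayLog(Mode mode) { this.mode = mode; }

    // Wrap every nondeterministic long-valued input with this call.
    public long nondetLong(LongSupplier source) {
        if (mode == Mode.RECORD) {
            long v = source.getAsLong();
            log.addLast(v);          // a real system would persist this log
            return v;
        }
        return log.removeFirst();    // replay the recorded value instead
    }
}
```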

    Understanding the performance of interactive applications

    Many, if not most, computer systems are used by human users, and the performance of such interactive systems ultimately affects those users. Thus, when measuring, understanding, and improving system performance, it makes sense to consider the human user's perspective. Essentially, the performance of interactive applications is determined by the perceptible lag in handling user requests. So, when characterizing the runtime behavior of an interactive application, we need a new approach that focuses on perceptible lags rather than on overall and general performance characteristics. Such a characterization should enable a new way to profile and improve the performance of interactive applications: an approach that seeks out perceptible lags and then investigates their causes, so that performance analysts can optimize the responsible parts of the software and eliminate the perceptible lag. Unfortunately, existing profiling approaches either incur significant overhead that makes them impractical for an interactive scenario, or they lack the ability to provide insight into the causes of long latencies. An effective approach for interactive applications has to fulfill several requirements, such as an accurate view of the causes of performance problems and insignificant perturbation of the interactive application. We propose a new profiling approach that helps developers understand and improve the perceptible performance of interactive applications and satisfies the above needs.
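    A minimal way to picture latency-centric profiling is to wrap event dispatch, measure how long each user request takes to handle, and report only episodes above a perceptibility threshold. The sketch below assumes a 100 ms threshold and a simple post-hoc stack dump; the proposed profiler is more elaborate, so treat this purely as an illustration.

```java
// Minimal sketch of latency-centric monitoring for interactive applications:
// the event dispatch is wrapped, the handling time of each user request is
// measured, and only episodes longer than an assumed perceptibility threshold
// are reported. A real profiler would sample stacks *during* the long episode;
// the after-the-fact trace here only shows the dispatch path.
public final class LagMonitor {
    private static final long PERCEPTIBLE_MS = 100; // assumed threshold

    public static void dispatch(Runnable eventHandler) {
        long start = System.nanoTime();
        eventHandler.run();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMs > PERCEPTIBLE_MS) {
            System.err.println("Perceptible lag: " + elapsedMs + " ms");
            for (StackTraceElement f : Thread.currentThread().getStackTrace()) {
                System.err.println("  at " + f);
            }
        }
    }
}
```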

    Software Tracing Comparison Using Data Mining Techniques

    Performance has become a crucial matter in software development, testing, and maintenance. To address this concern, developers and testers use several tools to improve performance or track performance-related bugs. The use of comparative methodologies such as Flame Graphs provides a formal way to verify the causes of regressions and performance issues. The comparison tool provides information for analysis that can be used to improve the study through a deep profiling mechanism, usually comparing normal with abnormal profiling data. Tracing, on the other hand, is a popular mechanism that records events in the system while keeping the overhead of its use low. The recorded information can be used to supply developers with data for performance analysis. However, the amount of data provided, and the knowledge required to understand it, may present a challenge for current analysis methods and tools. Combining both methodologies, a comparative profiling mechanism and a low-overhead tracing system, enables easier evaluation of issues and their underlying causes while also meeting stringent performance requirements. The next step is to use this data to develop methods for root cause analysis and bottleneck identification. The objective of this research project is to automate the process of trace analysis and to automatically identify differences among groups of executions. The presented solution highlights differences between the groups and presents a possible cause for each difference; the user can then build on this finding to improve the executions. We present a series of automated techniques that can be used to find the root causes of performance variations while requiring little or no human intervention. The main approach is capable of identifying the cause of a performance difference using a comparative grouping methodology on the executions, and it was applied to real use cases. The proposed solution was implemented in an analysis framework to help developers with similar problems, together with a differential flame graph tool. To our knowledge, this is the first attempt to correlate automatic grouping mechanisms with root cause analysis using tracing data. In this project, most of the data used for evaluations and experiments was collected on the Linux operating system using the Linux Trace Toolkit Next Generation (LTTng), a very flexible tool with low overhead.
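    One simple way to picture the comparative analysis is to reduce each traced execution to a feature vector (here, total time per function) and report the function whose mean differs most between two groups of executions. The sketch below is an illustrative reduction under that assumption, not the actual clustering and root-cause algorithm of the thesis.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of trace comparison: each execution's trace is reduced to a
// map from function name to total time spent in it, and the function whose
// mean differs most between two groups of executions is reported as a likely
// cause of the performance difference.
public final class TraceDiff {
    // Each element of a group is one execution's profile: function -> total time.
    public static String largestDifference(List<Map<String, Double>> groupA,
                                           List<Map<String, Double>> groupB) {
        Map<String, Double> meanA = mean(groupA);
        Map<String, Double> meanB = mean(groupB);
        String culprit = null;
        double best = -1;
        // Functions that appear only in group B are ignored in this toy version.
        for (String fn : meanA.keySet()) {
            double diff = Math.abs(meanA.get(fn) - meanB.getOrDefault(fn, 0.0));
            if (diff > best) { best = diff; culprit = fn; }
        }
        return culprit;
    }

    private static Map<String, Double> mean(List<Map<String, Double>> group) {
        Map<String, Double> sum = new HashMap<>();
        for (Map<String, Double> run : group) {
            run.forEach((fn, t) -> sum.merge(fn, t, Double::sum));
        }
        sum.replaceAll((fn, t) -> t / group.size());
        return sum;
    }
}
```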

    Model of Adaptive System for Continuous Monitoring and Performance Prediction of Distributed Applications

    Continuous monitoring of software is necessary to determine whether the software performs within the required service performance levels. Based on the collected data, it is possible to predict the future performance of the application and to plan further actions in order to maintain the required service levels. The theme of this dissertation is the development of a system for continuous monitoring of software performance, as well as the development of a model for predicting software performance. JEE technology was used to implement the system, but the system was designed so that it can also be used to monitor software developed for other platforms. The system is designed to have minimal impact on the performance of the monitored software. Linear regression is used to model the dependence of performance on the environment in which the software runs. The system was used to monitor a selected JEE application.
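    The prediction step can be illustrated with ordinary least-squares regression of a measured performance value (such as response time) on a single environment metric (such as the number of concurrent requests), fitted from monitoring samples and used to extrapolate future behavior. The single-predictor form and the names below are simplifying assumptions.

```java
// Minimal sketch of the prediction idea: fit a straight line to monitoring
// samples (environment metric x, measured response time y) by ordinary least
// squares, then extrapolate the response time for a future value of x.
public final class ResponseTimeModel {
    private double slope, intercept;

    // x[i] = environment metric, y[i] = measured response time for sample i.
    public void fit(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
        }
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        intercept = (sy - slope * sx) / n;
    }

    // Predicted response time for a future value of the environment metric.
    public double predict(double x) {
        return slope * x + intercept;
    }
}
```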