17 research outputs found

    Profiling, extracting, and analyzing dynamic software metrics

    This thesis presents a methodology for analyzing software executions: profiling software, extracting dynamic software metrics, and then analyzing those metrics to assist software quality researchers. The methodology is implemented in a toolkit consisting of an event-based profiler that collects more accurate data than existing profilers, and a program called MetricView that derives and extracts dynamic metrics from the generated profiles. The toolkit is designed to be modular and flexible, allowing analysts and developers to easily extend it with new or custom dynamic software metrics. We demonstrate the effectiveness and usefulness of DynaMEAT by applying it to several open-source projects of varying sizes.
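
    Neither the profiler's event format nor MetricView's interface is described in this abstract, so the following Java sketch is purely illustrative: assuming a hypothetical stream of method-entry events, it derives one simple dynamic metric, the number of invocations observed per declaring class.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical illustration only: the abstract does not specify the
    // profiler's event format or MetricView's API. This sketch assumes a
    // simple stream of method-entry events and derives one dynamic metric
    // from it: the number of invocations observed per declaring class.
    public class InvocationCountMetric {

        // Assumed minimal event record: which method of which class was entered.
        record MethodEntryEvent(String declaringClass, String methodName) {}

        // Derive the metric by folding over the recorded events.
        static Map<String, Long> invocationsPerClass(List<MethodEntryEvent> events) {
            Map<String, Long> counts = new HashMap<>();
            for (MethodEntryEvent e : events) {
                counts.merge(e.declaringClass(), 1L, Long::sum);
            }
            return counts;
        }

        public static void main(String[] args) {
            List<MethodEntryEvent> trace = List.of(
                    new MethodEntryEvent("com.example.Parser", "parse"),
                    new MethodEntryEvent("com.example.Parser", "tokenize"),
                    new MethodEntryEvent("com.example.Cache", "get"));
            invocationsPerClass(trace).forEach(
                    (cls, n) -> System.out.println(cls + " -> " + n));
        }
    }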

    Understanding the performance of interactive applications

    Many, if not most, computer systems are used by human users, and the performance of such interactive systems ultimately affects those users. Thus, when measuring, understanding, and improving system performance, it makes sense to consider the human user's perspective. Essentially, the performance of interactive applications is determined by the perceptible lag in handling user requests. Characterizing the runtime behavior of an interactive application therefore requires a new approach that focuses on perceptible lags rather than on overall, aggregate performance characteristics. Such a characterization should enable a new way to profile and improve the performance of interactive applications: one that seeks out perceptible lags and then investigates their causes, so that performance analysts can optimize the responsible parts of the software and eliminate the lag. Unfortunately, existing profiling approaches either incur significant overhead that makes them impractical for interactive scenarios, or they lack the ability to provide insight into the causes of long latencies. An effective approach for interactive applications has to fulfill several requirements, such as an accurate view of the causes of performance problems and insignificant perturbation of the interactive application. We propose a new profiling approach that helps developers understand and improve the perceptible performance of interactive applications and satisfies the above needs.
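
    The abstract does not describe the proposed profiler's mechanism. As a rough illustration of the underlying idea, measuring perceptible lag directly rather than aggregate performance, the following Java sketch wraps the AWT event queue, times each dispatched event, and reports handlers that exceed an assumed 100 ms perceptibility threshold.

    import java.awt.AWTEvent;
    import java.awt.EventQueue;
    import java.awt.Toolkit;

    // Illustration only: this is not the profiler proposed in the thesis.
    // It demonstrates the basic idea of watching for perceptible lag by timing
    // every event dispatched on the AWT event dispatch thread and reporting
    // handlers that exceed an assumed 100 ms perceptibility threshold.
    public class LagReportingEventQueue extends EventQueue {

        private static final long THRESHOLD_NANOS = 100_000_000L; // assumed 100 ms

        @Override
        protected void dispatchEvent(AWTEvent event) {
            long start = System.nanoTime();
            super.dispatchEvent(event);            // run the normal event handling
            long elapsedNanos = System.nanoTime() - start;
            if (elapsedNanos > THRESHOLD_NANOS) {
                System.err.printf("Perceptible lag: %d ms handling %s%n",
                        elapsedNanos / 1_000_000, event);
            }
        }

        public static void install() {
            // Replace the default queue so that all subsequent events are timed.
            Toolkit.getDefaultToolkit().getSystemEventQueue()
                   .push(new LagReportingEventQueue());
        }
    }

    A full latency profiler would additionally have to capture information during a slow dispatch to explain the cause of the lag; this sketch only detects it.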

    Efficient generation of complete dynamic call graphs

    Code analysis is used to verify code functionality, detect bugs, or improve performance, and it can be done either statically or dynamically. Approaches combining both analyses are most appropriate for industrial-scale applications, where neither technique alone can provide the desired results. Blended analysis, for example, first applies dynamic analysis to identify problematic code regions and then performs a focused static analysis on those regions. However, existing dynamic analysis tools generate inaccurate or incomplete data, or result in unacceptably slow execution times. In this work, we focus on generating complete dynamic call graphs together with the additional information required for blended analysis. We use dynamic instrumentation of Java bytecode to extract information about call sites and object creation sites and to build the dynamic call graph of the program. We demonstrate that it is possible to profile complete executions of real-world applications with non-trivial running times and to extract complete and accurate information at a reasonable cost. Performance measurements of our profiler on three benchmark suites with diverse workloads place its average overhead between 2.01 and 6.42. Our profiling tool for generating complete dynamic call graphs, named dyko, is also an extensible platform for evaluating new instrumentation approaches. We tested a new adaptive instrumentation technique for object creation sites that adapts the instrumentation to the bytecode of each method, and we evaluated the impact of call-site resolution on the overall performance of the profiler.
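
    dyko's internals are not detailed in this abstract. As a rough sketch of the runtime side of such a profiler, and not of dyko itself, the following Java class shows a thread-safe recorder that instrumented call sites could invoke to accumulate the caller-to-callee edges of a dynamic call graph.

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustration only: dyko's actual design is not described in the abstract.
    // This sketch shows the kind of runtime recorder that bytecode
    // instrumentation could call at each instrumented call site to accumulate
    // the edges of a dynamic call graph (caller method -> set of callee methods).
    public final class CallGraphRecorder {

        private static final Map<String, Set<String>> EDGES = new ConcurrentHashMap<>();

        // Instrumented call sites would be rewritten to invoke this method,
        // passing identifiers such as "com/example/Parser.parse(Ljava/lang/String;)V".
        public static void recordCall(String caller, String callee) {
            EDGES.computeIfAbsent(caller, k -> ConcurrentHashMap.newKeySet())
                 .add(callee);
        }

        // Dump the collected graph, e.g. at JVM shutdown.
        public static void dump() {
            EDGES.forEach((caller, callees) ->
                    callees.forEach(callee ->
                            System.out.println(caller + " -> " + callee)));
        }

        public static void main(String[] args) {
            // Simulated instrumentation output for two call sites.
            recordCall("Main.main([Ljava/lang/String;)V", "Parser.parse(Ljava/lang/String;)V");
            recordCall("Parser.parse(Ljava/lang/String;)V", "Cache.get(Ljava/lang/String;)V");
            dump();
        }
    }

    In a real tool, the bytecode rewriter (for example, a java.lang.instrument agent) would insert the calls to recordCall at each call site; the recorder here is only the bookkeeping half.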

    Efficient Reorganisation of Hybrid Index Structures Supporting Multimedia Search Criteria

    This thesis describes the development and setup of hybrid index structures. They are access methods for retrieval in hybrid data spaces, which are formed by one or more relational or normalised columns in conjunction with one non-relational or non-normalised column. Examples of such hybrid data spaces include textual data combined with geographical data, or data from enterprise content management systems; in general, any non-relational data type may be stored, such as image feature vectors or comparable types. Hybrid index structures are known to perform retrieval operations efficiently. Unfortunately, little information is available about the reorganisation operations that insert or update row tuples, and the fundamental research has mainly been carried out in simulation-based environments. This work follows a previous thesis that implemented hybrid access structures in a realistic database setting. That implementation showed that retrieval works efficiently, yet the restructuring approaches are too costly to be deployed, for example, in web search engine environments where several thousand documents are inserted or modified every day. Such search engines rely on relational database systems as storage backends, so these access methods for hybrid data spaces must be made practical in real-world database management systems. This thesis applies a systematic approach to optimising the rearrangement algorithms in realistic scenarios: a measurement and evaluation scheme is created and repeatedly applied to an evolving implementation and model of hybrid index structures in order to optimise the regrouping algorithms, and a set of input corpora is selected and applied to the test suite. In summary, this thesis describes input sets, a test suite with an evaluation scheme, and optimisation iterations on reorganisation algorithms, grounded in a theoretical model framework, to provide efficient reorganisation of hybrid index structures supporting multimedia search criteria.
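
    The concrete index structures and reorganisation algorithms studied in the thesis are not detailed in this abstract. The sketch below only illustrates, with a naive linear scan, the kind of hybrid query such structures are meant to answer efficiently: an exact predicate on a relational column combined with a similarity ranking on a non-relational column, here a hypothetical image feature vector.

    import java.util.Comparator;
    import java.util.List;

    // Illustration only: this is not one of the thesis's index structures.
    // It shows the shape of a hybrid query over a data space made of a
    // relational column plus a non-relational feature-vector column,
    // answered here by a naive scan instead of a hybrid index.
    public class HybridQuerySketch {

        // One row of a hybrid data space: a relational attribute plus a feature vector.
        record Row(String category, double[] features) {}

        static double euclidean(double[] a, double[] b) {
            double sum = 0;
            for (int i = 0; i < a.length; i++) {
                double d = a[i] - b[i];
                sum += d * d;
            }
            return Math.sqrt(sum);
        }

        // Filter on the relational column, then rank by distance in the feature space.
        static List<Row> query(List<Row> rows, String category, double[] probe, int k) {
            return rows.stream()
                    .filter(r -> r.category().equals(category))
                    .sorted(Comparator.comparingDouble(
                            (Row r) -> euclidean(r.features(), probe)))
                    .limit(k)
                    .toList();
        }

        public static void main(String[] args) {
            List<Row> rows = List.of(
                    new Row("landscape", new double[]{0.1, 0.9}),
                    new Row("portrait",  new double[]{0.8, 0.2}),
                    new Row("landscape", new double[]{0.2, 0.7}));
            query(rows, "landscape", new double[]{0.15, 0.8}, 2)
                    .forEach(r -> System.out.println(r.category()));
        }
    }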