Profiling, extracting, and analyzing dynamic software metrics
This thesis presents a methodology for the analysis of software executions aimed at profiling software, extracting dynamic software metrics, and analyzing those metrics with the goal of assisting software quality researchers. The methodology is implemented in a toolkit, DynaMEAT, which consists of an event-based profiler that collects more accurate data than existing profilers, and a program called MetricView that derives and extracts dynamic metrics from the generated profiles. The toolkit was designed to be modular and flexible, allowing analysts and developers to easily extend its functionality to derive new or custom dynamic software metrics. We demonstrate the effectiveness and usefulness of DynaMEAT by applying it to several open-source projects of varying sizes.
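The event-based style of profiling described above can be illustrated with a short sketch. This is not the DynaMEAT profiler or MetricView; it is a minimal, hypothetical example that uses CPython's profiling hook to collect call events and derive one simple dynamic metric (per-function call frequency) from them:

```python
import sys
from collections import Counter

def collect_call_counts(func, *args, **kwargs):
    """Run func under an event-based profiler and count calls per function.

    A minimal sketch of event-based profiling: each 'call' event is one
    profile record, and the call-frequency metric is derived from those
    records afterwards.
    """
    counts = Counter()

    def tracer(frame, event, arg):
        # Python-level function entries arrive as 'call' events.
        if event == "call":
            counts[frame.f_code.co_name] += 1

    sys.setprofile(tracer)
    try:
        func(*args, **kwargs)
    finally:
        sys.setprofile(None)  # always detach the profiler
    return counts
```

Separating event collection from metric derivation, as sketched here, is what lets new metrics be added without touching the profiler itself.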
Understanding the performance of interactive applications
Many, if not most, computer systems are used by human users, and the performance of such interactive systems ultimately affects those users. Thus, when measuring, understanding, and improving system performance, it makes sense to consider the human user's perspective. Essentially, the performance of interactive applications is determined by the perceptible lag in handling user requests. So, when characterizing the runtime behavior of an interactive application, we need a new approach that focuses on perceptible lags rather than on overall and general performance characteristics. Such a characterization should enable a new way to profile and improve the performance of interactive applications: one that seeks out perceptible lags and then investigates their causes. Performance analysts could then simply optimize the responsible parts of the software, eliminating perceptible lag. Unfortunately, existing profiling approaches either incur significant overhead that makes them impractical for an interactive scenario, or they lack the ability to provide insight into the causes of long latencies. An effective approach for interactive applications has to fulfill several requirements, such as an accurate view of the causes of performance problems and insignificant perturbation of the interactive application. We propose a new profiling approach that helps developers understand and improve the perceptible performance of interactive applications and satisfies the above needs.
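The idea of profiling only perceptible lags can be sketched in a few lines. This is not the approach proposed in the thesis; it is a toy illustration in which event handlers are wrapped, latencies are measured, and only handling times above a perceptibility threshold (the 100 ms figure is an assumption, a commonly cited value for noticeable delay) are recorded:

```python
import time
import functools

PERCEPTIBLE_MS = 100  # assumed threshold for noticeable lag, in milliseconds

def track_lag(handler, log):
    """Wrap an event handler and record only latencies a user would notice.

    Toy sketch: real latency profilers would also capture *why* the
    handler was slow (e.g. a stack sample), not just how long it took.
    """
    @functools.wraps(handler)
    def wrapped(*args, **kwargs):
        start = time.perf_counter()
        result = handler(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if elapsed_ms >= PERCEPTIBLE_MS:
            log.append((handler.__name__, elapsed_ms))
        return result
    return wrapped
```

Because only slow handlings are logged, the instrumentation stays cheap for the common fast case, which is exactly the low-perturbation requirement stated above.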
Efficient Generation of Complete Dynamic Call Graphs
Code analysis is used to verify code functionality, detect bugs, or improve performance. Analyzing the code can be done either statically or dynamically. Approaches
combining both analysis techniques are most appropriate for industrial-scale applications, where each technique individually cannot provide the desired results. Blended analysis, for example, first applies dynamic analysis to identify problematic code regions and then performs a focused static analysis on those regions. However, existing dynamic analysis tools generate inaccurate or incomplete data, or result in unacceptably slow execution.
In this work, we focus on the generation of complete dynamic call graphs with the additional information required for blended analysis. We use dynamic instrumentation of Java bytecode to extract information about call sites and object creation sites, and to build the dynamic call graph of the program. We demonstrate that it is possible to profile real-world applications and efficiently extract complete and accurate information. Performance measurements of our profiler on three sets of benchmarks with various workloads place its average overhead between 2.01× and 6.42×.
Our profiling tool, named dyko, generates complete dynamic call graphs and also serves as an extensible platform for evaluating new instrumentation approaches. We tested a new adaptive instrumentation technique for object creation sites that adapts the inserted instrumentation to the bytecode of each method. We also tested the impact of call-site resolution on the overall performance of the profiler.
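The kind of information a dynamic call-graph profiler records can be illustrated with a short sketch. dyko instruments Java bytecode; purely for illustration, the hypothetical example below captures the same shape of data, caller, call-site line, and callee, using CPython's profiling hook:

```python
import sys
from collections import defaultdict

def record_call_graph(func, *args, **kwargs):
    """Build a dynamic call graph: (caller name, call-site line) -> {callees}.

    Illustrative sketch only; a real profiler like dyko records this via
    bytecode instrumentation rather than an interpreter hook.
    """
    edges = defaultdict(set)

    def tracer(frame, event, arg):
        if event == "call" and frame.f_back is not None:
            caller = frame.f_back
            # caller.f_lineno is the line of the call site at call time
            site = (caller.f_code.co_name, caller.f_lineno)
            edges[site].add(frame.f_code.co_name)

    sys.setprofile(tracer)
    try:
        func(*args, **kwargs)
    finally:
        sys.setprofile(None)
    return edges
```

Keying edges by call site rather than by caller alone is what makes the graph useful for blended analysis: a follow-up static analysis can be pointed at the exact problematic call sites.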
Efficient Reorganisation of Hybrid Index Structures Supporting Multimedia Search Criteria
This thesis describes the development and setup of hybrid index structures. They are access methods for retrieval in hybrid data spaces, which are formed by one or more relational or normalised columns in conjunction with one non-relational or non-normalised column. Examples of such hybrid data spaces include textual data combined with geographical data, or data from enterprise content management systems. However, any non-relational data type may be stored as well, such as image feature vectors or comparable types.
Hybrid index structures are known to perform efficiently for retrieval operations. Unfortunately, little is known about reorganisation operations, which insert or update row tuples; the fundamental research has mainly been carried out in simulation-based environments. This work follows on from a previous thesis that implemented hybrid access structures in a realistic database setting. During that implementation it became obvious that retrieval works efficiently, yet the restructuring approaches require too much effort to be practical, e.g., in web search engine environments where several thousand documents are inserted or modified every day. Such search engines rely on relational database systems as storage backends. Hence, these access methods for hybrid data spaces must be made workable in real-world database management systems.
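Why reorganisation is the hard part can be seen even in a toy model. The hypothetical class below is not a real hybrid index structure (which interleaves both criteria in a single tree); it simply pairs a relational part (a year column) with a non-normalised part (an inverted index over text) to show that every insert must update both halves consistently:

```python
from collections import defaultdict

class HybridIndex:
    """Toy hybrid index over one relational column and one text column.

    Hypothetical illustration: 'by_year' stands in for the relational
    part, 'inverted' for the non-normalised part of the hybrid space.
    """

    def __init__(self):
        self.by_year = defaultdict(set)   # year -> doc ids (relational part)
        self.inverted = defaultdict(set)  # term -> doc ids (text part)

    def insert(self, doc_id, year, text):
        # Reorganisation step: both halves must be updated together,
        # which is where the maintenance cost of hybrid indexes arises.
        self.by_year[year].add(doc_id)
        for term in text.lower().split():
            self.inverted[term].add(doc_id)

    def query(self, year, term):
        # Hybrid search criterion: relational predicate AND text predicate.
        return self.by_year[year] & self.inverted[term.lower()]
```

With thousands of documents inserted per day, as in the web search scenario above, it is this per-insert maintenance of both parts that the thesis's reorganisation algorithms aim to make efficient.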
This thesis applies a systematic approach to the optimisation of the rearrangement algorithms in realistic scenarios. A measurement and evaluation scheme is created and repeatedly applied to an evolving state and a model of hybrid index structures, in order to optimise the regrouping algorithms and make the setup of hybrid index structures in real-world information systems possible. To this end, a set of input corpora is selected and applied to the test suite together with the evaluation scheme.
In summary, this thesis describes input sets, a test suite including an evaluation scheme, and optimisation iterations on reorganisation algorithms, reflecting a theoretical model framework, to provide efficient reorganisation of hybrid index structures supporting multimedia search criteria.