8 research outputs found

    Software Performance Modelling Using PEPA Nets

    Modelling and analysing distributed and mobile software systems is a challenging task. PEPA nets (coloured stochastic Petri nets) are a recently introduced modelling formalism which clearly captures important features such as location, synchronisation and message passing. In this paper we describe PEPA nets and the newly developed platform support for software performance modelling using them. Crucial to this support is the compilation from PEPA nets into Hillston's PEPA stochastic process algebra, in order to access the software tools which support the PEPA algebra. In addition to the derivation of steady-state performance measures, this suite of tools allows properties of the system to be verified using model checking. We show the application of PEPA nets in the modelling and analysis of a secure Web service.
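
    As a hedged illustration of the kind of model the compilation targets, the following is a minimal PEPA fragment in the usual notation. The component names (Client, Server) and rates (r1, r2, r3) are invented for this sketch, not taken from the paper: two components cooperate on a shared request activity, and the derived continuous-time Markov chain yields steady-state performance measures.

```latex
% Minimal PEPA sketch (hypothetical components and rates, not from the paper).
% Client and Server cooperate on the shared activity "request"; the derived
% CTMC gives steady-state measures such as throughput and utilisation.
\[
\begin{aligned}
  \mathit{Client} &\,\stackrel{\mathrm{def}}{=}\, (\mathit{request}, r_1).(\mathit{think}, r_2).\mathit{Client} \\
  \mathit{Server} &\,\stackrel{\mathrm{def}}{=}\, (\mathit{request}, r_1).(\mathit{serve}, r_3).\mathit{Server} \\
  \mathit{System} &\,\stackrel{\mathrm{def}}{=}\, \mathit{Client} \mathbin{\bowtie_{\{\mathit{request}\}}} \mathit{Server}
\end{aligned}
\]
```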

    Addressing concerns in performance prediction: the impact of data dependencies and denormal arithmetic in scientific codes

    To meet the increasing computational requirements of the scientific community, the use of parallel programming has become commonplace, and in recent years distributed applications running on clusters of computers have become the norm. Both parallel and distributed applications face the problem of predictive uncertainty and variations in runtime. Modern scientific applications have varying I/O, cache, and memory profiles that have significant and difficult-to-predict effects on their runtimes. Data-dependent sensitivities such as the cost of denormal floating-point calculations introduce further variations in runtime, hindering predictability. Applications with unpredictable performance or highly variable runtimes cause several problems. If the runtime of an application is unknown or varies widely, workflow schedulers cannot efficiently allocate it to compute nodes, leading to the under-utilisation of expensive resources. Similarly, a lack of accurate knowledge of the performance of an application on new hardware can lead to misguided procurement decisions. In heavily parallel applications, minor variations in runtime on individual nodes can have disproportionate effects on the overall application runtime. Even on a smaller scale, a lack of certainty about an application's runtime can preclude its use in real-time or time-critical applications such as clinical diagnosis. This thesis investigates two sources of data-dependent performance variability. The first source is algorithmic and is seen in a state-of-the-art C++ biomedical imaging application. The thesis identifies the cause of the variability in the application and develops a means of characterising it. This 'probe task' based model is adapted for use with a workflow scheduler, and the scheduling improvements it brings are examined. The second source of variability is more subtle, as it is micro-architectural in nature. Depending on the input data, two runs of an application executing exactly the same sequence of instructions with exactly the same memory access patterns can have large differences in runtime due to deficiencies in common hardware implementations of denormal arithmetic. An exception-based profiler is written to detect occurrences of denormal arithmetic, and it is shown that this is insufficient to isolate the sources of denormal arithmetic in an application. A novel tool based on the Valgrind binary instrumentation framework is developed which can trace the origins of denormal values and the frequency of their occurrence in an application's data structures. This second tool is used to isolate and remove the cause of denormal arithmetic first from a simple numerical code and then from a face recognition application.
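
    To make the micro-architectural issue concrete, here is a minimal C++ sketch (not the thesis's exception-based profiler or Valgrind tool) that classifies subnormal ("denormal") values and shows the common x86 mitigation of enabling flush-to-zero and denormals-are-zero via the SSE control register. The timing loop illustrates why inputs that decay into the subnormal range can slow an otherwise identical instruction stream.

```cpp
// Minimal sketch (not the thesis's tooling): detect subnormal floats and
// illustrate the x86 flush-to-zero / denormals-are-zero mitigation.
// Compile with e.g. g++ -O2 demo.cpp; the intrinsics need an x86 target.
#include <chrono>
#include <cmath>      // std::fpclassify, FP_SUBNORMAL
#include <cstdio>
#ifdef __SSE__
#include <xmmintrin.h>  // _MM_SET_FLUSH_ZERO_MODE
#endif
#ifdef __SSE3__
#include <pmmintrin.h>  // _MM_SET_DENORMALS_ZERO_MODE
#endif

// Time a decay loop whose values settle into the subnormal range.
static double decay_seconds() {
    volatile float x = 1.0f;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < 20'000'000; ++i)
        x = x * 0.5f + 1e-40f;   // 1e-40f is itself subnormal for float
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main() {
    float tiny = 1e-40f;  // below FLT_MIN, so stored as a subnormal
    if (std::fpclassify(tiny) == FP_SUBNORMAL)
        std::puts("1e-40f is subnormal");

    double slow = decay_seconds();  // hardware handles subnormals (often microcoded)
#ifdef __SSE__
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);          // flush subnormal results to 0
#endif
#ifdef __SSE3__
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);  // treat subnormal inputs as 0
#endif
    double fast = decay_seconds();
    std::printf("default: %.3fs  FTZ+DAZ: %.3fs\n", slow, fast);
}
```

    The magnitude of the gap is hardware-dependent; on cores where subnormal operands take a microcoded assist, the default run can be many times slower, which is exactly the data-dependent variability the thesis describes.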

    Component performance modeling and scheduling strategies on grids

    Joint doctorate (Doppelpromotion) between the Institut fĂŒr Roboterforschung Dortmund and the University of Pisa.

    A Proactive Approach to Application Performance Analysis, Forecast and Fine-Tuning

    A major challenge currently faced by the IT industry is the cost, time and resource associated with repetitive performance testing when existing applications undergo evolution. IT organizations are under pressure to reduce the cost of testing, especially given its high percentage of the overall costs of application portfolio management. Previously, to analyse application performance, researchers have proposed techniques requiring complex performance models, non-standard modelling formalisms, process algebras or complex mathematical analysis. In Continuous Performance Management (CPM), automated load testing is invoked during the Continuous Integration (CI) process after a build. CPM is reactive and raises alarms when performance metrics are violated, and the CI process is repeated until performance is acceptable. Previous and current work has yet to provide an approach that allows software developers to proactively target a specified performance level while modifying existing applications, instead of reacting to performance test results after code modification and build. There is thus a strong need for an approach which does not require repetitive performance testing, resource-intensive application profilers, complex software performance models or additional quality assurance experts. We propose to fill this gap with an innovative relational model associating an operation's Performance with two novel concepts: the operation's Admittance and Load Potential. To address changes to a single type or multiple types of processing activities of an application operation, we present two bi-directional methods, both of which in turn use the relational model. From annotations of Delay Points within the code, the methods allow software developers either to fine-tune the operation's algorithm "targeting" a specified performance level in a bottom-up way, or to predict the operation's performance due to code changes in a top-down way under a given workload. The methods do not need complex performance models or expensive performance testing of the whole application. We validate our model on a realistic experimentation framework. Our results indicate that it is possible to characterize an application's Performance as a function of its Admittance and Load Potential, and that the application's Admittance can be characterized as a function of the latency of its Delay Points. Applying this method to complex large-scale systems has the potential to significantly reduce the cost of performance testing during system maintenance and evolution.
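
    The abstract describes the relational model only informally, so the following C++ sketch is purely illustrative. It assumes, without support from the text, that an operation's Admittance is derived from the measured latencies of its annotated Delay Points (by analogy with electrical admittance as the reciprocal of impedance) and that Performance is then some function of Admittance and Load Potential; every name and formula below is hypothetical.

```cpp
// Purely hypothetical sketch of the abstract's relational model. The
// functions relating Performance, Admittance and Load Potential are NOT
// given in the abstract; the reciprocal and linear forms here are invented.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

// A Delay Point annotation: measure the latency of one processing activity.
template <typename F>
double delay_point_seconds(F&& activity) {
    auto t0 = std::chrono::steady_clock::now();
    activity();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

// Hypothetical: Admittance as the reciprocal of total Delay Point latency.
double admittance(const std::vector<double>& delay_point_latencies) {
    double total = std::accumulate(delay_point_latencies.begin(),
                                   delay_point_latencies.end(), 0.0);
    return total > 0.0 ? 1.0 / total : 0.0;
}

// Hypothetical: predicted Performance under a given Load Potential.
double predicted_performance(double admittance, double load_potential) {
    return admittance * load_potential;  // invented linear relation
}

int main() {
    std::vector<double> latencies;
    latencies.push_back(delay_point_seconds([] {
        std::this_thread::sleep_for(std::chrono::milliseconds(5));  // stand-in work
    }));
    double a = admittance(latencies);
    std::printf("admittance %.1f, predicted performance %.1f ops/s\n",
                a, predicted_performance(a, /*load potential*/ 1.0));
}
```

    The bi-directional use then falls out of the model: solve for the Delay Point latencies needed to hit a target Performance (bottom-up tuning), or plug in latencies changed by a code edit to predict the new Performance (top-down prediction).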

    Analyse von IT-Anwendungen mittels Zeitvariation (Analysis of IT Applications by Means of Time Variation)

    Performance problems are common in IT applications in practice, despite steadily increasing hardware performance and the many approaches to developing performant software across the software life cycle. Model-based performance analyses allow performance problems to be prevented on the basis of design artefacts. For existing or partially implemented IT applications, performance problems are typically addressed by hardware scaling or code optimisation. Both approaches have drawbacks: model-based approaches are not widely used because of the high expertise they require, and after-the-fact optimisation is an unsystematic and uncoordinated process. This thesis proposes a new approach to performance analysis for subsequent optimisation. Performance interdependencies in the IT application are identified by means of an experiment. The basis of the experiment, the analysis instrument, is a targeted temporal variation of the start time, end time, or duration of processes in the IT application. This approach can be automated and applied in a structured way in the software development process without a steep learning curve. Using the Turing machine, it is proved that the temporal variation performed by the analysis instrument preserves the correctness of sequential computations; this result is extended to concurrent systems and discussed using the parallel register machine. With this practice-oriented machine model it is shown that the interdependencies discovered by the analysis instrument identify optimisation candidates. A special experimental environment, in which the processes of a system consisting of software and hardware can be varied programmatically, is realised by means of a virtualisation solution. Techniques for applying the analysis instrument through instrumentation are given. A method for determining the minimum hardware requirements of IT applications is presented and exemplified in the experimental environment with two scenarios on the Android operating system. Several procedures for deriving a system's optimisation candidates from the observations of the experiment are presented, classified, and evaluated. The identification of optimisation candidates and optimisation potential is demonstrated practically with these methods on illustrative scenarios and several large IT applications. As a consequent extension, a test method based on the analysis instrument is given for validating a system against non-deterministically reproducible errors arising from missing synchronisation mechanisms (e.g. races) or from timing (e.g. Heisenbugs, aging-related faults).
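
    As a hedged sketch of the time-variation idea (the thesis uses a programmable virtualised environment; the toy workload and names here are invented), one can instrument a program so that selected activities are artificially stretched, then compare end-to-end runtimes: an activity whose injected delay propagates to the total runtime lies on a performance-critical path and is an optimisation candidate, while an activity whose delay is absorbed has slack.

```cpp
// Hypothetical illustration of the time-variation experiment (not the
// thesis's virtualised environment): artificially delay one activity at
// a time and observe which delays propagate to the end-to-end runtime.
#include <chrono>
#include <cstdio>
#include <thread>

using Clock = std::chrono::steady_clock;

// Instrumentation hook: run an activity, optionally stretched by a delay.
template <typename F>
void vary(F&& activity, std::chrono::milliseconds extra) {
    activity();
    std::this_thread::sleep_for(extra);  // injected temporal variation
}

// Toy workload: two concurrent activities joined at the end. Only the
// longer (critical-path) activity determines the total runtime.
double run_ms(std::chrono::milliseconds dA, std::chrono::milliseconds dB) {
    auto t0 = Clock::now();
    std::thread a([&] { vary([] { std::this_thread::sleep_for(std::chrono::milliseconds(50)); }, dA); });
    std::thread b([&] { vary([] { std::this_thread::sleep_for(std::chrono::milliseconds(20)); }, dB); });
    a.join();
    b.join();
    return std::chrono::duration<double, std::milli>(Clock::now() - t0).count();
}

int main() {
    using std::chrono::milliseconds;
    std::printf("baseline       : %6.1f ms\n", run_ms(milliseconds(0),  milliseconds(0)));
    std::printf("A delayed 30 ms: %6.1f ms\n", run_ms(milliseconds(30), milliseconds(0)));  // grows: A is critical
    std::printf("B delayed 30 ms: %6.1f ms\n", run_ms(milliseconds(0),  milliseconds(30))); // absorbed: B has slack
}
```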