
    A framework for adaptive monitoring and performance management of component-based enterprise applications

    Most large-scale enterprise applications are currently built using component-based middleware platforms such as J2EE or .NET. Developers leverage enterprise services provided by such platforms to speed up development and increase the robustness of their applications. In addition, using a component-oriented development model brings benefits such as increased reusability and flexibility in integrating with third-party systems. In order to provide the required services, the application servers implementing the corresponding middleware specifications employ a complex run-time infrastructure that integrates with developer-written business logic. The resulting complexity of the execution environment in such systems makes it difficult for architects and developers to fully understand the implications of alternative design options for the performance of the running system. They often make incorrect assumptions about the behaviour of the middleware, which may lead to design decisions that cause severe performance problems after the system has been deployed. This situation is aggravated by the fact that although application servers vary greatly in performance and capabilities, many advertise a similar set of features, making it difficult to choose the most appropriate one for a given task. The thesis presents a methodology and tool for approaching performance management in enterprise component-based systems. By leveraging the component platform infrastructure, the described solution can nonintrusively instrument running applications and extract performance statistics. The use of component meta-data for target analysis, together with standards-based implementation strategies, ensures the complete portability of the instrumentation solution across different application servers. Based on this instrumentation infrastructure, a complete performance management framework including modelling and performance prediction is proposed. Most instrumentation solutions exhibit static behaviour by targeting a specified set of components. For long-running applications, a constant overhead profile is undesirable; typically, such a solution would only be used for the duration of a performance audit, sacrificing the benefits of constantly observing a production system in favour of a reduced performance impact. This thesis addresses the problem by proposing an adaptive approach to monitoring, which uses execution models to dynamically target profiling operations on components that exhibit performance degradation; this ensures a negligible overhead when the target application performs as expected and a minimal impact when certain components under-perform. Experimental results obtained with the prototype tool demonstrate the feasibility of the approach in terms of induced overhead. The portable and extensible architecture yields a versatile and adaptive basic instrumentation facility for a variety of potential applications that need a flexible solution for monitoring long-running enterprise applications.
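    To make the adaptive idea concrete, below is a minimal Java sketch, not code from the thesis: the AdaptiveMonitor class, its dynamic-proxy mechanics, and the average-latency trigger are illustrative assumptions standing in for the execution-model-driven targeting described above. It keeps only cheap per-component statistics until a component's observed mean response time drifts past its expected baseline, and only then emits detailed per-call profiling output.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Illustrative sketch (not the thesis' implementation): an invocation
// handler that keeps cheap per-component statistics and switches to
// detailed profiling only when the mean response time degrades.
// Not thread-safe; a production monitor would use atomics.
public class AdaptiveMonitor implements InvocationHandler {

    private final Object target;
    private final long expectedNanos;  // baseline, e.g. from an execution model
    private long totalNanos = 0;
    private long calls = 0;
    private volatile boolean detailed = false;  // detailed profiling off by default

    private AdaptiveMonitor(Object target, long expectedNanos) {
        this.target = target;
        this.expectedNanos = expectedNanos;
    }

    // Wrap any interface-typed component with the monitor.
    @SuppressWarnings("unchecked")
    public static <T> T instrument(T target, Class<T> iface, long expectedNanos) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] {iface}, new AdaptiveMonitor(target, expectedNanos));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        long start = System.nanoTime();
        try {
            return method.invoke(target, args);
        } finally {
            long elapsed = System.nanoTime() - start;
            totalNanos += elapsed;
            calls++;
            // Negligible work while the component performs as expected;
            // detailed output only once sustained degradation is seen.
            detailed = calls > 100 && totalNanos / calls > expectedNanos;
            if (detailed) {
                System.out.printf("[profile] %s took %d us%n",
                        method.getName(), elapsed / 1_000);
            }
        }
    }
}
```

    A caller would obtain the instrumented component with, for example, AdaptiveMonitor.instrument(service, OrderService.class, 2_000_000L), where OrderService is a hypothetical component interface; in the thesis the wrapping happens nonintrusively inside the application server rather than in user code.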

    CiCUTS: Combining System Execution Modeling Tools with Continuous Integration Environments

    System execution modeling (SEM) tools provide an effective means to evaluate the quality of service (QoS) of enterprise distributed real-time and embedded (DRE) systems. SEM tools facilitate testing and resolving performance issues throughout the entire development life-cycle, rather than waiting until final system integration. SEM tools have not historically focused on effective testing, however. New techniques are therefore needed to help bridge the gap between the early integration capabilities of SEM tools and testing, so developers can focus on resolving strategic integration and performance issues, as opposed to wrestling with tedious and error-prone low-level testing concerns. This paper provides two contributions to research on using SEM tools to address enterprise DRE system integration challenges. First, we evaluate several approaches for combining continuous integration environments with SEM tools and describe CiCUTS, which combines the CUTS SEM tool with the CruiseControl.NET continuous integration environment. Second, we present a case study that shows how CiCUTS helps reduce the time and effort required to manage and execute integration tests that evaluate QoS metrics for a representative DRE system from the domain of shipboard computing. The results of our case study show that CiCUTS helps developers and testers ensure that the performance of an example enterprise DRE system is within its QoS specifications throughout development, instead of waiting until system integration time to evaluate QoS.
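    As a rough sketch of the kind of check such an environment automates, consider the following hypothetical JUnit test (the class name, latency bound, and placeholder workload are all invented for illustration; CUTS emulates far richer scenarios). A CI server running it on every commit would surface a QoS regression immediately rather than at integration time.

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Hypothetical sketch of a CI-executed QoS check: drive a scenario against
// the system (or an emulation of it) and fail the build if the measured
// worst-case latency exceeds the specification.
public class QosRegressionTest {

    private static final long MAX_LATENCY_MS = 250; // illustrative QoS spec

    // Stand-in for one operation of the system under test.
    private long runScenarioOnce() throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(50); // placeholder for the real workload
        return System.currentTimeMillis() - start;
    }

    @Test
    public void latencyStaysWithinSpecification() throws InterruptedException {
        long worst = 0;
        for (int i = 0; i < 20; i++) {
            worst = Math.max(worst, runScenarioOnce());
        }
        // A failing assertion shows up in the CI report on the very build
        // that introduced the regression.
        assertTrue("worst-case latency " + worst + " ms exceeds spec",
                worst <= MAX_LATENCY_MS);
    }
}
```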

    EasyFJP: Providing Hybrid Parallelism as a Concern for Divide and Conquer Java Applications

    Because of the increasing availability of multi-core machines, clusters, Grids, and combinations of these, there is now plenty of computational power, but today's programmers are not fully prepared to exploit parallelism. In particular, Java has helped in handling the heterogeneity of such environments. However, there is a lot of ground to cover regarding facilities for easily and elegantly parallelizing applications. One path to this end seems to be the synthesis of semi-automatic parallelism and Parallelism as a Concern (PaaC). The former allows users to be mostly unaware of parallel exploitation problems and at the same time to manually optimize parallelized applications whenever necessary, while the latter allows applications to be separated from parallelism-related code. In this paper, we present EasyFJP, an approach that implicitly exploits parallelism in Java applications based on the fork-join synchronization pattern, a simple but effective abstraction for creating and coordinating parallel tasks. In addition, EasyFJP lets users explicitly optimize applications through policies, or user-provided rules to dynamically regulate task granularity. Finally, EasyFJP relies on PaaC by means of source code generation techniques to wire applications and parallel-specific code together. Experiments with real-world applications on an emulated Grid and a cluster show that EasyFJP delivers competitive performance compared to state-of-the-art Java parallel programming tools.
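    For readers unfamiliar with the pattern, here is a plain, hand-written fork-join divide-and-conquer sum using the standard java.util.concurrent API; this is only a sketch of the abstraction EasyFJP builds on, and its hard-coded THRESHOLD stands in for what EasyFJP expresses as a user-provided, dynamically evaluated policy.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Plain fork-join divide and conquer: split the range until it is small
// enough, solve the halves in parallel, and join the partial results.
public class FjpSum extends RecursiveTask<Long> {

    private static final int THRESHOLD = 10_000; // task-granularity knob
    private final long[] data;
    private final int lo, hi;

    FjpSum(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {            // small enough: solve sequentially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;             // divide ...
        FjpSum left = new FjpSum(data, lo, mid);
        left.fork();                           // run the left half in parallel
        long rightSum = new FjpSum(data, mid, hi).compute();
        return rightSum + left.join();         // ... and conquer
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        java.util.Arrays.fill(data, 1L);
        long sum = new ForkJoinPool().invoke(new FjpSum(data, 0, data.length));
        System.out.println(sum); // prints 1000000
    }
}
```

    EasyFJP's contribution is that the user writes only the sequential divide-and-conquer method; generated code of roughly this shape, plus the policy hooks, is kept out of the application sources.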

    Service Quality and Profit Control in Utility Computing Service Life Cycles

    Utility Computing is one of the most discussed business models in the context of Cloud Computing. Service providers are increasingly pushed into the role of utilities by their customers' expectations. Subsequently, the demand for predictable service availability and pay-per-use pricing models increases. Furthermore, new virtualisation techniques offer providers a new opportunity to optimise resource usage. In this context, the control of service quality and profit depends on a deep understanding of the relationship between the business model and the technology that delivers it. This research analyses the relationship between the business model of Utility Computing and Service-oriented Computing architectures hosted in Cloud environments. The relations are clarified in detail for the entire service life cycle and throughout all architectural layers. Based on the elaborated relations, a delivery framework is developed in order to enable the optimisation of the relation attributes while the service implementation passes through business planning, development, and operations. A critical review of approaches in the fields of Cloud Computing, Grid Computing, and Application Clusters reveals that related work from the academic literature does not cover the requirements collected for service offers in this context; the related work is analysed with regard to suitable provision architectures and quality assurance approaches. The main concepts of the delivery framework are evaluated using a simulation model. To demonstrate the ability of the framework to model complex pay-per-use service cascades in Cloud environments, several experiments have been conducted. First outcomes show that the contributions of this research enable the optimisation of service quality and profit in Cloud-based Service-oriented Computing architectures.
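    The pay-per-use relation that the framework optimises can be pictured in a few lines. The sketch below is purely illustrative (all names and prices are invented, and the thesis' simulation model is far richer): metered revenue and provisioned-resource cost move independently, and profit control means steering both without violating the promised service availability.

```java
// Purely illustrative pay-per-use arithmetic: profit is metered revenue
// minus the cost of provisioned resources. All figures are invented.
public class PayPerUseModel {

    static double revenue(long requests, double pricePerRequest) {
        return requests * pricePerRequest;
    }

    static double cost(double vmHours, double pricePerVmHour) {
        return vmHours * pricePerVmHour;
    }

    public static void main(String[] args) {
        long requests = 1_200_000;  // metered usage in one billing period
        double vmHours = 720;       // capacity provisioned for that period
        double profit = revenue(requests, 0.0005) - cost(vmHours, 0.08);
        // Over-provisioning raises cost without raising revenue; under-
        // provisioning risks violating the promised availability.
        System.out.printf("profit: %.2f%n", profit);
    }
}
```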

    A Statistical Approach for Detecting Memory Leaks in Java Applications

    Modern managed runtime environments and programming languages greatly simplify the creation and maintenance of applications; one of the best examples of such a managed runtime environment and language is the Java Virtual Machine together with the Java programming language. Despite the built-in garbage collector, the memory leak problem is still relevant in Java: unused objects that are prevented from being removed end up wasting memory. Memory leaks are especially critical for applications that are expected to work uninterrupted around the clock, as running out of memory is one of the few failures that can terminate the whole Java application. The best indicator of whether an object is still in use is the time of its last access; the main disadvantage of this metric, however, is the performance overhead incurred by recording it. This thesis investigates the memory leak problem and proposes a novel approach for memory leak detection and diagnosis, based on an alternative way to estimate the 'unusedness' of objects. The main hypothesis is that leaked objects can be identified by applying statistical methods to the lifetimes of objects, observing the ages of the population of objects grouped by their allocation points. The proposed solution is much cheaper performance-wise because, for each object, it suffices to record information only at the time of its creation. The research conducted for the thesis is utilized in the memory leak detection tool Plumbr, which is currently used successfully in a number of production environments. After the introduction and an overview of the state of the art, the thesis reviews existing solutions and proposes a classification of memory leak detection approaches. Next, the statistical approach for memory leak detection is described, along with the main metric used to distinguish leaking objects from non-leaking ones, followed by an analysis of that metric. Based on this analysis, additional metrics are designed and machine learning algorithms are applied to statistical data acquired by Plumbr from real production environments. Finally, case studies of real applications and a comparison with one previous memory leak detection solution are performed in order to evaluate the performance overhead of the tool.
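    The hypothesis is easy to illustrate with a toy sketch (all names below are hypothetical and the thesis' metrics are more elaborate): tag every object, at creation time only, with its allocation site and the current GC generation; a site whose live objects span ever more distinct generations keeps accumulating survivors of every age, which is the statistical signature of a leak.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy sketch of the creation-time-only idea: each live object carries a tag
// recorded at allocation; leak suspects are sites whose live objects span
// many distinct GC generations.
public class GenCountSketch {

    static final class Tag {
        final String site;      // allocation point, e.g. "Cache.put:42"
        final long generation;  // GC cycle number at allocation time
        Tag(String site, long generation) { this.site = site; this.generation = generation; }
    }

    // Count distinct allocation generations per site among live objects.
    static Map<String, Integer> genCounts(List<Tag> liveObjects) {
        Map<String, Set<Long>> gens = new HashMap<>();
        for (Tag t : liveObjects) {
            gens.computeIfAbsent(t.site, s -> new HashSet<>()).add(t.generation);
        }
        Map<String, Integer> counts = new HashMap<>();
        gens.forEach((site, g) -> counts.put(site, g.size()));
        return counts;
    }

    public static void main(String[] args) {
        List<Tag> live = new ArrayList<>();
        for (long gen = 0; gen < 50; gen++) {
            live.add(new Tag("Cache.put:42", gen));   // survivors of every age: suspect
            live.add(new Tag("Request.parse:7", 49)); // only the newest survive: healthy
        }
        genCounts(live).forEach((site, n) ->
                System.out.println(site + " spans " + n + " generations"));
    }
}
```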

    Anomaly Detection and Fault Localization Using Runtime State Models

    Software systems are impacting every aspect of our daily lives, making software failures expensive, even life endangering. Despite rigorous testing, software bugs inevitably exist, especially in complex systems. Existing tools to aid debugging, such as tracing, profiling, and logging facilities, reveal the behavior of a program's execution; however, they require developers to manually correlate the data to diagnose faults. This work is the first to introduce the Runtime State Model, a summarization of a program's behavior, for software anomaly detection and fault localization. A Runtime State Model is constructed from the value-change events of an execution's variables. It consists of a set of states and state transitions, where a state is a set of variables with their current values, and a state transition is induced by a variable's value change. Comparisons between states from different executions can be conducted to detect software anomalies. Deviations from the healthy states also help explain and locate faults in the source code. To automate this process, we implement Xtract, a facility that automatically extracts runtime traces from Java Virtual Machines and constructs Runtime State Models for multiple simultaneous Java applications. Our evaluation provides evidence that Runtime State Models might be effective in detecting and locating faults injected into a RUBiS server with Xtract.
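    A minimal sketch of the construction, under a simplified event format (the paper's value-change events come from JVM instrumentation via Xtract; the names below are illustrative): each event updates one variable, and every update induces a transition to a new state snapshot.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch: a state is the set of tracked variables with their current
// values; each value-change event induces a transition to a new state.
public class RuntimeStateModelSketch {

    static final class Event {  // one variable's value change
        final String variable, value;
        Event(String variable, String value) { this.variable = variable; this.value = value; }
    }

    static List<Map<String, String>> build(List<Event> events) {
        List<Map<String, String>> states = new ArrayList<>();
        Map<String, String> current = new HashMap<>();
        states.add(new HashMap<>(current));      // initial (empty) state
        for (Event e : events) {
            current.put(e.variable, e.value);    // apply the change ...
            states.add(new HashMap<>(current));  // ... and record the new state
        }
        return states;
    }

    public static void main(String[] args) {
        List<Event> trace = List.of(
                new Event("conn", "OPEN"),
                new Event("queue", "3"),
                new Event("conn", "CLOSED"));
        // States from a healthy run can be compared against a faulty run's
        // states to flag anomalies and the variables involved.
        build(trace).forEach(System.out::println);
    }
}
```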