
    ARCHITECTURE-BASED RELIABILITY ANALYSIS OF WEB SERVICES

    In a Service-Oriented Architecture (SOA), the hierarchical complexity of Web Services (WSs) and their interactions with the underlying Application Server (AS) create new challenges in providing realistic estimates of WS performance and reliability. Current approaches often treat the entire WS environment as a black box, so the sensitivity of overall reliability and performance to the behavior of the underlying WS architectures and AS components is not well understood. In other words, current research on the architecture-based analysis of WSs is limited.

    This dissertation presents a novel methodology for modeling the reliability and performance of web services. WSs are treated as atomic entities, but the AS is broken down into layers; specifically, the interactions of WSs with the underlying layers of an AS are investigated. One important feature of the research is investigating the impact of dynamic parameters that exist at the layers, such as configuration parameters, which may degrade WS performance if they are not configured properly. The WSs are developed in house and the AS considered is JBoss AS. An experimental environment is set up so that controlled service requests can be generated and important performance metrics can be recorded under various configurations of the AS. In parallel, a simulation model is developed from the source code and run-time behavior of the existing WS and AS implementations. The model mimics the logical behavior of the WSs based on their communication with the AS layers, and the simulation results are compared to the experimental results to validate the model. The architecture of the simulation model, which is based on Stochastic Petri Nets (SPNs), is modularized in accordance with the layers and their interactions. As web services often execute in complex and distributed environments, this modular approach enables a user or designer to observe and investigate the performance of the entire system under various conditions; in contrast, most approaches to WS analysis are monolithic in that the entire system is treated as a closed box.

    The results show that: 1) the simulation model can be a viable tool for measuring the performance and reliability of WSs under different loads and conditions of interest to WS designers and practitioners; 2) configuration parameters have a significant impact on overall performance; 3) the simulation model can be tuned to account for various communication, hardware, and software speeds; 4) because the simulation model is modularized, it may serve as a foundation for aggregating or nullifying modules (layers), or be enhanced to include other aspects of the WS architecture such as network characteristics and the hardware/operating system on which the AS and WSs execute; and 5) the simulation model is useful for predicting the performance of web services in cases that are difficult to replicate in a field study.
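
    As an illustration of the kind of layered, SPN-style simulation the abstract describes, the following is a minimal sketch only: the layer names, arrival and service rates, and the thread-pool size are invented assumptions rather than values from the dissertation, and a plain discrete-event loop stands in for a full Stochastic Petri Net tool.

        import java.util.Random;

        public class LayeredSpnSketch {
            static final Random RNG = new Random(42);

            // Sample an exponentially distributed delay with the given rate.
            static double expDelay(double rate) {
                return -Math.log(1.0 - RNG.nextDouble()) / rate;
            }

            public static void main(String[] args) {
                // Hypothetical layers: web tier, EJB container, persistence.
                double[] layerRates = {50.0, 30.0, 20.0}; // firings per second
                int threadPool = 10;                      // AS config parameter
                int requests = 10_000;

                double[] busyUntil = new double[threadPool]; // per-thread clock
                double arrival = 0.0, totalResponse = 0.0;

                for (int r = 0; r < requests; r++) {
                    arrival += expDelay(50.0);            // Poisson arrivals
                    // Take the earliest-free thread (a token from the pool place).
                    int t = 0;
                    for (int i = 1; i < threadPool; i++)
                        if (busyUntil[i] < busyUntil[t]) t = i;
                    double start = Math.max(arrival, busyUntil[t]);
                    double service = 0.0;
                    for (double rate : layerRates)        // traverse the layers
                        service += expDelay(rate);
                    busyUntil[t] = start + service;
                    totalResponse += busyUntil[t] - arrival;
                }
                System.out.printf("mean response time: %.4f s%n",
                                  totalResponse / requests);
            }
        }

    Re-running the sketch with a smaller thread pool or slower layer rates reproduces, in miniature, the sensitivity to configuration parameters that the dissertation measures.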

    Managing system to supervise professional multimedia equipment

    Integrated Master's Thesis. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    A framework for adaptive monitoring and performance management of component-based enterprise applications

    Most large-scale enterprise applications are currently built using component-based middleware platforms such as J2EE or .NET. Developers leverage enterprise services provided by such platforms to speed up development and increase the robustness of their applications. In addition, using a component-oriented development model brings benefits such as increased reusability and flexibility in integrating with third-party systems. In order to provide the required services, the application servers implementing the corresponding middleware specifications employ a complex run-time infrastructure that integrates with developer-written business logic. The resulting complexity of the execution environment in such systems makes it difficult for architects and developers to fully understand the implications of alternative design options for the performance of the running system. They often make incorrect assumptions about the behaviour of the middleware, which may lead to design decisions that cause severe performance problems after the system has been deployed. This situation is aggravated by the fact that although application servers vary greatly in performance and capabilities, many advertise a similar set of features, making it difficult to choose the one that is most appropriate for the task.

    The thesis presents a methodology and tool for approaching performance management in enterprise component-based systems. By leveraging the component platform infrastructure, the described solution can non-intrusively instrument running applications and extract performance statistics. The use of component meta-data for target analysis, together with standards-based implementation strategies, ensures the complete portability of the instrumentation solution across different application servers. Based on this instrumentation infrastructure, a complete performance management framework including modelling and performance prediction is proposed.

    Most instrumentation solutions exhibit static behaviour by targeting a specified set of components. For long-running applications, a constant overhead profile is undesirable; typically, such a solution would only be used for the duration of a performance audit, sacrificing the benefits of constantly observing a production system in favour of a reduced performance impact. This is addressed in this thesis by proposing an adaptive approach to monitoring which uses execution models to target profiling operations dynamically on components that exhibit performance degradation; this ensures a negligible overhead when the target application performs as expected and a minimum impact when certain components under-perform. Experimental results obtained with the prototype tool demonstrate the feasibility of the approach in terms of induced overhead. The portable and extensible architecture yields a versatile and adaptive basic instrumentation facility for a variety of potential applications that need a flexible solution for monitoring long-running enterprise applications.
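
    As a rough illustration of the adaptive-monitoring idea, the sketch below uses a JDK dynamic proxy to time component calls and switch on heavier profiling only once a call exceeds a latency threshold. The proxy, the threshold, and the escalation rule are assumptions chosen for illustration; the thesis itself hooks into the component platform's own interception points rather than application-level proxies.

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.InvocationTargetException;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;

        public class AdaptiveMonitor implements InvocationHandler {
            private final Object target;
            private final long thresholdNanos;
            private volatile boolean detailed = false; // profiling on/off

            private AdaptiveMonitor(Object target, long thresholdNanos) {
                this.target = target;
                this.thresholdNanos = thresholdNanos;
            }

            // Wrap a component behind its business interface.
            @SuppressWarnings("unchecked")
            public static <T> T wrap(T target, Class<T> iface, long thresholdNanos) {
                return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                        new Class<?>[] {iface},
                        new AdaptiveMonitor(target, thresholdNanos));
            }

            @Override
            public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                long start = System.nanoTime();
                try {
                    return m.invoke(target, args);
                } catch (InvocationTargetException e) {
                    throw e.getCause();          // surface the real failure
                } finally {
                    long elapsed = System.nanoTime() - start;
                    if (elapsed > thresholdNanos)
                        detailed = true;         // degradation detected: escalate
                    if (detailed)
                        // Stand-in for heavier instrumentation (argument capture,
                        // call-tree sampling) targeted only at under-performing
                        // components.
                        System.out.printf("[profile] %s took %d us%n",
                                m.getName(), elapsed / 1_000);
                }
            }
        }

    With a hypothetical OrderService interface, a caller would obtain an instrumented component via AdaptiveMonitor.wrap(impl, OrderService.class, 5_000_000L); until a call breaches the 5 ms threshold, the per-invocation cost is little more than two System.nanoTime() reads.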

    Monitoring middleware for distributed applications

    With growing maturity, Internet services are proving integral to the provision of computer services. To provide consistent end-user experiences, these services are increasingly augmented with some notion of 'Quality-of-Service' (QoS), which typically requires the management of computing resources to maintain a predictable level of service performance. It is difficult to guarantee consistent service provision in dynamic and open environments such as the Internet. However, service monitoring can be used to inform compensatory actions by collecting meaningful service performance data from strategic points in an active service environment. Due to the unpredictable nature of the Internet, distributed monitoring mechanisms face challenges with respect to the various communication protocols, application languages, and monitoring requirements associated with a service environment. With the growing popularity of Internet services, creating monitoring solutions on a per-service basis becomes time-consuming and misses opportunities to re-use existing logic. Ideally, monitoring solutions would be domain-agnostic, automatically generated, and automatically deployed.

    This thesis progresses these ambitions by providing a generic, distributed monitoring and evaluation framework based on Metric Collector (MeCo) components. These components can transparently gather measurement data across a range of service technologies as used within E-Commerce service environments. MeCo components form part of a framework which can interpret Service Level Agreements (SLAs) to automatically provide tailored service monitoring.

    The evaluation paradigms of the MeCo Framework are re-appropriated for use in Distributed Virtual Environments (DVEs). Quantifiable QoS requirements are established for Interest Management mechanisms (which limit message production based on object localities within a DVE). These are then incorporated into a DVE Simulator application, which allows DVE application developers to evaluate Interest Management configurations for their suitability. Extensions to the DVE Simulator are exhibited in the Evolutionary Optimisation Simulator (EOS), which provides automated optimisation capabilities for DVE configurations through utilisation of genetic algorithm techniques.
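
    To make the Interest Management notion concrete, here is a minimal sketch of distance-based filtering: an object's update is delivered only to entities inside an assumed interest radius, which is what limits message production. The entity model, radius, and API are hypothetical and are not taken from the MeCo Framework or the DVE Simulator.

        import java.util.ArrayList;
        import java.util.List;

        public class InterestFilter {
            // Hypothetical DVE entity with a 2-D position.
            record Entity(String id, double x, double y) {}

            private final double radius;

            public InterestFilter(double radius) { this.radius = radius; }

            // Subscribers that should receive an update from 'sender':
            // only entities inside the interest radius get the message.
            public List<Entity> recipients(Entity sender, List<Entity> all) {
                List<Entity> out = new ArrayList<>();
                for (Entity e : all) {
                    if (e == sender) continue;
                    double dx = e.x() - sender.x(), dy = e.y() - sender.y();
                    if (dx * dx + dy * dy <= radius * radius) out.add(e);
                }
                return out;
            }

            public static void main(String[] args) {
                InterestFilter filter = new InterestFilter(10.0);
                List<Entity> world = List.of(
                        new Entity("a", 0, 0),
                        new Entity("b", 5, 5),
                        new Entity("c", 50, 50));
                // "b" is inside a's radius, "c" is not: one message, not two.
                System.out.println(filter.recipients(world.get(0), world));
            }
        }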