
    JGroups evaluation in J2EE cluster environments

    Clusters have become the de facto platform to scale J2EE application servers. Each tier of the server uses group communication to maintain consistency between replicated nodes. JGroups is the most commonly used Java middleware for group communications in J2EE open source implementations. However, the scalability of this middleware, and its impact on application server scalability, has not yet been evaluated. We present an evaluation of JGroups performance and scalability in the context of clustered J2EE application servers. We evaluate the JGroups configuration used by popular software such as the Tomcat JSP server or the JBoss J2EE server. We benchmark JGroups with different network technologies, protocol stacks and cluster sizes. We show, using the default protocol stack, that group communication performance over UDP/IP depends on the switch's capability to handle multicast packets; with UDP, Fast Ethernet can give better results than Gigabit Ethernet. We experiment with another configuration using TCP/IP and show that current J2EE application server clusters of up to 16 nodes (the largest configuration we tested) scale much better with this configuration. We attribute the superiority of TCP/IP-based group communications over UDP/IP multicast to better flow control management and better usage of the network switches available in cluster environments. Finally, we discuss architectural improvements for better modularity and resource usage of JGroups channels.
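
    As a hedged sketch of the two configurations the abstract compares, the fragment below opens a JGroups channel on a TCP-based protocol stack instead of the default UDP/multicast stack. It assumes a JGroups 3.x-style API and a tcp.xml stack definition on the classpath; exact class names differ between JGroups versions.

```java
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

public class TcpStackDemo {
    public static void main(String[] args) throws Exception {
        // Assumes a tcp.xml stack definition is on the classpath; the default
        // stack (udp.xml) relies on IP multicast, which the paper shows is
        // sensitive to the switch's multicast handling.
        JChannel channel = new JChannel("tcp.xml");
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void receive(Message msg) {
                System.out.println("received: " + msg.getObject());
            }
        });
        channel.connect("j2ee-cluster");               // join (or create) the group
        channel.send(new Message(null, "state-sync")); // null destination = all members
        channel.close();
    }
}
```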

    Using real options to select stable Middleware-induced software architectures

    The requirements that force decisions towards building distributed system architectures are usually of a non-functional nature. Scalability, openness, heterogeneity, and fault-tolerance are examples of such non-functional requirements. The current trend is to build distributed systems with middleware, which provides the application developer with primitives for managing the complexity of distribution and system resources, and for realising many of the non-functional requirements. As non-functional requirements evolve, the 'coupling' between the middleware and the architecture becomes the focal point for understanding the stability of the distributed software system architecture in the face of change. It is hypothesised that the choice of a stable distributed software architecture depends on the choice of the underlying middleware and its flexibility in responding to future changes in non-functional requirements. Drawing on a case study that adequately represents a medium-size component-based distributed architecture, it is reported how a likely future change in scalability could impact the architectural structure of two versions, each induced with a distinct middleware: one with CORBA and the other with J2EE. An option-based model is derived to value the flexibility of the induced architectures and to guide the selection. The hypothesis is verified to be true for the given change. The paper concludes with some observations that could stimulate future research in the area of relating requirements to software architectures.
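
    The abstract's valuation builds on real-options theory. As a hedged illustration only (the paper derives its own option-based model, not reproduced here), the classic Black-Scholes call value that real-options analyses typically start from is:

```latex
% Standard Black--Scholes call value, the classic basis for real-options
% valuations; shown only as the form such models typically build on.
\[
  C = S\,N(d_1) - K e^{-rT} N(d_2), \qquad
  d_1 = \frac{\ln(S/K) + \bigl(r + \sigma^2/2\bigr)T}{\sigma\sqrt{T}}, \qquad
  d_2 = d_1 - \sigma\sqrt{T}
\]
% In an architectural reading: S is the present value of the benefits of a
% future change (e.g. scaling up), K the cost of making that change against
% the induced architecture, sigma the uncertainty of the requirement, T the
% decision horizon, and r the risk-free rate.
```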

    Component replication in application servers

    Three-tier middleware architecture is commonly used for hosting large-scale distributed applications. Typically the application is decomposed into three layers: front-end, middle tier and back-end. The front-end ("Web server") is responsible for handling user interactions and acts as a client of the middle tier, while the back-end provides storage facilities for applications. The middle tier ("Application server") is usually the place where all computations are performed, so this layer provides middleware services for transactions, security and so forth. The benefit of this architecture is that it allows flexible configuration, such as partitioning and clustering, for improved performance and scalability. Of the three tiers described above, the availability of the middle tier and the back-end tier is the most important, as these tiers provide the computation and the data for the applications. This thesis investigates how replication for availability can be incorporated within the middle and back-end tiers. The replication mechanisms must guarantee exactly-once execution of user requests despite failures of application and database servers. The thesis develops an approach that requires enhancements to the middle tier only for supporting replication of both tiers. The design, implementation and performance evaluation of such a middle tier based replication scheme for multi-database transactions on a widely deployed open source application server (JBoss) are presented.
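
    As a rough illustration of the exactly-once guarantee described above, the sketch below shows the core idea in plain Java. The class and method names are hypothetical; this is not the thesis' actual replication protocol.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative sketch: the middle tier records each request's outcome so that
// a client retry after a server failure returns the stored result instead of
// executing the work a second time.
public class ExactlyOnceDispatcher {
    private final Map<String, Object> completed = new ConcurrentHashMap<>();

    public Object dispatch(String requestId, Supplier<Object> work) {
        // Runs the work only the first time this requestId is seen; in a real
        // scheme the outcome record and the business work would commit
        // atomically in one transaction spanning the replicated tiers.
        return completed.computeIfAbsent(requestId, id -> work.get());
    }
}
```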

    Cache Serializability: Reducing Inconsistency in Edge Transactions

    Read-only caches are widely used in cloud infrastructures to reduce access latency and load on backend databases. Operators view coherent caches as impractical at genuinely large scale, and many client-facing caches are updated in an asynchronous manner with best-effort pipelines. Existing solutions that support cache consistency are inapplicable to this scenario since they require a round trip to the database on every cache transaction. Existing incoherent cache technologies are oblivious to transactional data access, even if the backend database supports transactions. We propose T-Cache, a novel caching policy for read-only transactions in which inconsistency is tolerable (won't cause safety violations) but undesirable (has a cost). T-Cache improves cache consistency despite asynchronous and unreliable communication between the cache and the database. We define cache-serializability, a variant of serializability that is suitable for incoherent caches, and prove that with unbounded resources T-Cache implements this new specification. With limited resources, T-Cache allows the system manager to choose a trade-off between performance and consistency. Our evaluation shows that T-Cache detects many inconsistencies with only nominal overhead. We use synthetic workloads to demonstrate the efficacy of T-Cache when data accesses are clustered and its adaptive reaction to workload changes. With workloads based on real-world topologies, T-Cache detects 43-70% of the inconsistencies and increases the rate of consistent transactions by 33-58%.
    Comment: Ittay Eyal, Ken Birman, Robbert van Renesse, "Cache Serializability: Reducing Inconsistency in Edge Transactions," IEEE 35th International Conference on Distributed Computing Systems (ICDCS), June 29-July 2, 2015.
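
    To make the notion of detecting inconsistency over an incoherent cache concrete, here is a deliberately simplified, hypothetical sketch: cached values carry the database version they were read at, and a read-only transaction is flagged when the versions in its read set could not have coexisted. This is a toy check, not T-Cache's actual dependency-tracking algorithm.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class VersionedCache {
    record Entry(Object value, long version) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    public void put(String key, Object value, long version) {
        cache.put(key, new Entry(value, version));
    }

    // Returns false on a miss (the caller must fall back to the database)
    // or when the observed versions are too far apart to have coexisted.
    public boolean consistentReadSet(List<String> readSet, long maxSkew) {
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (String key : readSet) {
            Entry e = cache.get(key);
            if (e == null) return false;
            min = Math.min(min, e.version());
            max = Math.max(max, e.version());
        }
        return max - min <= maxSkew;
    }
}
```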

    Performance Evaluation and Comparison of Distributed Messaging Using Message Oriented Middleware

    Message Oriented Middleware (MOM) is an enabling technology for modern event-driven applications that are typically based on publish/subscribe communication [Eugster03]. Enterprises typically contain hundreds of applications operating in environments with diverse databases and operating systems. Integration of these applications is required to coordinate the business process. Unfortunately, this is no easy task. Enterprise Integration, according to Brosey et al. (2001), aims to connect and combine people, processes, systems, and technologies to ensure that the right people and the right processes have the right information and the right resources at the right time [Brosey01]. Communication between different applications can be achieved using synchronous or asynchronous communication tools. In synchronous communication, both parties involved must be online (for example, a telephone call), whereas in asynchronous communication, only one member needs to be online (email). Middleware is software that helps two applications communicate with one another. Remote Procedure Calls (RPC) and Object Request Brokers (ORB) are two types of synchronous middleware: after sending a request, they must wait for an immediate reply. This can decrease an application's performance when there is no need for synchronous communication. Even though asynchronous distributed messaging using message oriented middleware is widely used in industry, little work has been done on evaluating the performance of the various open source MOM implementations. The objective of this work was to benchmark and evaluate the performance of three open source MOMs in the publish/subscribe and point-to-point domains, along with a functional comparison and a qualitative study from a developer's perspective.
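
    The two messaging domains the benchmark compares map directly onto the standard JMS API. A minimal sketch using the JMS 2.0 interfaces follows; obtaining the ConnectionFactory is provider-specific (typically a JNDI lookup) and is left abstract here.

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;
import javax.jms.Topic;

public class MessagingDomains {
    void send(ConnectionFactory factory, Queue queue, Topic topic) {
        try (JMSContext ctx = factory.createContext()) {
            // Point-to-point: exactly one of the queue's consumers
            // receives each message.
            ctx.createProducer().send(queue, "order-123");
            // Publish/subscribe: every active subscriber of the topic
            // receives a copy.
            ctx.createProducer().send(topic, "price-update");
        }
    }
}
```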

    Middleware-based Database Replication: The Gaps between Theory and Practice

    The need for high availability and performance in data management systems has been fueling a long-running interest in database replication from both academia and industry. However, academic groups often attack replication problems in isolation, overlooking the need for completeness in their solutions, while commercial teams take a holistic approach that often misses opportunities for fundamental innovation. This has created over time a gap between academic research and industrial practice. This paper aims to characterize the gap along three axes: performance, availability, and administration. We build on our own experience developing and deploying replication systems in commercial and academic settings, as well as on a large body of prior related work. We sift through representative examples from the last decade of open-source, academic, and commercial database replication systems and combine this material with case studies from real systems deployed at Fortune 500 customers. We propose two agendas, one for academic research and one for industrial R&D, which we believe can bridge the gap within 5-10 years. This way, we hope to both motivate and help researchers in making the theory and practice of middleware-based database replication more relevant to each other.
    Comment: 14 pages. Appears in Proc. ACM SIGMOD International Conference on Management of Data, Vancouver, Canada, June 2008.

    IT auditing of Oracle Application Server


    Automatic performance optimisation of component-based enterprise systems via redundancy

    Component technologies, such as J2EE and .NET, have been extensively adopted for building complex enterprise applications. These technologies help address complex functionality and flexibility problems and reduce development and maintenance costs. Nonetheless, current component technologies provide little support for predicting and controlling the emerging performance of software systems that are assembled from distinct components. Static component testing and tuning procedures provide insufficient performance guarantees for components deployed and run in diverse assemblies, under unpredictable workloads and on different platforms. Often, there is no single component implementation or deployment configuration that can yield optimal performance in all possible conditions under which a component may run. Manually optimising and adapting complex applications to changes in their running environment is a costly and error-prone management task. The thesis presents a solution for automatically optimising the performance of component-based enterprise systems. The proposed approach is based on the alternate usage of multiple component variants with equivalent functional characteristics, each optimised for a different execution environment. A management framework automatically administers the available redundant variants and adapts the system to external changes. The framework uses runtime monitoring data to detect performance anomalies and significant variations in the application's execution environment. It automatically adapts the application so as to use the optimal component configuration under the current running conditions. An automatic clustering mechanism analyses monitoring data and infers information on the components' performance characteristics. System administrators use decision policies to state high-level performance goals and configure system management processes. A framework prototype has been implemented and tested for automatically managing a J2EE application. The results obtained prove the framework's capability to successfully manage a software system without human intervention. The management overhead induced during normal system execution and through management operations indicates the framework's feasibility.
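
    As an illustration of the variant-swapping idea only (the profile names and thresholds are hypothetical, not the thesis' clustering mechanism), a manager might select among functionally equivalent implementations like this:

```java
import java.util.Map;
import java.util.function.Supplier;

// Functionally equivalent component variants are registered per workload
// profile; monitoring callbacks activate the variant matching the currently
// observed conditions. Assumes a "default" variant is always registered.
public class VariantManager<T> {
    private final Map<String, Supplier<T>> variants;
    private volatile T active;

    public VariantManager(Map<String, Supplier<T>> variants) {
        this.variants = variants;
        this.active = variants.get("default").get();
    }

    // Re-evaluates the active variant when monitoring reports new metrics.
    public void onMetrics(double avgResponseMillis, int concurrentClients) {
        String profile = concurrentClients > 500 ? "high-load"
                       : avgResponseMillis > 200 ? "latency-sensitive"
                       : "default";
        active = variants.getOrDefault(profile, variants.get("default")).get();
    }

    public T current() { return active; }
}
```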

    Attribute based component design: Supporting model driven development in CbSE

    In analysing the evolution of Software Engineering, the scale of components has increased, the requirements of different domains have become complex, and a variety of component frameworks and their associated models have emerged. Many modern component frameworks provide enterprise-level facilities and services, such as instance management and component container support, which developers can apply as needed to manage scale and complexity. Although the services provided by these frameworks are common, they have different models and implementations. Accordingly, the main problem is that, when developing a component based application using a component framework, the design of the components becomes tightly integrated with the framework implementation and the framework model is embedded in the component functionality, which reduces reusability. A further problem is that designers must have in-depth knowledge of a component framework's implementation to be able to model, design and implement components and take advantage of the services provided. To address these problems, this research proposes the Attribute based Component Design (AbCD) approach, which allows developers to model software using logical and abstract components at the specification level. The components encapsulate the provided functionality, as well as the required services, runtime requirements and interaction models, using a set of attributes. These attributes are systematically derived by grouping common features and services from the lightweight and heavyweight component frameworks available in the literature. The AbCD approach consists of the AbCD Meta-model, which is an extension of the UML meta-model, and the Component Design Guidelines (CDG), which include core Component based Software Engineering principles to assist the modelling process for designers. To support the AbCD approach, an implementation has been developed as a set of plug-ins for the Eclipse IDE, called the AbCD tool suite. An evaluation of the AbCD approach is conducted by using the tool suite with two case studies. The first case study focuses on the abstraction achieved by the AbCD approach and the second on the reusability of the components. The evaluation shows that the artefacts produced using the approach provide an alternative architectural view of the design and help to re-factor the design based on aspects. At the same time, the evaluation process identified possible improvements in the AbCD meta-model and the tool suite constructed. This research provides a non-invasive approach for designing component based software using model driven development.
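
    By analogy only (AbCD itself extends the UML meta-model rather than Java), the attribute idea can be pictured as declarative metadata on a component, with hypothetical attribute names: the component states which container services it requires, leaving the binding to a concrete framework to a later mapping step.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical attributes standing in for the framework-neutral services a
// component declares (instance pooling, container-managed transactions,
// interaction model) without committing to any framework's implementation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface ComponentAttributes {
    boolean pooledInstances() default false;
    boolean containerManagedTransactions() default false;
    String interactionModel() default "synchronous";
}

@ComponentAttributes(pooledInstances = true,
                     containerManagedTransactions = true)
class OrderProcessing { /* provided functionality */ }
```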