
    Applying OMG D&C Specification and ECA Rules for Autonomous Distributed Component-based Systems

    Manual administration of complex distributed applications is almost impossible to achieve. On the one hand, work in autonomic computing focuses on systems that can maintain themselves, driven by high-level policies; such self-administration relies on the concept of a control loop. On the other hand, modeling is currently used to ease the design of complex distributed systems. Nevertheless, at runtime these models become useless, because they are decoupled from the running system, which is subject to dynamic changes. The autonomic computing control loop involves an abstract representation of the system, used to analyze the situation and adapt the application appropriately. Our proposition, named Distributed Autonomous Component-based ARchitectures (Dacar), introduces models into the control loop. With adequate models in the control loop, it is possible to design both distributed systems and their evolution policies, and to execute them. The metamodel proposed in our work combines the OMG Deployment and Configuration specification with the Event-Condition-Action (ECA) metamodel. This paper treats the different concerns present in the control loop, focuses on the metamodel concepts needed to express the entities of the control loop, gives an overview of the current Dacar prototype, and illustrates it with a ubiquitous application example.
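The idea of ECA rules driving a model-based control loop can be sketched as follows. This is a minimal illustration in Python, not the Dacar metamodel's actual classes: `EcaRule`, `ControlLoop`, and the component/host names are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Callable

# A minimal Event-Condition-Action rule (illustrative names only).
@dataclass
class EcaRule:
    event: str                          # event type the rule reacts to
    condition: Callable[[dict], bool]   # predicate over the system model
    action: Callable[[dict], None]      # reconfiguration of the system model

@dataclass
class ControlLoop:
    model: dict                         # abstract representation of the running system
    rules: list = field(default_factory=list)

    def on_event(self, event: str) -> None:
        # Analyze: evaluate the conditions of matching rules;
        # Act: run the actions of the rules whose conditions hold.
        for rule in self.rules:
            if rule.event == event and rule.condition(self.model):
                rule.action(self.model)

# Example evolution policy: redeploy a component when its host fails.
model = {"componentA": {"host": "node1", "status": "running"}}
redeploy = EcaRule(
    event="host_failed",
    condition=lambda m: m["componentA"]["host"] == "node1",
    action=lambda m: m["componentA"].update(host="node2"),
)
loop = ControlLoop(model, [redeploy])
loop.on_event("host_failed")
```

Because the model stays coupled to the running system, the same rule set both documents the evolution policy and executes it.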

    Middleware-based Database Replication: The Gaps between Theory and Practice

    The need for high availability and performance in data management systems has been fueling a long-running interest in database replication from both academia and industry. However, academic groups often attack replication problems in isolation, overlooking the need for completeness in their solutions, while commercial teams take a holistic approach that often misses opportunities for fundamental innovation. This has created over time a gap between academic research and industrial practice. This paper aims to characterize the gap along three axes: performance, availability, and administration. We build on our own experience developing and deploying replication systems in commercial and academic settings, as well as on a large body of prior related work. We sift through representative examples from the last decade of open-source, academic, and commercial database replication systems and combine this material with case studies from real systems deployed at Fortune 500 customers. We propose two agendas, one for academic research and one for industrial R&D, which we believe can bridge the gap within 5-10 years. This way, we hope to both motivate and help researchers in making the theory and practice of middleware-based database replication more relevant to each other. Comment: 14 pages. Appears in Proc. ACM SIGMOD International Conference on Management of Data, Vancouver, Canada, June 2008.

    General-purpose autonomic computing

    The success of mainstream computing is largely due to the widespread availability of general-purpose architectures and of generic approaches that can be used to solve real-world problems cost-effectively and across a broad range of application domains. In this chapter, we propose that a similar generic framework be used to make the development of autonomic solutions cost-effective, and to establish autonomic computing as a major approach to managing the complexity of today’s large-scale systems and systems of systems. To demonstrate the feasibility of general-purpose autonomic computing, we introduce a generic autonomic computing framework comprising a policy-based autonomic architecture and a novel four-step method for the effective development of self-managing systems. A prototype implementation of the reconfigurable policy engine at the core of our architecture is then used to develop autonomic solutions for case studies from several application domains. Looking into the future, we describe a methodology for the engineering of self-managing systems that extends and generalises our autonomic computing framework further.
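A policy-based autonomic architecture of this kind can be sketched as a single monitor-analyze-plan-execute step driven by declarative policies. The sketch below is a Python illustration under assumed names (`AutonomicManager`, `mape_step`, the scaling policy); it is not the chapter's actual framework or policy language.

```python
# A generic, policy-driven autonomic manager (illustrative sketch).
class AutonomicManager:
    def __init__(self, policies):
        # Each policy is a (predicate, plan) pair: the predicate decides
        # whether the policy applies to the observed state, and the plan
        # maps that state to a list of (effector_name, argument) actions.
        self.policies = policies

    def mape_step(self, sensors, effectors):
        state = {name: read() for name, read in sensors.items()}  # Monitor
        for applies, plan in self.policies:                       # Analyze
            if applies(state):
                for effector_name, arg in plan(state):            # Plan
                    effectors[effector_name](arg)                 # Execute
        return state

# Usage: a self-optimizing policy that scales out above 80% utilization.
servers = {"count": 2}
sensors = {"util": lambda: 0.93}
effectors = {"add_servers": lambda n: servers.update(count=servers["count"] + n)}
policy = (lambda s: s["util"] > 0.8, lambda s: [("add_servers", 1)])
AutonomicManager([policy]).mape_step(sensors, effectors)
```

Because the manager is generic and the policies are data, the same engine can be reconfigured for self-healing or self-protection by swapping the (predicate, plan) pairs.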

    Cooperative resource management in the cloud

    Recent advances in computer infrastructures encourage the separation of hardware and software management tasks. Following this direction, virtualized cloud infrastructures are becoming very popular. Among the various cloud models, Infrastructure as a Service (IaaS) provides many advantages to both provider and customer. In this service model, the provider offers his virtualized resources and is responsible for managing his infrastructure, while the customer manages his application deployed in the allocated virtual machines. These two actors typically rely on autonomic resource management systems to automate these tasks at runtime. Minimizing the amount of resources (and power) in use is one of the main objectives such a cloud model must meet. This objective can be pursued at runtime either by the customer at the application level (by scaling the application) or by the provider at the virtualization level (by migrating virtual machines based on the infrastructure’s utilization rate). In traditional cloud infrastructures, these resource management policies operate without coordination: knowledge about the application is not shared with the provider. This lack of coordination causes application performance overheads and wasted resources, which can be reduced with a cooperative resource management policy. In this thesis, we address the problem of separate resource management in the cloud. Building on this analysis, we propose a model of elastic virtual machines with a cooperative resource management policy. 
    This policy combines knowledge of the application and of the infrastructure in order to reduce application performance overhead and power consumption. We evaluate the benefits of our cooperative resource management policy with a set of experiments in a private IaaS. The evaluation shows that our policy outperforms uncoordinated resource management in traditional IaaS, with lower performance overhead and better use of virtualized and physical resources.
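The cooperative decision can be illustrated by a function that sees both actors' views at once. This is a sketch under assumed thresholds and action names (`shrink_vm`, `grow_vm`); the thesis's actual policy is more elaborate.

```python
# Cooperative placement decision for one elastic VM, combining
# customer-level knowledge (application load) with provider-level
# knowledge (host utilization). Thresholds are illustrative.
def cooperative_decision(app_load, host_util):
    """Return an action for one elastic virtual machine.

    app_load  -- fraction of application capacity in use (customer view)
    host_util -- fraction of the physical host in use (provider view)
    """
    if app_load < 0.3 and host_util < 0.5:
        # Both views agree resources are idle: shrink the VM so the
        # provider can consolidate hosts and save power.
        return "shrink_vm"
    if app_load > 0.8:
        # The application is saturated: grow the VM in place rather
        # than migrating it, avoiding migration overhead.
        return "grow_vm"
    return "no_op"
```

An uncoordinated provider would see only `host_util` and might migrate a VM exactly when the application is saturated; sharing `app_load` is what removes that overhead.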

    QoS control of E-business systems through performance modelling and estimation

    E-business systems provide the infrastructure whereby parties interact electronically via business transactions. At peak loads, these systems are subjected to large volumes of transactions and concurrent users, and yet they are expected to maintain adequate performance levels. Over-provisioning is an expensive solution. A good alternative is adaptation of the system, managing and controlling its resources. We address these concerns by presenting a model that allows fast evaluation of performance metrics in terms of measurable or controllable parameters. The model can be used to (a) predict the performance of a system under given or assumed loading conditions and (b) choose the optimal set-up of certain controllable parameters with respect to specified performance measures. Firstly, we analyze the characteristics of E-business systems. This analysis leads to the analytical model, which is sufficiently general to capture the behaviour of a large class of commonly encountered architectures. We propose an approximate solution which is numerically efficient and fast. By means of simulation, we show that its accuracy is acceptable over a wide range of system configurations and different load levels. We further evaluate the approximate solution by comparing it to a real-life E-business system. A J2EE application of non-trivial size and complexity is deployed on a 2-tier system composed of the JBoss application server and a database server. We implement an infrastructure, fully integrated into the application server, capable of monitoring the E-business system and controlling its configuration parameters. Finally, we use this infrastructure to quantify both the static parameters of the model and the observed performance. 
    The latter are then compared with the metrics predicted by the model, showing that the approximate solution is almost exact in predicting performance and that it assesses the optimal system configuration very accurately. EThOS - Electronic Theses Online Service, United Kingdom.
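Using a performance model to pick the optimal value of a controllable parameter can be sketched with a toy example. The sketch below uses the standard M/M/1 queueing formula as a stand-in for the thesis's (more general) approximate solution; the configuration values and rates are invented for illustration.

```python
# Toy analytical model: predict mean response time from measurable
# parameters, then pick the configuration minimizing the prediction.
def predicted_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue (a standard result)."""
    if arrival_rate >= service_rate:
        return float("inf")            # unstable: queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

def best_configuration(arrival_rate, configs):
    """configs maps a controllable parameter value to its measured
    service rate; return the value with the best predicted metric."""
    return min(configs, key=lambda c: predicted_response_time(arrival_rate, configs[c]))

# Usage: choose the thread-pool size whose measured service rate
# minimizes predicted response time under a load of 80 req/s.
configs = {8: 90.0, 16: 120.0, 32: 110.0}   # pool size -> service rate (req/s)
assert best_configuration(80.0, configs) == 16
```

The value of such a model is exactly this loop: the monitoring infrastructure supplies the measured rates, and the analytical solution replaces expensive trial-and-error reconfiguration.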

    StarMX: A Framework for Developing Self-Managing Software Systems

    The scale of computing systems has grown extensively over the past few decades in order to satisfy emerging business requirements. As a result of this evolution, the complexity of these systems has increased significantly, which has led to many difficulties in managing and administering them. The solution to this problem is to build systems that are capable of managing themselves, given high-level objectives. This vision is also known as Autonomic Computing. A self-managing system is governed by a closed control loop, which is responsible for dynamically monitoring the underlying system, analyzing the observed situation, planning the recovery actions, and executing the plan to maintain the system's equilibrium. The realization of such systems poses several developmental and operational challenges, including developing their architecture, constructing the control loop, and creating services that enable dynamic adaptation behavior. Software frameworks are effective in addressing these challenges: they can simplify the development of such systems by reducing design and implementation effort, and they provide runtime services for supporting self-managing behavior. This dissertation presents a novel software framework, called StarMX, for developing adaptive and self-managing Java-based systems. It is a generic, configurable framework based on standards and well-established principles, and provides the required features and facilities for the development of such systems. It extensively supports Java Management Extensions (JMX) and is capable of integrating with different policy engines. This allows the developer to incorporate and use these techniques in the design of a control loop in a flexible manner. The control loop is created as a chain of entities, called processes, such that each process represents one or more functions of the loop (monitoring, analyzing, planning, and executing). A process is implemented in either a policy language or the Java language. 
    At runtime, the framework invokes the chain of processes in the control loop, providing each one with the required set of objects for monitoring and effecting. An open-source Java-based Voice over IP system, called CC2, is selected as the case study in a set of experiments that aim to capture a solid understanding of the framework's suitability for developing adaptive systems and to improve its feature set. The experiments are also used to evaluate the performance overhead incurred by the framework at runtime. The performance analysis results show the execution time spent in different components, including the framework itself, the policy engine, and the sensors/effectors. The results also reveal that the time spent in the framework is negligible and has no considerable impact on the system's overall performance.
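The chain-of-processes design described above can be sketched as follows. This is a Python illustration of the idea only: the class and method names are assumptions, not StarMX's actual Java/JMX API.

```python
# The control loop as a chain of processes: each process receives a
# shared context (sensor and effector objects) and implements one or
# more loop functions. Names are illustrative, not StarMX's API.
class Process:
    def execute(self, context):
        raise NotImplementedError    # overridden by each concrete process

class MonitorProcess(Process):
    def execute(self, context):
        context["load"] = context["sensor"]()          # monitoring

class AnalyzePlanProcess(Process):
    def execute(self, context):
        # A single process may cover several loop functions here,
        # analyzing and planning; in practice this step could also be
        # delegated to a pluggable policy engine.
        context["plan"] = "scale_up" if context["load"] > 0.8 else "steady"

class ExecuteProcess(Process):
    def execute(self, context):
        context["effector"](context["plan"])           # effecting

def run_chain(chain, context):
    # The framework invokes the chain in order at runtime, handing each
    # process the objects it needs for monitoring and effecting.
    for process in chain:
        process.execute(context)
    return context

actions = []
run_chain(
    [MonitorProcess(), AnalyzePlanProcess(), ExecuteProcess()],
    {"sensor": lambda: 0.9, "effector": actions.append},
)
```

Splitting the loop into independently replaceable processes is what lets one stage be written in Java and another expressed in a policy language.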