
    Toward a Formal Semantics for Autonomic Components

    Autonomic management can improve the QoS provided by parallel/distributed applications. Within the CoreGRID Component Model, autonomic management is tailored to the automatic, monitoring-driven alteration of the component assembly and is therefore defined as the effect of (distributed) management code. This work presents a semantics based on hypergraph rewriting suitable for modelling the dynamic evolution and non-functional aspects of Service Oriented Architectures and component-based autonomic applications. In this regard, our main goal is to provide a formal description of adaptation operations that are typically only specified informally. We contend that our approach makes it easier to raise the level of abstraction of management code in autonomic and adaptive applications.
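    To convey the flavour of modelling a component assembly as a hypergraph and an adaptation operation as a rewrite, here is a minimal Python sketch; the set-of-hyperedges encoding, the edge labels, and the replicate_worker operation are illustrative assumptions, not the formalism defined in the paper.

```python
# Illustrative assumption: a component assembly encoded as a set of
# labelled hyperedges (label, attached_nodes); NOT the paper's formalism.

def replicate_worker(assembly, worker_edge, fresh_node):
    """Adaptation operation: add a second 'worker' replica next to an
    existing one and merge its results on the original output node."""
    label, (n_in, n_out) = worker_edge
    assert label == "worker" and worker_edge in assembly
    rewritten = set(assembly)
    rewritten.add(("worker", (n_in, fresh_node)))  # new replica
    rewritten.add(("merge", (fresh_node, n_out)))  # route its results back
    return rewritten

# Example: a farm with one worker between stream nodes s_in and s_out.
farm = {("emitter", ("src", "s_in")),
        ("worker", ("s_in", "s_out")),
        ("collector", ("s_out", "sink"))}
print(sorted(replicate_worker(farm, ("worker", ("s_in", "s_out")), "s_mid")))
```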

    Energy-QoS Tradeoffs in J2EE Hosting Centers

    Nowadays, hosting centres are widely used to host various kinds of applications, e.g., web servers or scientific applications. Resource management is a major challenge for most organisations that run these infrastructures. Many studies show that clusters are not used at their full capacity, which represents a significant source of waste. Autonomic management systems have been introduced in order to dynamically adapt software infrastructures according to runtime conditions. They provide support to deploy, configure, monitor, and repair applications in such environments. In this paper, we report our experiments in using an autonomic management system to provide resource-aware management for a clustered application. We consider a standard replicated server infrastructure in which we dynamically adapt the degree of replication in order to ensure a given QoS while minimising energy consumption.
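    As a hedged sketch of the kind of monitoring-driven loop the abstract describes, the Python fragment below adjusts the replication degree of a server tier; the SLA threshold, utilisation headroom, and the probe/actuate callbacks are hypothetical placeholders, not the management system used in the reported experiments.

```python
import time

SLA_MS = 200      # hypothetical response-time target
HEADROOM = 0.6    # hypothetical utilisation below which the tier shrinks

def plan(replicas, avg_response_ms, avg_utilisation, max_replicas):
    """Decide the new replication degree from the monitored metrics."""
    if avg_response_ms > SLA_MS and replicas < max_replicas:
        return replicas + 1   # QoS at risk: scale out
    if avg_utilisation < HEADROOM and replicas > 1:
        return replicas - 1   # slack: scale in and power a node down
    return replicas

def control_loop(probe, actuate, max_replicas=8, period_s=30):
    replicas = 1
    while True:
        response_ms, utilisation = probe()                      # monitor
        target = plan(replicas, response_ms, utilisation, max_replicas)
        if target != replicas:
            actuate(target)                                     # reconfigure the cluster
            replicas = target
        time.sleep(period_s)
```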

    Passive Fault-Tolerance Management in Component-Based Embedded Systems

    It is imperative to accept that failures can and will occur, even in meticulously designed distributed systems, and to design proper measures to counter those failures. Passive replication minimizes resource consumption by only activating redundant replicas in case of failures, as providing and applying state updates is typically less resource-demanding than requesting execution. However, most existing solutions for passive fault tolerance are designed and configured at design time, explicitly and statically identifying the most critical components and their number of replicas, and thus lack the flexibility needed to handle the runtime dynamics of distributed component-based embedded systems. This paper proposes a cost-effective adaptive fault-tolerance solution with a significantly lower overhead than a strict active redundancy-based approach, achieving high error coverage with a minimum amount of redundancy. The activation of passive replicas is coordinated through a feedback-based coordination model that reduces the complexity of the interactions needed among components until a new collective global service solution is determined, hence improving the overall maintainability and robustness of the system.
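    As a rough illustration of the primary-backup idea (cheap state updates while passive, activation only after a detected failure), the following Python sketch uses an assumed heartbeat-timeout rule; it is not the feedback-based coordination model proposed in the paper.

```python
import time

class PassiveReplica:
    """Minimal primary-backup sketch: absorb state deltas cheaply while
    passive, take over only when the primary appears to have failed."""

    def __init__(self):
        self.state = {}
        self.last_heartbeat = time.monotonic()
        self.active = False

    def apply_update(self, update):
        # Cheaper than re-executing requests: merge the state delta.
        self.state.update(update)
        self.last_heartbeat = time.monotonic()

    def maybe_activate(self, timeout_s=1.0):
        # Promote the replica if the primary has been silent too long.
        if not self.active and time.monotonic() - self.last_heartbeat > timeout_s:
            self.active = True
        return self.active
```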

    A Framework for Effective Placement of Virtual Machine Replicas for Highly Available Performance-sensitive Cloud-based Applications

    REACTION 2012. 1st International Workshop on Real-time and Distributed Computing in Emerging Applications. December 4th, 2012, San Juan, Puerto Rico.
    Applications are increasingly being deployed in the Cloud due to benefits stemming from economy of scale, scalability, flexibility and a utility-based pricing model. Although most cloud-based applications have hitherto been enterprise-style, there is a new trend towards hosting performance-sensitive applications in the cloud that demand both high availability and good response times. In the current state-of-the-art in cloud computing research, no solutions exist that provide both high availability and acceptable response times to these applications in a way that also optimizes resource consumption in data centers, which is a key consideration for cloud providers. This paper addresses this dual challenge by presenting the design of a fault-tolerant framework for virtualized data centers that makes two important contributions. First, it describes the architecture of a fault-tolerance framework that can be used to automatically deploy replicas of virtual machines in data centers in a way that optimizes resources while assuring availability and responsiveness. Second, it describes a specific formulation of a replica deployment combinatorial optimization problem that can be plugged into our strategizable deployment framework.
    This work was supported in part by the National Science Foundation NSF SHF/CNS Award CNS 0915976 and NSF CAREER CNS 0845789. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
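    The paper formulates replica deployment as a combinatorial optimization problem; purely as an assumed illustration of the underlying trade-off (few powered-on hosts versus anti-colocation of a VM and its replica for availability), here is a greedy first-fit placement sketch in Python, not the formulation pluggable into the authors' framework.

```python
def place(vms, host_capacity, num_hosts):
    """vms: list of (vm_id, cpu_demand, replica_of_or_None).
    First-fit packing that keeps a VM and its replica on different hosts."""
    hosts = [{"load": 0.0, "vms": set()} for _ in range(num_hosts)]
    placement = {}
    for vm_id, demand, replica_of in vms:
        for idx, host in enumerate(hosts):
            colocated = replica_of is not None and replica_of in host["vms"]
            if host["load"] + demand <= host_capacity and not colocated:
                host["load"] += demand
                host["vms"].add(vm_id)
                placement[vm_id] = idx
                break
        else:
            raise RuntimeError(f"no feasible host for {vm_id}")
    return placement

# Example: two primaries and their replicas packed onto 3 hosts of capacity 1.0;
# only two hosts end up powered on, and no replica shares a host with its primary.
print(place([("a", 0.6, None), ("a'", 0.6, "a"),
             ("b", 0.3, None), ("b'", 0.3, "b")], 1.0, 3))
```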

    Taming Energy Costs of Large Enterprise Systems Through Adaptive Provisioning

    One of the most pressing concerns in modern datacenter management is the rising cost of operation. Therefore, reducing variable expenses, such as energy cost, has become a top priority. However, reducing energy cost in large distributed enterprise systems is an open research topic. These systems are commonly subjected to highly volatile workload processes and characterized by complex performance dependencies. This paper explicitly addresses this challenge and presents a novel approach to Taming Energy Costs of Large Enterprise Systems (Tecless). Our adaptive provisioning methodology combines a low-level technical perspective on distributed systems with a high-level treatment of workload processes. More concretely, Tecless fuses an empirical bottleneck detection model with a statistical workload prediction model. Our methodology forecasts the system load online, which enables on-demand infrastructure adaptation while continuously guaranteeing quality of service. In our analysis, we show that the prediction of future workload allows adaptive provisioning with a power-saving potential of up to 25 percent of the total energy cost.
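    A minimal sketch of how an online workload forecast can drive provisioning decisions, assuming a sliding-window trend predictor and a fixed per-server capacity obtained from some bottleneck analysis; the constants and function names below are illustrative assumptions and do not reproduce Tecless.

```python
import math
from collections import deque

WINDOW = 12                 # samples in the sliding window (assumed)
SERVER_CAPACITY = 500.0     # requests/s one server handles before its bottleneck (assumed)
SAFETY = 1.2                # headroom to preserve QoS under prediction error

history = deque(maxlen=WINDOW)

def predict_next(load_sample):
    """Very simple forecast: extrapolate the average trend of the window."""
    history.append(load_sample)
    if len(history) < 2:
        return load_sample
    trend = (history[-1] - history[0]) / (len(history) - 1)
    return max(0.0, history[-1] + trend)

def servers_needed(predicted_load):
    """Provision just enough servers for the predicted load plus headroom."""
    return max(1, math.ceil(SAFETY * predicted_load / SERVER_CAPACITY))

# Example: a rising workload triggers proactive scale-out before the SLA is hit.
for load in [300, 350, 420, 520, 640]:
    print(servers_needed(predict_next(load)))
```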