
    Timed contract compliance under event timing uncertainty

    Although many real-life contracts include time constraints, for instance explicitly specifying deadlines by which actions must be performed, or for how long certain behaviour is prohibited, the literature formalising such notions is surprisingly sparse. Furthermore, a major challenge is that compliance is typically computed with respect to timed event traces whose event timestamps are assumed to be perfect. In this paper we present an approach for evaluating compliance under the effect of imperfect timing information, giving a semantics for analysing the likelihood of contract violation.
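    To make the idea concrete, here is a minimal sketch (not the paper's formalism) of how a deadline clause could be scored for violation likelihood when timestamps are noisy; the Gaussian noise model and all names are illustrative assumptions.

```python
# Sketch: likelihood that a deadline clause is violated when the event
# timestamp is uncertain. The observed timestamp is modelled with
# Gaussian measurement noise; this noise model is an assumption.
from statistics import NormalDist

def violation_likelihood(observed_ts: float, deadline: float, sigma: float) -> float:
    """Probability that the true event time exceeded the deadline,
    given an observed timestamp with N(0, sigma^2) measurement noise."""
    if sigma <= 0:
        return 1.0 if observed_ts > deadline else 0.0
    # P(true_ts > deadline) with true_ts ~ N(observed_ts, sigma^2)
    return 1.0 - NormalDist(mu=observed_ts, sigma=sigma).cdf(deadline)

# An action observed at t=9.8 against a deadline of t=10 is violated
# with probability ~0.42 when timestamps carry unit-scale noise:
print(violation_likelihood(9.8, 10.0, 1.0))
```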

    Conceptual Service Level Agreement Mechanism to Minimize the SLA Violation with SLA Negotiation Process in Cloud Computing Environment

    Online services in cloud computing follow a pay-per-use model, so a service user need not be bound by a long-term contract with cloud service providers. Service level agreements (SLAs) are understandings signed between a cloud service provider and other parties, for example a service user, an intermediary broker, or monitoring agents. Since cloud computing is an emerging technology providing numerous services for core business applications, adaptable systems for managing online agreements are important; an SLA maintains the quality of service for the cloud user. If the service provider fails to maintain the required service, the SLA is considered violated. The main aim is to minimize SLA violations in order to maintain the QoS for cloud users. In this research article, a toolbox is proposed to support the procedure of exchanging an SLA with service providers, enabling the cloud client to specify service quality demands; an algorithm and a negotiation model are also proposed to negotiate the request with service providers and produce a better agreement between the service provider and the cloud service consumer. The discussed framework can thereby reduce SLA violations as well as negotiation failures, and improve cost-effectiveness. Moreover, the suggested SLA toolkit also benefits clients, who can secure a sensible repayment for diminished QoS or deferred time. This research shows that the assurance level of cloud service providers can be kept up by continuing to deliver services without interruption from the client's perspective.
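    As a rough illustration of the negotiation step the article describes, the following sketch implements a generic alternating-offers loop; the concession rates and function names are assumptions, not the proposed algorithm.

```python
# Sketch: provider and consumer converge on an SLA price via simple
# concession steps. Concession rates and round limits are assumptions.
def negotiate(consumer_offer: float, provider_ask: float,
              concession: float = 0.1, max_rounds: int = 20):
    """Alternating-offers negotiation; returns agreed price or None."""
    for _ in range(max_rounds):
        if consumer_offer >= provider_ask:      # offers have crossed: agree
            return round((consumer_offer + provider_ask) / 2, 2)
        consumer_offer *= (1 + concession)      # consumer concedes upward
        provider_ask *= (1 - concession)        # provider concedes downward
    return None                                 # negotiation failure

print(negotiate(consumer_offer=50.0, provider_ask=80.0))  # -> 62.44
```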

    QoS-Based Optimization of Runtime Management of Sensing Cloud Applications

    This thesis presents approaches and techniques for the quality-aware improvement of the runtime management of IoT applications. IoT applications perceive their environment through the sensors of smart devices in order to analyse it or interact with it. Smart devices are limited in computing power and storage, which is why many IoT applications are connected via an IoT platform to elastic and scalable cloud services. The load on the cloud service is generated by the connected smart devices, which continuously transfer messages; the resource configuration of the cloud service determines its capacity. A service operator running an IoT application faces the challenge of configuring the smart devices and the cloud service so that high data quality is achieved at low operating costs. To support the service operator at design time, we model cost functions for data qualities that are influenced by the interplay of smart device and cloud service configuration. Using these cost functions, a service operator can search for a cost-minimal configuration for specific scenarios. Existing approaches for design-time optimization of applications focus on traditional software architectures and therefore do not offer the concepts needed for cost modelling of IoT applications. Furthermore, we support the service operator with load-control techniques that respond to capacity bottlenecks of the cloud service through a controlled reduction of the message rate. While this can adversely affect the accuracy of the measurements, delays stabilize and the IoT application remains available even in severe overload scenarios. Existing runtime techniques focus on the automatic resource provisioning of cloud services via auto-scalers; these make it possible to react to capacity bottlenecks and load fluctuations, but the QoS achieved can come at high operating costs. The load-control techniques therefore provide a further mechanism that both reacts dynamically to capacity bottlenecks and uses the available capacity of a cloud service efficiently. We also present coupling techniques that combine auto-scaling and load control. Existing approaches for the reconfiguration of smart devices concentrate on qualities such as accuracy or energy efficiency and are therefore unsuitable for reacting to capacity bottlenecks. In summary, this dissertation makes the following contributions: 1. Investigation of performance metrics for scaling decisions: we evaluated infrastructure- and application-level metrics for their suitability in scaling decisions for microservices with varying characteristics. Based on the results, a service operator can make an informed decision about which performance metric is most suitable for scaling a particular microservice. 2. Design of QoS cost functions for IoT applications: we established a QoS cost model that captures the effect of smart device and cloud service configuration on the qualities of an IoT application. Based on these cost models, the configuration of IoT applications can be optimized at design time.
Furthermore, the cost functions can be used to evaluate runtime techniques with respect to their contribution to QoS in different scenarios. 3. Development of load-control techniques for IoT applications: the presented techniques offer a mechanism complementary to auto-scaling for maintaining QoS under capacity bottlenecks. The total load on the cloud service is reduced by adjusting the message rate of the smart devices, giving a service operator the option of countering capacity bottlenecks through a degradation of data quality (a sketch follows below). 4. Coupling of load control with resource provisioning: we present rule-based coupling mechanisms that reactively activate load-control techniques or auto-scalers and thereby couple them. This makes it possible to respond to capacity bottlenecks through a combination of data-quality reductions and resource-cost increases. 5. Design of a framework for developing self-adaptive systems: the self-adaptive framework offers an application model for IoT applications and concepts for the reconfiguration of microservices and smart devices. It can be deployed in different cloud environments and accelerates the prototyping of runtime techniques. We validated the approaches on two case-study systems of differing complexity. The first case-study system consists of a cloud service that processes messages from virtual smart devices via an IoT platform. With this system, we analysed the characteristics of the presented load-control techniques for different application scenarios, comparing them against auto-scaling and against a coupling of the two approaches. The load-control techniques addressed overload scenarios about as efficiently as auto-scalers, with QoS in a comparable range; on average, they achieved roughly 50% lower total QoS costs in the investigated scenarios. It also became apparent that both auto-scaling and load control have clear disadvantages in certain application scenarios, for example when data accuracy or resource costs are the priority; a coupling proved advantageous in all such cases for maintaining QoS. In the second case-study system, we implemented a smart heating solution from Robert Bosch GmbH to validate the approaches on a more complex system. Here, too, a combination of load control and auto-scaling proved most advantageous, contributing to high data quality at low resource costs. The results show that the presented load-control techniques are suitable for improving the QoS of IoT applications, giving a service operator an additional runtime-management tool that is complementary to auto-scaling. We instantiated the presented framework for developing self-adaptive IoT systems to answer the research questions empirically, thereby demonstrating its suitability, and we show an exemplary use of the presented cost functions for different application scenarios, integrating them into an optimisation framework in the course of the validation.
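    As a rough illustration of the load-control idea referenced above, the sketch below throttles the per-device message rate when the cloud service nears a capacity bottleneck, trading data accuracy for availability; thresholds and names are illustrative assumptions.

```python
# Sketch: rule-based load control complementary to auto-scaling. When
# cloud utilization signals a bottleneck, degrade data quality by
# halving the message rate; restore it when there is spare capacity.
def adjust_message_rate(current_rate_hz: float, utilization: float,
                        min_rate_hz: float = 0.1) -> float:
    """Return a new per-device message rate based on cloud utilization."""
    if utilization > 0.9:                  # capacity bottleneck: degrade quality
        return max(min_rate_hz, current_rate_hz * 0.5)
    if utilization < 0.5:                  # spare capacity: restore quality
        return current_rate_hz * 1.2
    return current_rate_hz                 # stable band: keep the rate

print(adjust_message_rate(current_rate_hz=10.0, utilization=0.95))  # -> 5.0
```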

    Utility-based Allocation of Resources to Virtual Machines in Cloud Computing

    In recent years, cloud computing has gained widespread use as a new computing model that offers elastic resources on demand, in a pay-as-you-go fashion. One important goal of a cloud provider is the dynamic allocation of Virtual Machines (VMs) according to workload changes, in order to keep application performance at Service Level Agreement (SLA) levels while reducing resource costs. The problem is to find an adequate trade-off between the two conflicting objectives of application performance and resource costs. In this dissertation, resource allocation solutions for this trade-off are proposed by expressing application performance and resource costs in a utility function. The proposed solutions allocate VM resources at the global data center level and at the local physical machine level by optimizing the utility function. The utility function, given as the difference between performance and costs, represents the profit of the cloud provider and captures the performance-cost trade-off in a flexible and natural way. For global-level resource allocation, a two-tier resource management solution is developed. In the first tier, local node controllers dynamically allocate resource shares to VMs so as to maximize a local node utility function. In the second tier, a global controller makes VM live migration decisions in order to maximize a global utility function. Experimental results show that optimizing the global utility function by changing the number of physical nodes according to workload maintains performance at acceptable levels while reducing costs. To allocate multiple resources at the local physical machine level, a solution based on feedback control theory and utility function optimization is proposed. It dynamically allocates shares of multiple VM resources such as CPU, memory, disk, and network I/O bandwidth. To address the complex non-linearities between VM performance and resource allocations that exist in shared virtualized infrastructures, a solution is proposed that allocates VM resources to optimize a utility function based on application performance and power modelling. An Artificial Neural Network (ANN) is used to build an online model of the relationship between VM resource allocations and application performance, and a second ANN models the relationship between VM resource allocations and physical machine power. To cope with long utility optimization times as the number of VMs grows, a distributed resource manager is proposed. It consists of several ANNs, each responsible for modelling and resource allocation of one VM, which exchange information with the other ANNs to coordinate resource allocations. Experiments, in simulated and realistic environments, show that the distributed ANN resource manager achieves better performance-power trade-offs than a centralized version and a distributed non-coordinated resource manager. To deal with the difficulty of building an accurate online application model and the long model adaptation time, a model-free resource management solution based on fuzzy control is proposed. It optimizes a utility function using a hill-climbing search heuristic implemented as fuzzy rules. To cope with long utility optimization times as the number of VMs grows, a multi-agent fuzzy controller is developed in which each agent, in parallel with the others, optimizes its own local utility function.
The fuzzy control approach eliminates the need to build a model beforehand and provides a robust solution even for noisy measurements. Experimental results show that the multi-agent fuzzy controller achieves better utility values than a centralized fuzzy control version and a state-of-the-art adaptive optimal control approach, especially as the number of VMs grows. Finally, to address some of the problems of reactive VM resource allocation approaches, a proactive resource allocation solution is proposed. This approach decides on VM resource allocations based on resource demand prediction, using a machine learning technique called Support Vector Machine (SVM). To deal with interdependencies between VMs of the same multi-tier application, cross-correlation demand prediction over the resource usage time series of all VMs of the multi-tier application is applied. As experiments show, this improves prediction accuracy and application performance.
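    The dissertation's central construct is a utility function defined as performance minus cost, maximized over candidate allocations. The sketch below illustrates that construct with a toy diminishing-returns performance model; the real work learns such models online (e.g. with ANNs), so the curve and weights here are illustrative assumptions.

```python
# Sketch: utility = performance - cost, maximized over a discrete set
# of candidate CPU shares. The performance curve is a toy assumption.
def utility(cpu_share: float, perf_weight: float = 100.0,
            cost_per_share: float = 40.0) -> float:
    performance = perf_weight * (1 - 1 / (1 + cpu_share))  # diminishing returns
    cost = cost_per_share * cpu_share
    return performance - cost

# Exhaustive search over candidate allocations (a controller or
# hill-climber would replace this in practice):
shares = [i / 10 for i in range(1, 21)]
best = max(shares, key=utility)
print(f"best CPU share: {best}, utility: {utility(best):.2f}")  # 0.6, 13.50
```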

    SLA-based trust model for secure cloud computing

    Cloud computing has changed the strategy used for providing distributed services to many business and government agents. Cloud computing delivers scalable, on-demand services to users in many domains. However, this new technology has also created many challenges for service providers and customers, especially for users who already own complicated legacy systems. This thesis discusses the challenges of, and proposes solutions to, the issues of dynamic pricing, management of service level agreements (SLA), performance measurement methods, and trust management for cloud computing. In cloud computing, a dynamic pricing scheme is very important to allow cloud providers to estimate the price of cloud services. Moreover, a dynamic pricing scheme can be used by cloud providers to optimize the total cost of cloud data centres and correlate the price of a service with its revenue model. Dynamic pricing methods for cloud computing, from the perspective of both cloud providers and cloud customers, are missing from the existing literature. A dynamic pricing scheme for cloud computing must take into account all the requirements of building and operating cloud data centres, and must also consider service level agreements with cloud customers. I propose a dynamic pricing methodology which provides adequate estimation methods for decision makers who want to calculate the benefits and assess the risks of using cloud technology. I analyse the results and evaluate the solutions produced by the proposed scheme, concluding that it can be used to increase the total revenue of cloud service providers and help cloud customers select providers with a good level of service quality. Regarding the concept of the SLA, I provide an SLA definition in the context of cloud computing, with the aim of presenting a clearly structured SLA for cloud users and improving the means of establishing a trustworthy relationship between service provider and customer. To provide a reliable methodology for measuring the performance of cloud platforms, I develop performance metrics to measure and compare the scalability of the virtualization resources of cloud data centres. First, I discuss the need for a reliable method of comparing the performance of the various cloud services currently being offered. Then, I develop different types of metrics and propose a suitable methodology for measuring scalability using them, focusing on virtualization resources such as CPU, storage disk, and network infrastructure. To solve the problem of evaluating the trustworthiness of cloud services, this thesis develops a model for each trust dimension of Infrastructure as a Service (IaaS) using fuzzy-set theory. I use the Takagi-Sugeno fuzzy-inference approach to develop an overall measure of trust value for cloud providers. Because it is not easy to evaluate the cloud metrics for all types of cloud services, this thesis uses IaaS as the main example when collecting data and applying the fuzzy model to evaluate trust in cloud computing. Tests and results are presented to evaluate the effectiveness and robustness of the proposed model.
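    As a rough illustration of the Takagi-Sugeno step, the sketch below fuses two per-dimension measures into an overall trust value; the membership functions and rule consequents are illustrative assumptions, not the thesis's calibrated model.

```python
# Sketch: Takagi-Sugeno style inference for an overall trust value,
# fusing per-dimension measures for an IaaS provider. Memberships and
# consequents below are assumptions.
def low(x: float) -> float:   # membership in "low": full at 0, zero above 0.6
    return max(0.0, min(1.0, (0.6 - x) / 0.6))

def high(x: float) -> float:  # membership in "high": zero below 0.4, full at 1
    return max(0.0, min(1.0, (x - 0.4) / 0.6))

def trust_score(availability: float, performance: float) -> float:
    # Two Sugeno rules with product t-norm and crisp consequents:
    #   IF availability is low  AND performance is low  THEN trust = 0.2
    #   IF availability is high AND performance is high THEN trust = 0.9
    w1, z1 = low(availability) * low(performance), 0.2
    w2, z2 = high(availability) * high(performance), 0.9
    total = w1 + w2
    return (w1 * z1 + w2 * z2) / total if total else 0.5  # weighted average

print(trust_score(availability=0.95, performance=0.85))  # close to 0.9
```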

    Towards a novel biologically-inspired cloud elasticity framework

    With the widespread use of the Internet, the popularity of web applications has significantly increased. Such applications are subject to unpredictable workload conditions that vary from time to time; for example, an e-commerce website may face higher workloads than normal during festivals or promotional schemes. Such applications are critical, and performance-related issues or service disruptions can result in financial losses. Cloud computing, with its attractive feature of dynamic resource provisioning (elasticity), is a perfect match for hosting such applications. The rapid growth in the usage of the cloud computing model, as well as the rise in complexity of web applications, poses new challenges for the effective monitoring and management of the underlying cloud computational resources. This thesis investigates state-of-the-art elasticity methods, including models and techniques for the dynamic management and provisioning of cloud resources from a service provider perspective. An elastic controller is responsible for determining the optimal number of cloud resources required at a particular time to meet the desired performance demands. Researchers and practitioners have proposed many elastic controllers using versatile techniques, ranging from simple if-then-else rules to sophisticated optimisation, control theory, and machine learning based methods. However, despite an extensive range of existing elasticity research, implementing an efficient scaling technique that satisfies actual demand remains a challenge. Many issues have not received much attention from a holistic point of view, including: 1) the lack of adaptability and the static scaling behaviour of completely fixed approaches; 2) the burden of additional computational overhead, the inability to cope with sudden changes in workload behaviour, and the preference for adaptability over reliability at runtime in fully dynamic approaches; and 3) the lack of consideration of uncertainty when designing auto-scaling solutions. This thesis seeks solutions that address these issues together, using an integrated approach, and aims at the provision of qualitative elasticity rules. This thesis proposes a novel biologically-inspired switched feedback control methodology to address the horizontal elasticity problem. The switched methodology utilises multiple controllers simultaneously, with the selection of a suitable controller realised by an intelligent switching mechanism. Each controller represents a different elasticity policy that can be designed using the principles of the fixed-gain feedback controller approach. The switching mechanism is implemented using a fuzzy system that determines a suitable controller/policy at runtime based on the current behaviour of the system. Furthermore, to improve the possibility of bumpless transitions and to avoid oscillatory behaviour, a problem commonly associated with switching-based control methodologies, this thesis proposes an alternative soft switching approach. This soft switching approach incorporates a biologically-inspired, Basal Ganglia based computational model of action selection. In addition, this thesis formulates the problem of designing the membership functions of the switching mechanism as a multi-objective optimisation problem.
The key purpose behind this formulation is to obtain near-optimal (or fine-tuned) parameter settings for the membership functions of the fuzzy control system in the absence of domain experts' knowledge. This problem is addressed using two different techniques: the commonly used Genetic Algorithm and an alternative, less well-known, economical approach called the Taguchi method. Lastly, we identify seven different kinds of real workload patterns, each of which reflects a different set of applications. Six real and one synthetic HTTP traces, one for each pattern, are further identified and utilised to evaluate the performance of the proposed methods against state-of-the-art approaches.
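    As a rough illustration of the switched-control idea, the sketch below chooses between two fixed-gain scaling policies via a (crisp, rather than fuzzy) switching rule; the gains and thresholds are illustrative assumptions, not the thesis's controllers.

```python
# Sketch: switched elasticity control. Two fixed-gain policies exist in
# parallel; a switching rule picks one from current workload behaviour.
def conservative(error: float) -> int:   # small gain: slow, stable scaling
    return round(0.5 * error)

def aggressive(error: float) -> int:     # large gain: fast reaction to bursts
    return round(2.0 * error)

def select_policy(workload_change_rate: float):
    """Switching mechanism: aggressive under bursty load, else conservative."""
    return aggressive if abs(workload_change_rate) > 0.3 else conservative

def scale_decision(desired_util: float, current_util: float,
                   servers: int, change_rate: float) -> int:
    error = (current_util - desired_util) * servers  # proportional error signal
    policy = select_policy(change_rate)
    return max(1, servers + policy(error))

print(scale_decision(desired_util=0.6, current_util=0.9, servers=10,
                     change_rate=0.5))  # bursty load: aggressive policy -> 16
```

    A fuzzy switch, as in the thesis, would blend the policies by membership degree instead of picking one crisply, which is what enables the bumpless, soft transitions discussed above.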

    Autonomic management of virtualized resources in cloud computing

    The last five years have witnessed a rapid growth of cloud computing in business, governmental, and educational IT deployments. The success of cloud services depends critically on the effective management of virtualized resources, and a key requirement of cloud management is the ability to dynamically match resource allocations to actual demands. To this end, we aim to design and implement a cloud resource management mechanism that manages the underlying complexity, automates resource provisioning, and controls client-perceived quality of service (QoS) while still achieving resource efficiency. The design of automatic resource management centers on two questions: when to adjust resource allocations and by how much. In a cloud, applications have different definitions of capacity, and cloud dynamics make it difficult to determine a static resource-to-performance relationship. In this dissertation, we have proposed a generic metric that measures application capacity, designed model-independent and adaptive approaches to manage resources, and built a cloud management system scalable to a cluster of machines. To understand web system capacity, we propose the productivity index (PI), defined as the ratio of yield to cost, to measure system processing capability online. PI is a generic concept that can be applied at different levels to monitor system progress and identify whether more capacity is needed. We applied the concept of PI to the problem of overload prevention in multi-tier websites; the overload predictor built on the PI metric shows more accurate and responsive overload prevention than conventional approaches. To address the lack of an accurate server model, we propose a model-independent, fuzzy-control based approach for CPU allocation. For adaptive and stable control performance, we embed the controller with self-tuning output amplification and flexible rule selection. We then build a QoS provisioning framework that supports multi-objective QoS control and service differentiation; experiments on a virtual cluster with two service classes show the effectiveness of our approach in both performance and power control. To address the complex interplay between resources and process delays in fine-grained multi-resource allocation, we treat capacity management as a decision-making problem and employ reinforcement learning (RL) to optimize the process, where the optimization depends on trial-and-error interactions with the cloud system. To improve initial management performance, we propose a model-based RL algorithm: a neural network based environment model, learned from previous management history, generates simulated resource allocations for the RL agent. Experimental results on heterogeneous applications show that our approach makes efficient use of limited interactions and finds near-optimal resource configurations within 7 steps. Finally, we present a distributed reinforcement learning approach to cluster-wide cloud resource management. We decompose the cluster-wide resource allocation problem into sub-problems concerning individual VM resource configurations; the cluster-wide allocation is optimized when individual VMs meet their SLAs with high resource utilization. For scalability, we develop an efficient reinforcement learning approach with a continuous state space; for adaptability, we use low-level VM runtime statistics to accommodate workload dynamics.
Prototyped in an iBalloon system, the distributed learning approach successfully manages 128 VMs on a 16-node closely correlated cluster.
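    As a rough illustration of the productivity index, the sketch below computes PI = yield / cost online and flags an approaching overload when PI collapses relative to its recent average; the window-based predictor is an assumption, not the dissertation's exact predictor.

```python
# Sketch: productivity index (PI) = yield / cost, sampled online. A
# falling PI under rising load signals an approaching overload.
def productivity_index(completed_in_time: int, resource_cost: float) -> float:
    """PI = yield / cost; higher means cost is converted to work well."""
    return completed_in_time / resource_cost if resource_cost > 0 else 0.0

def overload_warning(pi_samples: list, drop_ratio: float = 0.7) -> bool:
    """Flag overload when the latest PI falls well below the window average."""
    if len(pi_samples) < 2:
        return False
    baseline = sum(pi_samples[:-1]) / (len(pi_samples) - 1)
    return pi_samples[-1] < drop_ratio * baseline

samples = [productivity_index(c, r) for c, r in
           [(950, 10.0), (940, 10.5), (960, 10.2), (600, 12.0)]]
print(overload_warning(samples))  # True: PI collapsed in the last interval
```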

    Business-driven resource allocation and management for data centres in cloud computing markets

    Cloud Computing markets arise as an efficient way to allocate resources for the execution of tasks and services within a set of geographically dispersed providers from different organisations. Client applications and service providers meet in a market and negotiate the sale of services by signing a Service Level Agreement that contains the Quality of Service terms the Cloud provider has to guarantee by properly managing its resources. Current implementations of Cloud markets suffer from a lack of information flow between the negotiating agents, which sell the resources, and the resource managers that allocate the resources to fulfil the agreed Quality of Service. This thesis establishes an intermediate layer between the market agents and the resource managers. In consequence, agents can perform accurate negotiations by considering the status of the resources in their negotiation models, and providers can manage their resources considering both performance and business objectives. This thesis defines a set of policies for the negotiation and enforcement of Service Level Agreements. Such policies address different Business-Level Objectives: maximisation of revenue, classification of clients, maximisation of trust and reputation, and minimisation of risk. This thesis demonstrates the effectiveness of such policies by means of fine-grained simulations. A pricing model may be influenced by many parameters; the weight of each parameter within the final model is not always known, and it can change as the market environment evolves. This thesis models and evaluates how providers can self-adapt to changing environments by means of genetic algorithms: providers that rapidly adapt to changes in the environment achieve higher revenues than providers that do not. Policies are usually conceived for the short term: they model the behaviour of the system by considering the current status and the expected state immediately after their application. This thesis defines and evaluates a trust and reputation system that forces providers to consider the long-term impact of their decisions. The trust and reputation system expels providers and clients with dishonest behaviour, and providers that consider the impact of their reputation in their actions improve the achievement of their Business-Level Objectives. Finally, this thesis studies risk as the effect of uncertainty over the expected outcomes of cloud providers. The particularities of cloud appliances as sets of interconnected resources are studied, as well as how risk propagates through the linked nodes. Incorporating risk models helps providers differentiate Service Level Agreements according to their risk, take preventive actions at the focus of the risk, and price accordingly. Applying risk management raises the fulfilment rate of Service Level Agreements and increases the profit of the provider.
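    As a rough illustration of a Business-Level-Objective pricing policy of the kind described, the sketch below prices an SLA request from resource status, client class, and risk; the weights, classes, and the risk surcharge are illustrative assumptions, not the thesis's calibrated model.

```python
# Sketch: price an incoming SLA from data-centre utilization, client
# class, and the appliance's violation risk. All weights are assumptions.
def quote_price(base_price: float, utilization: float,
                client_class: str, risk: float) -> float:
    """Price an SLA request given data-centre status and BLO weights."""
    demand_factor = 1.0 + utilization        # scarcer capacity, higher price
    class_discount = {"gold": 0.9, "silver": 1.0, "bronze": 1.1}[client_class]
    risk_surcharge = 1.0 + 0.5 * risk        # price in the violation risk
    return base_price * demand_factor * class_discount * risk_surcharge

# A gold client during high load, for a moderately risky appliance:
print(quote_price(base_price=10.0, utilization=0.8, client_class="gold",
                  risk=0.2))  # -> 17.82
```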

    Application of learning algorithms to traffic management in integrated services networks.

    SIGLE. Available from the British Library Document Supply Centre (DSC:DXN027131 / BLDSC). United Kingdom.

    Learning a goal-oriented model for energy efficient adaptive applications in data centers

    This work has been motivated by the growing energy demand of the IT sector. We propose a goal-oriented approach in which the state of the system is assessed using a set of indicators, evaluated against thresholds that serve as the goals of our system. We propose a self-adaptive, context-aware framework in which we learn both the relations existing between the indicators and the effect of the available actions on the indicators' states. The system is also able to respond to changes in the environment, keeping these relations up to date with the current situation. Results have shown that the proposed methodology is able to create a network of relations between indicators and to propose an effective set of repair actions to counteract suboptimal states of the data center. The proposed framework is an important tool for assisting the system administrator in the management of a data center oriented towards Energy Efficiency (EE), showing the connections between the sometimes conflicting goals of the system and suggesting the most likely successful repair action(s) to improve the system state, both in terms of EE and QoS.
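    As a rough illustration of the goal-oriented loop, the sketch below checks indicators against threshold goals and suggests the repair action whose learned effect best improves the violated indicators; indicator names, thresholds, and effect weights are illustrative assumptions.

```python
# Sketch: goal-oriented repair-action selection. Goals are thresholds on
# indicators; learned action effects (assumed values here) rank actions.
goals = {"energy_kw": ("max", 50.0), "response_ms": ("max", 200.0)}

# Learned effect of each action on each indicator (negative = reduces it)
effects = {
    "consolidate_vms": {"energy_kw": -8.0, "response_ms": +20.0},
    "add_server":      {"energy_kw": +5.0, "response_ms": -60.0},
}

def suggest_repair(state: dict):
    violated = [k for k, (kind, t) in goals.items()
                if kind == "max" and state[k] > t]
    if not violated:
        return None
    # Score actions by total improvement on the violated indicators only
    score = lambda a: sum(-effects[a].get(k, 0.0) for k in violated)
    return max(effects, key=score)

print(suggest_repair({"energy_kw": 62.0, "response_ms": 150.0}))
# -> "consolidate_vms": the energy goal is violated and consolidation
#    is the action whose learned effect reduces energy the most
```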