
    Planning and Optimization During the Life-Cycle of Service Level Agreements for Cloud Computing

    A Service Level Agreement (SLA) is an electronic contract between the customer and the provider of a service. The parties involved clarify their expectations and obligations with respect to the service and its quality. SLAs are already being used to describe cloud computing services. The service provider ensures that the service quality is met and conforms to the customer's requirements until the end of the agreed term. Executing SLAs requires considerable effort to achieve autonomy, cost-effectiveness, and efficiency. The current state of the art in SLA management faces challenges such as SLA representation for cloud services, business-driven SLA optimization, service outsourcing, and resource management. These areas constitute central and timely research topics. Managing SLAs across the different phases of their lifetime requires a purpose-built methodology, which simplifies the realization of cloud SLA management. I present a broad model for SLA lifecycle management that addresses the aforementioned challenges. This approach enables automatic service modeling as well as negotiation, provisioning, and monitoring of SLAs. For the creation phase, I outline how the modeling structures can be improved and simplified. A further goal of my approach is to minimize implementation and outsourcing costs in favor of competitiveness. In the SLA monitoring phase, I develop strategies for the selection and assignment of virtual cloud resources during migration phases. Finally, I use monitoring over a larger collection of SLAs to check whether the agreed fault tolerances are being met. This work contributes to a design for the GWDG and its scientific communities. The research leading to this dissertation was carried out as part of the SLA@SOI EU/FP7 integrated project (contract No. 216556)
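    To make the lifecycle phases concrete, here is a minimal sketch of how an SLA with negotiated quality terms and a monitoring check might be represented; the class names, fields, and tolerance semantics are illustrative assumptions, not the thesis's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class QoSTerm:
    """One negotiated quality-of-service term, e.g. availability >= 99.9%."""
    metric: str        # name of the monitored metric
    threshold: float   # agreed bound
    tolerance: float   # agreed fault tolerance before the term counts as violated

@dataclass
class SLA:
    """Electronic contract between a customer and a service provider."""
    customer: str
    provider: str
    terms: list[QoSTerm] = field(default_factory=list)

    def violations(self, measurements: dict[str, float]) -> list[str]:
        """Monitoring step: return the terms whose measured value falls
        below the agreed threshold by more than the tolerance."""
        return [t.metric for t in self.terms
                if measurements.get(t.metric, 0.0) < t.threshold - t.tolerance]

# Usage: one availability term, checked against a monitored value.
sla = SLA("customer-a", "provider-b", [QoSTerm("availability", 99.9, 0.05)])
print(sla.violations({"availability": 99.7}))  # ['availability']
```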

    Security in Cloud Computing: Evaluation and Integration

    Over the past decade, the Cloud Computing paradigm has revolutionized the way we envision IT services. It has provided an opportunity to respond to the ever-increasing computing needs of users by introducing the notion of service and data outsourcing. Cloud consumers usually have online, on-demand access to a large and distributed IT infrastructure providing a plethora of services. They can dynamically configure and scale Cloud resources according to the requirements of their applications without becoming part of the Cloud infrastructure, which allows them to reduce their IT investment cost and achieve optimal resource utilization. However, the migration of services to the Cloud increases the vulnerability to existing IT security threats and creates new ones that are intrinsic to the Cloud Computing architecture; hence the need for a thorough assessment of Cloud security risks during the process of service selection and deployment. Recently, the impact of effective management of service security satisfaction has been taken with greater seriousness by Cloud Service Providers (CSPs) and stakeholders. Nevertheless, the successful integration of the security element into Cloud resource management operations requires not only methodical research but also meticulous modeling of the Cloud security requirements. To this end, we address throughout this thesis the challenges of security evaluation and integration in independent and interconnected Cloud Computing environments. We are interested in providing Cloud consumers with a set of methods that allow them to optimize the security of their services, and CSPs with a set of strategies that enable them to provide security-aware Cloud-based service hosting. The originality of this thesis lies in two aspects: 1) the innovative description of Cloud applications' security requirements, which paved the way for an effective quantification and evaluation of the security of Cloud infrastructures; and 2) the design of rigorous mathematical models that integrate the security factor into the traditional problems of application deployment, resource provisioning, and workload management within current Cloud Computing infrastructures. The work in this thesis is carried out in three phases.
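    As a hedged illustration of what "integrating the security factor into application deployment" could look like, the sketch below scores candidate hosts by combining cost with a penalty for unmet security requirements; the field names, scoring form, and penalty weight are assumptions for illustration, not the thesis's actual models.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cost: float           # price per unit of capacity
    security_level: int   # assessed security level (higher is stronger)

@dataclass
class App:
    name: str
    required_security: int  # minimum acceptable security level

def place(app: App, hosts: list[Host], penalty: float = 10.0) -> Host:
    """Pick the host minimizing cost plus a penalty for each missing
    security level; hosts meeting the requirement incur no penalty."""
    def score(h: Host) -> float:
        gap = max(0, app.required_security - h.security_level)
        return h.cost + penalty * gap
    return min(hosts, key=score)

# Usage: the cheaper host loses because it misses the security requirement.
hosts = [Host("h1", 1.0, 1), Host("h2", 3.0, 3)]
print(place(App("db", required_security=3), hosts).name)  # h2
```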

    Business-driven resource allocation and management for data centres in cloud computing markets

    Cloud Computing markets arise as an efficient way to allocate resources for the execution of tasks and services within a set of geographically dispersed providers from different organisations. Client applications and service providers meet in a market and negotiate the sale of services by signing a Service Level Agreement that contains the Quality of Service terms that the Cloud provider has to guarantee by properly managing its resources. Current implementations of Cloud markets suffer from a lack of information flow between the negotiating agents, which sell the resources, and the resource managers that allocate the resources to fulfil the agreed Quality of Service. This thesis establishes an intermediate layer between the market agents and the resource managers. In consequence, agents can perform accurate negotiations by considering the status of the resources in their negotiation models, and providers can manage their resources considering both performance and business objectives. This thesis defines a set of policies for the negotiation and enforcement of Service Level Agreements. Such policies deal with different Business-Level Objectives: maximisation of revenue, classification of clients, maximisation of trust and reputation, and minimisation of risk. This thesis demonstrates the effectiveness of such policies by means of fine-grained simulations. A pricing model may be influenced by many parameters, whose weights within the final model are not always known and can change as the market environment evolves. This thesis models and evaluates how providers can self-adapt to changing environments by means of genetic algorithms; providers that rapidly adapt to changes in the environment achieve higher revenues than providers that do not. Policies are usually conceived for the short term: they model the behaviour of the system by considering the current status and the expected status immediately after their application. This thesis defines and evaluates a trust and reputation system that pushes providers to consider the long-term impact of their decisions. The trust and reputation system expels providers and clients with dishonest behaviour, and providers that consider the impact of their reputation in their actions improve on the achievement of their Business-Level Objectives. Finally, this thesis studies risk as the effect of uncertainty on the expected outcomes of cloud providers. The particularities of cloud appliances as a set of interconnected resources are studied, as well as how risk propagates through the linked nodes. Incorporating risk models helps providers differentiate Service Level Agreements according to their risk, take preventive actions at the focus of the risk, and price accordingly. Applying risk management raises the fulfilment rate of the Service Level Agreements and increases the profit of the provider.
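    As a rough illustration of the genetic-algorithm idea described above, the sketch below evolves the weight vector of a linear pricing model against a revenue signal; the pricing form, fitness function, and GA parameters are assumptions for illustration, not the thesis's actual setup.

```python
import random

def revenue(weights, market):
    """Illustrative fitness: revenue earned by pricing with these weights.
    'market' is a list of (feature vector, price cap) pairs per client."""
    total = 0.0
    for features, cap in market:
        price = sum(w * f for w, f in zip(weights, features))
        if 0 < price <= cap:          # client accepts offers under its cap
            total += price
    return total

def evolve(market, n_weights=3, pop=20, gens=50, mut=0.1):
    """Simple GA: truncation selection, uniform crossover, Gaussian
    mutation. Returns the best weight vector found."""
    population = [[random.random() for _ in range(n_weights)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: revenue(w, market), reverse=True)
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            child = [w + random.gauss(0, mut) for w in child]    # mutation
            children.append(child)
        population = parents + children
    return max(population, key=lambda w: revenue(w, market))

# Usage: two clients whose acceptable price caps the provider must learn.
market = [((1.0, 0.5, 0.2), 1.2), ((0.8, 0.9, 0.1), 1.0)]
best = evolve(market)
print(best, revenue(best, market))
```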

    CloudOps: Towards the Operationalization of the Cloud Continuum: Concepts, Challenges and a Reference Framework

    The current trend of developing highly distributed, context-aware, compute-intensive, and data-sensitive applications is changing the boundaries of cloud computing. Encouraged by the growing IoT paradigm and the availability of flexible edge devices, an ecosystem combining resources that range from high-density compute and storage to very lightweight embedded computers running on batteries or solar power is available to DevOps teams in what is known as the Cloud Continuum. In this dynamic context, manageability is key, as are controlled operations and resource monitoring for handling anomalies. Unfortunately, the operation and management of such heterogeneous computing environments (including edge, cloud and network services) is complex, and operators face challenges such as the continuous optimization and autonomous (re-)deployment of context-aware stateless and stateful applications, where they must ensure service continuity while anticipating potential failures in the underlying infrastructure. In this paper, we propose a novel CloudOps workflow (extending the traditional DevOps pipeline), proposing techniques and methods for application operators to fully embrace the possibilities of the Cloud Continuum. Our approach will support DevOps teams in the operationalization of the Cloud Continuum. Secondly, we provide an extensive explanation of the scope, possibilities and future of CloudOps. This research was funded by the European project PIACERE (Horizon 2020 Research and Innovation Programme, under grant agreement No. 101000162)

    Planning and Management of Cloud Computing Networks

    The evolution of the Internet has a great impact on a large part of the world's population. People use it to communicate, query information, receive news, work, and for entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of significant power consumption. If the power consumption of telecommunication networks and data centers were that of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed on servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce application deployment time and improve interoperability, because a new user only needs a web browser and does not need to install software on a local computer or run a specific operating system. Second, applications and information are available continuously, from anywhere and from any device with Internet access. Moreover, servers and computing resources can be assigned to applications dynamically, according to the number of users and the workload; this is what is called application elasticity
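    To make the optimisation goal concrete, a hedged sketch of a weighted multi-objective cost for a candidate resource allocation follows; the weights, units, and attribute names are illustrative assumptions, not the thesis's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class Allocation:
    capex: float          # capital expenditure ($)
    opex: float           # operational expenditure ($/period)
    response_time: float  # mean response time (s)
    power: float          # power consumption (kWh/period)
    co2: float            # emissions (kg CO2/period)

def cost(a: Allocation, w=(1.0, 1.0, 50.0, 0.2, 0.1)) -> float:
    """Scalarized objective: a weighted sum over the five criteria.
    A planner would pick the allocation minimizing this score."""
    return (w[0] * a.capex + w[1] * a.opex + w[2] * a.response_time
            + w[3] * a.power + w[4] * a.co2)

# Usage: compare a cheap-but-dirty site against a greener, faster one.
candidates = [
    Allocation(100, 20, 0.20, 50, 30.0),
    Allocation(120, 25, 0.05, 35, 0.5),
]
print(min(candidates, key=cost))
```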

    SLA Violation Detection Model and SLA Assured Service Brokering (SLaB) in Multi-Cloud Architecture

    Cloud brokering facilitates Cloud Service Users (CSUs) in finding cloud services according to their requirements. In current practice, CSUs or Cloud Service Brokers (CSBs) select cloud services according to the SLA committed to by Cloud Service Providers (CSPs) on their websites. In our observation, most CSPs do not fulfill the service commitments stated in their SLAs. Verifying cloud service performance against the SLA commitments of CSPs gives CSBs additional trust when recommending services to CSUs. In this thesis work, we propose an SLA-assured service-brokering framework, which considers both the committed and the delivered SLA of CSPs when recommending cloud services to users. For the evaluation of the performance of CSPs, two evaluation techniques, Heat Map and IFL, are proposed, which include both directly measurable and non-measurable parameters in the performance evaluation of CSPs. These two techniques are implemented using real data measured from CSPs. The results show that the Heat Map technique is more transparent and consistent in CSP performance evaluation than the IFL technique. In this work, the regulatory compliance of the CSPs is also analyzed and visualized in the performance heat map table to provide the legal status of CSPs. Moreover, missing points in their terms of service and SLA documents are analyzed and recommended for addition to the contract documents. Under the revised European GDPR, a Data Protection Impact Assessment (DPIA) is going to be mandatory for all organizations/tools. The decision recommendation tool developed using the above-mentioned evaluation techniques may cause potential harm to individuals when assessing data from multiple CSPs. So, a DPIA is carried out to assess the potential harm/risks to individuals due to our tool and the necessary precautions to be taken in the tool to minimize possible data privacy risks. It also analyzes the service patterns and future performance behavior of CSPs to help CSUs in decision making to select an appropriate CSP
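    A minimal sketch of the committed-versus-delivered comparison that a performance heat map could be built on follows; the metrics, scoring bands, and function names are assumptions for illustration, not the thesis's actual techniques.

```python
def delivery_ratio(committed: dict, measured: dict) -> dict:
    """For each SLA metric, the fraction of the committed value actually
    delivered (capped at 1.0), e.g. committed 99.9% uptime, measured 99.5%."""
    return {m: min(measured.get(m, 0.0) / committed[m], 1.0) for m in committed}

def heat_cell(ratio: float) -> str:
    """Map a delivery ratio to an illustrative heat-map colour band."""
    if ratio >= 0.99:
        return "green"
    if ratio >= 0.95:
        return "yellow"
    return "red"

# Usage: one CSP's committed SLA terms against monitored measurements.
committed = {"availability": 99.9, "throughput_mbps": 100.0}
measured = {"availability": 99.5, "throughput_mbps": 80.0}
ratios = delivery_ratio(committed, measured)
print({m: heat_cell(r) for m, r in ratios.items()})
# {'availability': 'green', 'throughput_mbps': 'red'}
```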

    Enhanced Living Environments

    This open access book was prepared as the Final Publication of the COST Action IC1303 “Algorithms, Architectures and Platforms for Enhanced Living Environments (AAPELE)”. The concept of Enhanced Living Environments (ELE) refers to the area of Ambient Assisted Living (AAL) that is most closely related to Information and Communication Technologies (ICT). Effective ELE solutions require appropriate ICT algorithms, architectures, platforms, and systems, keeping in view the advance of science and technology in this area and the development of new and innovative solutions that can improve the quality of life for people in their homes and reduce the financial burden on the budgets of healthcare providers. The aim of this book is to become a state-of-the-art reference, discussing progress made, as well as prompting future directions on theories, practices, standards, and strategies related to the ELE area. The book contains 12 chapters and can serve as a valuable reference for undergraduate students, post-graduate students, educators, faculty members, researchers, engineers, medical doctors, healthcare organizations, insurance companies, and research strategists working in this area

    A Game-Theoretic Approach to Strategic Resource Allocation Mechanisms in Edge and Fog Computing

    With the rapid growth of the Internet of Things (IoT), cloud-centric application management raises questions related to quality of service for real-time applications. Fog and edge computing (FEC) complement the cloud by filling the gap between cloud and IoT. Managing multiple resources across distributed and administratively separate FEC nodes is a key challenge in ensuring the quality of the end-user's experience. To improve resource utilisation and system performance, researchers have proposed many fair allocation mechanisms for resource management. Dominant Resource Fairness (DRF), a resource allocation policy for multiple resource types, meets most of the required fair allocation characteristics. However, DRF is suited to centralised resource allocation and does not consider the effects (or feedbacks) of large-scale distributed environments like multi-controller software-defined networking (SDN). Nash bargaining from micro-economic theory and competitive equilibrium from equal incomes (CEEI) are well suited to solving dynamic optimisation problems by 'proportionately' sharing resources among distributed participants. Although CEEI's decentralised policy guarantees load balancing for performance isolation, it is not fault-proof for computation offloading. This thesis aims to propose a hybrid and fair allocation mechanism for the rejuvenation of decentralised SDN controller deployment. We apply multi-agent reinforcement learning (MARL) with robustness against adversarial controllers to enable efficient priority scheduling for FEC. Motivated by software cybernetics and homeostasis, weighted DRF is generalised by applying the principles of feedback (positive and/or negative network effects) in reverse game theory (GT) to design hybrid scheduling schemes for joint multi-resource and multi-task offloading/forwarding in FEC environments. In the first study, monotonic scheduling for joint offloading at the federated edge is addressed by proposing a truthful (algorithmic) mechanism to neutralise harmful negative and positive distributive bargaining externalities, respectively. The IP-DRF scheme is a MARL approach applying a partition form game (PFG) to guarantee second-best Pareto optimality (SBPO) in the allocation of multiple resources from a deterministic policy in both population and resource non-monotonicity settings. In the second study, we propose the DFog-DRF scheme to address truthful fog scheduling with bottleneck fairness in fault-probable wireless hierarchical networks by applying constrained coalition formation (CCF) games to implement MARL. The multi-objective optimisation problem of fog throughput maximisation is solved via a constraint dimensionality reduction methodology using fairness constraints for the efficient placement of gateways and low-level controllers. For evaluation, we develop an agent-based framework to implement fair allocation policies in distributed data centre environments. In empirical results, the deterministic policy of the IP-DRF scheme provides SBPO and reduces the average execution and turnaround times by 19% and 11.52% compared to the Nash bargaining or CEEI deterministic policy for 57,445 cloudlets in population non-monotonic settings. The processing cost of tasks shows significant improvement (6.89% and 9.03% for fixed and variable pricing) in the resource non-monotonic setting, using 38,000 cloudlets. The DFog-DRF scheme, when benchmarked against the asset fair (MIP) policy, shows superior performance (less than 1% in time complexity) for up to 30 FEC nodes. Furthermore, empirical results using 210 mobiles and 420 applications prove the efficacy of our hybrid scheduling scheme for hierarchical clustering considering latency and network usage for throughput maximisation. Abubakar Tafawa Balewa University, Bauchi (Tetfund, Nigeria)
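    Since the abstract builds on Dominant Resource Fairness, here is a minimal sketch of classic centralised DRF progressive filling (per Ghodsi et al.); the capacities and demand vectors are the paper's illustration data, and the function signature is an assumption.

```python
def drf(capacity, demands, max_rounds=1000):
    """Classic Dominant Resource Fairness: repeatedly grant one task's worth
    of resources to the user with the smallest dominant share.
    capacity: totals per resource type; demands: per-user per-task vectors."""
    n, m = len(demands), len(capacity)
    used = [0.0] * m
    alloc = [[0.0] * m for _ in range(n)]
    tasks = [0] * n
    for _ in range(max_rounds):
        # dominant share of a user = max over resources of its consumed share
        shares = [max(a[r] / capacity[r] for r in range(m)) for a in alloc]
        # pick the lowest-dominant-share user whose next task still fits
        for u in sorted(range(n), key=lambda u: shares[u]):
            if all(used[r] + demands[u][r] <= capacity[r] for r in range(m)):
                for r in range(m):
                    used[r] += demands[u][r]
                    alloc[u][r] += demands[u][r]
                tasks[u] += 1
                break
        else:
            break  # no user's next task fits: the system is saturated
    return tasks, alloc

# Example from the DRF paper: 9 CPUs and 18 GB; user A's tasks need
# <1 CPU, 4 GB>, user B's need <3 CPUs, 1 GB>. DRF yields 3 and 2 tasks.
print(drf([9.0, 18.0], [[1.0, 4.0], [3.0, 1.0]])[0])  # [3, 2]
```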