311 research outputs found

    The Need of an Optimal QoS Repository and Assessment Framework in Forming a Trusted Relationship in Cloud: A Systematic Review

    Full text link
    © 2017 IEEE. Due to the cost-effectiveness and scalability of the cloud, the demand for its services grows by the day. Quality of Service (QoS) is one of the crucial factors in forming a viable Service Level Agreement (SLA) between a consumer and a provider, enabling them to establish and maintain a trusted relationship with each other. An SLA identifies and depicts the service requirements of the user and the level of service promised by the provider. The abundance of available service solutions makes it difficult for cloud users to select the right service provider, both in terms of price and the degree of promised service. On the other end, a service provider needs a centralized and reliable QoS repository and assessment framework that helps it offer an optimal amount of marginal resources to the requesting consumer. Although a number of existing studies assist the interacting parties in achieving their goals in some way, many gaps remain to be filled before a trusted relationship can be established and maintained between them. In this paper, we identify the gaps that must be closed to build a trusted relationship between a service provider and a service consumer. The aim of this research is to present an overview of the existing literature and to compare it against criteria such as QoS integration, QoS repositories, QoS filtering, trusted relationships, and SLAs.
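    As an illustration of the kind of centralized QoS repository and assessment the paper calls for, the sketch below ranks providers by a weighted QoS score; all provider names, metrics, and weights are hypothetical and not taken from the review.

```python
# Minimal sketch of a centralized QoS repository: each provider record stores
# measured QoS values, and consumers rank providers by a weighted score.
# All names, values, and weights are hypothetical.

QOS_REPOSITORY = {
    "provider_a": {"availability": 0.999, "latency_ms": 40, "price_per_hour": 0.12},
    "provider_b": {"availability": 0.995, "latency_ms": 25, "price_per_hour": 0.10},
}

# Higher-is-better metrics get positive weights, lower-is-better negative.
WEIGHTS = {"availability": 100.0, "latency_ms": -0.1, "price_per_hour": -10.0}

def rank_providers(repository, weights):
    """Return provider names sorted from best to worst weighted QoS score."""
    def score(qos):
        return sum(weights[metric] * value for metric, value in qos.items())
    return sorted(repository, key=lambda p: score(repository[p]), reverse=True)

print(rank_providers(QOS_REPOSITORY, WEIGHTS))  # ['provider_b', 'provider_a']
```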

    A Distributed Architecture for the Monitoring of Clouds and CDNs: Applications to Amazon AWS

    Get PDF
    Clouds and CDNs are systems that tend to separate the content being requested by users from the physical servers capable of serving it. From the network point of view, monitoring and optimizing performance for the traffic they generate are challenging tasks, given that the same resource can be located in multiple places, which can, in turn, change at any time. The first step in understanding cloud and CDN systems is thus the engineering of a monitoring platform. In this paper, we propose a novel solution that combines passive and active measurements and whose workflow has been tailored to specifically characterize the traffic generated by cloud and CDN infrastructures. We validate our platform by performing a longitudinal characterization of the well-known cloud and CDN infrastructure provider Amazon Web Services (AWS). By observing the traffic generated by more than 50,000 Internet users of an Italian Internet Service Provider, we explore the EC2, S3, and CloudFront AWS services, unveiling their infrastructure, the pervasiveness of the web services they host, and their traffic allocation policies as seen from our vantage points. Most importantly, we observe their evolution over a two-year period. The solution provided in this paper can be of interest to: 1) developers aiming to build measurement tools for cloud infrastructure providers; 2) developers interested in failure and anomaly detection systems; and 3) third-party service-level agreement certifiers who can design systems to independently monitor performance. Finally, we believe that the results about AWS presented in this paper are interesting.
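    One concrete building block for such a measurement tool is attributing observed server IPs to AWS services using Amazon's published ip-ranges.json feed. The URL and JSON fields below are Amazon's public ones; the rest is a minimal sketch (IPv4 only, no caching or error handling), not the paper's actual platform.

```python
import ipaddress
import json
import urllib.request

# Amazon publishes its address space with per-service labels (EC2, S3,
# CLOUDFRONT, ...). A monitoring tool can use this feed to attribute
# observed server IPs to AWS services.
AWS_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

def load_aws_prefixes():
    """Fetch and parse the IPv4 prefixes from Amazon's ip-ranges.json."""
    with urllib.request.urlopen(AWS_RANGES_URL) as resp:
        data = json.load(resp)
    return [(ipaddress.ip_network(p["ip_prefix"]), p["service"], p["region"])
            for p in data["prefixes"]]

def classify_ip(ip, prefixes):
    """Return (service, region) labels of every AWS prefix containing ip."""
    addr = ipaddress.ip_address(ip)
    return [(service, region) for net, service, region in prefixes
            if addr in net]

prefixes = load_aws_prefixes()
print(classify_ip("52.95.110.1", prefixes))  # labels for this address, if any
```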

    Security in Cloud Computing: Evaluation and Integration

    Get PDF
    Over the past decade, the Cloud Computing paradigm has revolutionized the way we envision IT services. It has provided an opportunity to respond to the ever-increasing computing needs of users by introducing the notion of service and data outsourcing. Cloud consumers usually have online, on-demand access to a large and distributed IT infrastructure providing a plethora of services. They can dynamically configure and scale Cloud resources according to the requirements of their applications without becoming part of the Cloud infrastructure, which allows them to reduce their IT investment cost and achieve optimal resource utilization. However, the migration of services to the Cloud increases the vulnerability to existing IT security threats and creates new ones that are intrinsic to the Cloud Computing architecture; hence the need for a thorough assessment of Cloud security risks during the process of service selection and deployment. Recently, the impact of effective management of service security satisfaction has been taken with greater seriousness by Cloud Service Providers (CSPs) and stakeholders. Nevertheless, the successful integration of the security element into Cloud resource management operations requires not only methodical research but also meticulous modeling of Cloud security requirements. To this end, we address throughout this thesis the challenges of security evaluation and integration in independent and interconnected Cloud Computing environments. We are interested in providing Cloud consumers with a set of methods that allow them to optimize the security of their services, and CSPs with a set of strategies that enable them to provide security-aware Cloud-based service hosting. The originality of this thesis lies in two aspects: 1) the innovative description of Cloud applications' security requirements, which paves the way for an effective quantification and evaluation of the security of Cloud infrastructures; and 2) the design of rigorous mathematical models that integrate the security factor into the traditional problems of application deployment, resource provisioning, and workload management within current Cloud Computing infrastructures. The work in this thesis is carried out in three phases.
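    As a rough illustration of what integrating the security factor into deployment can mean, the sketch below picks the cheapest host whose evaluated security score meets an application's requirement. This is not the thesis's actual mathematical model; all scores and names are hypothetical.

```python
# Minimal sketch of security-aware service placement: among the hosts whose
# evaluated security score satisfies the application's requirement, choose
# the cheapest one. Scores, costs, and names are hypothetical.

hosts = [
    {"name": "host1", "security_score": 0.6, "cost": 1.0},
    {"name": "host2", "security_score": 0.9, "cost": 1.8},
    {"name": "host3", "security_score": 0.8, "cost": 1.4},
]

def place(app_security_requirement, hosts):
    """Return the cheapest host meeting the security requirement."""
    feasible = [h for h in hosts if h["security_score"] >= app_security_requirement]
    if not feasible:
        raise ValueError("no host satisfies the security requirement")
    return min(feasible, key=lambda h: h["cost"])

print(place(0.75, hosts)["name"])  # host3: secure enough and cheaper than host2
```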

    Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud

    Full text link
    With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only can individual applications be hosted on virtual cloud infrastructures, but complete business processes as well. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management. Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and P. Hoenisch (2015). Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud. Future Generation Computer Systems, Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00
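    To make the infrastructural challenge concrete, the following minimal sketch shows the kind of scaling decision an elastic Business Process Management System must repeatedly take: sizing the pool of leased VMs to the number of pending process steps. The thresholds are hypothetical and not taken from the surveyed systems.

```python
# Minimal sketch of a threshold-based scaling decision for elastic processes:
# lease enough VMs to cover the pending process steps, release the surplus.
# steps_per_vm is a hypothetical capacity estimate.

def scaling_decision(pending_steps, running_vms, steps_per_vm=10):
    """Return how many VMs to lease (positive) or release (negative)."""
    needed = max(1, -(-pending_steps // steps_per_vm))  # ceiling division
    return needed - running_vms

print(scaling_decision(pending_steps=35, running_vms=2))  # 2: lease two more VMs
print(scaling_decision(pending_steps=5, running_vms=3))   # -2: release two VMs
```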

    Dynamic collaboration and secure access of services in multi-cloud environments

    Get PDF
    Cloud computing services have gained popularity in both public and enterprise domains, and they process a large amount of user data with varying privacy levels. The increasing demand for cloud services, including storage and computation, requires new functional elements and provisioning schemes to meet user requirements. Multi-clouds can optimise user requirements by allowing users to choose the best services from the large number offered by various cloud providers, as multi-clouds are massively scalable, can be dynamically configured, and are delivered on demand with large-scale infrastructure resources. A major concern in multi-cloud adoption is the lack of models for multi-clouds and for their associated security issues, which become more unpredictable in a multi-cloud environment. Moreover, in order to trust the services in a foreign cloud, users depend on the assurances given by the cloud provider, but cloud providers give very limited evidence or accountability to users, which gives providers the ability to hide some behaviour of a service. In this thesis, we propose a model for multi-cloud collaboration that can securely establish dynamic collaboration between heterogeneous clouds using the cloud on-demand model. First, threat modelling for cloud services is performed, leading to the identification of various threats to service interfaces along with the possible attackers and the mechanisms to exploit those threats. Based on these threats, a cloud provider can apply suitable mechanisms to protect services and user data. In the next phase, we present a lightweight and novel authentication mechanism, formally verified, which provides single sign-on (SSO) to users for authentication at runtime between multi-clouds before granting them service access. Next, we provide a service scheduling mechanism to select the services from multiple cloud providers that most closely match user Quality of Service (QoS) requirements. The scheduling mechanism achieves high accuracy by applying a distance correlation weighting mechanism across a large number of service QoS parameters. In the next stage, novel Service Level Agreement (SLA) management mechanisms are proposed to ensure secure service execution in the foreign cloud. The SLA mechanisms ensure that user QoS parameters, covering both functional (CPU, RAM, memory, etc.) and non-functional requirements (bandwidth, latency, availability, reliability, etc.) for a particular service, are negotiated before secure collaboration between multi-clouds is set up. The multi-cloud handling user requests is responsible for enforcing mechanisms that fulfil the QoS requirements agreed in the SLA, while the monitoring phase involves monitoring service execution in the foreign cloud, checking its compliance with the SLA, and reporting back to the user. Finally, we present use cases of applying the proposed model in scenarios such as the Internet of Things (IoT) and E-Healthcare in multi-clouds. Moreover, the designed protocols are empirically implemented on two different clouds, OpenStack and Amazon AWS. Experiments indicate that the proposed model is scalable, that the authentication protocols incur only a limited overhead compared to standard authentication protocols, that service scheduling achieves high efficiency, and that any SLA violations by a cloud provider can be recorded and reported back to the user. My research for the first three years of my PhD was funded by the College of Engineering and Technology.
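    The distance correlation weighting idea can be illustrated with a short sketch. Below is the standard sample distance correlation statistic (Székely et al.), used here in a hypothetical way: each QoS parameter is weighted by how strongly it tracks observed user satisfaction, and candidate services are then ranked by weighted distance to the requested QoS. All data and names are illustrative, and values are assumed normalised to comparable scales before ranking.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation (Szekely et al.) between two 1-D samples."""
    x, y = np.asarray(x, float)[:, None], np.asarray(y, float)[:, None]
    a, b = np.abs(x - x.T), np.abs(y - y.T)            # pairwise distance matrices
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()  # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    v = np.sqrt((A * A).mean() * (B * B).mean())
    return 0.0 if v == 0 else float(np.sqrt(max((A * B).mean(), 0.0) / v))

# Hypothetical QoS history with matching user-satisfaction observations.
history = {
    "latency":      [0.20, 0.35, 0.50, 0.80, 1.00],   # normalised
    "availability": [0.999, 0.998, 0.990, 0.970, 0.950],
}
satisfaction = [0.95, 0.90, 0.80, 0.60, 0.40]

# Weight each QoS parameter by how strongly it tracks satisfaction.
weights = {p: distance_correlation(v, satisfaction) for p, v in history.items()}

def rank_services(services, request, weights):
    """Rank candidate services by weighted squared distance to requested QoS."""
    def dist(qos):
        return sum(w * (qos[p] - request[p]) ** 2 for p, w in weights.items())
    return sorted(services, key=lambda s: dist(s["qos"]))

candidates = [
    {"name": "svc1", "qos": {"latency": 0.30, "availability": 0.999}},
    {"name": "svc2", "qos": {"latency": 0.10, "availability": 0.990}},
]
request = {"latency": 0.0, "availability": 1.0}
print([s["name"] for s in rank_services(candidates, request, weights)])
```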

    End-to-End Trust Fulfillment of Big Data Workflow Provisioning over Competing Clouds

    Get PDF
    Cloud Computing has emerged as a promising and powerful paradigm for delivering data-intensive, high performance computation, applications and services over the Internet. Cloud Computing has enabled the implementation and success of Big Data, a relatively recent phenomenon consisting of the generation and analysis of abundant data from various sources. Accordingly, to satisfy the growing demands of Big Data storage, processing, and analytics, a large market has emerged for Cloud Service Providers, offering a myriad of resources, platforms, and infrastructures. The proliferation of these services often makes it difficult for consumers to select the most suitable and trustworthy provider to fulfill the requirements of building complex workflows and applications in a relatively short time. In this thesis, we first propose a quality specification model to support dual pre- and post-cloud workflow provisioning, consisting of service provider selection and workflow quality enforcement and adaptation. This model captures key properties of the quality of work at different stages of the Big Data value chain, enabling standardized quality specification, monitoring, and adaptation. Subsequently, we propose a two-dimensional trust-enabled framework to facilitate end-to-end Quality of Service (QoS) enforcement that: 1) automates cloud service provider selection for Big Data workflow processing, and 2) maintains the required QoS levels of Big Data workflows during runtime through dynamic orchestration using multi-model architecture-driven workflow monitoring, prediction, and adaptation. The trust-based automatic service provider selection scheme we propose in this thesis is comprehensive and adaptive, as it relies on a dynamic trust model to evaluate the QoS of a cloud provider prior to taking any selection decisions. It is a multi-dimensional trust model for Big Data workflows over competing clouds that assesses the trustworthiness of cloud providers based on three trust levels: (1) the presence of up-to-date, verified cloud resource capabilities, (2) reputational evidence measured by neighboring users, and (3) a recorded personal history of experiences with the cloud provider. The trust-based workflow orchestration scheme we propose aims to avoid performance degradation or cloud service interruption. Our workflow orchestration approach is not only based on automatic adaptation and reconfiguration supported by monitoring, but also on predicting cloud resource shortages, thus preventing performance degradation. We formalize the cloud resource orchestration process using a state machine that efficiently captures different dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our monitoring model in terms of reachability, liveness, and safety properties. We evaluate both our automated service provider selection scheme and our cloud workflow orchestration, monitoring and adaptation schemes on a workflow-enabled Big Data application. A set of scenarios was carefully chosen to evaluate the performance of the service provider selection, workflow monitoring and adaptation schemes we have implemented. The results demonstrate that our service selection outperforms other selection strategies and ensures trustworthy service provider selection. The results of evaluating automated workflow orchestration further show that our model is self-adapting and self-configuring, and that it reacts efficiently to changes and adapts accordingly while enforcing the QoS of workflows.
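    A minimal sketch of the three-level trust assessment described above: verified capabilities, reputational evidence from neighboring users, and personal history are combined into a single score used to pick a provider. The aggregation weights are hypothetical, not the thesis's calibrated model.

```python
# Sketch of a three-level trust aggregation for provider selection.
# Each input is a score in [0, 1]; the weights are hypothetical.

def trust_score(capabilities, reputation, history, w=(0.4, 0.3, 0.3)):
    """Combine verified capabilities, neighbour reputation, and personal
    history into one trust value."""
    return w[0] * capabilities + w[1] * reputation + w[2] * history

providers = {
    "cloud_a": trust_score(capabilities=0.9, reputation=0.7, history=0.8),
    "cloud_b": trust_score(capabilities=0.8, reputation=0.9, history=0.6),
}

best = max(providers, key=providers.get)
print(best, round(providers[best], 3))  # the most trustworthy provider
```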

    Classifying malware attacks in IaaS cloud environments

    Get PDF
    In the last few years, research has sought to categorize and classify the security concerns accompanying the growing adoption of Infrastructure as a Service (IaaS) clouds. Studies have been motivated by the risks, threats and vulnerabilities imposed by the components within the environment and have provided general classifications of related attacks, as well as the respective detection and mitigation mechanisms. Virtual Machine Introspection (VMI) has proven to be an effective tool for malware detection and analysis in virtualized environments. In this paper, we classify attacks in IaaS clouds that can be investigated using VMI-based mechanisms. This implies a special focus on attacks that directly involve Virtual Machines (VMs) deployed in an IaaS cloud. Our classification methodology takes into consideration the source, target, and direction of the attacks. As each actor in a cloud environment can be both a source and a target of attacks, the classification provides any cloud actor with the necessary knowledge of the different attacks by which it can threaten or be threatened, and consequently deploy adapted VMI-based monitoring architectures. To highlight the relevance of the attacks, we provide a statistical analysis of the reported vulnerabilities exploited by the classified attacks and their financial impact on actual business processes.
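    The classification dimensions can be captured in a small data structure: each attack is labelled with its source and target actor (the direction being source to target), so any actor can query the attacks by which it can threaten or be threatened. The entries below are illustrative examples, not the paper's full taxonomy.

```python
# Sketch of an attack classification keyed by source and target actor.
# The example entries are illustrative, not the paper's complete list.

from collections import namedtuple

Attack = namedtuple("Attack", ["name", "source", "target"])

ATTACKS = [
    Attack("VM escape", source="vm", target="hypervisor"),
    Attack("cross-VM side channel", source="vm", target="vm"),
    Attack("malicious image injection", source="external", target="vm"),
]

def threats_against(actor):
    """Attacks in which the given cloud actor is the target."""
    return [a for a in ATTACKS if a.target == actor]

def threats_from(actor):
    """Attacks in which the given cloud actor is the source."""
    return [a for a in ATTACKS if a.source == actor]

print([a.name for a in threats_against("vm")])
```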

    Cross-Layer Cloud Performance Monitoring, Analysis and Recovery

    Get PDF
    The basic idea of Cloud computing is to offer software and hardware resources as services. These services are provided at different layers: Software (Software as a Service: SaaS), Platform (Platform as a Service: PaaS) and Infrastructure (Infrastructure as a Service: IaaS). In such a complex environment, performance issues are quite likely and rather the norm than the exception. Consequently, performance-related problems may frequently occur at all layers. Thus, it is necessary to monitor all Cloud layers and analyze their performance parameters to detect and rectify related problems. This thesis presents a novel cross-layer reactive performance monitoring approach for Cloud computing environments, based on the methodology of Complex Event Processing (CEP). The proposed approach, called CEP4Cloud, analyzes monitored events to detect performance-related problems and performs actions to fix them. The proposal is based on the use of (1) a novel multi-layer monitoring approach, (2) a new cross-layer analysis approach and (3) a novel recovery approach. The proposed monitoring approach operates at all Cloud layers while collecting related parameters. It makes use of existing monitoring tools and a new monitoring approach for Cloud services at the SaaS layer. The proposed SaaS monitoring approach, called AOP4CSM, is based on aspect-oriented programming and monitors quality-of-service parameters of the SaaS layer in a non-invasive manner: it modifies neither the server implementation nor the client implementation. The defined cross-layer analysis approach is called D-CEP4CMA. Instead of having to manually specify continuous queries on monitored event streams, CEP queries are derived from analyzing the correlations between monitored metrics across multiple Cloud layers. The results of the correlation analysis allow us to reduce the number of monitored parameters and enable us to perform a root cause analysis to identify the causes of performance-related problems. The derived analysis rules are implemented as queries in a CEP engine. D-CEP4CMA is designed to dynamically switch between different centralized and distributed CEP architectures depending on the load and memory of the CEP machine and the network traffic conditions in the observed Cloud environment. The proposed recovery approach is based on a novel action manager framework that applies recovery actions at all Cloud layers: it assigns a set of repair actions to each performance-related problem and checks the success of the applied action. The results of several experiments illustrate the merits of the reactive performance monitoring approach and its main components (i.e., monitoring, analysis and recovery). First, experimental results show the efficiency of AOP4CSM (very low overhead). Second, the obtained results demonstrate the benefits of the analysis approach in terms of precision and recall compared to threshold-based methods. They also show the accuracy of the analysis approach in identifying the causes of performance-related problems. Furthermore, experiments illustrate the efficiency of D-CEP4CMA and its performance in terms of precision and recall compared to purely centralized and purely distributed CEP architectures. Moreover, experimental results indicate that the time needed to fix a performance-related problem is reasonably short, and that the CPU overhead of using CEP4Cloud is negligible. Finally, experimental results demonstrate the merits of CEP4Cloud in terms of speeding up repairs and reducing the number of triggered alarms compared to baseline methods.
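    A toy CEP-style rule illustrates the cross-layer analysis idea: a SaaS response-time spike observed shortly after an IaaS CPU spike is reported as a correlated problem with a candidate root cause. This is not one of D-CEP4CMA's derived queries; the thresholds, window size, and event fields are hypothetical.

```python
# Minimal sketch of a cross-layer correlation rule over a monitored event
# stream: flag SaaS response-time spikes that follow IaaS CPU spikes within
# a time window. Thresholds and event fields are hypothetical.

WINDOW_S = 10.0  # correlation window in seconds

def correlate(events, cpu_threshold=0.9, rt_threshold_ms=500):
    """events: iterable of dicts with 'time', 'layer', 'metric', 'value'.
    Yields (cause, time) tuples for correlated cross-layer problems."""
    recent_cpu = []  # timestamps of recent IaaS CPU spikes
    for e in sorted(events, key=lambda e: e["time"]):
        if e["layer"] == "iaas" and e["metric"] == "cpu" and e["value"] > cpu_threshold:
            recent_cpu.append(e["time"])
        elif (e["layer"] == "saas" and e["metric"] == "response_time_ms"
              and e["value"] > rt_threshold_ms):
            if any(e["time"] - t <= WINDOW_S for t in recent_cpu):
                yield ("cpu_contention", e["time"])  # candidate root cause

stream = [
    {"time": 1.0, "layer": "iaas", "metric": "cpu", "value": 0.95},
    {"time": 4.0, "layer": "saas", "metric": "response_time_ms", "value": 800},
]
print(list(correlate(stream)))  # [('cpu_contention', 4.0)]
```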