
    On autonomic platform-as-a-service: characterisation and conceptual model

    In this position paper, we envision a Platform-as-a-Service conceptual and architectural solution for large-scale and data-intensive applications. Our architectural approach is based on autonomic principles; its ultimate goal is therefore to reduce human intervention, cost, and perceived complexity by enabling the autonomic platform to manage such applications itself in accordance with high-level policies. Such policies allow the platform to (i) interpret the application specifications; (ii) map the specifications onto the target computing infrastructure, so that the applications are executed and their Quality of Service (QoS), as specified in their SLA, is enforced; and, most importantly, (iii) automatically adapt such previously established mappings when observed behaviour deviates from what is expected. Such adaptations may involve modifications in the arrangement of the computational infrastructure, i.e. re-designing the communication network topology that dictates how computational resources interact, or even live-migration to a different computational infrastructure. The ultimate goal is to (de)provision computational machines, storage, and networking links and their required topologies in order to supply the application with the virtualised infrastructure that best meets its SLAs. Generic architectural blueprints and principles have been provided for designing and implementing an autonomic computing system. We revisit them in order to provide a customised and specific view for PaaS platforms, and integrate emerging paradigms such as DevOps for automated deployments, Monitoring-as-a-Service for accurate and large-scale monitoring, and well-known formalisms such as Petri Nets for building performance models.
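
    The adaptation described here follows the classic monitor-analyse-plan-execute pattern from the autonomic computing blueprints the paper revisits. As a rough illustration only (the Platform interface, policy fields, and method names below are hypothetical, not the paper's design), such a policy-driven loop might look like:

    ```python
    # Hypothetical sketch of a policy-driven autonomic (MAPE-style) loop;
    # all names are illustrative, not taken from the paper.
    import time
    from dataclasses import dataclass

    @dataclass
    class Policy:
        metric: str        # QoS metric named in the SLA, e.g. "p99_latency_ms"
        threshold: float   # violation threshold derived from the SLA
        action: str        # adaptation to trigger, e.g. "retopologise" or "migrate"

    def autonomic_loop(platform, policies, period_s=30):
        while True:
            observed = platform.monitor()                       # Monitor the running apps
            for p in policies:
                if observed.get(p.metric, 0.0) > p.threshold:   # Analyse against the SLA
                    plan = platform.plan(p.action)              # Plan a new mapping/topology
                    platform.execute(plan)                      # Execute: (de)provision, migrate
            time.sleep(period_s)
    ```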

    Context-aware Data Quality Assessment for Big Data

    Big data has changed the way in which we collect and analyze data. In particular, the amount of available information is constantly growing, and organizations rely more and more on data analysis in order to achieve their competitive advantage. However, such an amount of data can create real value only if combined with quality: good decisions and actions are the result of correct, reliable, and complete data. In such a scenario, methods and techniques for data quality assessment can support the identification of suitable data to process. While numerous assessment methods have been proposed for traditional databases, in the big data scenario new algorithms have to be designed in order to deal with novel requirements related to variety, volume, and velocity issues. In particular, in this paper we highlight that dealing with heterogeneous sources requires an adaptive approach able to trigger the suitable quality assessment methods on the basis of the data type and the context in which the data are to be used. Furthermore, we show that in some situations it is not possible to evaluate the quality of the entire dataset due to performance and time constraints. For this reason, we suggest focusing the data quality assessment on only a portion of the dataset and taking into account the consequent loss of accuracy by introducing a confidence factor as a measure of the reliability of the quality assessment procedure. We propose a methodology to build a data quality adapter module, which selects the best configuration for the data quality assessment based on the user's main requirements: time minimization, confidence maximization, and budget minimization. Experiments are performed on real data gathered from a smart city case study.
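
    To make the idea of a confidence factor concrete, here is a minimal sketch; the sampling strategy and the confidence heuristic are illustrative assumptions, not the paper's actual adapter logic:

    ```python
    # Illustrative only: assess completeness on a random sample of the dataset
    # and attach a confidence factor that shrinks with the evaluated portion.
    import math
    import random

    def assess_completeness(records, fields, sample_fraction=0.1):
        """Return (completeness, confidence) computed on a random sample."""
        n = max(1, int(len(records) * sample_fraction))
        sample = random.sample(records, n)
        filled = sum(1 for r in sample for f in fields if r.get(f) is not None)
        completeness = filled / (n * len(fields))
        confidence = 1 - 1 / math.sqrt(n)   # simple heuristic: more data, more trust
        return completeness, confidence

    # Toy dataset: every fifth record is missing its "temp" field
    rows = [{"id": i, "temp": i if i % 5 else None} for i in range(1000)]
    print(assess_completeness(rows, ["id", "temp"], sample_fraction=0.2))
    ```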

    Smart Cloud Engine and Solution Based on Knowledge Base

    Complexity of cloud infrastructures calls for models and tools for process management, configuration, scaling, elastic computing, and health control. This paper presents a Smart Cloud solution based on a Knowledge Base (KB), with the aim of modeling cloud resources, Service Level Agreements, and their evolution, and enabling reasoning on these structures by implementing strategies for efficient smart cloud management and intelligence. The proposed solution provides formal verification tools and intelligence for cloud control. It can be easily integrated with any cloud configuration manager, cloud orchestrator, and monitoring tool, since the connections with these tools are performed by using REST calls and XML files. It has been validated in the large ICARO Cloud project with a national cloud service provider.
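
    Since the integration is stated to happen over REST calls and XML files, a minimal client-side sketch of querying such a KB follows; the endpoint URL and XML schema are assumptions, not the actual ICARO API:

    ```python
    # Hypothetical client for a REST/XML Knowledge Base; the endpoint and the
    # document schema are illustrative assumptions.
    import urllib.request
    import xml.etree.ElementTree as ET

    KB_URL = "http://kb.example.org/cloud/resources"   # placeholder endpoint

    def fetch_resource_state(resource_id):
        """Fetch a modelled cloud resource from the KB and flatten its XML."""
        with urllib.request.urlopen(f"{KB_URL}/{resource_id}") as resp:
            root = ET.fromstring(resp.read())
        return {child.tag: child.text for child in root}
    ```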

    Feedback-control & queueing theory-based resource management for streaming applications

    Recent advances in sensor technologies and instrumentation have led to an extraordinary growth of data sources and streaming applications. A wide variety of devices, from smart phones to dedicated sensors, can collect and stream large amounts of data at unprecedented rates, and a number of distinct streaming data models have been proposed. Typical applications include smart cities and built environments, for instance, where sensor-based infrastructures continue to increase in scale and variety. Understanding how such streaming content can be processed within some time threshold remains a non-trivial and important research topic. We investigate how a cloud-based computational infrastructure can autonomically respond to such streaming content while offering Quality of Service guarantees. We propose an autonomic controller (based on feedback control and queueing theory) to elastically provision virtual machines to meet performance targets associated with a particular data stream. Evaluation is carried out using a federated cloud-based infrastructure (implemented using CometCloud), where the allocation of new resources can be based on: (i) differences between sites, i.e. the types of resources supported (e.g. GPU vs. CPU only); (ii) cost of execution; and (iii) failure rate and likely resilience. In particular, we demonstrate how Little's Law, a widely used result in queueing theory, can be adapted to support dynamic control in the context of such resource provisioning.
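
    To see how Little's Law (L = λW) can drive provisioning, consider the worked sketch below. It treats each VM as an M/M/1 queue, whose mean response time W = 1/(μ − λ) follows from Little's Law; the per-VM model and the numbers are illustrative assumptions, not the paper's evaluation setup:

    ```python
    # Illustrative VM-sizing rule derived from Little's Law / M/M/1 queueing.
    import math

    def vms_needed(arrival_rate, service_rate_per_vm, target_response_s):
        """Smallest n such that each VM's W = 1/(mu - lam/n) <= target."""
        # W <= target  <=>  n >= lam / (mu - 1/target), valid when mu > 1/target
        slack = service_rate_per_vm - 1.0 / target_response_s
        if slack <= 0:
            raise ValueError("target unreachable at this per-VM service rate")
        return math.ceil(arrival_rate / slack)

    # e.g. 120 msg/s arriving, each VM serves 15 msg/s, SLA: 0.5 s mean response
    print(vms_needed(120.0, 15.0, 0.5))   # -> 10 VMs (per-VM W = 1/(15-12) ~ 0.33 s)
    ```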

    A manifesto for future generation cloud computing: research directions for the next decade

    The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured the attention of academia, industry, and government bodies, and has now emerged as the backbone of the modern economy by offering subscription-based services anytime, anywhere, following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) the creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high-performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. Recent technological developments and paradigms such as serverless computing, software-defined networking, the Internet of Things, and processing at the network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.

    Cross-layer multi-cloud real-time application QoS monitoring and benchmarking as-a-service framework

    Cloud computing provides on-demand access to affordable hardware (e.g., multi-core CPUs, GPUs, disks, and networking equipment) and software (e.g., databases, application servers, and data processing frameworks) platforms, with features such as elasticity, pay-per-use, low upfront investment, and low time to market. This has led to the proliferation of business-critical applications that leverage various cloud platforms. Such applications, hosted on single or multiple cloud platforms, have diverse characteristics requiring extensive monitoring and benchmarking mechanisms to ensure run-time Quality of Service (QoS) (e.g., latency and throughput). The process of monitoring and benchmarking cloud applications remains a critical issue to be further studied and addressed. Current monitoring and benchmarking approaches do not provide a holistic view of performance QoS for distributed applications across cloud layers in multi-cloud environments. Furthermore, current monitoring frameworks are limited to monitoring tasks and do not incorporate benchmarking abilities; in other words, there is no unified framework that combines monitoring and benchmarking functionalities. Gaining both monitoring and benchmarking capabilities under one framework empowers the cloud user with more in-depth control and awareness of cloud services. The thesis identifies and discusses the major research dimensions and design issues related to developing techniques that can monitor and benchmark an application's components across layers on multi-clouds. Furthermore, it discusses to what extent such research dimensions and design issues are handled by current academic research papers as well as by existing commercial monitoring tools. Moreover, the thesis addresses an important research challenge: how to undertake cross-layer cloud monitoring and benchmarking in multi-cloud environments so as to provide essential information for effective QoS management of cloud applications. It proposes, develops, implements, and validates CLAMBS: a Cross-Layer Multi-Cloud Application Monitoring and Benchmarking as-a-Service Framework. The core contributions of this thesis are the CLAMBS framework and the underlying monitoring and benchmarking techniques, which are capable of: (i) performing QoS monitoring of application components (e.g., database, web server, application server) that may be deployed across multiple cloud platforms (e.g., Amazon EC2 and Microsoft Azure); and (ii) giving visibility into the QoS of individual application components, which is not supported by current monitoring and benchmarking frameworks. Experiments are conducted on real-world multi-cloud platforms to empirically evaluate the framework, and the results validate that CLAMBS can effectively monitor and benchmark applications running across layers on multi-clouds. The thesis presents implementation and evaluation details of the proposed CLAMBS framework. It demonstrates the feasibility and scalability of the framework in real-world environments by implementing a proof-of-concept prototype on multi-cloud platforms. Finally, it presents a model for analysing the communication overheads introduced by the various components (e.g., agents and the manager) of CLAMBS in multi-cloud environments.
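
    As a rough illustration of the kind of agent-to-manager communication-overhead model mentioned at the end, a simple linear cost sketch follows; the model and the numbers are illustrative assumptions, not results from the thesis:

    ```python
    # Back-of-the-envelope overhead model for monitoring agents reporting to a
    # central manager; purely illustrative.
    def monitoring_overhead_bps(num_agents, payload_bytes, interval_s):
        """Aggregate agent->manager traffic, one payload per agent per interval."""
        return num_agents * payload_bytes * 8 / interval_s

    # e.g. 50 agents across two clouds, 2 KiB samples, reported every 10 s
    print(monitoring_overhead_bps(50, 2048, 10.0))   # -> 81920.0 bits/s
    ```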

    A Process Framework for Managing Quality of Service in Private Cloud

    As information systems leaders tap into the global market of cloud computing-based services, they struggle to maintain consistent application performance due to the lack of a process framework for managing quality of service (QoS) in the cloud. Guided by disruptive innovation theory, the purpose of this case study was to identify a process framework for meeting the QoS requirements of private cloud service users. Private cloud implementation was explored by selecting an organization in California through purposeful sampling. Information was gathered by interviewing 23 information technology (IT) professionals, a mix of frontline engineers, managers, and leaders involved in the implementation of the private cloud. Another source of data was documents such as standard operating procedures, policies, and guidelines related to the private cloud implementation. Interview transcripts and documents were coded and sequentially analyzed. Three prominent themes emerged from the analysis of the data: (a) end-user expectations, (b) application architecture, and (c) trending analysis. The findings of this study may help IT leaders effectively manage QoS in cloud infrastructure and deliver reliable application performance, which may in turn help increase the customer base and profitability of organizations. This study may contribute to positive social change as information systems managers and workers learn and apply the process framework for delivering stable and reliable cloud-hosted computer applications.