
    An efficient resource sharing technique for multi-tenant databases

    Multi-tenancy is one of the key components of the cloud computing environment. Multi-tenant database systems in SaaS (Software as a Service) have gained a lot of attention in academia, research and business. These database systems provide scalability and economic benefits for both cloud service providers and customers (organizations/companies referred to as tenants) by sharing the same resources and infrastructure, in isolation of shared databases, network and computing resources, with Service Level Agreement (SLA) compliance. In a multi-tenant scenario, active tenants compete for resources in order to access the database. If one tenant monopolizes the resources, the performance of all the other tenants may be degraded and fair sharing of the resources may be compromised. The performance of tenants must not be affected by the resource-intensive activities and volatile workloads of other tenants. Moreover, the prime goal of providers is to achieve a low cost of operation while satisfying the specific schemas/SLAs of each tenant. Consequently, there is a need to design and develop effective, dynamic resource sharing algorithms that can handle the above-mentioned issues. This work presents a model that uses a query classification and worker sorting technique to efficiently share I/O, CPU and memory, thereby enhancing dynamic resource sharing and improving the utilization of idle instances. The model is referred to as the Multi-Tenant Dynamic Resource Scheduling Model (MTDRSM). The MTDRSM supports workload execution for different benchmarks, such as TPC-C (Transaction Processing Performance Council) and YCSB (the Yahoo! Cloud Serving Benchmark), and on different databases, such as MySQL, Oracle and H2. Experiments are conducted for different benchmarks, with and without SLA compliance, to evaluate the performance of the MTDRSM in terms of latency and throughput. The experiments show significant performance improvement over the existing MuteBench model in both latency and throughput.
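
    The abstract does not give the MTDRSM algorithm itself; as a rough, hypothetical illustration of the general idea of query classification combined with worker sorting, the Python sketch below tags each query by its dominant resource demand and dispatches it to the least-loaded worker in the matching pool. All class names, keyword rules and load figures are invented for the example.

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Worker:
        load: float                      # current utilization estimate, 0.0-1.0
        name: str = field(compare=False)

    def classify(query: str) -> str:
        """Crude stand-in for a query classifier: tag a query as CPU-,
        memory- or I/O-bound based on simple keywords."""
        q = query.lower()
        if "join" in q or "group by" in q:
            return "cpu"
        if "order by" in q or "distinct" in q:
            return "memory"
        return "io"

    class Dispatcher:
        def __init__(self, workers_per_class):
            # one min-heap per resource class, ordered by worker load
            self.pools = {cls: list(ws) for cls, ws in workers_per_class.items()}
            for pool in self.pools.values():
                heapq.heapify(pool)

        def dispatch(self, tenant: str, query: str) -> str:
            cls = classify(query)
            worker = heapq.heappop(self.pools[cls])   # least-loaded worker first
            worker.load += 0.1                        # pretend this query adds load
            heapq.heappush(self.pools[cls], worker)
            return f"{tenant}: {query!r} -> {worker.name} ({cls} pool)"

    dispatcher = Dispatcher({
        "cpu":    [Worker(0.2, "cpu-w1"), Worker(0.5, "cpu-w2")],
        "memory": [Worker(0.1, "mem-w1")],
        "io":     [Worker(0.3, "io-w1"), Worker(0.1, "io-w2")],
    })
    print(dispatcher.dispatch("tenant-a", "SELECT * FROM orders ORDER BY total"))
    print(dispatcher.dispatch("tenant-b", "SELECT c.id FROM c JOIN o ON c.id = o.cid"))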

    Task Runtime Prediction in Scientific Workflows Using an Online Incremental Learning Approach

    Many algorithms in workflow scheduling and resource provisioning rely on performance estimates of tasks to produce a scheduling plan. A profiler that can model the execution of tasks and predict their runtime accurately therefore becomes an essential part of any Workflow Management System (WMS). With the emergence of multi-tenant Workflow as a Service (WaaS) platforms that use clouds for deploying scientific workflows, task runtime prediction becomes more challenging because it requires processing a significant amount of data in a near real-time scenario while dealing with the performance variability of cloud resources. Hence, relying on methods such as profiling tasks' execution data using basic statistical descriptions (e.g., mean, standard deviation) or batch offline regression techniques to estimate runtime may not be suitable for such environments. In this paper, we propose an online incremental learning approach to predict the runtime of tasks in scientific workflows in clouds. To improve the performance of the predictions, we harness fine-grained resource monitoring data in the form of time-series records of CPU utilization, memory usage, and I/O activities that reflect the unique characteristics of a task's execution. We compare our solution to a state-of-the-art approach that exploits resource monitoring data using a regression-based machine learning technique. In our experiments, the proposed strategy improves performance, in terms of error, by up to 29.89% compared to the state-of-the-art solutions. (Comment: accepted for presentation at the main conference track of the 11th IEEE/ACM International Conference on Utility and Cloud Computing.)
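
    The paper's own model is not reproduced in this abstract; as one hedged illustration of online incremental runtime prediction from resource-monitoring time series, the sketch below summarizes CPU, memory and I/O traces into features and updates a scikit-learn SGDRegressor with partial_fit after every finished task. The feature choices and the synthetic workload are assumptions for the example, not the authors' setup.

    import numpy as np
    from sklearn.linear_model import SGDRegressor
    from sklearn.preprocessing import StandardScaler

    scaler = StandardScaler()                       # scaled features help SGD converge
    model = SGDRegressor(learning_rate="adaptive", eta0=0.01)

    def features(cpu, mem, io):
        """Summarize the fine-grained monitoring time series (CPU %, memory MB,
        I/O ops) of one task into a fixed-length feature vector."""
        return np.hstack([[s.mean(), s.std(), s.max()]
                          for s in (np.asarray(cpu), np.asarray(mem), np.asarray(io))])

    def observe(cpu, mem, io, runtime_s):
        """Incrementally update the model with one finished task (online learning)."""
        x = features(cpu, mem, io).reshape(1, -1)
        scaler.partial_fit(x)
        model.partial_fit(scaler.transform(x), [runtime_s])

    def predict(cpu, mem, io):
        x = scaler.transform(features(cpu, mem, io).reshape(1, -1))
        return float(model.predict(x)[0])

    rng = np.random.default_rng(0)
    for _ in range(200):                            # synthetic stream of finished tasks
        cpu, mem, io = rng.uniform(20, 90, 30), rng.uniform(100, 800, 30), rng.uniform(0, 50, 30)
        observe(cpu, mem, io, runtime_s=0.5 * cpu.mean() + 0.01 * mem.mean() + rng.normal(0, 2))
    print(predict(rng.uniform(20, 90, 30), rng.uniform(100, 800, 30), rng.uniform(0, 50, 30)))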

    BRAHMA: an intelligent framework for automated scaling of streaming and deadline-critical workflows

    The prevalent use of multi-component, multi-tenant models for building novel Software-as-a-Service (SaaS) applications has resulted in widespread research on the automatic scaling of the resulting complex application workflows. In this paper, we propose a holistic solution to Automatic Workflow Scaling under the combined presence of Streaming and Deadline-critical workflows, called AWS-SD. To solve the AWS-SD problem, we propose a framework, BRAHMA, that learns workflow behavior to build a knowledge base and leverages this information to make intelligent automated scaling decisions. We propose and evaluate different resource provisioning algorithms using CloudSim. Our results on time-varying workloads show that the proposed algorithms are effective and produce good cost-quality trade-offs while preventing deadline violations. Empirically, the proposed hybrid algorithm, which combines learning and monitoring, is able to restrict deadline violations to a small fraction (3-5%) while suffering only a marginal increase in average cost per component of 1-2% over our baseline naive algorithm, which provides the least costly provisioning but suffers from a large number (35-45%) of deadline violations.
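
    BRAHMA's actual provisioning algorithms are not detailed in the abstract; the toy sketch below merely illustrates the general hybrid idea of combining a learned knowledge base with live monitoring when sizing a component, using an exponential moving average as a stand-in for the learned model. All names, capacities and load figures are hypothetical.

    import math
    from collections import defaultdict

    class HybridScaler:
        """Toy hybrid autoscaler: a learned knowledge base predicts the expected
        load of each workflow component, live monitoring corrects it, and the
        larger of the two drives the instance count so deadlines are not missed."""

        def __init__(self, capacity_per_instance: float):
            self.capacity = capacity_per_instance
            self.kb = defaultdict(float)   # component -> learned expected load

        def learn(self, component: str, observed_load: float, alpha: float = 0.3) -> None:
            # exponential moving average as a stand-in for the learned knowledge base
            self.kb[component] = (1 - alpha) * self.kb[component] + alpha * observed_load

        def instances_needed(self, component: str, monitored_load: float) -> int:
            expected = max(self.kb[component], monitored_load)  # conservative estimate
            return max(1, math.ceil(expected / self.capacity))

    scaler = HybridScaler(capacity_per_instance=100.0)   # e.g. 100 requests/s per instance
    for load in (80, 120, 150):                          # historical observations
        scaler.learn("video-transcoder", load)
    print(scaler.instances_needed("video-transcoder", monitored_load=90.0))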

    Latency-Sensitive Web Service Workflows: A Case for a Software-Defined Internet

    The Internet, at large, remains under the control of service providers and autonomous systems. The Internet of Things (IoT) and edge computing create an increasing demand, and potential, for more user control over their web service workflows. Network softwarization transforms the network landscape at every stage, from building and incrementally deploying the environment to maintaining it. Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) are two core tenets of network softwarization. SDN offers a logically centralized control plane by abstracting away the control of the network devices in the data plane. NFV virtualizes dedicated hardware middleboxes and deploys them on top of servers and data centers as network functions. Network softwarization thus enables efficient management of the system by enhancing its control and improving the reusability of network services. In this work, we propose our vision for a Software-Defined Internet (SDI) for latency-sensitive web service workflows. SDI extends network softwarization to the Internet scale, to enable latency-aware user workflow execution on the Internet. (Comment: accepted for publication at the Seventh International Conference on Software Defined Systems (SDS-2020).)
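
    The SDI proposal itself is architectural, but as a minimal, hypothetical illustration of the kind of latency-aware workflow placement a logically centralized controller could perform, the sketch below picks the lowest-latency endpoint for each workflow step. The step names, endpoints and latency figures are invented for the example.

    def pick_endpoint(endpoints: dict) -> str:
        """Return the service endpoint with the lowest measured latency (ms)."""
        return min(endpoints, key=endpoints.get)

    def plan_workflow(steps: dict) -> list:
        """For each workflow step, choose the replica the controller should route to."""
        return [(step, pick_endpoint(eps)) for step, eps in steps.items()]

    latencies = {
        "auth":   {"edge-eu": 12.0, "dc-us": 95.0},
        "resize": {"edge-eu": 30.0, "dc-us": 60.0},
        "store":  {"dc-us": 40.0, "dc-ap": 110.0},
    }
    print(plan_workflow(latencies))   # routes each step to its fastest endpoint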

    SecFlow: Adaptive Security-Aware Workflow Management System in Multi-Cloud Environments

    In this paper, we propose an architecture for a security-aware workflow management system (WfMS), which we call SecFlow, in response to the recent trend of combining workflow management systems with cloud environments and the still limited ability of such systems to ensure the security and privacy of cloud-based workflows. The SecFlow architecture focuses on covering the full workflow life cycle: in addition to existing approaches for designing security-aware processes, there is a need to fill the gap of maintaining the security properties of workflows during their execution phase. To address this gap, we derive the requirements for such a security-aware WfMS and design a system architecture that meets them. SecFlow integrates key functional components such as secure model construction, security-aware service selection, security violation detection, and adaptive response mechanisms, while considering all potential malicious parties in multi-tenant and cloud-based WfMSs. (Comment: 16 pages, 6 figures.)
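
    SecFlow's concrete mechanisms are not specified in the abstract; as a hedged sketch of what security-aware service selection could look like in general, the code below filters candidate services by a set of required security properties. The property names and the tie-breaking rule are assumptions for illustration only.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Service:
        name: str
        provider: str
        properties: frozenset    # e.g. {"encryption-at-rest", "gdpr", "audit-log"}

    def select_service(candidates: list, required: set) -> Optional[Service]:
        """Security-aware selection: keep only services offering every required
        property, then return the first acceptable candidate (a real system
        would rank the survivors, e.g. by cost or trust level)."""
        acceptable = [s for s in candidates if required <= s.properties]
        return acceptable[0] if acceptable else None

    services = [
        Service("store-a", "cloud-1", frozenset({"encryption-at-rest"})),
        Service("store-b", "cloud-2", frozenset({"encryption-at-rest", "gdpr", "audit-log"})),
    ]
    print(select_service(services, {"encryption-at-rest", "gdpr"}))   # -> store-b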

    Smart Ontology Framework for Multi-Tenant Cloud Architecture

    The exponential growth of data complexity in an era marked by the rapid expansion of the computing environment has increased the demand for scalable and effective systems. At the centre of this paradigm is the crucial stage of data management, which acts as a vital conduit for accelerating the processing of enormous amounts of data. Scientific workflows must be coordinated to orchestrate the management of large datasets within this complex ecosystem. These workflows differ from generic workflows in that they involve a complex interplay of scheduling, algorithms, data flow, processes and operational protocols, with a focus on data-intensive systems. Multi-tenancy, the distinctive feature of Software as a Service (SaaS), is inextricably tied to the growth of the industry. Within this complex fabric, the investigation of scientific processes reveals a mutually beneficial relationship with the multi-tenant cloud orchestration environment, a realm that goes beyond simple control and data propagation: it opens a fresh path for system development and makes previously hidden facets of service delivery visible. This study pioneers an exploration of a comprehensive framework for scientific operations in the context of multi-tenant cloud orchestration. Semantics-based workflows, which leverage semantics to help users manage the complexities of data orchestration, form the basis of this paradigm. Policy-based processes add a further level of control, giving users a flexible way to navigate the complex environment of multi-tenancy, orchestration and service identification. The study focuses on the fundamentals of orchestrating scientific workflows in a multi-tenant cloud environment, where a creative, scalable and effective composition results from the harmonious integration of data and semantics under the guidance of rules.
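
    The framework's ontology and policy machinery are described only abstractly; as a small, hypothetical sketch of semantics- and policy-based service matching, the code below walks a toy concept hierarchy to check whether a service satisfies a task's semantic requirement and a tenant policy. The ontology contents, service names and policy are invented for the example.

    # Minimal stand-in for an ontology: each concept maps to its parent concept.
    ONTOLOGY = {
        "object-storage": "storage",
        "block-storage": "storage",
        "storage": "resource",
        "gpu-compute": "compute",
        "compute": "resource",
    }

    def is_a(concept: str, ancestor: str) -> bool:
        """True if `concept` is the ancestor itself or one of its descendants."""
        while concept is not None:
            if concept == ancestor:
                return True
            concept = ONTOLOGY.get(concept)
        return False

    def match_services(task_needs: str, services: dict, policy: set) -> list:
        """Return services whose advertised concept satisfies the task's semantic
        requirement and whose name is allowed by the tenant's policy."""
        return [name for name, concept in services.items()
                if is_a(concept, task_needs) and name in policy]

    services = {"s3-like": "object-storage", "nfs-like": "block-storage", "gpu-farm": "gpu-compute"}
    print(match_services("storage", services, policy={"s3-like", "gpu-farm"}))   # -> ['s3-like']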

    Toward Customizable Multi-tenant SaaS Applications

    Nowadays, computing is so pervasive that it has indeed become the 5th utility (after water, electricity, gas and telephony), as Leonard Kleinrock once envisioned. Evolved from utility computing, cloud computing has emerged as a computing infrastructure that enables rapid delivery of computing resources as a utility in a dynamically scalable, virtualized manner. However, current industrial cloud computing implementations promote segregation among different cloud providers, which leads to user lock-in because of prohibitive migration costs. On the other hand, Service-Oriented Computing (SOC), including service-oriented architecture (SOA) and Web Services (WS), promotes standardization and openness with its enabling standards and communication protocols. This thesis proposes a Service-Oriented Cloud Computing Architecture (SOCCA) that combines the best attributes of the two paradigms to promote an open, interoperable environment for cloud computing development. Multi-tenant SaaS applications built on top of SOCCA have more flexibility and are not locked into a particular platform. Each tenant of a multi-tenant application appears to be the sole owner of the application and is unaware of the existence of the others. A multi-tenant SaaS application accommodates each tenant's unique requirements by allowing tenant-level customization. A complex SaaS application that supports hundreds, even thousands, of tenants could have hundreds of customization points, each providing multiple options, and this could result in a huge number of ways to customize the application. This dissertation also proposes innovative customization approaches that study similar tenants' customization choices and individual users' behaviors, and then provide a guided, semi-automated customization process for future tenants. A semi-automated customization process could enable tenants to quickly implement the customization that best suits their business needs. (Doctoral Dissertation, Computer Science, 201)
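
    The dissertation's customization approach is summarized only at a high level; the sketch below is one hedged way to realize guided, semi-automated customization: it ranks existing tenants by the similarity of their customization choices and suggests options the new tenant has not yet selected. The option names and the similarity measure (Jaccard) are assumptions for the example.

    def jaccard(a: set, b: set) -> float:
        """Similarity between two tenants' sets of customization choices."""
        return len(a & b) / len(a | b) if (a or b) else 0.0

    def recommend(new_tenant: set, existing: dict, top_k: int = 3) -> list:
        """Suggest customization options that the most similar existing tenants
        chose but the new tenant has not selected yet."""
        ranked = sorted(existing.values(),
                        key=lambda choices: jaccard(new_tenant, choices), reverse=True)
        suggestions = []
        for choices in ranked:
            for option in sorted(choices - new_tenant):
                if option not in suggestions:
                    suggestions.append(option)
        return suggestions[:top_k]

    existing = {
        "tenant-1": {"dark-theme", "sso-login", "audit-report"},
        "tenant-2": {"sso-login", "csv-export"},
        "tenant-3": {"audit-report", "custom-fields"},
    }
    print(recommend({"sso-login"}, existing))   # options popular with similar tenants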

    A combined computing framework for load balancing in multi-tenant cloud eco-system

    Since the world is becoming digitalized, cloud computing has become a core part of it. Massive amounts of data are processed, stored, and transferred over the internet on a daily basis. Cloud computing has become quite popular because of its superlative quality and enhanced capability to improve data management, offering better computing resources and data to its user bases (UBs). However, there are many issues in existing cloud traffic management approaches and in how data are managed during service execution. This study introduces two distinct research models with analytical modeling: a data center virtualization framework under a multi-tenant cloud ecosystem (DCVF-MT) and a collaborative workflow of multi-tenant load balancing (CW-MTLB). The execution flow of both models relies on a set of algorithms that address the core problems of load balancing and resource allocation in the cloud computing (CC) ecosystem. The research outcome illustrates that DCVF-MT outperforms the one-to-one approach by approximately 24.778% in traffic scheduling. It also yields a 40.33% performance improvement in managing cloudlet handling time. Moreover, it attains an overall 8.5133% performance improvement in resource cost optimization, which is significant for ensuring the adaptability of the frameworks to futuristic cloud applications where adequate virtualization and resource mapping will be required.
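
    Neither DCVF-MT nor CW-MTLB is specified in the abstract; as a generic, hypothetical contrast to a fixed one-to-one mapping, the sketch below greedily assigns each cloudlet to the currently least-loaded VM (a longest-processing-time heuristic). Cloudlet lengths and the VM count are invented for the example.

    import heapq

    def schedule(cloudlets: list, vm_count: int) -> dict:
        """Greedy least-loaded scheduling: each cloudlet (its length, e.g. in MI)
        goes to the VM with the smallest accumulated load, instead of a fixed
        one-to-one mapping of cloudlets to VMs."""
        heap = [(0.0, vm) for vm in range(vm_count)]      # (accumulated load, vm id)
        heapq.heapify(heap)
        placement = {vm: [] for vm in range(vm_count)}
        for length in sorted(cloudlets, reverse=True):    # longest cloudlets first
            load, vm = heapq.heappop(heap)
            placement[vm].append(length)
            heapq.heappush(heap, (load + length, vm))
        return placement

    print(schedule([400, 250, 900, 120, 300, 700], vm_count=2))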