540 research outputs found

    A Fuzzy Predictable Load Balancing Approach in Cloud Computing

    Cloud computing is a new paradigm for hosting and delivering on-demand services to users over the internet. It is an example of an ultimately virtualized system, and a natural evolution of data centers that employ automated systems management, workload balancing, and virtualization technologies. Live virtual machine (VM) migration is a technique for achieving load balancing in a cloud environment by transferring an active, overloaded VM from one physical host to another without disrupting the VM. In this study, to eliminate whole-VM migration from the load-balancing process, we propose a Fuzzy Predictable Load Balancing (FPLB) approach that confronts the problem of an overloaded VM by assigning the extra tasks of the overloaded VM to another, similar VM instead of migrating the whole VM. In addition, we propose a Fuzzy Prediction Method (FPM) to predict VM migration time. The approach also contains a multi-objective optimization model to migrate these tasks to a new host VM. In the proposed FPLB approach there is no need to pause the VM during migration. Furthermore, since VM live migration, in contrast to task migration, takes longer to complete and needs more idle capacity on the host physical machine (PM), the proposed approach significantly reduces time, idle memory, and cost consumption.
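The task-level reallocation idea can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the triangular membership function, the "high load" fuzzy set bounds, and the greedy smallest-task-first policy are all assumptions.

```python
# Illustrative sketch of fuzzy overload detection plus task-level reallocation
# (move tasks off an overloaded VM instead of migrating the whole VM).
# Membership functions, thresholds, and the VM model are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def overload_degree(cpu_util):
    """Fuzzy degree to which a VM is 'overloaded' (utilization in [0, 1])."""
    return tri(cpu_util, 0.6, 0.85, 1.01)  # hypothetical "high load" fuzzy set

def reallocate_tasks(vms, threshold=0.5):
    """Move the smallest tasks off overloaded VMs to the least-loaded other VM."""
    moves = []
    for vm in vms:
        while overload_degree(vm["util"]) > threshold and vm["tasks"]:
            target = min((v for v in vms if v is not vm), key=lambda v: v["util"])
            task = min(vm["tasks"], key=lambda t: t["load"])  # cheapest to move
            vm["tasks"].remove(task)
            vm["util"] -= task["load"]
            target["tasks"].append(task)
            target["util"] += task["load"]
            moves.append((task["id"], vm["id"], target["id"]))
    return moves
```

Because only task state moves, no VM is paused; the fuzzy degree (rather than a hard cutoff) lets the threshold trade off migration churn against load smoothing.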

    A Novel Technique for Task Re-Allocation in Distributed Computing System

    A distributed computing system is a software system in which components located on different networked computers communicate and coordinate their actions by passing messages. A task applied to the distributed system must be reliable and feasible. Distributed systems, for instance grid networks, robotics, and air traffic control systems, depend heavily on time: if a single error in a real-time distributed system is not detected accurately and recovered from at the proper time, it can cause a whole-system failure. Fault tolerance is the key method most widely used to provide continuous reliability in these systems. There are several challenges in distributed computing systems, such as resource sharing, transparency, dependability, complex mappings, concurrency, and fault tolerance. In this paper, we focus on fault tolerance, since faults are responsible for the degradation of the system. A novel technique based on reliability is proposed to overcome the fault-tolerance problem and re-allocate the tasks. DOI: 10.17762/ijritcc2321-8169.15080
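One way to read "re-allocation based upon reliability" is the following sketch: place tasks on the most reliable node and re-home them when a node fails. The node model, reliability scores, and greedy policy are assumptions for illustration, not the paper's technique.

```python
# Hypothetical sketch of reliability-driven task placement and fault recovery
# in a distributed system. `nodes` maps node name -> reliability in [0, 1].

def assign(tasks, nodes):
    """Greedily place each task on the most reliable node."""
    placement = {}
    for task in tasks:
        placement[task] = max(nodes, key=lambda n: nodes[n])
    return placement

def recover(placement, failed, nodes):
    """Re-allocate tasks from a failed node to the best surviving node."""
    survivors = {n: r for n, r in nodes.items() if n != failed}
    best = max(survivors, key=lambda n: survivors[n])
    return {t: (best if n == failed else n) for t, n in placement.items()}
```

A real system would also spread load and update reliability estimates from observed failures; this sketch only shows the reliability-ranked re-allocation step.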

    A Robust Adaptive Workload Orchestration in Pure Edge Computing

    Pure edge computing (PEC) aims to bring cloud applications and services to the edge of the network to support the growing user demand for time-sensitive applications and data-driven computing. However, the mobility and limited computational capacity of edge devices pose challenges in supporting urgent and computationally intensive tasks with strict response-time demands. If the execution results of these tasks exceed the deadline, they become worthless and can cause severe safety issues. Therefore, it is essential to ensure that edge nodes complete as many latency-sensitive tasks as possible. In this paper, we propose a Robust Adaptive Workload Orchestration (R-AdWOrch) model to minimize deadline misses and data loss by using priority definitions and a reallocation strategy. The results show that R-AdWOrch can minimize deadline misses of urgent tasks while minimizing the data loss of lower-priority tasks under all conditions. Comment: 9 pages, accepted at the ICAART conference
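The priority-plus-deadline idea can be illustrated with a minimal single-node scheduler. The tuple ordering, single-node model, and capacity parameter are assumptions; R-AdWOrch itself orchestrates across edge nodes and is not reproduced here.

```python
import heapq

# Minimal sketch of priority-aware deadline scheduling on one edge node:
# urgent (lower-numbered priority) tasks run first, and among equal priorities
# the earlier deadline wins. Tasks that cannot meet their deadline are dropped.

def schedule(tasks, capacity):
    """tasks: iterable of (priority, deadline, duration, name); lower priority
    number = more urgent. Returns (completed, missed) task-name lists given a
    total time budget `capacity`."""
    heap = list(tasks)
    heapq.heapify(heap)  # orders by (priority, deadline, ...)
    clock, done, missed = 0, [], []
    while heap:
        priority, deadline, duration, name = heapq.heappop(heap)
        if clock + duration <= deadline and clock + duration <= capacity:
            clock += duration
            done.append(name)
        else:
            missed.append(name)  # would miss its deadline; skip without running
    return done, missed
```

Skipping doomed tasks instead of running them late is what frees capacity for lower-priority work, mirroring the abstract's goal of fewer deadline misses with less data loss.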

    Two's Company, Three's A Cloud: Challenges To Implementing Service Models

    Although three models are currently used in cloud computing (Software as a Service, Platform as a Service, and Infrastructure as a Service), many challenges remain before most businesses accept cloud computing as a reality. Virtualization in cloud computing has many advantages but carries a penalty because of state configurations, kernel drivers, and user-interface environments. In addition, many non-standard, often incompatible architectures exist to power cloud models. Another issue is adequately provisioning the resources required for a multi-tier cloud-based application so that on-demand elasticity is present at vastly different scales yet is carried out efficiently. For networks with large geographical footprints, a further problem arises from bottlenecks between the elements supporting virtual machines and their control. While many solutions have been proposed to alleviate these problems, some of them already commercial, much remains to be done to see whether these solutions will be practicable at scale and will address business concerns.

    HPS-HDS: High Performance Scheduling for Heterogeneous Distributed Systems

    Heterogeneous Distributed Systems (HDS) are often characterized by a variety of resources that may or may not be coupled with specific platforms or environments. Such systems include cluster computing, grid computing, peer-to-peer computing, cloud computing, and ubiquitous computing, all involving elements of heterogeneity and a large variety of tools and software to manage them. As computing and data-storage needs grow exponentially in HDS, increasing the size of data centers brings important diseconomies of scale. In this context, major solutions for scalability, mobility, reliability, fault tolerance, and security are required to achieve high performance. Moreover, HDS are highly dynamic in structure, because user requests must be respected under an agreed rule (the SLA) while ensuring QoS, so new algorithms for event and task scheduling and new methods for resource management should be designed to increase the performance of such systems. In this special issue, the accepted papers address advances in scheduling algorithms, energy-aware models, self-organizing resource management, data-aware service allocation, Big Data management and processing, and performance analysis and optimization.

    Conceptual Service Level Agreement Mechanism to Minimize the SLA Violation with SLA Negotiation Process in Cloud Computing Environment

    Online services in cloud computing are offered on a pay-per-use basis, so service users need not be bound by long-term contracts with cloud service providers. Service level agreements (SLAs) are understandings signed between a cloud service provider and others, for example a service user, an intermediary operator, or a monitoring operator. Since cloud computing is an ongoing technology providing numerous services for basic business applications, adaptable systems to manage online agreements are significant.
The SLA maintains the quality of service to the cloud user; if the service provider fails to maintain the required service, the SLA is considered violated. The main aim is to minimize SLA violations in order to maintain the QoS of cloud users. In this research article, a toolbox is proposed to support the procedure of exchanging an SLA with service providers, enabling the cloud client to indicate service-quality demands, and an algorithm as well as a negotiation model is proposed to negotiate the request with service providers and produce a better agreement between the service provider and the cloud service consumer. Consequently, the discussed framework can reduce SLA violations as well as negotiation failures, and it improves cost-effectiveness. Moreover, the suggested SLA toolkit also benefits clients, who can secure a reasonable repayment of value for diminished QoS or conceded time. This research shows that the assurance level of cloud service providers can be kept up by continuing to deliver the services without interruption from the client's perspective.
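The violation-detection and client-compensation idea can be sketched as below. The SLA fields, the metrics, and the proportional refund formula are hypothetical; the article's toolbox and negotiation model are not reproduced.

```python
# Illustrative sketch of SLA violation detection and client compensation:
# compare measured QoS against agreed minimums, then refund the client in
# proportion to the weighted shortfall. All field names are assumptions.

def check_sla(sla, measured):
    """Return the list of violated terms: metrics below the agreed minimum."""
    return [m for m, agreed in sla["min"].items() if measured.get(m, 0) < agreed]

def compensation(sla, measured):
    """Refund proportional to the relative shortfall on each violated metric."""
    refund = 0.0
    for m in check_sla(sla, measured):
        shortfall = (sla["min"][m] - measured[m]) / sla["min"][m]
        refund += sla["price"] * sla["weight"][m] * shortfall
    return round(refund, 2)
```

Monitoring each term separately, with per-metric weights, is one simple way to let the negotiated agreement express which QoS dimensions matter most to a given client.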

    Trusted resource allocation in volunteer edge-cloud computing for scientific applications

    Data-intensive science applications in fields such as bioinformatics, health sciences, and material discovery are becoming increasingly dynamic and demanding in their resource requirements. Researchers using these applications, which are based on advanced scientific workflows, frequently require a diverse set of resources that are often not available within private servers or a single Cloud Service Provider (CSP). For example, a user working with Precision Medicine applications would prefer only those CSPs who follow guidelines from HIPAA (Health Insurance Portability and Accountability Act) for implementing their data services, and might want services from other CSPs for economic viability. With the generation of more and more data, these workflows often require deployment and dynamic scaling of multi-cloud resources in an efficient and high-performance manner (e.g., quick setup, reduced computation time, and increased application throughput). At the same time, users seek to minimize the costs of configuring the related multi-cloud resources. While performance and cost are among the key factors in CSP resource selection, scientific workflows often process proprietary/confidential data that introduces additional constraints on security postures. Thus, users have to make an informed decision on the selection of resources that are most suited for their applications while trading off between the key factors of resource selection: performance, agility, cost, and security (PACS). Furthermore, even with the most efficient resource allocation across multi-cloud, the cost to solution might not be economical for all users, which has led to the development of new paradigms of computing such as volunteer computing, where users utilize volunteered cyber resources to meet their computing requirements.
For economical and readily available resources, it is essential that such volunteered resources integrate well with cloud resources to provide the most efficient computing infrastructure for users. In this dissertation, the individual stages in the lifecycle of resource brokering for users are tackled: user requirement collection, users' resource preferences, resource brokering, and task scheduling. For the collection of user requirements, a novel approach through an iterative design interface is proposed. In addition, a fuzzy inference-based approach is proposed to capture users' biases and expertise for guiding resource selection for their applications. The results showed improvement in performance, i.e., time to execute, in 98 percent of the studied applications. The data collected on users' requirements and preferences is later used by an optimizer engine and machine learning algorithms for resource brokering. For resource brokering, a new integer linear programming based solution (OnTimeURB) is proposed, which creates multi-cloud template solutions for resource allocation while also optimizing performance, agility, cost, and security. The solution was further improved by the addition of a machine learning model based on a naive Bayes classifier, which captures the true QoS of cloud resources for guiding template-solution creation. The proposed solution was able to improve the time to execute for as much as 96 percent of the largest applications. As discussed above, to fulfill the necessity of economical computing resources, a new paradigm of computing, namely Volunteer Edge Computing (VEC), is proposed, which reduces cost and improves performance and security by creating edge clusters comprising volunteered computing resources close to users. The initial results have shown improved time of execution for application workflows against state-of-the-art solutions while utilizing only the most secure VEC resources.
Consequently, we have utilized reinforcement learning based solutions to characterize volunteered resources for their availability and flexibility towards the implementation of security policies. The characterization of volunteered resources facilitates efficient allocation of resources and scheduling of workflow tasks, which improves the performance and throughput of workflow executions. The VEC architecture is further validated with state-of-the-art bioinformatics workflows and manufacturing workflows. Includes bibliographical references.
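The PACS trade-off can be illustrated with a simple weighted score over candidate resources. This is a stand-in sketch only: the dissertation's OnTimeURB broker solves an integer linear program, and the scores, weights, and additive model below are assumptions.

```python
# Sketch of weighted PACS (performance, agility, cost, security) scoring for
# choosing among candidate cloud/edge resources. All attribute values are
# assumed to be normalized to [0, 1]; cost is inverted so cheaper is better.

def pacs_score(resource, weights):
    """Higher is better under the user's PACS weights."""
    return (weights["perf"] * resource["perf"]
            + weights["agility"] * resource["agility"]
            + weights["cost"] * (1.0 - resource["cost"])
            + weights["security"] * resource["security"])

def select(resources, weights, min_security=0.0):
    """Pick the best-scoring resource that meets a minimum security posture
    (e.g., a HIPAA-style floor); return None if no resource qualifies."""
    eligible = [r for r in resources if r["security"] >= min_security]
    return max(eligible, key=lambda r: pacs_score(r, weights)) if eligible else None
```

The hard `min_security` floor mirrors the constraint side of the problem (only sufficiently trusted volunteer/CSP resources are eligible), while the weights capture the per-user trade-off among the remaining PACS factors.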