
    Reputation-guided Evolutionary Scheduling Algorithm for Independent Tasks in inter-Clouds Environments

    Self-adaptation provides software with the flexibility to incorporate different behaviours (configurations) and the (semi-)autonomous ability to switch between these behaviours in response to changes. To empower clouds with the ability to capture and respond to quality feedback provided by users at runtime, we propose a reputation-guided genetic scheduling algorithm for independent tasks. Current resource management services apply evolutionary strategies to improve the performance of resource allocation procedures or task scheduling algorithms, but they fail to consider the user as part of the scheduling process. Evolutionary computing offers different methods to find a near-optimal solution. In this paper we extend previous work with new optimisation heuristics for the scheduling problem. We show how reputation can serve as an optimisation metric, and analyse how our metrics can act as upper bounds for others in the optimisation algorithm. Experimental comparison shows that our techniques lead to optimised results. Peer Reviewed
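
The abstract does not include the algorithm itself, but the idea of folding reputation into a genetic scheduler's fitness function can be conveyed with a minimal sketch. Everything below (task costs, reputation values, the penalty form of the fitness) is an illustrative assumption, not the authors' implementation.

```python
# A minimal sketch (not the authors' implementation) of a genetic scheduler
# whose fitness blends makespan with user-reported provider reputation.
import random

TASK_COST = [4, 2, 7, 1, 5, 3]                   # hypothetical task run times
RESOURCES = [("cloudA", 0.9), ("cloudB", 0.6)]   # (name, reputation in [0, 1])

def fitness(chromosome, w_rep=0.5):
    """Lower is better: makespan penalised by poor average reputation."""
    loads = [0.0] * len(RESOURCES)
    rep = 0.0
    for task, res in enumerate(chromosome):
        loads[res] += TASK_COST[task]
        rep += RESOURCES[res][1]
    makespan = max(loads)
    avg_rep = rep / len(chromosome)
    return makespan + w_rep * makespan * (1.0 - avg_rep)

def evolve(pop_size=30, generations=100, p_mut=0.1):
    n_tasks, n_res = len(TASK_COST), len(RESOURCES)
    pop = [[random.randrange(n_res) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_tasks)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:           # point mutation
                child[random.randrange(n_tasks)] = random.randrange(n_res)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

print(evolve())
```

Here a low average reputation inflates the effective makespan, steering the search toward reputable resources without changing the GA machinery itself.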

    Trusted resource allocation in volunteer edge-cloud computing for scientific applications

    Data-intensive science applications in fields such as bioinformatics, health sciences, and material discovery are becoming increasingly dynamic and demanding in their resource requirements. Researchers using these applications, which are based on advanced scientific workflows, frequently require a diverse set of resources that are often not available within private servers or a single Cloud Service Provider (CSP). For example, a user working with Precision Medicine applications may prefer only those CSPs that follow HIPAA (Health Insurance Portability and Accountability Act) guidelines for implementing their data services, yet want services from other CSPs for economic viability. As more and more data are generated, these workflows often require deployment and dynamic scaling of multi-cloud resources in an efficient and high-performance manner (e.g., quick setup, reduced computation time, and increased application throughput). At the same time, users seek to minimize the costs of configuring the related multi-cloud resources. While performance and cost are among the key factors in deciding upon CSP resource selection, scientific workflows often process proprietary/confidential data that introduces additional security constraints. Thus, users have to make an informed decision on the selection of resources that are most suited for their applications while trading off between the key factors of resource selection: performance, agility, cost, and security (PACS). Furthermore, even with the most efficient resource allocation across multiple clouds, the cost to solution might not be economical for all users, which has led to the development of new paradigms of computing such as volunteer computing, where users utilize volunteered cyber resources to meet their computing requirements. For economical and readily available resources, it is essential that such volunteered resources integrate well with cloud resources to provide the most efficient computing infrastructure for users. In this dissertation, the individual stages in the lifecycle of resource brokering for users are tackled: user requirement collection, users' resource preferences, resource brokering, and task scheduling. For the collection of user requirements, a novel approach through an iterative design interface is proposed. In addition, a fuzzy inference-based approach is proposed to capture users' biases and expertise to guide resource selection for their applications. The results showed improved performance, i.e., time to execute, in 98 percent of the studied applications. The data collected on users' requirements and preferences is later used by an optimizer engine and machine learning algorithms for resource brokering. For resource brokering, a new integer linear programming based solution (OnTimeURB) is proposed, which creates multi-cloud template solutions for resource allocation while also optimizing performance, agility, cost, and security. The solution was further improved by the addition of a machine learning model based on a naive Bayes classifier, which captures the true QoS of cloud resources to guide template solution creation. The proposed solution was able to improve the time to execute for as many as 96 percent of the largest applications.
As discussed above, to fulfill the need for economical computing resources, a new computing paradigm, Volunteer Edge Computing (VEC), is proposed, which reduces cost and improves performance and security by creating edge clusters comprising volunteered computing resources close to users. Initial results have shown improved workflow execution times against state-of-the-art solutions while utilizing only the most secure VEC resources. Consequently, we have utilized reinforcement learning based solutions to characterize volunteered resources in terms of their availability and their flexibility towards the implementation of security policies. This characterization facilitates efficient resource allocation and workflow task scheduling, which improves the performance and throughput of workflow executions. The VEC architecture is further validated with state-of-the-art bioinformatics and manufacturing workflows. Includes bibliographical references.
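
OnTimeURB itself is formulated as an integer linear program; a simple filter-and-rank pass over PACS attributes conveys the trade-off it optimizes. The provider figures, weights, and the HIPAA flag below are invented for illustration and are not taken from the dissertation.

```python
# A hedged sketch of PACS-style broker scoring: rank CSPs on performance,
# agility, cost, and security under a hard compliance constraint.
PROVIDERS = [
    # name,      perf, agility, cost($/h), security, hipaa
    ("csp_east", 0.90, 0.70,    2.40,      0.95,     True),
    ("csp_west", 0.80, 0.90,    1.10,      0.60,     False),
    ("csp_eu",   0.75, 0.60,    1.60,      0.85,     True),
]

def broker(weights, require_hipaa=True):
    """Rank providers by a weighted PACS score; cheaper is better for cost."""
    w_p, w_a, w_c, w_s = weights
    max_cost = max(p[3] for p in PROVIDERS)
    best, best_score = None, float("-inf")
    for name, perf, agi, cost, sec, hipaa in PROVIDERS:
        if require_hipaa and not hipaa:           # hard compliance constraint
            continue
        score = (w_p * perf + w_a * agi
                 + w_c * (1 - cost / max_cost)    # normalise and invert cost
                 + w_s * sec)
        if score > best_score:
            best, best_score = name, score
    return best, best_score

# e.g., a Precision Medicine user weighting security and performance heavily
print(broker(weights=(0.35, 0.10, 0.15, 0.40)))
```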

    A new MDA-SOA based framework for intercloud interoperability

    Cloud computing has been one of the most important topics in Information Technology, aiming to assure scalable and reliable on-demand services over the Internet. Expanding the application scope of cloud services requires cooperation between clouds from different providers that have heterogeneous functionalities. Such collaboration between different cloud vendors can provide better Quality of Service (QoS) at a lower price. However, current cloud systems have been developed without concern for seamless cloud interconnection, and in practice they do not support the intercloud interoperability that would enable collaboration between cloud service providers. Hence, this PhD work is motivated to address the interoperability issue between cloud providers as a challenging research objective. This thesis proposes a new framework which supports inter-cloud interoperability in a heterogeneous computing resource cloud environment, with the goal of dispatching the workload to the most effective clouds available at runtime. Analysing different methodologies that have been applied to resolve various interoperability-related problem scenarios led us to exploit Model Driven Architecture (MDA) and Service Oriented Architecture (SOA) methods as appropriate approaches for our inter-cloud framework. Moreover, since distributing operations in a cloud-based environment is an NP-complete problem, a Genetic Algorithm (GA) based job scheduler is proposed as part of the interoperability framework, offering workload migration with the best performance at the least cost. A new Agent Based Simulation (ABS) approach is proposed to model the inter-cloud environment with three types of agents: a Cloud Subscriber agent, a Cloud Provider agent, and a Job agent. The ABS model is used to evaluate the proposed framework. Fundação para a Ciência e a Tecnologia (FCT) grant SFRH/BD/33965/2009 and EC 7th Framework Programme grant agreement n° 604674 FITMAN (http://www.fitman-fi.eu).
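
As a rough illustration of the three-agent structure (Cloud Subscriber, Cloud Provider, and Job agents), the sketch below runs a toy discrete-step simulation. The greedy dispatch rule stands in for the thesis's GA-based scheduler, and all class names, behaviours, and figures are assumptions.

```python
# A minimal agent-based simulation sketch, not the thesis implementation.
import random

class JobAgent:
    """A unit of work submitted by a subscriber."""
    def __init__(self, jid, workload):
        self.jid, self.remaining = jid, workload

class CloudProviderAgent:
    """Serves queued jobs at a fixed capacity, shared among them."""
    def __init__(self, name, capacity, price):
        self.name, self.capacity, self.price = name, capacity, price
        self.queue = []

    def step(self):
        share = self.capacity / max(len(self.queue), 1)
        finished = 0
        for job in list(self.queue):
            job.remaining -= share
            if job.remaining <= 0:
                self.queue.remove(job)
                finished += 1
        return finished

class CloudSubscriberAgent:
    """Dispatches jobs; a greedy rule stands in for the GA scheduler."""
    def __init__(self, providers):
        self.providers = providers

    def dispatch(self, job):
        target = min(self.providers, key=lambda p: (len(p.queue), p.price))
        target.queue.append(job)

providers = [CloudProviderAgent("c1", 4.0, 1.0),
             CloudProviderAgent("c2", 6.0, 1.5)]
subscriber = CloudSubscriberAgent(providers)
for i in range(10):
    subscriber.dispatch(JobAgent(i, workload=random.uniform(5, 20)))
for t in range(8):
    print(t, [p.step() for p in providers])  # jobs finished per provider per step
```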

    Energy Aware Resource Allocation for Clouds Using Two Level Ant Colony Optimization

    In a cloud environment, resources are dynamically allocated, adjusted, and deallocated. Deciding when to allocate and how many resources to allocate is a challenging task. Resources allocated optimally and at the right time not only improve resource utilization but also increase energy efficiency, the provider's profit, and customers' satisfaction. This paper presents an ant colony optimization (ACO) based, energy-aware solution to the resource allocation problem. The proposed energy-aware resource allocation (EARA) methodology strives to optimize the allocation of resources in order to improve the energy efficiency of the cloud infrastructure while satisfying the quality of service (QoS) requirements of the end users. Resources are allocated to jobs according to their QoS requirements. For energy-efficient and QoS-aware allocation of resources, EARA uses ACO at two levels: the first-level ACO allocates Virtual Machine (VM) resources to jobs, whereas the second-level ACO allocates Physical Machine (PM) resources to VMs. Server consolidation and dynamic performance scaling of PMs are employed to conserve energy. The proposed methodology is implemented in CloudSim and the results are compared with existing popular resource allocation methods. Simulation results demonstrate that EARA achieves the desired QoS and superior energy gains through better utilization of resources, outperforming major existing resource allocation methods with up to 10.56% savings in energy consumption.
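
A minimal sketch of one ACO level (jobs to VMs) is shown below; EARA applies the same mechanism a second time to map VMs onto PMs. The job lengths, VM speeds, and parameter values are illustrative assumptions, not the paper's configuration.

```python
# A hedged single-level ACO sketch: pheromone-guided job-to-VM assignment
# minimising makespan. The second (VM-to-PM) level would be analogous.
import random

JOB_LEN = [8, 3, 5, 2, 6]        # hypothetical job lengths
VM_MIPS = [4.0, 2.0]             # hypothetical VM speeds

def makespan(assign):
    loads = [0.0] * len(VM_MIPS)
    for j, vm in enumerate(assign):
        loads[vm] += JOB_LEN[j] / VM_MIPS[vm]
    return max(loads)

def aco(ants=20, iters=50, rho=0.1, alpha=1.0, beta=2.0):
    tau = [[1.0] * len(VM_MIPS) for _ in JOB_LEN]   # pheromone matrix
    best, best_ms = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            assign = []
            for j in range(len(JOB_LEN)):
                # desirability: pheromone ** alpha * (VM speed) ** beta
                weights = [tau[j][v] ** alpha * VM_MIPS[v] ** beta
                           for v in range(len(VM_MIPS))]
                assign.append(random.choices(range(len(VM_MIPS)),
                                             weights=weights)[0])
            ms = makespan(assign)
            if ms < best_ms:
                best, best_ms = assign, ms
        for j in range(len(JOB_LEN)):               # evaporate, then reinforce
            for v in range(len(VM_MIPS)):
                tau[j][v] *= (1 - rho)
            tau[j][best[j]] += 1.0 / best_ms
    return best, best_ms

print(aco())
```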

    Advances in Grid Computing

    This book approaches grid computing with a perspective on the latest achievements in the field, providing an insight into current research trends and advances, and presenting a wide range of innovative research papers. The topics covered include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence, genetic algorithms, and quantum encryption are considered in order to address the two main aspects of grid computing: resource management and data management. The book also covers aspects of grid architecture and development, and includes a diverse range of grid computing applications, including a possible human grid computing system, simulation of the fusion reaction, ubiquitous healthcare service provisioning, and complex water systems.

    Broker-mediated Multiple-Cloud Orchestration Mechanisms for Cloud Computing

    Ph.D. (Doctor of Philosophy) thesis.

    An energy-aware scheduling approach for resource-intensive jobs using smart mobile devices as resource providers

    The ever-growing adoption of smart mobile devices is a worldwide phenomenon that positions smartphones and tablets as primary devices for communication and Internet access. In addition, the computing capabilities of such devices, often underutilized by their owners, are continuously improving. Today, smart mobile devices have multi-core CPUs, several gigabytes of RAM, and the ability to communicate through several wireless networking technologies. These facts caught the attention of researchers, who have proposed to leverage the aggregated computing capabilities of smart mobile devices for running resource-intensive software. However, this idea is conditioned by key features, termed singularities in the context of this thesis, that characterize resource provision with smart mobile devices. These are the ability of devices to change location (user mobility), the shared or non-dedicated nature of the resources provided (lack of ownership), and the limited operation time given by the finite energy source (exhaustible resources). Existing proposals materializing this idea differ in the combinations of singularities they target and the way they address each singularity, which makes them suitable for distinct goals and resource exploitation opportunities. The latter are represented by real-life situations where resources provided by groups of smart mobile devices can be exploited, which in turn are characterized by a social context and a networking support used to link and coordinate devices. The behavior of people in a given social context configures a particular availability level of resources, while the underlying networking support imposes restrictions on how information flows, computational tasks are distributed, and results are collected. The networking support constitutes one fundamental difference between proposals, mainly because each kind, i.e., ad-hoc or infrastructure-based, has its own application scenarios. Aside from the singularities addressed and the networking support utilized, the weakest point of most proposals is their practical applicability: the performance achieved relies heavily on the accuracy with which task information, including execution time and/or energy required for execution, is provided to feed the resource allocator. The expanded usage of wireless communication infrastructure in public and private buildings, e.g., shopping centres, work offices, university campuses, and so on, constitutes a networking support that can be naturally re-utilized for leveraging the computational capabilities of smart mobile devices. In this context, this thesis aims to contribute an easy-to-implement scheduling approach for running CPU-bound applications on a cluster of smart mobile devices. The approach is aware of the finite nature of smart mobile device energy, and it does not depend on task information to operate. Instead, it allocates computational resources to incoming tasks using a node ranking-based strategy. The ranking weights nodes by combining static and dynamic parameters, including benchmark results, battery level, and number of queued tasks, among others. This node ranking-based task assignment, or first allocation phase, is complemented with a re-balancing phase using job stealing techniques. The second allocation phase compensates for the load imbalance provoked by the non-dedicated nature of smart mobile device CPU usage, i.e., the effect of owner interaction, task heterogeneity, and the lack of up-to-date and accurate remaining energy estimations.
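
The ranking formula is not given in the abstract; the sketch below shows one plausible shape for the first allocation phase, combining a static benchmark score with dynamic battery and queue state. The weights and attribute names are assumptions, and the job-stealing re-balancing phase is omitted.

```python
# A hedged sketch of node ranking-based task assignment; the weights and
# attributes are illustrative assumptions, not the thesis's exact formula.
def node_rank(benchmark, battery_pct, queued_tasks,
              w_bench=0.5, w_batt=0.4, w_queue=0.1):
    """Higher rank means a more attractive target for the next task."""
    return (w_bench * benchmark
            + w_batt * (battery_pct / 100.0)
            - w_queue * queued_tasks)

nodes = {                      # illustrative device snapshots
    "phone_a":  dict(benchmark=0.8, battery_pct=90, queued_tasks=2),
    "tablet_b": dict(benchmark=1.0, battery_pct=35, queued_tasks=0),
}

def assign(task_id):
    """Send the task to the best-ranked node and update its queue."""
    target = max(nodes, key=lambda n: node_rank(**nodes[n]))
    nodes[target]["queued_tasks"] += 1
    return target

print([assign(t) for t in range(4)])
```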
The scheduling approach is evaluated through in-vitro simulation. A novel simulator that exploits energy consumption profiles of real smart mobile devices, as well as fluctuating CPU usage built upon empirical models derived from real user interaction data, is another major contribution. Tests that validate the simulation tool are provided, and the approach is evaluated in scenarios varying the composition of nodes and tasks, including different task arrival rates, task requirements, and levels of node resource utilization. Fil: Hirsch Jofré, Matías Eberardo. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Tandil. Instituto Superior de Ingeniería del Software. Universidad Nacional del Centro de la Provincia de Buenos Aires. Instituto Superior de Ingeniería del Software; Argentina.

    End-to-End Trust Fulfillment of Big Data Workflow Provisioning over Competing Clouds

    Cloud Computing has emerged as a promising and powerful paradigm for delivering data-intensive, high-performance computation, applications, and services over the Internet. Cloud Computing has enabled the implementation and success of Big Data, a relatively recent phenomenon consisting of the generation and analysis of abundant data from various sources. Accordingly, to satisfy the growing demands of Big Data storage, processing, and analytics, a large market of Cloud Service Providers has emerged, offering a myriad of resources, platforms, and infrastructures. The proliferation of these services often makes it difficult for consumers to select the most suitable and trustworthy provider to fulfill the requirements of building complex workflows and applications in a relatively short time. In this thesis, we first propose a quality specification model to support dual pre- and post-cloud workflow provisioning, consisting of service provider selection and workflow quality enforcement and adaptation. This model captures key properties of the quality of work at different stages of the Big Data value chain, enabling standardized quality specification, monitoring, and adaptation. Subsequently, we propose a two-dimensional trust-enabled framework to facilitate end-to-end Quality of Service (QoS) enforcement that: 1) automates cloud service provider selection for Big Data workflow processing, and 2) maintains the required QoS levels of Big Data workflows during runtime through dynamic orchestration using multi-model architecture-driven workflow monitoring, prediction, and adaptation. The trust-based automatic service provider selection scheme we propose in this thesis is comprehensive and adaptive, as it relies on a dynamic trust model to evaluate the QoS of a cloud provider prior to making any selection decisions. It is a multi-dimensional trust model for Big Data workflows over competing clouds that assesses the trustworthiness of cloud providers based on three trust levels: (1) the presence of the most up-to-date verified cloud resource capabilities, (2) reputational evidence measured by neighbouring users, and (3) a recorded personal history of experiences with the cloud provider. The trust-based workflow orchestration scheme we propose aims to avoid performance degradation and cloud service interruption. Our workflow orchestration approach is based not only on automatic adaptation and reconfiguration supported by monitoring, but also on predicting cloud resource shortages, thus preventing performance degradation. We formalize the cloud resource orchestration process using a state machine that efficiently captures the dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our monitoring model in terms of reachability, liveness, and safety properties. We evaluate both our automated service provider selection scheme and our cloud workflow orchestration, monitoring, and adaptation schemes on a workflow-enabled Big Data application. A set of scenarios was carefully chosen to evaluate the performance of the service provider selection, workflow monitoring, and adaptation schemes we have implemented. The results demonstrate that our service selection outperforms other selection strategies and ensures trustworthy service provider selection. The results of evaluating automated workflow orchestration further show that our model is self-adapting and self-configuring, and that it reacts efficiently to changes, adapting accordingly while enforcing the QoS of workflows.
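
The three trust levels named above can be illustrated by a simple weighted aggregation; the weighting scheme and all figures below are assumptions for illustration, not the thesis's actual trust model.

```python
# A minimal sketch combining verified capabilities, neighbour reputation,
# and personal history into a single provider trust score.
def trust_score(capabilities_ok, neighbour_reports, personal_history,
                w=(0.4, 0.3, 0.3)):
    cap = 1.0 if capabilities_ok else 0.0                  # verified, up-to-date
    rep = sum(neighbour_reports) / len(neighbour_reports)  # ratings in [0, 1]
    hist = sum(personal_history) / len(personal_history)   # past experiences
    return w[0] * cap + w[1] * rep + w[2] * hist

providers = {
    "cloud_x": trust_score(True, [0.9, 0.8, 0.85], [0.95, 0.9]),
    "cloud_y": trust_score(True, [0.6, 0.7],       [0.5]),
}
print(max(providers, key=providers.get))   # select the most trustworthy CSP
```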

    Data Replication and Its Alignment with Fault Management in the Cloud Environment

    Nowadays, exponential data growth has become one of the major challenges worldwide. It may cause a series of negative impacts such as network overloading, high system complexity, and inadequate data security. Cloud computing was developed to construct a novel paradigm that alleviates massive data processing challenges with its on-demand services and distributed architecture. Data replication has been proposed to strategically distribute the data access load across multiple cloud data centres by creating multiple data copies at those centres. A replica-applied cloud environment not only achieves shorter response times, increased data availability, and a more balanced resource load, but also protects the cloud environment against upcoming faults. A reactive fault tolerance strategy is also required to handle faults once they have occurred. As a result, data replication strategies should be aligned with reactive fault tolerance strategies to achieve a complete management chain in the cloud environment. In this thesis, a data replication and fault management framework is proposed to establish decentralised, overarching management of the cloud environment. Three data replication strategies are first proposed based on this framework. A replica creation strategy is proposed to reduce the total cost by jointly considering data dependency and access frequency in the replica creation decision-making process. Besides, a cloud map oriented and cost efficiency driven replica creation strategy is proposed to achieve the optimal cost reduction per replica in the cloud environment. The local and remote data relationships are further analysed by defining two novel data dependency types, Within-DataCentre Data Dependency and Between-DataCentre Data Dependency, according to the data location. Furthermore, a network performance based replica selection strategy is proposed to avoid potential network overloading problems while increasing the number of concurrently running instances.
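
As a toy illustration of cost-driven replica creation, the sketch below creates a replica only when the transfer cost it saves, plus a bonus for dependent local data (echoing the Within-DataCentre dependency), exceeds the storage cost. The cost model and all figures are assumptions, not the thesis's strategy.

```python
# A hedged sketch of a cost-driven replica creation decision.
ACCESS_FREQ = {("dc1", "fileA"): 120, ("dc2", "fileA"): 15}  # reads/hour
TRANSFER_COST = 0.02     # $ per remote read avoided by a local replica
STORAGE_COST = 1.50      # $ per replica per hour

def should_replicate(dc, obj, dependents=0, dep_bonus=0.5):
    """Replicate if saved transfer cost plus a dependency bonus
    (for data co-located in the same centre) exceeds storage cost."""
    saved = ACCESS_FREQ.get((dc, obj), 0) * TRANSFER_COST
    return saved + dep_bonus * dependents > STORAGE_COST

print(should_replicate("dc1", "fileA", dependents=2))  # True: hot object
print(should_replicate("dc2", "fileA"))                # False: rarely read
```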