
    An Elastic Scheduling Algorithm For Resource Co-Allocation Based on System Generated Predictions With Priority

    Resource co-allocation is used to execute multi-site jobs in large-scale computing environments in a secure, fault-free and transparent manner. More precisely, multiple resources are allocated to different jobs while taking the time parameter into account. Here we use a scheduling queue together with resource co-allocation to reduce turn-around time, based on the concept of system-generated predictions with priority. Existing work schedules resource co-allocation requests from user runtime estimates, which are usually very imprecise. In the proposed work, resource co-allocation requests are scheduled based on system-generated predictions obtained through a discovery service, and on priority (fairness and user experience) determined through a topological sorting technique. System-generated predictions are better parameters than user runtime estimates for resource co-allocation scheduling because they reduce scheduling time through a proxy-server-based discovery service technique. The proposed work considers priorities such as advance reservation, system-generated predictions, negotiation, co-scheduling, and policy (SLA, price, trust) for resource co-allocation. Using experimental data, we show that system-generated predictions outperform user runtime estimates for resource co-allocation scheduling. The end user needs no knowledge of the grid or its resources and only submits a job to the portal; the proposed portal handles all resource co-allocation details automatically, in a fast and efficient manner.
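
    The topological-sort step above can be pictured with a short sketch. The snippet below is only an illustration of the general idea, not the paper's portal or scheduler: job names, precedence constraints and predicted runtimes are hypothetical, and the ordering uses Python's standard graphlib.

```python
# Minimal sketch (not the paper's implementation): order co-allocation
# requests by precedence constraints using a topological sort, then place
# each request using a system-generated runtime prediction.
from graphlib import TopologicalSorter

# Hypothetical input: request -> set of requests that must be scheduled first
# (e.g. derived from SLA, price or trust policies).
precedence = {
    "job_C": {"job_A", "job_B"},
    "job_B": {"job_A"},
    "job_A": set(),
}

# Hypothetical system-generated runtime predictions (seconds), standing in
# for the discovery-service output that replaces user runtime estimates.
predicted_runtime = {"job_A": 120, "job_B": 300, "job_C": 90}

def schedule(precedence, predicted_runtime):
    """Return (job, start_time) pairs on a single co-allocated slot."""
    order = TopologicalSorter(precedence).static_order()
    start, plan = 0, []
    for job in order:
        plan.append((job, start))
        start += predicted_runtime[job]   # advance the reservation window
    return plan

print(schedule(precedence, predicted_runtime))
```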

    Exploring the catallactic coordination approach for peer-to-peer systems

    Efficient discovery and resource allocation is one of the challenges of current peer-to-peer systems. In centralized approaches, user requests can be matched to the fastest, cheapest or most available resource; this approach, however, shows scalability limits. In this paper, we explore catallactic coordination as a decentralized economic approach for resource allocation in peer-to-peer networks. The economic model of the catallaxy is based on the self-interested maximization of utility and the negotiation of prices between agents. We evaluate the feasibility of our approach by means of simulations and compare the proposed system with a centralized baseline approach. Our results indicate that while in the catallactic approach the number of control messages exchanged between the peers grows due to the negotiation process, its service provision rate remains fairly constant across different dynamic environments. Peer reviewed.
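
    To make the negotiation idea concrete, the toy sketch below shows a bilateral price negotiation between a self-interested buyer and seller in the spirit of the catallactic model; the concession rule and all numbers are illustrative assumptions rather than the paper's protocol.

```python
# Toy sketch of catallactic-style bilateral price negotiation: a buyer and a
# seller iteratively concede toward each other until bid meets ask.
def negotiate(buyer_bid, seller_ask, concession=0.1, max_rounds=20):
    """Return the agreed price, or None if no agreement is reached."""
    for _ in range(max_rounds):
        if buyer_bid >= seller_ask:                 # deal: bid meets the ask
            return (buyer_bid + seller_ask) / 2
        buyer_bid += concession * (seller_ask - buyer_bid)    # buyer concedes
        seller_ask -= concession * (seller_ask - buyer_bid)   # seller concedes
    return None

print(negotiate(buyer_bid=5.0, seller_ask=9.0))
```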

    Grid metadata management: requirements and architecture

    Metadata annotations of grid resources can potentially be used for a number of purposes, including accurate resource allocation to jobs, discovery of services, and precise retrieval of information resources. In order to realize this potential on a large scale, various aspects of metadata must be managed. These include uniform and secure access to distributed and independently maintained metadata repositories, as well as management of the metadata lifecycle. In this paper we analyze these issues and present a service-oriented architecture for metadata management, called S-OGSA, that addresses them in a systematic way.

    A fuzzy virtual machine workload prediction method for cloud environments

    Due to the dynamic nature of cloud environments, the workload of virtual machines (VMs) fluctuates, leading to imbalanced loads and utilization of virtual and physical cloud resources. It is therefore essential that cloud providers accurately forecast VM performance and resource utilization so they can appropriately manage their assets to deliver better-quality cloud services on demand. Current workload and resource prediction methods forecast the workload or CPU utilization pattern of a given web-based application from its historical data. This gives cloud providers an indication of the number of resources (VMs or CPUs) these applications require, so that resource allocation can be optimized for software as a service (SaaS) or platform as a service (PaaS) and service costs reduced. However, historical data cannot be the only data source for VM workload predictions, as it may not be available in every situation, nor can it provide information about sudden and unexpected peaks in user demand. To solve these issues, we have developed a fuzzy workload prediction method that monitors both historical and current VM CPU utilization and workload to predict which VMs are likely to perform poorly. The model can also predict the utilization of physical machine (PM) resources for virtual resource discovery.
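
    As a rough illustration of how current and historical utilization might be fused by fuzzy rules, consider the sketch below; the membership functions, rules and thresholds are invented for the example and are not the paper's actual prediction model.

```python
# Illustrative sketch only: a tiny fuzzy predictor that combines current and
# historical CPU utilization of a VM into a "likely overloaded" score in [0, 1].
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def overload_score(current_util, historical_avg):
    """Higher score means the VM is more likely to perform poorly."""
    cur_high  = tri(current_util, 0.5, 0.8, 1.01)
    cur_med   = tri(current_util, 0.2, 0.5, 0.8)
    hist_high = tri(historical_avg, 0.4, 0.7, 1.01)
    # Simple Mamdani-style rules: both high -> strong evidence; mixed -> weaker.
    rules = [min(cur_high, hist_high), 0.5 * min(cur_med, hist_high)]
    return max(rules)

print(overload_score(current_util=0.85, historical_avg=0.75))
```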

    MPICH-G2: A Grid-Enabled Implementation of the Message Passing Interface

    Application development for distributed computing "Grids" can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus Toolkit for authentication, authorization, resource allocation, executable staging, and I/O, as well as for process creation, monitoring, and control. Various performance-critical operations, including startup and collective operations, are configured to exploit network topology information. The library also exploits MPI constructs for performance management; for example, the MPI communicator construct is used for application-level discovery of, and adaptation to, both network topology and network quality-of-service mechanisms. We describe the MPICH-G2 design and implementation, present performance results, and review application experiences, including record-setting distributed simulations. Comment: 20 pages, 8 figures.
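
    The communicator-based grouping mentioned above can be pictured with a generic sketch using mpi4py; this is not MPICH-G2's actual API. The SITE_ID environment variable is a hypothetical stand-in for the topology information MPICH-G2 derives from the Grid environment.

```python
# Generic illustration: split MPI_COMM_WORLD into per-site communicators so
# collective operations can be staged hierarchically (within a site first).
import os
from mpi4py import MPI

world = MPI.COMM_WORLD
site_id = int(os.environ.get("SITE_ID", "0"))   # assumed per-process site label

# Processes sharing a site_id join the same sub-communicator.
site_comm = world.Split(color=site_id, key=world.Get_rank())

# First stage of a hierarchical reduction: sum within each site; a second
# stage across the rank-0 process of every site would complete it.
local_sum = site_comm.reduce(world.Get_rank(), op=MPI.SUM, root=0)
if site_comm.Get_rank() == 0:
    print(f"site {site_id}: partial sum of world ranks = {local_sum}")
```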

    VINEA: a policy-based virtual network embedding architecture

    Network virtualization has enabled new business models by allowing infrastructure providers to lease or share their physical network. To concurrently run multiple customized virtual network services, such infrastructure providers need to run a virtual network embedding protocol. Virtual network embedding is the (NP-hard) problem of matching constrained virtual networks onto the physical network. We present the design and implementation of a policy-based architecture for the virtual network embedding problem. By policy, we mean a variant aspect of any of the (invariant) embedding mechanisms: resource discovery, virtual network mapping, and allocation on the physical infrastructure. Our architecture adapts to different scenarios by instantiating appropriate policies, and has bounds on embedding efficiency and on embedding convergence time, over a single provider or across multiple federated providers. The performance of representative novel policy configurations is compared over a prototype implementation. We also present an object model as a foundation for a protocol specification, and we release a testbed to enable users to test their own embedding policies and to run applications within their virtual networks. The testbed uses a Linux system architecture to reserve virtual node and link capacities. National Science Foundation (CNS-0963974).
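
    As an illustration of the node-mapping step that such an embedding mechanism must solve, here is a minimal greedy sketch; the data structures and the greedy rule are assumptions for the example and do not correspond to any particular VINEA policy.

```python
# Minimal greedy sketch of the node-mapping step of virtual network embedding:
# place the most demanding virtual nodes first on the physical node with the
# largest residual CPU capacity.
def map_nodes(virtual_demand, physical_capacity):
    """Return a virtual-to-physical node mapping, or None if embedding fails."""
    residual = dict(physical_capacity)
    mapping = {}
    for vnode, demand in sorted(virtual_demand.items(), key=lambda kv: -kv[1]):
        host = max(residual, key=residual.get)      # best remaining host
        if residual[host] < demand:
            return None                             # embedding rejected
        mapping[vnode] = host
        residual[host] -= demand
    return mapping

print(map_nodes({"v1": 4, "v2": 2, "v3": 2}, {"p1": 5, "p2": 4}))
```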

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine the efficiency of resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we find that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery and sharing objectives. As for resource types, the most well-studied resources are computation and communication resources. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.