    Metascheduling and Heuristic Co-Allocation Strategies in Distributed Computing

    In this paper, we address the problem of efficient computing in distributed systems with non-dedicated resources, including utility grids. Because the resources are non-dedicated, global job flows from external users coexist with the resource owners' local tasks. Competition for resource reservations among independent users and between local and global job flows substantially complicates scheduling and the provision of the required quality of service. The metascheduling concept justified in this work combines job-flow dispatching with application-level scheduling of parallel jobs, together with resource sharing and consumption policies established in virtual organizations and based on economic principles. We introduce heuristic slot selection and co-allocation strategies for parallel jobs; they are formalized by given criteria and implemented by algorithms whose complexity is linear in the number of available slots.
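    A minimal sketch of the kind of linear-pass slot co-allocation the abstract refers to, not the authors' actual strategies: the Slot and Job fields, the budget criterion and the first-fit selection below are illustrative assumptions.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Slot:
        node: str       # resource offering this free time window
        length: float   # usable length of the window
        price: float    # cost per time unit (economic policy input)

    @dataclass
    class Job:
        nodes_needed: int   # degree of parallelism
        runtime: float      # required window length
        budget: float       # maximum total cost the user accepts

    def co_allocate(job: Job, slots: List[Slot]) -> Optional[List[Slot]]:
        # Single pass over the available slots (O(m)): take the first combination of
        # nodes_needed windows that each fit the runtime and jointly fit the budget.
        # For brevity the sketch assumes the windows are already aligned in time;
        # a real co-allocator must also enforce a common start time.
        chosen: List[Slot] = []
        cost = 0.0
        for slot in slots:
            if slot.length < job.runtime:
                continue
            slot_cost = slot.price * job.runtime
            if cost + slot_cost > job.budget:
                continue
            chosen.append(slot)
            cost += slot_cost
            if len(chosen) == job.nodes_needed:
                return chosen
        return None   # no feasible co-allocation in this scheduling cycle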

    User subscription-based resource management for Desktop-as-a-Service platforms

    The Desktop-as-a-Service (DaaS) paradigm consists of using a cloud or other server infrastructure to host a user's desktop environment as a virtual desktop. Cloud and DaaS services typically combine pay-as-you-go pricing with multiple subscription types that accommodate different user needs. However, cost-optimal allocation of the virtual desktops to the infrastructure is a combinatorial NP-hard problem, for which a heuristic is presented in this article. We present a cost model for the DaaS service from which the revenue of different allocations of virtual desktops to the servers can be derived. The cost model accounts for both the subscription fees and the penalties for degraded service described in the service-level agreements (SLAs) between the service provider and the users, and makes the realistic assumption that different subscription types result in different SLA contracts. The proposed heuristic states that, for a given user base whose virtual desktops (VDs) must be hosted, the VDs should be spread evenly over the infrastructure. Experiments through discrete event simulation show that this heuristic yields an approximation within 1% of the theoretically achievable revenue.
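    The even-spreading heuristic lends itself to a very small sketch; the placement rule and the revenue bookkeeping below (round-robin placement, a flat per-VD fee, a per-overload penalty) are illustrative assumptions, not the cost model of the article.

    from collections import defaultdict
    from typing import Dict, List

    def spread_evenly(virtual_desktops: List[str], servers: List[str]) -> Dict[str, List[str]]:
        # Round-robin placement so every server hosts roughly the same number of VDs.
        placement: Dict[str, List[str]] = defaultdict(list)
        for i, vd in enumerate(virtual_desktops):
            placement[servers[i % len(servers)]].append(vd)
        return placement

    def revenue(placement: Dict[str, List[str]],
                fee_per_vd: float, capacity: int, penalty_per_overload: float) -> float:
        # Subscription income minus penalties for servers hosting more VDs than
        # they can serve at full quality (a stand-in for "degraded service").
        income = fee_per_vd * sum(len(vds) for vds in placement.values())
        penalties = sum(max(0, len(vds) - capacity) * penalty_per_overload
                        for vds in placement.values())
        return income - penalties

    For example, spreading ten VDs over three servers of capacity four incurs no penalty in this toy model, whereas stacking them on two servers would.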

    Scheduling of data-intensive workloads in a brokered virtualized environment

    Providing performance predictability guarantees is increasingly important in cloud platforms, especially for data-intensive applications, whose performance depends greatly on the available rates of data transfer between the computing/storage hosts underlying the virtualized resources assigned to the application. With the increased prevalence of brokerage services in cloud platforms, there is a need for resource management solutions that consider the brokered nature of these workloads, as well as the special demands of their intra-dependent components. In this paper, we present an offline mechanism for scheduling batches of brokered data-intensive workloads, which can be extended to an online setting. The objective of the mechanism is to decide on a packing of the workloads in a batch that minimizes the broker's incurred costs. Moreover, considering the brokered nature of such workloads, we define a payment model that provides incentives for these workloads to be scheduled as part of a batch, and we analyze this model theoretically. Finally, we evaluate the proposed scheduling algorithm and demonstrate the fairness of the payment model in practical settings via trace-based experiments.
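    As an illustration of batch packing only, not the paper's mechanism or payment model: the sketch below packs workloads onto rented hosts with a greedy first-fit-decreasing pass and splits the resulting broker cost in proportion to demand; all names and parameters are assumptions.

    from typing import Dict, List

    def pack_batch(demands: Dict[str, int], host_capacity: int) -> List[List[str]]:
        # Assign each workload (sized by its data-transfer demand) to the first
        # host with enough remaining capacity; open a new host otherwise.
        hosts: List[List[str]] = []
        remaining: List[int] = []
        for name in sorted(demands, key=demands.get, reverse=True):
            for i, free in enumerate(remaining):
                if demands[name] <= free:
                    hosts[i].append(name)
                    remaining[i] -= demands[name]
                    break
            else:
                hosts.append([name])
                remaining.append(host_capacity - demands[name])
        return hosts

    def batch_payments(demands: Dict[str, int], cost_per_host: float,
                       hosts: List[List[str]]) -> Dict[str, float]:
        # A simple proportional split of the broker's total cost; the paper's
        # incentive-providing payment model is analyzed theoretically and is
        # not reproduced here.
        total_cost = cost_per_host * len(hosts)
        total_demand = sum(demands.values())
        return {w: total_cost * demands[w] / total_demand for w in demands}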

    Toolflows for Mapping Convolutional Neural Networks on FPGAs: A Survey and Future Directions

    In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated into the existing deep learning ecosystem to provide a tunable balance between performance, power consumption and programmability. In this paper, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive and in-depth evaluation of CNN-to-FPGA toolflows.
    Comment: Accepted for publication in ACM Computing Surveys (CSUR), 2018
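    Not taken from the survey itself: a small record type, under assumed field names, for organizing a toolflow comparison along the axes the abstract lists (supported applications, architectural choices, design space exploration method, achieved performance).

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ToolflowEntry:
        name: str                              # published name of the toolflow
        supported_applications: List[str]      # CNN workloads it can map
        architecture: str                      # e.g. a streaming or single-engine design
        dse_method: str                        # how the design space is explored
        reported_performance_gops: float       # throughput reported by its authors

        def summary(self) -> str:
            return (f"{self.name}: {self.architecture} architecture, "
                    f"DSE via {self.dse_method}, "
                    f"{self.reported_performance_gops:.1f} GOP/s reported")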