    Multi-capacity bin packing with dependent items and its application to the packing of brokered workloads in virtualized environments

    Providing resource allocation with performance predictability guarantees is increasingly important in cloud platforms, especially for data-intensive applications, in which performance depends greatly on the available rates of data transfer between the various computing/storage hosts underlying the virtualized resources assigned to the application. Existing resource allocation solutions either assume that applications manage the data transfer between their virtualized resources, or that cloud providers manage their internal networking resources. With the increased prevalence of brokerage services in cloud platforms, there is a need for resource allocation solutions that provide predictability guarantees in settings in which neither application scheduling nor cloud provider resources can be managed or controlled by the broker. This paper addresses this problem: we define the Network-Constrained Packing (NCP) problem of finding the optimal mapping of brokered resources to applications with guaranteed performance predictability. We prove that NCP is NP-hard, and we define two special instances of the problem for which exact solutions can be found efficiently. We develop a greedy heuristic to solve the general instance of the NCP problem, and we evaluate its efficiency using simulations on various application workloads and network models. This work was done while the author was at Boston University. It was partially supported by NSF CISE awards #1430145, #1414119, #1239021 and #1012798.
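
    The abstract does not spell out the heuristic, so the following is only a minimal sketch of what a first-fit-style greedy for multi-capacity packing with dependent items might look like; the function name, the demand/bandwidth model, and the ordering rule are all illustrative assumptions, not the paper's algorithm.

        def greedy_pack(items, host_caps, demands, links, bandwidth):
            """Greedily place items on hosts, respecting every capacity
            dimension and the transfer rates required between dependent items.
            host_caps[h]: capacity vector of host h.
            demands[i]: demand vector of item i (same dimensions).
            links[(i, j)]: rate required between dependent items i and j.
            bandwidth[(h, g)]: available rate between hosts h and g; supply
            both orientations, with (h, h) set very large for local transfer."""
            residual = {h: list(c) for h, c in host_caps.items()}
            placement = {}
            nbrs = {i: [] for i in items}          # dependency-graph adjacency
            for (i, j), rate in links.items():
                nbrs[i].append((j, rate))
                nbrs[j].append((i, rate))
            # Place the most demanding items first, a common greedy ordering.
            for i in sorted(items, key=lambda x: -sum(demands[x])):
                for h, res in residual.items():
                    if any(r < d for r, d in zip(res, demands[i])):
                        continue                   # some dimension overflows
                    if all(bandwidth.get((h, placement[j]), 0) >= rate
                           for j, rate in nbrs[i] if j in placement):
                        placement[i] = h
                        residual[h] = [r - d for r, d in zip(res, demands[i])]
                        break
                else:
                    return None                    # no feasible host found
            return placement

        # Tiny example: the a<->b link (rate 5) exceeds cross-host bandwidth (2),
        # so the heuristic co-locates a and b and sends c elsewhere.
        hosts = {"h1": (4, 8), "h2": (4, 8)}       # e.g. (cores, GB of RAM)
        demands = {"a": (2, 4), "b": (2, 4), "c": (3, 2)}
        links = {("a", "b"): 5}
        bw = {("h1", "h2"): 2, ("h2", "h1"): 2,
              ("h1", "h1"): 10**9, ("h2", "h2"): 10**9}
        print(greedy_pack(["a", "b", "c"], hosts, demands, links, bw))
        # -> {'a': 'h1', 'b': 'h1', 'c': 'h2'}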

    Network-constrained packing of brokered workloads in virtualized environments

    Providing resource allocation with performance predictability guarantees is increasingly important in cloud platforms, especially for data-intensive applications, in which performance depends greatly on the available rates of data transfer between the various computing/storage hosts underlying the virtualized resources assigned to the application. Existing resource allocation solutions either assume that applications manage the data transfer between their virtualized resources, or that cloud providers manage their internal networking resources. With the increased prevalence of brokerage services in cloud platforms, there is a need for resource allocation solutions that provide predictability guarantees in settings in which neither application scheduling nor cloud provider resources can be managed or controlled by the broker. This paper addresses this problem: we define the Network-Constrained Packing (NCP) problem of finding the optimal mapping of brokered resources to applications with guaranteed performance predictability. We prove that NCP is NP-hard, and we define two special instances of the problem for which exact solutions can be found efficiently. We develop a greedy heuristic to solve the general instance of the NCP problem, and we evaluate its efficiency using simulations on various application workloads and network models. This work is supported by NSF CISE CNS awards #1347522, #1239021, and #1012798.

    The assignment problem in distributed computing

    This dissertation focuses on the problem of assigning the modules of a program to the processors in a distributed system with the goal of minimizing the overall cost of running the program. The cost depends on the execution times of the modules on the processors and on the cost of communication between modules. This module allocation problem arises in a variety of situations where one is interested in making optimum use of available computer resources. The general module allocation problem is intractable; however, it becomes polynomially solvable when the communication graph is restricted. In this dissertation, we restrict our attention to k-trees.

    As the first problem, we study parametric module allocation on partial k-trees. We allow the costs, both execution and communication, to vary linearly as functions of a real parameter t. We show that if the number of processors is fixed, the sequence of optimum assignments that are obtained, as t varies from zero to infinity, can be constructed in polynomial time. As an auxiliary result, we develop a linear-time algorithm to find a separator in a k-tree. We discuss the implications of our results for parametric versions of the weighted vertex cover, independent set, and 0-1 quadratic programming problems on partial k-trees.

    Next, we consider two variants of the assignment problem. The first problem is to find a minimum-cost assignment when one of the processors has a limited memory. The second is to find an assignment that minimizes the maximum processor load. We present exact dynamic programming algorithms for both problems, which lead to approximation schemes for the case where the communication graph is a partial k-tree. Faster algorithms are presented for trees with uniform costs. In contrast to these results, we show that, for arbitrary graphs, no fully polynomial time approximation schemes exist unless P = NP. Both dynamic programming algorithms have been implemented. The implementation details and our experimental results are presented.
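
    For intuition about how such dynamic programs work, here is a minimal sketch of the standard bottom-up recurrence for minimum-cost module assignment when the communication graph is a tree (the simplest case of a partial k-tree); the names are illustrative, and the dissertation's algorithms for partial k-trees, memory limits, and load minimization are considerably more involved.

        def tree_assignment(children, root, exec_cost, comm_cost, num_procs):
            """children[v]: list of child modules of v in the communication tree.
            exec_cost[v][p]: cost of running module v on processor p.
            comm_cost[(v, c)]: cost paid if v and child c sit on different
            processors. Returns the minimum total cost over all assignments."""
            def solve(v):
                # dp[p] = cheapest cost of v's subtree with v on processor p
                dp = [exec_cost[v][p] for p in range(num_procs)]
                for c in children.get(v, []):
                    child = solve(c)
                    w = comm_cost[(v, c)]
                    for p in range(num_procs):
                        # A child pays the edge cost only if it moves away.
                        dp[p] += min(child[q] + (w if q != p else 0)
                                     for q in range(num_procs))
                return dp
            return min(solve(root))

        # Example: heavy communication (10) forces both modules onto processor 0.
        children = {0: [1]}
        exec_cost = {0: [1, 4], 1: [5, 3]}
        comm_cost = {(0, 1): 10}
        print(tree_assignment(children, 0, exec_cost, comm_cost, 2))  # -> 6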

    Implementing PRISMA/DB in an OOPL

    PRISMA/DB is implemented in a parallel object-oriented language to gain insight into the usage of parallelism. This environment allows us to experiment with parallelism by simply changing the allocation of objects to the processors of the PRISMA machine. These objects are obtained by a strictly modular design of PRISMA/DB. Communication between the objects is required to cooperatively handle the various tasks, but it limits the potential for parallelism. From this approach, we hope to gain a better understanding of parallelism, which can be used to enhance the performance of PRISMA/DB. The work reported in this document was conducted as part of the PRISMA project, a joint effort with Philips Research Eindhoven, partially supported by the Dutch "Stimuleringsprojectteam Informaticaonderzoek" (SPIN).

    Three Highly Parallel Computer Architectures and Their Suitability for Three Representative Artificial Intelligence Problems

    Virtually all current Artificial Intelligence (AI) applications are designed to run on sequential (von Neumann) computer architectures. As a result, current systems do not scale up. As knowledge is added to these systems, a point is reached where their performance quickly degrades. The performance of a von Neumann machine is limited by the bandwidth between memory and processor (the von Neumann bottleneck). The bottleneck is avoided by distributing the processing power across the memory of the computer. In this scheme the memory becomes the processor (a "smart memory"). This paper highlights the relationship between three representative AI application domains, namely knowledge representation, rule-based expert systems, and vision, and their parallel hardware realizations. Three machines, covering a wide range of fundamental properties of parallel processors, namely module granularity, concurrency control, and communication geometry, are reviewed: the Connection Machine (a fine-grained SIMD hypercube), DADO (a medium-grained MIMD/SIMD/MSIMD tree machine), and the Butterfly (a coarse-grained MIMD butterfly-switch machine).

    Workflow scheduling for service oriented cloud computing

    Service Orientation (SO) and grid computing are two computing paradigms that, when put together using Internet technologies, promise to provide a scalable yet flexible computing platform for a diverse set of distributed computing applications. This practice gives rise to the notion of a computing cloud that addresses some previous limitations of interoperability, resource sharing, and utilization within distributed computing. In such a Service Oriented Computing Cloud (SOCC), applications are formed by composing a set of services together. In addition, hierarchical service layers are also possible, where general-purpose services at lower layers are composed to deliver more domain-specific services at the higher layers. In general, an SOCC is a horizontally scalable computing platform that offers its resources as services in a standardized fashion. Workflow-based applications are a suitable target for an SOCC, where workflow tasks are executed via service calls within the cloud. One or more workflows can be deployed over an SOCC, and their execution requires scheduling of services to workflow tasks as the tasks become ready following their interdependencies. In this thesis, heuristic-based scheduling policies are evaluated for scheduling workflows over a collection of services offered by the SOCC. Various execution scenarios and workflow characteristics are considered to understand the implications of heuristic-based workflow scheduling.
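
    As one concrete example of the kind of policy such a thesis evaluates, the sketch below implements Minimum Completion Time (MCT) list scheduling: each ready task is mapped to the service estimated to finish it soonest. The function, data layout, and runtime estimates are assumptions for illustration, not the thesis's exact policies.

        def mct_schedule(deps, runtime):
            """deps[t]: set of tasks that must finish before task t can start.
            runtime[t][s]: estimated runtime of task t on service s (all tasks
            are assumed to share the same service set).
            Returns a dict mapping each task to (service, start, finish)."""
            services = list(next(iter(runtime.values())))
            free = {s: 0.0 for s in services}      # when each service is free
            indeg = {t: len(deps[t]) for t in deps}
            finish, schedule = {}, {}
            ready = [t for t, d in indeg.items() if d == 0]
            while ready:
                t = ready.pop(0)
                # A task may start once all of its predecessors have finished.
                est = max((finish[p] for p in deps[t]), default=0.0)
                # MCT rule: pick the service that completes the task soonest.
                s = min(services, key=lambda x: max(est, free[x]) + runtime[t][x])
                start = max(est, free[s])
                finish[t] = start + runtime[t][s]
                free[s] = finish[t]
                schedule[t] = (s, start, finish[t])
                for u in deps:                     # release newly ready tasks
                    if t in deps[u]:
                        indeg[u] -= 1
                        if indeg[u] == 0:
                            ready.append(u)
            return schedule

        # A four-task workflow scheduled over two services.
        deps = {"extract": set(), "transform": {"extract"},
                "load": {"transform"}, "report": {"transform"}}
        runtime = {"extract": {"s1": 3, "s2": 4}, "transform": {"s1": 5, "s2": 2},
                   "load": {"s1": 1, "s2": 3}, "report": {"s1": 2, "s2": 2}}
        for t, slot in mct_schedule(deps, runtime).items():
            print(t, slot)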

    Real-time Container Transport Planning with Decision Trees based on Offline Obtained Optimal Solutions

    Hinterland networks for container transportation require planning methods in order to increase the efficiency and reliability of the inland road, rail, and waterway connections. In this paper we aim to derive real-time decision rules for suitable allocations of containers to inland services by analysing the solution structure of a centralised optimisation method used offline on historic data. The decision tree can be used in a decision support system (DSS) for instantaneously allocating incoming orders to suitable services, without the need for continuous planning updates. Such a DSS is beneficial, as it is easy to implement in the current practice of container transportation. Earlier proposed centralised methods can find the optimal solution for the intermodal inland transportation problem in retrospect, but are not suitable when information becomes gradually available. The main contributions are threefold. First, a structured method for creating decision trees from optimal solutions is proposed. Second, an innovative method is used for obtaining multiple equivalent optimal solutions to prevent overfitting of the decision tree. Finally, a structured analysis of three error types is presented for assessing the quality of an obtained tree. A case study illustrates the method’s purpose by comparing the quality of the resulting plan with alternative methods.
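
    To make the offline-to-online step concrete, here is a hypothetical sketch: a decision tree is trained on order attributes labelled with the service the offline optimiser chose, then queried to allocate new orders instantly. The features, the toy data, and the use of scikit-learn are all illustrative assumptions, not the paper's setup.

        from sklearn.tree import DecisionTreeClassifier

        # Each row describes one historic order (toy, illustrative values):
        # (hours until deadline, container volume in TEU, km to terminal).
        X = [[48, 2, 30], [12, 1, 30], [72, 2, 80], [10, 2, 80], [60, 1, 25]]
        # Label: the service the offline-optimal plan assigned to that order.
        y = ["barge", "truck", "barge", "truck", "barge"]

        tree = DecisionTreeClassifier(max_depth=3)  # shallow, to curb overfitting
        tree.fit(X, y)

        # Real-time use inside the DSS: allocate an incoming order instantly,
        # with no re-planning of the existing schedule.
        print(tree.predict([[36, 1, 50]])[0])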