
    Isomorphic Strategy for Processor Allocation in k-Ary n-Cube Systems

    Due to its topological generality and flexibility, the k-ary n-cube architecture has been actively researched for various applications. However, the processor allocation problem has not been adequately addressed for the k-ary n-cube architecture, even though it has been studied extensively for hypercubes and meshes. Earlier k-ary n-cube allocation schemes based on conventional slice partitioning suffer from internal fragmentation of processors. In contrast, algorithms based on job-based partitioning alleviate the fragmentation problem but require higher time complexity. This paper proposes a new allocation scheme based on isomorphic partitioning, where the processor space is partitioned into higher-dimensional isomorphic subcubes. The proposed scheme minimizes the fragmentation problem and is general in the sense that requests of any size can be supported and the host architecture need not be isomorphic. An extensive simulation study reveals that the proposed scheme significantly outperforms earlier schemes in terms of mean response time for practical-size k-ary n-cube architectures. The simulation results also show that, with the proposed scheme, the reduction in external fragmentation is more substantial than the reduction in internal fragmentation.
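
    To make the idea concrete, the following sketch (illustrative only, not the paper's allocation algorithm; the function names are hypothetical) shows the basic step behind isomorphic partitioning: halving every dimension of a k-ary n-cube cuts it into 2^n smaller subcubes, each isomorphic to the original torus.

        from itertools import product

        def split_subcube(ranges):
            """Split a subcube, given as one (start, length) pair per dimension,
            into 2**n isomorphic children by halving every dimension.
            Assumes even side lengths; odd radices would need an uneven split."""
            halves = []
            for start, length in ranges:
                half = length // 2
                halves.append([(start, half), (start + half, length - half)])
            return [list(choice) for choice in product(*halves)]

        def size(ranges):
            total = 1
            for _, length in ranges:
                total *= length
            return total

        # Example: one split of an 8-ary 3-cube (512 processors) yields
        # eight isomorphic 4-ary 3-cubes of 64 processors each.
        cube = [(0, 8)] * 3
        children = split_subcube(cube)
        print(len(children), [size(c) for c in children])   # 8 [64, 64, ..., 64]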

    Precedence-constrained scheduling problems parameterized by partial order width

    Negatively answering a question posed by Mnich and Wiese (Math. Program. 154(1-2):533-562), we show that $P2 \mid \mathrm{prec}, p_j \in \{1,2\} \mid C_{\max}$, the problem of finding a non-preemptive minimum-makespan schedule for precedence-constrained jobs of lengths 1 and 2 on two parallel identical machines, is W[2]-hard parameterized by the width of the partial order giving the precedence constraints. To this end, we show that Shuffle Product, the problem of deciding whether a given word can be obtained by interleaving the letters of $k$ other given words, is W[2]-hard parameterized by $k$, thus additionally answering a question posed by Rizzi and Vialette (CSR 2013). Finally, refining a geometric algorithm due to Servakh (Diskretn. Anal. Issled. Oper. 7(1):75-82), we show that the more general Resource-Constrained Project Scheduling problem is fixed-parameter tractable parameterized by the partial order width combined with the maximum allowed difference between the earliest possible and factual starting time of a job.
    Comment: 14 pages plus appendix.
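
    For reference, the Shuffle Product problem is easy to state in code. The sketch below is a plain memoized search, not the reduction from the paper; its worst-case running time grows exponentially with $k$, which is consistent with W[2]-hardness parameterized by $k$.

        from functools import lru_cache

        def is_shuffle(word, parts):
            """Decide whether `word` can be obtained by interleaving the words
            in `parts`, preserving the letter order within each part."""
            if len(word) != sum(len(p) for p in parts):
                return False

            @lru_cache(maxsize=None)
            def search(positions):
                consumed = sum(positions)          # prefix of `word` already matched
                if consumed == len(word):
                    return True
                for j, pos in enumerate(positions):
                    if pos < len(parts[j]) and parts[j][pos] == word[consumed]:
                        if search(positions[:j] + (pos + 1,) + positions[j + 1:]):
                            return True
                return False

            return search(tuple(0 for _ in parts))

        print(is_shuffle("abcacb", ["abc", "acb"]))   # True
        print(is_shuffle("abccba", ["abc", "abc"]))   # False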

    Edge Computing in the Dark: Leveraging Contextual-Combinatorial Bandit and Coded Computing

    With recent advancements in edge computing capabilities, there has been a significant increase in utilizing the edge cloud for event-driven and time-sensitive computations. However, large-scale edge computing networks can suffer substantially from unpredictable and unreliable computing resources, which can result in high variability of service quality. Thus, it is crucial to design efficient task scheduling policies that guarantee quality of service and the timeliness of computation queries. In this paper, we study the problem of computation offloading over unknown edge cloud networks with a sequence of timely computation jobs. Motivated by the MapReduce computation paradigm, we assume each computation job can be partitioned into smaller Map functions that are processed at the edge, while the Reduce function is computed at the user after the Map results are collected from the edge nodes. We model the service quality of each edge device (the probability of successfully returning a result to the user within the deadline) as a function of its context (the collection of factors that affect the device). The user decides which computations to offload to each device, with the goal of receiving a recoverable set of computation results within the given deadline. Our goal is to design an efficient edge computing policy in the dark, without knowledge of the context or the computation capabilities of each device. By leveraging the coded computing framework to tackle failures and stragglers in computation, we formulate this problem as a contextual-combinatorial multi-armed bandit (CC-MAB) and aim to maximize the cumulative expected reward. We propose an online learning policy called the online coded edge computing policy, which provably achieves asymptotically optimal performance in terms of regret compared with the optimal offline policy for the proposed CC-MAB problem.
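
    A minimal sketch of the bandit side of such a policy is given below. It is not the paper's online coded edge computing algorithm: context is omitted, the device success probabilities are made up, and a simple combinatorial UCB rule offloads to the devices with the highest upper confidence bounds, counting a round as a success when at least a recovery threshold of Map results returns in time (a stand-in for the recoverability condition provided by coded computing).

        import math
        import random

        def ucb_offload(success_prob, k_offload, k_recover, rounds=5000, seed=0):
            """Toy combinatorial-UCB policy: each round, offload Map tasks to the
            k_offload devices with the highest upper confidence bounds; reward is 1
            if at least k_recover of them return within the deadline (Bernoulli
            draws stand in for real edge devices)."""
            rng = random.Random(seed)
            n = len(success_prob)
            pulls = [0] * n
            wins = [0] * n
            total_reward = 0
            for t in range(1, rounds + 1):
                # Upper confidence bound per device (infinite for unexplored devices).
                ucb = [wins[i] / pulls[i] + math.sqrt(2 * math.log(t) / pulls[i])
                       if pulls[i] else float("inf")
                       for i in range(n)]
                chosen = sorted(range(n), key=lambda i: ucb[i], reverse=True)[:k_offload]
                returned = 0
                for i in chosen:
                    ok = rng.random() < success_prob[i]
                    pulls[i] += 1
                    wins[i] += ok
                    returned += ok
                total_reward += (returned >= k_recover)
            return total_reward / rounds

        # Six hypothetical devices; offload to 4, need any 3 back before the deadline.
        print(ucb_offload([0.95, 0.9, 0.8, 0.6, 0.4, 0.3], k_offload=4, k_recover=3))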

    Efficient processor management strategies for multicomputer systems

    Multicomputers are cost-effective alternatives to conventional supercomputers. Contemporary processor management schemes tend to underutilize the processors and leave many processors in the system idle while jobs are waiting for execution. Instead of designing faster processors or interconnection networks, a substantial performance improvement can be obtained by implementing better processor management strategies. This dissertation studies the performance issues related to processor management schemes and proposes several ways to enhance multicomputer systems by means of processor management. The proposed schemes incorporate the concepts of size reduction, non-contiguous allocation, and job migration. Job scheduling using a bypass queue is also studied. All the proposed schemes are shown, via extensive simulations, to be effective in improving system performance. Each proposed scheme has different implementation costs and constraints. In order to take advantage of these schemes, judicious selection of system parameters is important and is discussed.
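
    The bypass-queue idea can be illustrated with a short sketch (illustrative only, not the dissertation's scheme; the allocation test, bypass window, and job format are made up): when the job at the head of the FCFS queue cannot be allocated, later jobs that do fit are allowed to start ahead of it, trading strict FCFS order for better processor utilization.

        def schedule_with_bypass(queue, free_processors, bypass_window=4):
            """Pick the next job to start from a FCFS waiting queue of
            (job_id, processors_needed) pairs.  If the head does not fit,
            let up to `bypass_window` later jobs bypass it."""
            for idx, (job_id, need) in enumerate(queue):
                if idx > bypass_window:
                    break                      # bound bypassing to limit starvation
                if need <= free_processors:
                    del queue[idx]             # this job starts now
                    return job_id, need
            return None                        # nothing fits; wait for a job to finish

        waiting = [("J1", 48), ("J2", 8), ("J3", 16), ("J4", 64)]
        print(schedule_with_bypass(waiting, free_processors=20))   # ('J2', 8): J2 bypasses J1
        print(waiting)                                             # J1 still waits at the head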

    Isomorphic routing on a toroidal mesh

    We study a routing problem that arises on SIMD parallel architectures whose communication network forms a toroidal mesh. We assume there exists a set of $k$ message descriptors $(x_i, y_i)$, where $(x_i, y_i)$ indicates that the $i$th message's recipient is offset from its sender by $x_i$ hops in one mesh dimension and $y_i$ hops in the other. Every processor has $k$ messages to send, and all processors use the same set of message routing descriptors. The SIMD constraint implies that at any routing step, every processor is actively routing messages with the same descriptors as every other processor. We call this isomorphic routing. Our objective is to find the isomorphic routing schedule with least makespan. We consider a number of variations on the problem, yielding complexity results ranging from $O(k)$ algorithms to NP-completeness. Most of our results follow after we transform the problem into a scheduling problem, where it is related to other well-known scheduling problems.
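
    The scheduling connection can be made concrete with a small calculation (illustrative only, not one of the paper's algorithms). On a torus, each descriptor $(x_i, y_i)$ needs a minimal number of hops per dimension once wraparound is taken into account; under the simplifying assumption that each processor forwards at most one message per routing step, the per-processor total hop count is a trivial lower bound on the makespan of any isomorphic routing schedule.

        def torus_hops(offset, side):
            """Minimal hops to cover `offset` on a ring of `side` processors."""
            offset %= side
            return min(offset, side - offset)

        def hop_counts(descriptors, side):
            """Per-message (x_hops, y_hops) on a square torus of the given side."""
            return [(torus_hops(x, side), torus_hops(y, side)) for x, y in descriptors]

        # Three descriptors on an 8 x 8 torus.  If each processor forwards at
        # most one message per step, the per-processor total hop count below
        # lower-bounds the length of any isomorphic routing schedule.
        descriptors = [(3, 7), (6, 2), (1, 5)]
        hops = hop_counts(descriptors, side=8)
        print(hops)                            # [(3, 1), (2, 2), (1, 3)]
        print(sum(x + y for x, y in hops))     # at least 12 routing steps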

    Processor allocation strategies for modified hypercubes

    Parallel processing has been widely accepted as the future of high-speed computing. Among the various parallel architectures proposed or implemented, the hypercube has shown a lot of promise because of its powerful properties: regular topology, fault tolerance, low diameter, simple routing, and the ability to efficiently emulate other architectures. The major drawback of the hypercube network is that it cannot be expanded in practice, because the number of communication ports per processor grows as the logarithm of the total number of processors in the system. Therefore, once a hypercube supercomputer of a certain dimensionality has been built, any future expansion can be accomplished only by replacing the VLSI chips. This is an undesirable feature, and considerable work has been in progress to eliminate this obstacle and provide a platform for easier expansion. Modified hypercubes (MHs) have been proposed as the building blocks of hypercube-based systems that support incremental growth without introducing extra resources for individual hypercubes. However, processor allocation on MHs proves to be a challenge due to a slight deviation of their topology from that of the standard hypercube network. This thesis addresses the issue of processor allocation on MHs and proposes several strategies that are based, partially or entirely, on table look-up approaches. A study of the various task allocation strategies for standard hypercubes is conducted and their suitability for MHs is evaluated. It is shown that the proposed strategies have perfect subcube recognition ability and superior performance. Existing processor allocation strategies for pure hypercube networks are demonstrated to be ineffective for MHs because of their inability to recognize all available subcubes. A comparative analysis involving the buddy strategy and the new strategies is carried out using simulation results.
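
    For comparison, the classic buddy strategy for a standard hypercube, used above as the baseline, fits in a few lines (this is the textbook buddy scheme, not one of the thesis's table look-up strategies): a d-dimensional subcube of an n-cube corresponds to an aligned block of 2^d consecutive addresses, and allocation simply takes the first such block that is entirely free.

        def buddy_allocate(free, d):
            """Classic buddy allocation on a standard n-cube: `free` is a list of
            2**n booleans, one per processor address.  A d-dimensional subcube is
            an aligned block of 2**d consecutive addresses; return its base or None."""
            block = 1 << d
            for base in range(0, len(free), block):
                if all(free[base:base + block]):
                    for addr in range(base, base + block):
                        free[addr] = False     # mark the subcube as allocated
                    return base
            return None

        # A 4-cube (16 processors): allocate a 2-subcube, then a 3-subcube.
        free = [True] * 16
        print(buddy_allocate(free, 2))   # 0  (addresses 0..3)
        print(buddy_allocate(free, 3))   # 8  (addresses 8..15; block 0..7 is partly used)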