    The Eureka Programming Model for Speculative Task Parallelism

    In this paper, we describe the Eureka Programming Model (EuPM), which simplifies the expression of speculative parallel tasks and is especially well suited for parallel search and optimization applications. The focus of this work is to provide clean semantics for, and efficiently support, such "eureka-style" computations (EuSCs) in general structured task-parallel programming models. In EuSCs, a eureka event is a point in a program that announces that a result has been found. A eureka triggered by a speculative task can cause a group of related speculative tasks to become redundant and enable them to be terminated at well-defined program points. Our approach provides a bound on the additional work done in redundant speculative tasks after such a eureka event occurs. We identify various patterns that are supported by our eureka construct, including search, optimization, convergence, and soft real-time deadlines. These different patterns of computation can also be safely combined or nested in the EuPM, along with regular task-parallel constructs, thereby enabling high degrees of composability and reusability. As demonstrated by our implementation, the EuPM can also be implemented efficiently. We use a cooperative runtime that uses delimited continuations to manage the termination of redundant tasks and their synchronization at join points. In contrast to current approaches, the EuPM obviates the need for cumbersome manual refactoring by the programmer, which may (for example) require the insertion of if checks and early return statements in every method in the call chain. Experimental results show that solutions using the EuPM simplify programmability, achieve performance comparable to hand-coded speculative task-based solutions, and outperform non-speculative task-based solutions.
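
    For intuition, the following is a minimal Python sketch of the eureka pattern; the names (Eureka, offer, check) are illustrative assumptions rather than the EuPM/HJlib API, and an exception stands in for the cooperative, delimited-continuation-based termination the paper actually uses.

        import threading
        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical eureka object (illustrative names, not the EuPM API).
        class _EurekaTriggered(Exception):
            pass

        class Eureka:
            def __init__(self):
                self._event = threading.Event()
                self._result = None
                self._lock = threading.Lock()

            def offer(self, result):
                # First successful offer wins; later offers are ignored.
                with self._lock:
                    if not self._event.is_set():
                        self._result = result
                        self._event.set()

            def check(self):
                # Well-defined program point at which a redundant task
                # terminates, bounding its extra work after a eureka event.
                if self._event.is_set():
                    raise _EurekaTriggered()

            def result(self):
                return self._result

        def search_chunk(eureka, chunk, target):
            for item in chunk:
                eureka.check()          # termination point
                if item == target:
                    eureka.offer(item)  # eureka: announce the result
                    return

        def run_task(eureka, chunk, target):
            try:
                search_chunk(eureka, chunk, target)
            except _EurekaTriggered:
                pass  # this task became redundant; unwind quietly

        def parallel_search(data, target, workers=4):
            eureka = Eureka()
            with ThreadPoolExecutor(max_workers=workers) as pool:
                for i in range(workers):
                    pool.submit(run_task, eureka, data[i::workers], target)
            return eureka.result()

        if __name__ == "__main__":
            print(parallel_search(list(range(1_000_000)), 654_321))

    Note that only search_chunk contains the termination point; without such a construct, every method in the call chain would need its own if check and early return, which is exactly the manual refactoring burden the EuPM removes.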

    Scheduling Algorithms for Cloud: A Survey and Analysis

    Cloud Computing is a fast-growing computing paradigm due to the vast benefits it provides to users. Scheduling becomes one of the key aspects due to the pay-as-you-go nature of the Cloud. The factors affecting the choice of scheduling technique change with the scenario: for instance, scheduling in hybrid clouds must take data transfer speed into consideration, whereas in mobile environments scheduling becomes dependent on context change. Moreover, scheduling can be improved on many fronts, such as energy efficiency, cost minimization, and maximization of resource utilization. This paper surveys scheduling techniques in various Cloud Computing scenarios and identifies the most efficient scheduling technique available for a particular set of user needs by comparing various techniques and the problems they address.

    Approximations and Bounds for (n, k) Fork-Join Queues: A Linear Transformation Approach

    Compared to basic fork-join queues, a job in an (n, k) fork-join queue only needs k out of its n sub-tasks to be finished. Since (n, k) fork-join queues are prevalent in popular distributed systems, erasure-coding-based cloud storage, and modern network protocols like multipath routing, estimating the sojourn time of such queues is critical for the performance measurement and resource planning of computer clusters. However, this estimation has remained a well-known open challenge for years, and only rough bounds for a limited range of load factors have been given. In this paper, we develop a closed-form linear transformation technique for jointly-identical random variables: an order statistic can be represented by a linear combination of maxima. This new technique is then used to transform the sojourn time of non-purging (n, k) fork-join queues into a linear combination of the sojourn times of basic (k, k), (k+1, k+1), ..., (n, n) fork-join queues. Consequently, existing approximations for basic fork-join queues can be carried over to approximations for non-purging (n, k) fork-join queues. The resulting approximations are then used to improve the upper bounds for purging (n, k) fork-join queues. Simulation experiments show that this linear transformation approach works well for moderate n and relatively large k.
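
    The flavor of this transformation can be seen in a classical identity for exchangeable random variables, which expresses the expectation of an order statistic as a signed linear combination of expected maxima (the paper's contribution is a closed-form, distribution-level analogue for jointly-identical variables):

        \mathbb{E}\big[X_{(r:n)}\big] \;=\; \sum_{j=r}^{n} (-1)^{j-r} \binom{j-1}{r-1} \binom{n}{j}\, \mathbb{E}\big[\max(X_1, \ldots, X_j)\big]

    For example, with n = 2 and r = 1 this reduces to E[min(X_1, X_2)] = 2 E[X_1] - E[max(X_1, X_2)]. Since the sojourn time of a non-purging (n, k) fork-join queue is the k-th order statistic of the n sub-task sojourn times, while the sojourn time of a basic (j, j) fork-join queue is a maximum, approximations for the latter can be carried over to the former.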

    Redundancy Scheduling with Locally Stable Compatibility Graphs

    Redundancy scheduling is a popular concept to improve performance in parallel-server systems. In the baseline scenario, any job can be handled equally well by any server and is replicated to a fixed number of servers selected uniformly at random. Quite often, however, there may be heterogeneity in job characteristics or server capabilities, and jobs can only be replicated to specific servers because of affinity relations or compatibility constraints. In order to capture such situations, we consider a scenario where jobs of various types are replicated to different subsets of servers as prescribed by a general compatibility graph. We exploit a product-form stationary distribution and weak local stability conditions to establish a state space collapse in heavy traffic. In this limiting regime, the parallel-server system with graph-based redundancy scheduling operates as a multi-class single-server system, achieving full resource pooling and exhibiting strong insensitivity to the underlying compatibility constraints.
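
    As a concrete illustration of the model (a sketch with assumed names and an assumed FCFS, unit-time service discipline; not code from the paper), the following Python snippet shows graph-based redundancy dispatch with cancel-on-completion: each job is replicated to every compatible server, and the first finished replica purges the rest.

        # Compatibility graph: job type -> servers that can run it.
        COMPATIBILITY = {
            "A": ["s1", "s2"],
            "B": ["s2", "s3"],
            "C": ["s1", "s3"],
        }

        queues = {s: [] for s in ("s1", "s2", "s3")}  # FCFS replica queue per server

        def dispatch(job_id, job_type, size):
            # Replicate the job to every compatible server.
            for server in COMPATIBILITY[job_type]:
                queues[server].append([job_id, size])

        def step():
            # Each server serves its head-of-line replica for one time unit.
            finished = set()
            for q in queues.values():
                if q:
                    q[0][1] -= 1
                    if q[0][1] <= 0:
                        finished.add(q[0][0])
            # Cancel-on-completion: purge all replicas of finished jobs.
            for server in queues:
                queues[server] = [r for r in queues[server] if r[0] not in finished]
            return finished

        dispatch(1, "A", size=3)
        dispatch(2, "B", size=2)
        while any(queues.values()):
            print(step(), queues)

    In the baseline fully-compatible case this degenerates to replication to random servers; the paper's heavy-traffic result states that, under weak local stability, the graph structure washes out and the system pools as if it were a single multi-class server.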

    Re-designing Main Memory Subsystems with Emerging Monolithic 3D (M3D) Integration and Phase Change Memory Technologies

    Over the past two decades, Dynamic Random-Access Memory (DRAM) has emerged as the dominant technology for implementing the main memory subsystems of all types of computing systems. However, inferring from several recent trends, computer architects in both industry and academia have widely accepted that the density (memory capacity per chip area) and latency of DRAM-based main memory subsystems cannot sufficiently scale in the future to meet the requirements of future data-centric workloads related to Artificial Intelligence (AI), Big Data, and the Internet-of-Things (IoT). In fact, the achievable density and access latency in main memory subsystems present a fundamental trade-off: pushing for higher density inevitably increases access latency, and pushing for reduced access latency often decreases density. This trade-off is so fundamental in DRAM-based main memory subsystems that merely re-architecting DRAM subsystems cannot improve it, unless disruptive technological advancements are realized for implementing main memory subsystems. In this thesis, we focus on two key contributions to overcome the density (represented as the total chip area for a given capacity) and access latency related challenges in main memory subsystems. First, we show that the fundamental area-latency trade-offs in DRAM can be significantly improved by redesigning the DRAM cell-array structure using the emerging monolithic 3D (M3D) integration technology. A DRAM bank structure can be split across two or more M3D-integrated tiers on the same DRAM chip, consequently reducing significantly the total on-chip area occupancy of the DRAM bank and its access peripherals. This approach is fundamentally different from the well-known approach of through-silicon-via (TSV)-based 3D stacking of DRAM tiers, because the M3D integration based approach does not require a separate DRAM chip per tier, whereas the 3D-stacking based approach does. Our evaluation results for PARSEC benchmarks show that our M3D DRAM cell-array organizations can yield up to 9.56% less latency and up to 21.21% less energy-delay product (EDP), with up to 14% less DRAM die area, compared to conventional 2D DDR4 DRAM. Second, we demonstrate a pathway for eliminating the write disturbance errors in single-level-cell Phase Change Memory (PCM), thereby positioning the PCM technology, which has an inherently more relaxed density and latency trade-off compared to DRAM, as a more viable option for replacing the DRAM technology. We introduce low-temperature partial-RESET operations for writing '0's in PCM cells. Compared to traditional operations that write '0's in PCM cells, partial-RESET operations do not cause disturbance errors in neighboring cells during PCM writes. The overarching theme that connects the two individual contributions into this single thesis is the density versus latency argument. The existing PCM technology has 3 to 4× higher write latency compared to DRAM; nevertheless, it can store 2 to 4 bits in a single cell, compared to the one-bit-per-cell storage capacity of DRAM. Therefore, unlike DRAM, it becomes possible to increase the density of PCM without consequently increasing PCM latency. In other words, PCM exhibits an inherently improved (more relaxed) density and latency trade-off.
Thus, both of our contributions in this thesis, the first of re-designing DRAM with M3D integration technology and the second of making the PCM technology a more viable replacement for DRAM by eliminating its write disturbance errors, connect to the common overarching goal of improving the density and latency trade-off in main memory subsystems. In addition, we discuss possible future research directions aimed at extending the impact of our proposed ideas so that they can transform the performance of future main memory subsystems.
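
    For reference, the energy-delay product multiplies the two reported metrics, EDP = E × D, so the quoted figures also imply an energy estimate. Assuming (an assumption on our part, since "up to" figures may come from different benchmarks) that the latency and EDP maxima coincide on one configuration:

        \frac{\mathrm{EDP}'}{\mathrm{EDP}} = \frac{E'}{E} \cdot \frac{D'}{D}
        \quad\Rightarrow\quad
        \frac{E'}{E} = \frac{1 - 0.2121}{1 - 0.0956} = \frac{0.7879}{0.9044} \approx 0.87

    i.e. roughly a 13% energy reduction would accompany the 9.56% latency and 21.21% EDP improvements.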

    Stigmergic interoperability for autonomic systems: Managing complex interactions in multi-manager scenarios

    The success of autonomic computing has led to its popular use in many application domains, leading to scenarios where multiple autonomic managers (AMs) coexist, but without adequate support for interoperability. This is evident, for example, in the increasing number of large datacentres with multiple, independently designed managers. The increase in scale and size, coupled with the heterogeneity of services and platforms, means that more AMs could be integrated to manage the arising complexity. This has led to the need for interoperability between AMs. Interoperability deals with how to manage multi-manager scenarios, to govern the complex coexistence of managers, and to arbitrate when conflicts arise. This paper presents an architecture-based stigmergic interoperability solution. The solution is based on the Trustworthy Autonomic Architecture (TAArch) and uses stigmergy (indirect communication via the operating environment) to achieve indirect coordination among coexisting agents. Usually, in stigmergy-based coordination, agents may be aware of the existence of other agents. In the approach presented herein, agents (autonomic managers) do not need to be aware of the existence of others: their design assumes that they operate in 'isolation', and they simply respond to changes in the environment. Experimental results with a datacentre multi-manager scenario are used to analyse the proposed approach.
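
    To make the indirect-coordination idea concrete, here is a minimal Python sketch (the names and the marker scheme are illustrative assumptions, not the TAArch implementation): each manager senses only the shared environment, possibly intervenes, and deposits a decaying marker that indirectly suppresses redundant interventions by the others.

        # Shared operating environment: a sensed metric plus stigmergic markers.
        environment = {"load": 0.9, "markers": {}}

        class AutonomicManager:
            def __init__(self, name, strength):
                self.name = name          # e.g. a power or performance manager
                self.strength = strength  # how strongly one intervention acts

            def act(self, env):
                # Decide purely from environment state: intervene only when load
                # is high AND no fresh marker from another manager is present.
                others = sum(v for k, v in env["markers"].items() if k != self.name)
                if env["load"] > 0.8 and others < 0.5:
                    env["load"] -= self.strength     # e.g. scale out, migrate, cap
                    env["markers"][self.name] = 1.0  # leave a stigmergic marker

        def decay(env, rate=0.5):
            # Markers evaporate, so stale signals stop suppressing interventions.
            env["markers"] = {k: v * rate for k, v in env["markers"].items()}

        managers = [AutonomicManager("power", 0.05), AutonomicManager("perf", 0.10)]
        for tick in range(4):
            for m in managers:
                m.act(environment)
            decay(environment)
            print(tick, round(environment["load"], 2), environment["markers"])

    The managers never reference each other; conflict arbitration emerges because a fresh marker from one manager temporarily backs the others off, and marker decay restores their autonomy once the intervention ages.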