4,760 research outputs found

    Real-time disk scheduling in a mixed-media file system

    This paper presents our real-time disk scheduler, the Delta L scheduler, which optimizes unscheduled best-effort disk requests by giving them priority while still meeting real-time request deadlines. The scheduler executes real-time disk requests in the background as much as possible, and gives them priority only when their deadlines would otherwise be endangered. The Delta L disk scheduler is part of our mixed-media file system, Clockwise. An essential part of our work is extensive and detailed raw disk performance measurements. The Delta L scheduler uses these measurements for its real-time schedulability analysis and to decide whether scheduling a best-effort request before a real-time request violates real-time constraints. A Clockwise off-line simulator also uses the raw performance measurements to compare a number of different disk schedulers. We compare the Delta L scheduler with a prioritizing Latest Start Time (LST) scheduler and a non-prioritizing EDF scheduler. The Delta L scheduler matches LST in achieving low latencies for best-effort requests under light to moderate real-time loads, and achieves lower best-effort latencies under extreme real-time loads. The simulator is calibrated to an actual Clockwise system. Clockwise runs on a 200 MHz Pentium Pro based PC with a PCI bus, multiple SCSI controllers and disks, on Linux 2.2.x and the Nemesis kernel. Clockwise performance is dictated by the hardware: all available bandwidth can be committed to real-time streams, provided hardware overloads do not occur.
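    To make the scheduling idea concrete, here is a minimal Python sketch of the kind of admission test the abstract describes: a best-effort request is served first only if, given per-request service-time estimates, every pending real-time request can still meet its deadline. The Request structure, function names, and the fixed service-time values are illustrative assumptions, not the actual Delta L implementation.

```python
# Illustrative sketch only: hypothetical request objects and fixed
# service-time estimates stand in for the paper's raw disk measurements.

class Request:
    def __init__(self, deadline=None, service_time=0.005):
        self.deadline = deadline          # None marks a best-effort request
        self.service_time = service_time  # estimated worst-case service time

def rt_feasible(rt_queue, start):
    """EDF feasibility test: serving real-time requests in deadline order
    from time `start`, does every request still meet its deadline?"""
    t = start
    for req in sorted(rt_queue, key=lambda r: r.deadline):
        t += req.service_time
        if t > req.deadline:
            return False
    return True

def pick_next(rt_queue, be_queue, now):
    """Serve a best-effort request first unless doing so would endanger a
    real-time deadline; otherwise fall back to earliest-deadline-first."""
    if be_queue and rt_feasible(rt_queue, now + be_queue[0].service_time):
        return be_queue.pop(0)
    if rt_queue:
        rt_queue.sort(key=lambda r: r.deadline)
        return rt_queue.pop(0)
    return None
```

    The key design point mirrored here is that real-time requests yield to best-effort traffic by default, and only reclaim the disk when the feasibility test fails.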

    A Domain Specific Approach to High Performance Heterogeneous Computing

    Users of heterogeneous computing systems face two problems: first, understanding the trade-off relationships between the observable characteristics of their applications, such as latency and quality of the result; and second, exploiting knowledge of these characteristics to allocate work to distributed computing platforms efficiently. A domain specific approach addresses both of these problems. By considering a subset of operations or functions, models of the observable characteristics or domain metrics may be formulated in advance, and populated at run-time for task instances. These metric models can then be used to express the allocation of work as a constrained integer program, which can be solved using heuristics, machine learning or Mixed Integer Linear Programming (MILP) frameworks. These claims are illustrated using the example domain of derivatives pricing in computational finance, with the domain metrics of workload latency or makespan and pricing accuracy. For a large, varied workload of 128 Black-Scholes and Heston model-based option pricing tasks, running on a diverse array of 16 multicore CPU, GPU and FPGA platforms, predictions made by models of both the makespan and accuracy are generally within 10% of the run-time performance. When these models are used as inputs to machine learning and MILP-based workload allocation approaches, latency improvements of up to 24 and 270 times, respectively, over the heuristic approach are seen.
    Comment: 14 pages, preprint draft, minor revision
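    As an illustration of expressing the allocation as a constrained integer program, the following sketch uses the PuLP MILP library: binary variables assign each task to one platform, a makespan variable bounds every platform's total predicted latency, and predicted error must stay under a per-task accuracy target. The latency and error tables, the platform names, and the accuracy threshold are invented placeholders, not values or models from the paper.

```python
# Minimal MILP allocation sketch (assumes the PuLP package is installed).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

tasks = range(4)                     # e.g. option-pricing task instances
platforms = ["cpu", "gpu", "fpga"]   # heterogeneous compute platforms
speed = {"cpu": 1.0, "gpu": 0.4, "fpga": 0.6}
# Placeholder metric-model predictions for latency and pricing error:
latency = {(t, p): speed[p] * (1 + t) for t in tasks for p in platforms}
error = {(t, p): 0.02 if p == "fpga" else 0.01 for t in tasks for p in platforms}
max_error = 0.05                     # assumed per-task accuracy requirement

prob = LpProblem("workload_allocation", LpMinimize)
x = LpVariable.dicts("assign", [(t, p) for t in tasks for p in platforms], cat=LpBinary)
makespan = LpVariable("makespan", lowBound=0)

prob += makespan                     # objective: minimize the slowest platform
for t in tasks:
    prob += lpSum(x[(t, p)] for p in platforms) == 1   # each task runs exactly once
    prob += lpSum(error[(t, p)] * x[(t, p)] for p in platforms) <= max_error
for p in platforms:
    # each platform's total predicted latency bounds the makespan
    prob += lpSum(latency[(t, p)] * x[(t, p)] for t in tasks) <= makespan

prob.solve()
allocation = {t: p for t in tasks for p in platforms if x[(t, p)].value() == 1}
print(allocation, makespan.value())
```

    Because the objective minimizes the largest per-platform latency sum, the solver balances tasks across platforms while the error constraint rules out assignments whose predicted accuracy is too low.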