5,347 research outputs found

    Sim_Dsc: Simulator for Optimizing the Performance of Disk Scheduling Algorithms

    Disk scheduling involves a careful examination of pending requests to determine the most efficient way to service them. A disk scheduler examines the positional relationship among waiting requests and reorders the queue so that the requests are serviced with minimum seek. The purpose of this study is to identify the best scheduling algorithm for movable-head disks based on seek time, rotation time and transfer time. To this end, a simulator has been designed for optimizing the performance of disk scheduling algorithms using the Box-Muller transformation. The input for the simulator is derived from a pseudo-random number generator based on the Box-Muller transformation. The simulator takes as inputs the requested cylinder numbers, the current position of the read/write head, and the access time, which is computed from seek time, rotation time and transfer time. From these inputs, the total head movement of each disk scheduling algorithm is calculated under various loads.
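
    A minimal Python sketch of the idea, assuming a normally distributed request pattern: cylinder requests are drawn with the Box-Muller transform and the total head movement is compared for two classic policies (FCFS and SSTF). The cylinder range, mean and spread are illustrative values, not parameters from the paper.

```python
import math
import random

def box_muller(mean, std):
    """Generate one normally distributed value via the Box-Muller transform."""
    u1 = 1.0 - random.random()          # keep u1 in (0, 1] so log() is defined
    u2 = random.random()
    z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    return mean + std * z

def generate_requests(n, mean_cyl, std_cyl, max_cyl):
    """Pseudo-random cylinder requests clamped to the disk's cylinder range."""
    return [min(max_cyl, max(0, int(box_muller(mean_cyl, std_cyl)))) for _ in range(n)]

def total_head_movement_fcfs(requests, head):
    """Serve requests in arrival order, summing head movement."""
    movement = 0
    for cyl in requests:
        movement += abs(cyl - head)
        head = cyl
    return movement

def total_head_movement_sstf(requests, head):
    """Always serve the pending request closest to the current head position."""
    pending, movement = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        movement += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return movement

requests = generate_requests(50, mean_cyl=2500, std_cyl=800, max_cyl=4999)
print("FCFS total head movement:", total_head_movement_fcfs(requests, head=2500))
print("SSTF total head movement:", total_head_movement_sstf(requests, head=2500))
```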

    Efficient memory management in video on demand servers

    In this article we present, analyse and evaluate a new memory management technique for video-on-demand servers. Our proposal, Memory Reservation Per Storage Device (MRPSD), relies on the allocation of a fixed, small number of memory buffers per storage device. Selecting adequate scheduling algorithms, information storage strategies and admission control mechanisms, we demonstrate that MRPSD is suited for the deterministic service of variable bit rate streams to intolerant clients. MRPSD allows large memory savings compared to traditional memory management techniques, based on the allocation of a certain amount of memory per client served, without a significant performance penalty.
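
    A back-of-the-envelope illustration in Python (all figures are assumed, not taken from the article) of why tying buffer memory to storage devices rather than to clients saves memory once the number of streams grows well beyond the number of disks:

```python
# Illustrative comparison of per-client vs. per-storage-device buffer memory.
# All numbers below are assumptions chosen for the example.
clients = 500                 # concurrently served streams
storage_devices = 16          # disks in the server
buffer_size_mb = 2            # size of one buffer
buffers_per_client = 2        # double buffering per stream (per-client scheme)
buffers_per_device = 4        # small fixed pool per device (MRPSD-style scheme)

per_client_memory = clients * buffers_per_client * buffer_size_mb
per_device_memory = storage_devices * buffers_per_device * buffer_size_mb

print(f"Per-client buffering:  {per_client_memory} MB")   # grows with the client count
print(f"Per-device buffering:  {per_device_memory} MB")   # fixed by the array size
```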

    Real-time disk scheduling in a mixed-media file system

    This paper presents our real-time disk scheduler, called the Delta L scheduler, which optimizes unscheduled best-effort disk requests by giving them priority while still meeting real-time request deadlines. Our scheduler tries to execute real-time disk requests as much as possible in the background; only when real-time request deadlines are endangered does it give priority to real-time disk requests. The Delta L disk scheduler is part of our mixed-media file system called Clockwise. An essential part of our work is extensive and detailed raw disk performance measurements. The Delta L disk scheduler uses these raw performance measurements for its real-time schedulability analysis and to decide whether scheduling a best-effort request before a real-time request violates real-time constraints. Further, a Clockwise off-line simulator, in which a number of different disk schedulers are compared, uses the raw performance measurements. We compare the Delta L scheduler with a prioritizing Latest Start Time (LST) scheduler and a non-prioritizing EDF scheduler. The Delta L scheduler is comparable to LST in achieving low latencies for best-effort requests under light to moderate real-time loads, and better under extreme real-time loads. The simulator is calibrated against an actual Clockwise installation. Clockwise runs on a 200 MHz Pentium Pro-based PC with a PCI bus, multiple SCSI controllers and disks, on Linux 2.2.x and the Nemesis kernel. Clockwise performance is dictated by the hardware: all available bandwidth can be committed to real-time streams, provided hardware overloads do not occur.
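
    A rough Python sketch of the scheduling decision the abstract describes, not the paper's exact algorithm: a best-effort request is allowed to run first only if, using measured worst-case service times, every queued real-time deadline would still be met. Function and parameter names are illustrative.

```python
def can_serve_best_effort(now, rt_requests, be_service_time):
    """
    Slack-check sketch: a best-effort request may be scheduled before the
    queued real-time requests only if every real-time deadline still holds
    after the best-effort service time is inserted at the front.
    rt_requests: list of (deadline, worst_case_service_time) tuples, with
    service times assumed to come from raw disk performance measurements.
    """
    finish = now + be_service_time
    for deadline, service_time in sorted(rt_requests):   # earliest deadline first
        finish += service_time
        if finish > deadline:
            return False                                  # a real-time deadline would be missed
    return True

# Example: one 8 ms best-effort request and two 12 ms real-time requests.
print(can_serve_best_effort(now=0.0,
                            rt_requests=[(30.0, 12.0), (55.0, 12.0)],
                            be_service_time=8.0))         # True: both deadlines still hold
```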

    Efficient memory management in VOD disk array servers using Per-Storage-Device buffering

    We present a buffering technique that reduces video-on-demand server memory requirements by more than one order of magnitude. This technique, Per-Storage-Device Buffering (PSDB), is based on the allocation of a fixed number of buffers per storage device, as opposed to existing solutions based on per-stream buffer allocation. The combination of this technique with disk array servers is studied in detail, as well as the influence of Variable Bit Rate (VBR) streams. We also present an interleaved data placement strategy, Constant Time Length Declustering, which results in optimal performance in the service of VBR streams. PSDB is evaluated by extensive simulation of a disk array server model that incorporates a simulation-based admission test. This research was supported in part by the National R&D Program of Spain, Project Number TIC97-0438.
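
    A small Python sketch of the Constant Time Length idea under assumed parameters: each segment covers a fixed playback duration (so its size varies with the VBR bit rate) and segments are placed round-robin across the array. This is an illustration, not the paper's exact placement algorithm.

```python
import random

def ctl_decluster(frame_sizes, fps, segment_seconds, num_disks):
    """
    Constant Time Length declustering sketch: each segment holds a fixed
    playback duration, so segment sizes vary with the VBR bit rate, and
    consecutive segments are striped round-robin across the disks.
    Returns a list of (disk, segment_bytes) placements.
    """
    frames_per_segment = int(fps * segment_seconds)
    placement = []
    for i in range(0, len(frame_sizes), frames_per_segment):
        segment_bytes = sum(frame_sizes[i:i + frames_per_segment])
        disk = (i // frames_per_segment) % num_disks
        placement.append((disk, segment_bytes))
    return placement

# Example: 4 seconds of a 25 fps VBR stream, 1-second segments on a 4-disk array.
frames = [random.randint(5_000, 40_000) for _ in range(100)]   # bytes per frame
for disk, size in ctl_decluster(frames, fps=25, segment_seconds=1, num_disks=4):
    print(f"disk {disk}: {size} bytes")
```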

    Improving I/O performance through an in-kernel disk simulator

    This paper presents two mechanisms that can significantly improve the I/O performance of both hard and solid-state drives for read operations: KDSim and REDCAP. KDSim is an in-kernel disk simulator that provides a framework for simultaneously simulating the performance obtained by different I/O system mechanisms and algorithms, and for dynamically turning them on and off, or selecting between different options or policies, to improve the overall system performance. REDCAP is a RAM-based disk cache that effectively enlarges the built-in cache present in disk drives. By using KDSim, this cache is dynamically activated/deactivated according to the throughput achieved. Results show that, by using KDSim and REDCAP together, a system can improve its I/O performance by up to 88% for workloads with some spatial locality on both hard and solid-state drives, while it achieves the same performance as a ‘regular system’ for workloads with random or sequential access patterns.
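
    A hypothetical Python sketch of the activation policy the abstract outlines: the in-kernel simulator estimates how the system would perform with and without the RAM cache, and the cache is kept on only while it wins. The class name, fields and thresholds are assumptions for illustration, not the authors' implementation.

```python
class CacheController:
    """
    On/off policy sketch: compare the simulated service time of the cached
    and uncached configurations and enable the RAM cache only when it helps.
    """
    def __init__(self, margin=0.05):
        self.margin = margin            # required relative improvement to enable the cache
        self.cache_enabled = True

    def update(self, time_with_cache, time_without_cache):
        if time_with_cache < time_without_cache * (1 - self.margin):
            self.cache_enabled = True   # cache clearly helps (e.g. spatial locality)
        elif time_with_cache > time_without_cache:
            self.cache_enabled = False  # random/sequential pattern: fall back to the regular path
        return self.cache_enabled       # otherwise keep the previous decision (hysteresis)

ctrl = CacheController()
print(ctrl.update(time_with_cache=0.8, time_without_cache=1.0))   # True
print(ctrl.update(time_with_cache=1.1, time_without_cache=1.0))   # False
```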

    Improving the Performance and Endurance of Persistent Memory with Loose-Ordering Consistency

    Persistent memory provides high-performance data persistence at main memory. Memory writes need to be performed in strict order to satisfy storage consistency requirements and enable correct recovery from system crashes. Unfortunately, adhering to such a strict order significantly degrades system performance and persistent memory endurance. This paper introduces a new mechanism, Loose-Ordering Consistency (LOC), that satisfies the ordering requirements at significantly lower performance and endurance loss. LOC consists of two key techniques. First, Eager Commit eliminates the need to perform a persistent commit record write within a transaction. We do so by ensuring that we can determine the status of all committed transactions during recovery by storing the necessary metadata information statically with the blocks of data written to memory. Second, Speculative Persistence relaxes the write ordering between transactions by allowing writes to be speculatively written to persistent memory. A speculative write is made visible to software only after its associated transaction commits. To enable this, our mechanism supports the tracking of committed transaction IDs and multi-versioning in the CPU cache. Our evaluations show that LOC reduces the average performance overhead of memory persistence from 66.9% to 34.9% and the memory write traffic overhead from 17.1% to 3.4% on a variety of workloads. Comment: This paper has been accepted by IEEE Transactions on Parallel and Distributed Systems.
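
    A simplified Python sketch of the Eager Commit recovery idea, under the assumption that each persisted block carries (transaction ID, block index, block count) metadata: commit status is reconstructed at recovery time from the blocks themselves, so no separate commit record write is needed inside the transaction. This illustrates the idea only, not the paper's hardware mechanism.

```python
from collections import defaultdict

def recover_committed_transactions(persisted_blocks):
    """
    Eager Commit recovery sketch: every block written to persistent memory
    carries (tx_id, block_index, tx_block_count) metadata.  A transaction is
    considered committed iff all of its blocks reached persistent memory.
    """
    seen = defaultdict(set)
    expected = {}
    for tx_id, block_index, tx_block_count in persisted_blocks:
        seen[tx_id].add(block_index)
        expected[tx_id] = tx_block_count
    return {tx for tx, blocks in seen.items() if len(blocks) == expected[tx]}

# Example: tx 1 persisted both of its blocks; tx 2 lost one block in the crash.
blocks = [(1, 0, 2), (1, 1, 2), (2, 0, 3), (2, 2, 3)]
print(recover_committed_transactions(blocks))   # {1}
```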