
    Bulk Scheduling with the DIANA Scheduler

    Results from the research and development of a Data Intensive and Network Aware (DIANA) scheduling engine, to be used primarily for data-intensive sciences such as physics analysis, are described. In Grid analyses, tasks can involve thousands of computing, data handling, and network resources. The central problem in the scheduling of these resources is the coordinated management of computation and data at multiple locations, not just data replication or movement. However, this can prove to be a rather costly operation, and efficient scheduling can be a challenge if compute and data resources are mapped without considering network costs. We have implemented an adaptive algorithm within the so-called DIANA Scheduler which takes into account data location and size, network performance and computation capability in order to enable efficient global scheduling. DIANA is a performance-aware and economy-guided Meta Scheduler. It iteratively allocates each job to the site that is most likely to produce the best performance, while also optimizing the global queue for any remaining jobs. It is therefore equally suitable whether a single job is being submitted or bulk scheduling is being performed. Results indicate that considerable performance improvements can be gained by adopting the DIANA scheduling approach.
    Comment: 12 pages, 11 figures. To be published in IEEE Transactions on Nuclear Science, IEEE Press. 200
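
    As a rough illustration of the cost-driven site selection the abstract describes, the sketch below scores each candidate site by estimated data transfer time, computation time, and queue wait, and places bulk jobs iteratively so that later jobs see the load created by earlier placements. The site/job attributes and the additive cost model are illustrative assumptions, not the published DIANA cost model.

```python
# Minimal sketch of a DIANA-style, data- and network-aware site selection.
# All attributes and weights are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    compute_power: float            # relative CPU capability (higher is faster)
    bandwidth: dict                 # MB/s from each data location to this site
    queue: list = field(default_factory=list)

@dataclass
class Job:
    job_id: str
    cpu_work: float                 # abstract compute units
    data_location: str
    data_size_mb: float

def site_cost(site: Site, job: Job) -> float:
    """Estimated completion cost: data transfer + computation + queue wait."""
    transfer = job.data_size_mb / site.bandwidth.get(job.data_location, 0.1)
    compute = job.cpu_work / site.compute_power
    wait = sum(j.cpu_work for j in site.queue) / site.compute_power
    return transfer + compute + wait

def schedule_bulk(jobs: list, sites: list) -> dict:
    """Iteratively place each job on the cheapest site, updating site queues
    so later jobs account for the load created by earlier placements."""
    placement = {}
    for job in jobs:
        best = min(sites, key=lambda s: site_cost(s, job))
        best.queue.append(job)
        placement[job.job_id] = best.name
    return placement
```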

    Real-time disk scheduling in a mixed-media file system

    This paper presents our real-time disk scheduler, called the Delta L scheduler, which optimizes best-effort disk requests by giving them priority over real-time requests as long as real-time request deadlines are still met. Our scheduler tries to execute real-time disk requests as much as possible in the background; only when real-time request deadlines are endangered does it give priority to real-time disk requests. The Delta L disk scheduler is part of our mixed-media file system called Clockwise. An essential part of our work is extensive and detailed raw disk performance measurements. The Delta L disk scheduler uses these raw performance measurements for its real-time schedulability analysis and to decide whether scheduling a best-effort request before a real-time request would violate real-time constraints. Further, an off-line Clockwise simulator, in which a number of different disk schedulers are compared, uses the same raw performance measurements. We compare the Delta L scheduler with a prioritizing Latest Start Time (LST) scheduler and a non-prioritizing EDF scheduler. The Delta L scheduler is comparable to LST in achieving low latencies for best-effort requests under light to moderate real-time loads, and better under extreme real-time loads. The simulator is calibrated against an actual Clockwise installation. Clockwise runs on a 200 MHz Pentium Pro based PC with a PCI bus, multiple SCSI controllers and disks, on Linux 2.2.x and the Nemesis kernel. Clockwise performance is dictated by the hardware: all available bandwidth can be committed to real-time streams, provided hardware overloads do not occur.
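
    A minimal sketch of the scheduling decision described above: a best-effort request is served ahead of pending real-time requests only if, given measured service times, every real-time deadline can still be met. The data structures and the service-time figures are assumptions for illustration, not the Clockwise implementation.

```python
# Sketch of a Delta L-style decision: prefer best-effort requests unless a
# real-time deadline would be endangered. Service times are placeholders
# standing in for the raw disk performance measurements.

def can_serve_best_effort(now, be_service_time, rt_queue):
    """rt_queue: list of (deadline, measured_service_time) tuples."""
    finish = now + be_service_time
    for deadline, service_time in sorted(rt_queue):   # earliest deadline first
        finish += service_time
        if finish > deadline:                         # a deadline would be missed
            return False
    return True

def next_request(now, be_queue, rt_queue):
    """Prefer best-effort work; fall back to the earliest-deadline real-time
    request when serving best-effort first would endanger a deadline."""
    if be_queue and can_serve_best_effort(now, be_queue[0][1], rt_queue):
        return ("best-effort", be_queue[0])
    if rt_queue:
        return ("real-time", min(rt_queue))           # earliest deadline
    return None                                       # nothing pending

# Example: one best-effort request (5 ms) and two real-time requests.
rt = [(20.0, 6.0), (40.0, 6.0)]      # (deadline, measured service time)
be = [("be-1", 5.0)]                 # (request id, measured service time)
print(next_request(0.0, be, rt))     # -> ('best-effort', ('be-1', 5.0))
```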

    Creation of the selection list for the Experiment Scheduling Program (ESP)

    The efforts to develop a procedure for constructing selection groups to augment the Experiment Scheduling Program (ESP) are summarized. Included are a User's Guide and a sample scenario to guide use of the software system that implements the developed procedures.

    Physiology-Aware Rural Ambulance Routing

    In emergency patient transport from a rural medical facility to a tertiary center hospital, real-time monitoring of the patient in the ambulance by a physician expert at the tertiary center is crucial. While telemetry healthcare services using mobile networks may enable remote real-time monitoring of transported patients, physiologic measures and tracking are at least as important and require high-fidelity communication coverage. However, the wireless networks along the roads, especially in rural areas, can range from 4G to low-speed 2G, with some stretches suffering communication breakage. From a patient care perspective, transport during critical illness can make route selection dependent on the patient's state. Prompt decisions must be made, weighing the relative advantage of a longer route with more reliable bandwidth against a shorter, more rapid transport route with less reliable bandwidth. The trade-off between route selection and the quality of wireless communication is an important optimization problem which unfortunately has remained unaddressed by prior work. In this paper, we propose a novel physiology-aware route scheduling approach for emergency ambulance transport of rural patients with acute, high-risk diseases in need of continuous remote monitoring. We mathematically model the problem as an NP-hard graph theory problem and approximate a solution based on a trade-off between communication coverage and shortest path. We profile communication along two major routes in a large rural hospital setting in Illinois and use the traces to demonstrate the concept. Further, we design our algorithms and run preliminary experiments for scalability analysis. We believe that our scheduling techniques can become a compelling aid that enables an always-connected remote monitoring system in emergency patient transfer scenarios, aimed at preventing morbidity and mortality through early diagnosis and treatment.
    Comment: 6 pages, The Fifth IEEE International Conference on Healthcare Informatics (ICHI 2017), Park City, Utah, 201
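
    The sketch below illustrates one way to realize the stated trade-off: each road segment carries a travel time and a measured coverage quality, and a shortest-path search runs over a combined cost. The weighting factor, the combined-cost formula, and the toy graph are assumptions for illustration, not the paper's exact formulation.

```python
# Coverage-aware route selection: Dijkstra over a cost that penalizes segments
# with poor communication coverage. `alpha` trades travel time against coverage.

import heapq

def combined_cost(travel_time, coverage, alpha=0.5):
    """coverage in [0, 1]: 1 = solid 4G, 0 = dead zone."""
    return travel_time * (1.0 + alpha * (1.0 - coverage))

def best_route(graph, source, target, alpha=0.5):
    """graph: {node: [(neighbor, travel_time, coverage), ...]}; returns a node list."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                                  # stale heap entry
        for v, t, cov in graph.get(u, []):
            nd = d + combined_cost(t, cov, alpha)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return None                                   # unreachable
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy network: a fast low-coverage road via B vs. a slower, well-covered road via C.
roads = {
    "A": [("B", 10.0, 0.2), ("C", 14.0, 0.9)],
    "B": [("H", 10.0, 0.2)],
    "C": [("H", 14.0, 0.9)],
}
print(best_route(roads, "A", "H", alpha=0.0))  # ['A', 'B', 'H']  (pure travel time)
print(best_route(roads, "A", "H", alpha=1.0))  # ['A', 'C', 'H']  (coverage-weighted)
```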

    MARACAS: a real-time multicore VCPU scheduling framework

    This paper describes MARACAS, a multicore scheduling and load-balancing framework that addresses shared cache and memory bus contention. It builds upon prior work centered around the concept of virtual CPU (VCPU) scheduling. Threads are associated with VCPUs that have periodically replenished time budgets. VCPUs are guaranteed to receive their periodic budgets even if they are migrated between cores. A load-balancing algorithm ensures VCPUs are mapped to cores so as to fairly distribute surplus CPU cycles, after ensuring VCPU timing guarantees. MARACAS uses surplus cycles to throttle the execution of threads running on specific cores when memory contention exceeds a certain threshold. This enables threads on other cores to make better progress without interference from co-runners. Our scheduling framework features a novel memory-aware scheduling approach that uses performance counters to derive an average memory request latency. We show that latency-based memory throttling is more effective than rate-based memory access control in reducing bus contention. MARACAS also supports cache-aware scheduling and migration using page recoloring to improve performance isolation amongst VCPUs. Experiments show how MARACAS reduces multicore resource contention, leading to improved task progress.
    http://www.cs.bu.edu/fac/richwest/papers/rtss_2016.pdf
    Accepted manuscript
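
    As a rough illustration of the latency-based throttling decision described above, the sketch below derives an average memory request latency from hypothetical per-core counters and flags cores exceeding a threshold for throttling with their surplus cycles. The counter names, the threshold, and the sample data are assumptions, not the MARACAS implementation.

```python
# Latency-based throttling decision: throttle cores whose derived average
# memory request latency exceeds a threshold, so co-runners on other cores
# stop contending for the memory bus. All figures are illustrative.

from dataclasses import dataclass

@dataclass
class CoreCounters:
    mem_stall_cycles: int    # cycles stalled on outstanding memory requests
    mem_requests: int        # completed memory requests in the sample window

def avg_mem_latency(c: CoreCounters) -> float:
    """Average memory request latency (cycles per request) for one core."""
    return c.mem_stall_cycles / max(c.mem_requests, 1)

def cores_to_throttle(samples: dict, threshold_cycles: float) -> list:
    """Return the cores whose average memory latency exceeds the threshold;
    their surplus VCPU cycles would be spent idling instead of issuing
    further memory traffic."""
    return [core for core, c in samples.items()
            if avg_mem_latency(c) > threshold_cycles]

# Example: core 2 is saturating the bus and would be throttled.
samples = {
    0: CoreCounters(mem_stall_cycles=120_000, mem_requests=4_000),
    1: CoreCounters(mem_stall_cycles=90_000, mem_requests=3_500),
    2: CoreCounters(mem_stall_cycles=900_000, mem_requests=5_000),
}
print(cores_to_throttle(samples, threshold_cycles=100.0))  # -> [2]
```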