3,999 research outputs found

    A Novel Workload Allocation Strategy for Batch Jobs

    The distribution of computational tasks across a diverse set of geographically distributed heterogeneous resources is a critical issue in the realisation of true computational grids. Conventionally, workload allocation algorithms are divided into static and dynamic approaches. Whilst dynamic approaches frequently outperform static schemes, they usually require the collection and processing of detailed system information at frequent intervals - a task that can be both time-consuming and unreliable in the real world. This paper introduces a novel workload allocation algorithm for optimally distributing the workload produced by the arrival of batches of jobs. Results show that, for the arrival of batches of jobs, this workload allocation algorithm outperforms other commonly used algorithms in the static case. A hybrid scheduling approach (using this workload allocation algorithm), in which the speed of computational resources is inferred from previously completed jobs, is then introduced, and its efficiency is demonstrated on a real-world computational grid. These results are compared with the same workload allocation algorithm used in the static case, showing that the hybrid approach comprehensively outperforms the static one.
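
    The abstract does not spell out the allocation rule, so the following is only a minimal sketch of the hybrid idea it describes: resource speeds are inferred from previously completed jobs and a new batch is then split proportionally to those estimates. The function names (`infer_speeds`, `allocate_batch`) and the history format are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

def infer_speeds(history):
    """Estimate relative resource speed from previously completed jobs.

    history: list of (resource, jobs_completed, elapsed_seconds) tuples.
    Returns a dict mapping each resource to an estimated throughput
    (jobs per second). All names here are illustrative.
    """
    speeds = defaultdict(float)
    for resource, jobs_completed, elapsed in history:
        if elapsed > 0:
            speeds[resource] += jobs_completed / elapsed
    return dict(speeds)

def allocate_batch(num_jobs, speeds):
    """Split a batch of identical jobs proportionally to inferred speeds."""
    total = sum(speeds.values())
    allocation = {r: int(num_jobs * s / total) for r, s in speeds.items()}
    # Hand out any rounding remainder to the fastest resources first.
    remainder = num_jobs - sum(allocation.values())
    for r, _ in sorted(speeds.items(), key=lambda kv: -kv[1])[:remainder]:
        allocation[r] += 1
    return allocation

if __name__ == "__main__":
    history = [("node-a", 40, 100.0), ("node-b", 25, 100.0), ("node-c", 10, 100.0)]
    print(allocate_batch(100, infer_speeds(history)))
```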

    A Practical Cooperative Multicell MIMO-OFDMA Network Based on Rank Coordination

    An important challenge in wireless networks is to boost cell-edge performance and enable multi-stream transmission to cell-edge users. Interference mitigation techniques relying on multiple antennas and coordination among cells are nowadays heavily studied in the literature. Typical strategies in OFDMA networks include coordinated scheduling, beamforming and power control. In this paper, we propose a novel and practical type of coordination for OFDMA downlink networks relying on multiple antennas at the transmitter and the receiver. The transmission ranks, i.e. the number of transmitted streams, and the user scheduling in all cells are jointly optimized in order to maximize a network utility function accounting for fairness among users. A distributed coordinated scheduler motivated by an interference pricing mechanism and relying on a master-slave architecture is introduced. The proposed scheme operates on the user's report of a recommended rank for the interfering cells, accounting for the receiver's interference suppression capability. It incurs very low feedback and backhaul overhead and enables efficient link adaptation. It is moreover robust to channel measurement errors and applicable to both open-loop and closed-loop MIMO operation. A 20% cell-edge performance gain over an uncoordinated LTE-A system is shown through system-level simulations. Comment: IEEE Transactions on Wireless Communications, accepted for publication.
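
    As a rough illustration of interference-pricing-based coordination, the sketch below has a per-cell scheduler pick the (user, rank) candidate that maximizes a proportional-fair rate term minus the interference cost charged by neighbouring cells. The data layout, the linear pricing term and the `schedule_cell` function are assumptions for illustration only; the paper's master-slave protocol and rank-report format are not reproduced here.

```python
def schedule_cell(candidates, interference_prices):
    """Pick the (user, rank) pair maximizing a priced utility.

    candidates: list of dicts with keys
        'user', 'rank', 'rate' (estimated rate at this rank),
        'avg_rate' (long-term average, for proportional fairness), and
        'leakage' (estimated interference caused to each neighbouring cell).
    interference_prices: dict mapping neighbour cell id -> price per unit
        of interference, as reported by that cell. This structure is an
        illustration of interference pricing, not the paper's exact scheme.
    """
    def utility(c):
        pf_weight = 1.0 / max(c["avg_rate"], 1e-9)          # proportional fairness
        cost = sum(interference_prices.get(cell, 0.0) * leak
                   for cell, leak in c["leakage"].items())   # priced interference
        return pf_weight * c["rate"] - cost

    return max(candidates, key=utility)

if __name__ == "__main__":
    candidates = [
        {"user": 1, "rank": 2, "rate": 12.0, "avg_rate": 6.0,
         "leakage": {"cell_B": 0.8, "cell_C": 0.3}},
        {"user": 2, "rank": 1, "rate": 7.0, "avg_rate": 2.0,
         "leakage": {"cell_B": 0.2, "cell_C": 0.1}},
    ]
    prices = {"cell_B": 1.5, "cell_C": 0.5}
    print(schedule_cell(candidates, prices))
```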

    Fluid flow queue models for fixed-mobile network evaluation

    A methodology for fast and accurate estimation of end-to-end KPIs, such as throughput and delay, is proposed based on service-centric traffic flow analysis and the fluid flow queuing model named CURSA-SQ. Mobile network features, such as the shared medium and mobility, are considered when defining the models to be taken into account, including the propagation models and the fluid flow scheduling model. The developed methodology provides accurate computation of these KPIs while performing orders of magnitude faster than discrete event simulators such as ns-3. Finally, this methodology, combined with its capability for performance estimation in MPLS networks, enables its application to near-real-time operation of converged fixed-mobile networks, as demonstrated in three use case scenarios.
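
    CURSA-SQ itself is not described in enough detail here to reproduce, so the following is a generic single-queue fluid-flow approximation that shows how throughput and delay KPIs can be estimated by integrating inflow and service over small time steps. The function `fluid_queue_kpis` and its parameters are illustrative assumptions, not the paper's model.

```python
def fluid_queue_kpis(arrival_rate, capacity, duration, dt=0.01):
    """Simulate a single fluid queue and estimate throughput and delay.

    arrival_rate: callable t -> offered load (e.g. Mb/s) at time t.
    capacity:     service rate of the queue (same units).
    Returns (mean_throughput, mean_delay). This is a generic fluid-flow
    approximation, not the CURSA-SQ model itself.
    """
    backlog = 0.0
    served_total = 0.0
    delay_samples = []
    steps = int(duration / dt)
    for k in range(steps):
        t = k * dt
        inflow = arrival_rate(t) * dt
        served = min(backlog + inflow, capacity * dt)
        backlog = backlog + inflow - served
        served_total += served
        # Waiting-time estimate: time to drain the current backlog at 'capacity'.
        delay_samples.append(backlog / capacity)
    return served_total / duration, sum(delay_samples) / len(delay_samples)

if __name__ == "__main__":
    import math
    # Sinusoidal load that peaks above capacity, so a backlog builds up.
    load = lambda t: 80.0 + 40.0 * math.sin(2 * math.pi * t / 10.0)
    print(fluid_queue_kpis(load, capacity=100.0, duration=60.0))
```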

    Learning Scheduling Algorithms for Data Processing Clusters

    Efficiently scheduling data processing jobs on distributed compute clusters requires complex algorithms. Current systems, however, use simple generalized heuristics and ignore workload characteristics, since developing and tuning a scheduling policy for each workload is infeasible. In this paper, we show that modern machine learning techniques can generate highly efficient policies automatically. Decima uses reinforcement learning (RL) and neural networks to learn workload-specific scheduling algorithms without any human instruction beyond a high-level objective such as minimizing average job completion time. Off-the-shelf RL techniques, however, cannot handle the complexity and scale of the scheduling problem. To build Decima, we had to develop new representations for jobs' dependency graphs, design scalable RL models, and invent RL training methods for dealing with continuous stochastic job arrivals. Our prototype integration with Spark on a 25-node cluster shows that Decima improves average job completion time over hand-tuned scheduling heuristics by at least 21%, achieving up to a 2x improvement during periods of high cluster load.
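
    Decima scores runnable stages of the job dependency graphs with a graph neural network and trains the policy with reinforcement learning; the sketch below only shows the decision interface, with a hypothetical linear scoring function standing in for the GNN and fixed weights standing in for learned parameters.

```python
import math
import random

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def score_stage(features, weights):
    """Linear stand-in for a learned scoring of a runnable stage.

    features: per-stage feature vector (e.g. remaining work, number of
    runnable tasks, executors already assigned). Decima computes such
    scores with a graph neural network over the job DAGs; the linear
    form here is only a placeholder to show the policy interface.
    """
    return sum(w * f for w, f in zip(weights, features))

def choose_stage(runnable_stages, weights):
    """Sample the next stage to schedule from a softmax over stage scores."""
    scores = [score_stage(s["features"], weights) for s in runnable_stages]
    probs = softmax(scores)
    return random.choices(runnable_stages, weights=probs, k=1)[0]

if __name__ == "__main__":
    stages = [
        {"name": "job1-stage3", "features": [120.0, 8.0, 2.0]},
        {"name": "job2-stage1", "features": [40.0, 20.0, 0.0]},
    ]
    weights = [-0.01, 0.05, -0.1]   # would normally be learned with RL
    print(choose_stage(stages, weights)["name"])
```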
