
    Efficient Web Requests Scheduling Considering Resources Sharing

    Request scheduling in Web servers is an active research topic. Many works aim at providing optimal algorithms according to various metrics. Most of these works rely on classical scheduling metrics, considering job completion times but ignoring intermediate states. We claim that this choice leads to algorithms that do not share the system resources efficiently. Indeed, Web servers have properties that set them apart from the systems considered in classical scheduling theory. The round-robin policy used in most production Web servers has intrinsic qualities: it shares the system resources equally and avoids job starvation. We introduce a novel parameterizable algorithm that offers a compromise between the benefits of round-robin and the policies that provide the best performance. We then discuss the appropriate choice of the parameter depending on the requirements and the context of the Web server.
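    As a rough illustration (not taken from the paper), the sketch below assumes the performance-oriented policy is an SRPT-like pick of the job with the least remaining work, and that a single parameter alpha blends it probabilistically with plain round-robin; the class, the parameter, and the blending rule are hypothetical.

```python
# Hypothetical sketch: a parameterized scheduler that blends round-robin
# fairness with an SRPT-like policy. The parameter `alpha` and the SRPT
# choice are assumptions for illustration; the paper's actual algorithm
# may differ.
import random
from collections import deque

class BlendedScheduler:
    def __init__(self, alpha):
        # alpha = 0.0 -> pure round-robin (equal sharing, no starvation)
        # alpha = 1.0 -> always pick the job with the least remaining work
        self.alpha = alpha
        self.rr_queue = deque()          # round-robin order
        self.remaining = {}              # job id -> remaining work

    def add_job(self, job_id, size):
        self.rr_queue.append(job_id)
        self.remaining[job_id] = size

    def next_job(self):
        if not self.rr_queue:
            return None
        if random.random() < self.alpha:
            # performance-oriented choice: smallest remaining work
            job = min(self.rr_queue, key=self.remaining.__getitem__)
            self.rr_queue.remove(job)
        else:
            # fairness-oriented choice: head of the round-robin queue
            job = self.rr_queue.popleft()
        return job

    def run_quantum(self, job_id, quantum=1):
        self.remaining[job_id] -= quantum
        if self.remaining[job_id] > 0:
            self.rr_queue.append(job_id)   # requeue unfinished jobs
        else:
            del self.remaining[job_id]
```

    Tuning alpha then moves the scheduler along the compromise the abstract describes: small values preserve round-robin's equal sharing, large values approach the best-performing (but less fair) policy.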

    Revisiting Size-Based Scheduling with Estimated Job Sizes

    We study size-based schedulers, and focus on the impact of inaccurate job size information on response time and fairness. Our intent is to revisit previous results, which allude to performance degradation for even small errors in job size estimates, thus limiting the applicability of size-based schedulers. We show that scheduling performance is tightly connected to workload characteristics: in the absence of large skew in the job size distribution, even extremely imprecise estimates suffice to outperform size-oblivious disciplines; when job sizes are heavily skewed, however, known size-based disciplines suffer. In this context, we show -- for the first time -- the dichotomy of over-estimation versus under-estimation. The former is, in general, less problematic than the latter, as its effects are localized to individual jobs; under-estimation, in contrast, leads to severe problems that may affect a large number of jobs. We present an approach to mitigate these problems: our technique requires no complex modifications to the original scheduling policies and performs very well. To support our claim, we carry out a simulation-based evaluation covering an unprecedentedly large parameter space that takes into account a variety of synthetic and real workloads. As a consequence, we show that size-based scheduling is practical and outperforms alternatives in a wide array of use cases, even in the presence of inaccurate size information.
    Comment: To be published in the proceedings of IEEE MASCOTS 201
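    To make the over- versus under-estimation dichotomy concrete, here is a minimal sketch of SRPT driven by estimated sizes, with an illustrative correction for under-estimated jobs; the correction shown (re-estimating from the service already received) is an assumption and is not necessarily the mitigation proposed in the paper.

```python
# Hypothetical sketch: SRPT keyed on *estimated* remaining size. An
# under-estimated job would otherwise reach "zero remaining" and keep the
# lowest key until it truly finishes, delaying every other job. The fix
# shown here simply re-estimates its size from attained service.
import heapq

class EstimatedSRPT:
    def __init__(self, inflation=2.0):
        self.heap = []              # (estimated remaining, job id)
        self.attained = {}          # job id -> service received so far
        self.estimate = {}          # job id -> current size estimate
        self.inflation = inflation  # growth factor when an estimate is exceeded

    def add_job(self, job_id, estimated_size):
        self.estimate[job_id] = estimated_size
        self.attained[job_id] = 0.0
        heapq.heappush(self.heap, (estimated_size, job_id))

    def serve(self, quantum=1.0):
        """Serve the job with the smallest estimated remaining size."""
        if not self.heap:
            return None
        _, job = heapq.heappop(self.heap)
        self.attained[job] += quantum
        remaining = self.estimate[job] - self.attained[job]
        if remaining <= 0:
            # Under-estimation detected: re-estimate from attained service
            # so other jobs are not starved behind this one.
            self.estimate[job] = self.attained[job] * self.inflation
            remaining = self.estimate[job] - self.attained[job]
        heapq.heappush(self.heap, (remaining, job))
        return job

    def complete(self, job_id):
        """Called when the job actually finishes (true size known only now)."""
        self.heap = [(r, j) for (r, j) in self.heap if j != job_id]
        heapq.heapify(self.heap)
        self.estimate.pop(job_id, None)
        self.attained.pop(job_id, None)
```

    Over-estimation, by contrast, only delays the over-estimated job itself, which is why the paper finds it far less harmful.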

    A Scalable Solution For Interactive Video Streaming

    This dissertation presents an overall solution for interactive Near Video On Demand (NVOD) systems, where limited server and network resources prevent the system from servicing all customers' requests. The interactive nature of recent workloads complicates matters further, since interactive requests require additional resources to be handled. This dissertation analyzes the system performance under a realistic workload using different stream merging techniques and scheduling policies. It considers a wide range of system parameters and studies their impact on the waiting and blocking metrics. In order to improve the experience of waiting customers, we propose a new scheduling policy for waiting customers that is fairer and delivers decent performance. Blocking is a major issue in interactive NVOD systems, and we propose several techniques to minimize it. In particular, we study the maximum Interactive Stream (I-Stream) length (Threshold) that should be allowed in order to prevent a few requests from using the expensive I-Streams for a prolonged period of time, which deprives other requests of a chance to use this valuable resource. Using a reasonable I-Stream threshold proves very effective in improving blocking metrics. Moreover, we introduce an I-Stream provisioning policy to dynamically shift resources based on the system requirements at the time; the proposed policy proves to be highly effective in improving the overall system performance. To account for both average waiting time and average blocking time, we introduce a new metric (Aggregate Delay). We also study the client-side cache management policy. We utilize the customer's cache to service most interactive requests, which reduces the load on the server. We propose three purging algorithms to clear data when the cache gets full: Purge Oldest removes the oldest data in the cache, whereas Purge Furthest clears the data furthest from the client's playback point. In contrast, Adaptive Purge tries to avoid purging any data that includes the customer's playback point or the playback point of any stream the client is listening to. Additionally, we study the impact of the purge block, which is the smallest amount of data to be cleared, on the system performance. Finally, we study the effect of bookmarking on the system performance. A video segment that is searched and watched repeatedly is called a hotspot and is pointed to by a bookmark. We introduce three enhancements to effectively support bookmarking. Specifically, we propose a new purging algorithm that avoids purging hotspot data if it is already cached. On top of that, we fetch hotspot data for customers not listening to any stream. Furthermore, we reserve multicast channels to fetch hotspot data.
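    The three purging choices described in the abstract can be sketched as follows; the block-based cache model, parameter names, and function names are assumptions for illustration only.

```python
# Hypothetical sketch of the three purging policies over a client cache
# modeled as a set of block positions along the video timeline.

def purge_oldest(cached_blocks, arrival_time):
    """Purge Oldest: evict the block that entered the cache first."""
    return min(cached_blocks, key=lambda b: arrival_time[b])

def purge_furthest(cached_blocks, playback_point):
    """Purge Furthest: evict the block furthest (in playback position)
    from the client's current playback point."""
    return max(cached_blocks, key=lambda b: abs(b - playback_point))

def purge_adaptive(cached_blocks, playback_point, listened_points):
    """Adaptive Purge: like Purge Furthest, but never evict a block holding
    the client's own playback point or the playback point of any stream the
    client is currently listening to."""
    protected = {playback_point} | set(listened_points)
    candidates = [b for b in cached_blocks if b not in protected]
    if not candidates:
        return None                      # nothing safe to evict
    return max(candidates, key=lambda b: abs(b - playback_point))
```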

    Downstream Bandwidth Management for Emerging DOCSIS-based Networks

    In this dissertation, we consider downstream bandwidth management in the context of emerging DOCSIS-based cable networks. The latest DOCSIS 3.1 standard for cable access networks represents a significant change to cable networks. For the downstream, the current 6 MHz channel size is replaced by a much larger 192 MHz channel which can potentially provide data rates up to 10 Gbps. Further, the current standard requires equipment to support a relatively new form of active queue management (AQM) referred to as delay-based AQM. Given that more than 50 million households (and climbing) use cable for Internet access, a clear understanding of the impacts of the bandwidth management strategies used in these emerging networks is crucial. Further, given the scope of the change brought by emerging cable systems, now is the time to develop and introduce innovative new methods for managing bandwidth. With this motivation, we address research questions pertaining to the next generation of cable access networks. The cable industry has had to deal with the problem of a small number of subscribers who utilize the majority of network resources, and this problem will grow as access rates increase to gigabits per second. Fundamentally, this is a question of how to manage data flows fairly and provide protection. A well-known performance issue in the Internet, referred to as bufferbloat, has also received significant attention recently. High-throughput network flows need sufficiently large buffers to keep the pipe full and absorb occasional burstiness; standard practice, however, has led to equipment offering very large unmanaged buffers that can result in sustained queue levels that increase packet latency. One reason these problems continue to plague cable access networks is the desire for low-complexity bandwidth management that is easily explainable (to access network subscribers and to the Federal Communications Commission). This research begins by evaluating modern delay-based AQM algorithms in downstream DOCSIS 3.0 environments, with a focus on the fairness and application performance capabilities of single-queue AQMs. We are especially interested in delay-based AQM schemes that have been proposed to combat the bufferbloat problem. Our evaluation involves a variety of scenarios that include tiered services and application workloads. Based on our results, we show that in scenarios involving realistic workloads, modern delay-based AQMs can effectively mitigate bufferbloat; however, they do not address the related problem of managing fairness. To address the combined problem of fairness and bufferbloat, we propose a novel approach to bandwidth management that provides a compromise among the conflicting requirements. We introduce a flow quantization method referred to as adaptive bandwidth binning, where flows that are observed to consume similar levels of bandwidth are grouped together and the system is managed through a hierarchical scheduler designed to approximate weighted fairness while addressing bufferbloat. Based on a simulation study that considers many experimental parameters, including workloads and network configurations, we provide evidence of the efficacy of the idea. Our results suggest that the scheme is able to provide long-term fairness and low delay with performance close to that of a reference approach based on fair queueing. A further contribution is our idea for replacing 'tiered' levels of service based on service rates with tiering based on weights.
    The application of our bandwidth binning scheme offers a timely and innovative alternative to broadband service that leverages the potential offered by emerging DOCSIS-based cable systems.
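    A minimal sketch of the adaptive bandwidth binning idea follows, assuming flows are periodically re-binned by measured per-interval bytes and bins are served by a weighted round-robin; the bin edges, weights, and scheduler shown are illustrative assumptions, not the dissertation's actual hierarchical scheduler.

```python
# Hypothetical sketch: flows observed to consume similar bandwidth are
# grouped into bins, and bins are served in weighted round-robin fashion so
# that lighter flows receive proportionally more service per flow.
from collections import defaultdict, deque

BIN_EDGES   = [1e5, 1e6, 1e7]      # bytes per interval (assumed thresholds)
BIN_WEIGHTS = [8, 4, 2, 1]         # per-flow service weight per bin (assumed)

class BandwidthBinning:
    def __init__(self):
        self.bytes_seen = defaultdict(int)       # flow id -> bytes this interval
        self.bins = [deque() for _ in BIN_WEIGHTS]

    def record(self, flow_id, nbytes):
        self.bytes_seen[flow_id] += nbytes

    def rebin(self):
        """At the end of each measurement interval, regroup flows by usage."""
        for dq in self.bins:
            dq.clear()
        for flow, used in self.bytes_seen.items():
            idx = sum(used > edge for edge in BIN_EDGES)
            self.bins[idx].append(flow)
        self.bytes_seen.clear()

    def schedule(self):
        """One weighted round-robin pass: heavier-weighted bins get
        proportionally more transmission opportunities per flow."""
        for idx, dq in enumerate(self.bins):
            for _ in range(BIN_WEIGHTS[idx]):
                if dq:
                    flow = dq.popleft()
                    yield flow
                    dq.append(flow)
```

    Replacing rate-based tiers with weight-based tiers would then amount to assigning subscriber-specific weights at the upper level of such a hierarchy.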

    Enterprise networks (modern techniques for analysis, measurement and performance improvement)

    As the Internet has evolved over the years, a large number of applications have emerged, with varying service requirements in terms of bandwidth, delay, loss rate and so on. Still, Internet traffic exhibits a high-variability property: the majority of flows are of small size, while a small percentage of very long flows contribute a large portion of the traffic volume. Several studies reveal that small flows are in general related to interactive applications, for which one expects to obtain good user-perceived performance, most often in terms of short response time. However, the classical FIFO/drop-tail scheme deployed in today's routers and switches is well known to bias against short flows in favor of long ones. To tackle this issue over a best-effort network, we have proposed a novel and simple scheduling algorithm named EFD (Early Flow Discard). In this manuscript, we first evaluate the performance of EFD in a single-bottleneck wired network through extensive simulations. We then discuss possible variants of EFD and its adaptations to 802.11 WLANs, mainly EFDACK and PEFD, which respectively record the volumes exchanged in both directions or simply count packets in one direction, with the aim of improving flow-level fairness and interactivity in WLANs. Finally, we devote ourselves to profiling enterprise traffic, and further devise two workload models - one that takes into account the enterprise topological structure and the other that incorporates the impact of the applications running on top of TCP - to help evaluate and compare the performance of scheduling policies in typical enterprise networks.
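    A minimal sketch of an EFD-style two-queue scheme is given below, assuming per-flow state is kept only while a flow has packets buffered and that flows whose buffered volume exceeds a threshold are demoted to a low-priority queue; the threshold value and the bookkeeping details are assumptions, not the exact EFD specification from the thesis.

```python
# Hypothetical sketch: short flows are served from a high-priority queue,
# flows that exceed a buffered-volume threshold fall into a low-priority
# queue, and per-flow counters are dropped as soon as a flow empties out.
from collections import defaultdict, deque

class EFDLikeScheduler:
    def __init__(self, threshold_bytes=15000):
        self.threshold = threshold_bytes
        self.high = deque()                  # packets of (still) short flows
        self.low = deque()                   # packets of flows past the threshold
        self.backlog = defaultdict(int)      # flow id -> bytes currently buffered

    def enqueue(self, flow_id, packet_len):
        self.backlog[flow_id] += packet_len
        queue = self.high if self.backlog[flow_id] <= self.threshold else self.low
        queue.append((flow_id, packet_len))

    def dequeue(self):
        """Serve the high-priority queue first, so short flows see low delay."""
        queue = self.high if self.high else self.low
        if not queue:
            return None
        flow_id, packet_len = queue.popleft()
        self.backlog[flow_id] -= packet_len
        if self.backlog[flow_id] <= 0:
            del self.backlog[flow_id]        # flow state discarded early
        return flow_id, packet_len
```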

    Contributions to QoS and energy efficiency in wi-fi networks

    The Wi-Fi technology has in recent years fostered the proliferation of attractive mobile computing devices with broadband capabilities. Current Wi-Fi radios, however, severely impact the battery duration of these devices, thus limiting their potential applications. In this thesis we present a set of contributions that address the challenge of increasing energy efficiency in Wi-Fi networks. In particular, we consider the problem of how to optimize the trade-off between performance and energy efficiency in a wide variety of use cases and applications. In this context, we introduce novel energy-efficient algorithms for real-time and data applications, for distributed and centralized Wi-Fi QoS and power saving protocols, and for Wi-Fi stations and Access Points. In addition, the different algorithms presented in this thesis adhere to the following design guidelines: i) they are implemented entirely at layer two, and can hence be easily re-used in any device with a Wi-Fi interface; ii) they do not require modifications to current 802.11 standards, and can hence be readily deployed in existing Wi-Fi devices; and iii) whenever possible they favor client-side solutions, so that mobile computing devices implementing them benefit from increased energy efficiency regardless of the Access Point they connect to. Each of our proposed algorithms is thoroughly evaluated by means of both theoretical analysis and packet-level simulations. Thus, the contributions presented in this thesis provide a realistic set of tools to improve energy efficiency in current Wi-Fi networks.
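    As a generic illustration of the performance/energy trade-off discussed above (not an algorithm from the thesis), the sketch below shows a client-side adaptive power-save policy that stays awake for an idle timeout after the last frame and sleeps otherwise; the adaptation rule and parameter values are assumptions.

```python
# Hypothetical sketch: a station-side policy that trades latency for sleep
# time by adapting an idle timeout. Frequent traffic grows the timeout
# (favoring responsiveness); sustained idleness shrinks it (favoring sleep).

class AdaptivePowerSave:
    def __init__(self, min_timeout=0.01, max_timeout=0.2):
        self.min_timeout = min_timeout     # seconds (assumed bounds)
        self.max_timeout = max_timeout
        self.timeout = min_timeout
        self.last_activity = 0.0

    def on_frame(self, now):
        """Traffic seen: record the time and grow the idle timeout,
        keeping the radio awake while an exchange is active."""
        self.last_activity = now
        self.timeout = min(self.max_timeout, self.timeout * 2)

    def should_sleep(self, now):
        """Sleep once the link has been idle longer than the current timeout,
        then shrink the timeout back toward its minimum."""
        if now - self.last_activity >= self.timeout:
            self.timeout = max(self.min_timeout, self.timeout / 2)
            return True
        return False
```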