292 research outputs found

    A Generic Coq Proof of Typical Worst-Case Analysis

    This paper presents a generic proof of Typical Worst-Case Analysis (TWCA), an analysis technique for weakly-hard real-time uniprocessor systems. TWCA was originally introduced for systems with fixed-priority preemptive (FPP) schedulers and has since been extended to fixed-priority non-preemptive (FPNP) and earliest-deadline-first (EDF) schedulers. Our generic analysis is based on an abstract model that characterizes the exact properties needed to make TWCA applicable to any system model. Our results are formalized and checked using the Coq proof assistant along with the Prosa schedulability analysis library. Our experience with formalizing real-time systems analyses shows that this is not only a way to increase confidence in claimed results: the discipline required to obtain machine-checked proofs helps in understanding the exact assumptions required by a given analysis, its key intermediate steps, and how the analysis can be generalized.
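    TWCA bounds how often deadlines can be missed, typically stated as a weakly-hard (m, K) guarantee: at most m deadline misses in any K consecutive jobs. As a minimal illustration of the property being certified (a hedged sketch of the guarantee itself, not of the paper's Coq formalization), consider checking a recorded miss pattern:

```python
def satisfies_weakly_hard(misses, m, K):
    """Check the (m, K) weakly-hard property: no window of K
    consecutive jobs contains more than m deadline misses.
    `misses` is a 0/1 sequence, 1 where a job missed its deadline."""
    window = sum(misses[:K])              # misses in the first window
    if window > m:
        return False
    for i in range(K, len(misses)):       # slide the window one job at a time
        window += misses[i] - misses[i - K]
        if window > m:
            return False
    return True

# One miss per 5 jobs satisfies (1, 5); two adjacent misses do not.
print(satisfies_weakly_hard([0, 1, 0, 0, 0] * 4, m=1, K=5))  # True
print(satisfies_weakly_hard([1, 1, 0, 0, 0] * 4, m=1, K=5))  # False
```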

    ATM virtual connection performance modeling


    Scheduling strategies for an overloaded real-time system

    This paper introduces and assesses novel strategies for scheduling firm real-time jobs on an overloaded server. The jobs are released periodically and share the same relative deadline. Job execution times obey an arbitrary probability distribution and can take unbounded values (no WCET). Jobs may be interrupted either at their admission into the system or during execution. We introduce three control parameters to decide when to start or interrupt a job. We couple this dynamic scheduling with several admission policies and investigate several optimization criteria, the most prominent being the Deadline Miss Ratio (DMR). We then derive a Markov model and use its stationary distribution to determine the best value of each control parameter. Finally, we conduct an extensive simulation campaign with 14 different probability distributions; the results demonstrate how the new control parameters improve system performance compared with traditional approaches. In particular, we show that (i) the best admission policy is to admit all jobs; (ii) the key control parameter is an upper bound on the start time of each job after its admission; and (iii) the best scheduling strategy decreases the DMR by up to 0.35 compared with traditional competitors.
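    The bounded-start-time control of finding (ii) can be sketched in a few lines. The following Monte-Carlo estimate assumes exponentially distributed execution times, the admit-all policy of finding (i), and two illustrative rules (discard a job that cannot start within max_start of its release; interrupt a running job at its deadline); the parameters and rules are hypothetical stand-ins, not the paper's exact model:

```python
import random

def estimate_dmr(period=1.0, deadline=1.0, max_start=0.5,
                 mean_exec=0.9, n_jobs=100_000, seed=42):
    """Monte-Carlo estimate of the Deadline Miss Ratio (DMR) for
    periodic firm real-time jobs on one server under an admit-all
    policy with a bounded start time."""
    rng = random.Random(seed)
    server_free, misses = 0.0, 0
    for k in range(n_jobs):
        release = k * period
        start = max(release, server_free)
        if start - release > max_start:
            misses += 1                        # discarded: counts as a miss
            continue
        finish = start + rng.expovariate(1.0 / mean_exec)  # no WCET
        if finish > release + deadline:
            misses += 1
            server_free = release + deadline   # interrupt the late job
        else:
            server_free = finish
    return misses / n_jobs

for ms in (0.1, 0.5, 1.0):
    print(f"max_start={ms}: DMR ≈ {estimate_dmr(max_start=ms):.3f}")
```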

    Scheduling for today’s computer systems: bridging theory and practice

    Scheduling is a fundamental technique for improving performance in computer systems. From web servers to routers to operating systems, how the bottleneck device is scheduled has an enormous impact on the performance of the system as a whole. Given the immense literature studying scheduling, it is easy to think that we already understand enough about scheduling. However, modern computer system designs have highlighted a number of disconnects between traditional analytic results and the needs of system designers. In particular, the idealized policies, metrics, and models used by analytic researchers do not match the policies, metrics, and scenarios that appear in real systems. The goal of this thesis is to take a step towards modernizing the theory of scheduling in order to provide results that apply to today’s computer systems, and thus ease the burden on system designers. To accomplish this goal, we provide new results that help to bridge each of the disconnects mentioned above. We move beyond the study of idealized policies by introducing a new analytic framework where the focus is on scheduling heuristics and techniques rather than individual policies. By moving beyond the study of individual policies, our results apply to the complex hybrid policies that are often used in practice. For example, our results enable designers to understand how policies that favor small job sizes are affected by the fact that real systems only have estimates of job sizes. In addition, we move beyond the study of mean response time and provide results characterizing the distribution of response time and the fairness of scheduling policies. These results allow us to understand how scheduling affects QoS guarantees and whether favoring small job sizes results in large jobs being treated unfairly. Finally, we move beyond the simplified models traditionally used in scheduling research and provide results characterizing the effectiveness of scheduling in multiserver systems and when users are interactive. These results allow us to answer questions about how to design multiserver systems and how to choose a workload generator when evaluating new scheduling designs.
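    For instance, the interaction between small-job-first scheduling and imperfect size information can be observed in a toy simulation. This is a hedged sketch under assumed distributions (Poisson arrivals, exponential sizes, lognormal multiplicative estimation error), not the analytic framework of the thesis:

```python
import heapq
import random

def mean_response(noise_sigma, lam=0.9, mean_size=1.0,
                  n_jobs=100_000, seed=1):
    """Single server, non-preemptive shortest-job-first, except jobs
    are ordered by a noisy size estimate rather than their true size."""
    rng = random.Random(seed)
    arrivals, clock = [], 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(lam)
        size = rng.expovariate(1.0 / mean_size)
        estimate = size * rng.lognormvariate(0.0, noise_sigma)
        arrivals.append((clock, size, estimate))
    i, now, queue, total = 0, 0.0, [], 0.0
    while i < len(arrivals) or queue:
        if not queue:                          # server idle: jump to next arrival
            now = max(now, arrivals[i][0])
        while i < len(arrivals) and arrivals[i][0] <= now:
            arr, size, est = arrivals[i]
            heapq.heappush(queue, (est, arr, size))
            i += 1
        est, arr, size = heapq.heappop(queue)  # smallest estimate first
        now += size                            # serve the job to completion
        total += now - arr                     # accumulate response time
    return total / n_jobs

for sigma in (0.0, 0.5, 1.0):
    print(f"estimate noise sigma={sigma}: mean response ≈ {mean_response(sigma):.2f}")
```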

    TCP congestion control and AQM mechanisms

    In recent years, the importance of delay relative to throughput has been increasingly emphasized. Our networks are becoming more and more sensitive to latency due to the proliferation of applications and services such as VoIP, IPTV, and online gaming, where low delay is essential for proper performance and a good user experience. Most of this unnecessary delay is created by the misbehaviour of many buffers that populate the Internet. Instead of performing the task they were created for, absorbing occasional packet bursts to prevent loss, they deceive the sender’s congestion control mechanism into believing that the current path to the destination has more bandwidth than it really has. When the loss event occurs, if it does, it is too late and the damage on the path, in the form of additional transmission time, has already been done. This bachelor thesis sheds light on a specific class of solutions that aims to reduce the extra delay produced by these bloated buffers: Active Queue Management (AQM). We have tested a set of AQM algorithms together with different TCP congestion-control modifications in order to understand the interactions between these two mechanisms, running simulations over several characteristic scenarios such as transoceanic links and network access links, among others.
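    The abstract does not name which AQM algorithms were tested, but Random Early Detection (RED) is the classic example of the mechanism described: drop packets probabilistically before the buffer fills so that congestion control backs off early. A minimal sketch with textbook-style, illustrative thresholds:

```python
import random

class REDQueue:
    """Minimal RED drop decision: keep an EWMA of the queue length and
    drop with probability rising linearly from 0 at min_th to max_p at
    max_th (always dropping above max_th)."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0

    def should_drop(self, queue_len, rng=random):
        self.avg += self.weight * (queue_len - self.avg)   # EWMA update
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        frac = (self.avg - self.min_th) / (self.max_th - self.min_th)
        return rng.random() < self.max_p * frac

# A slightly overloaded link: RED throttles the sender early instead
# of letting the buffer bloat.
rng = random.Random(0)
red, qlen, drops = REDQueue(), 0, 0
for _ in range(50_000):
    if red.should_drop(qlen, rng):
        drops += 1                  # early drop signals congestion
    else:
        qlen += 1                   # admit the arriving packet
    if qlen and rng.random() < 0.95:
        qlen -= 1                   # the link drains a bit slower than arrivals
print(f"dropped {drops} packets; average queue ≈ {red.avg:.1f}")
```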

    Abstract Dependency Graphs for Model Verification


    Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems); however, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar, and can be contrasted with systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms and systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete-event techniques.

    New Approaches for Analyzing Systems with History-Dependent Efficiency

    In my dissertation work, I propose two novel models for analyzing systems in which operational efficiency depends on past history, e.g., systems with a human in the loop and energy-harvesting sensors. First, I investigate a queueing system with a single server that serves multiple queues with different types of tasks. The server has a state that is affected by current and past actions, and the task completion probability of each type of task is a function of the server state. A task scheduling policy is specified by a function that determines the probability of assigning a task to the server. The main results for multiple types of tasks include: (i) necessary and sufficient conditions for the existence of a randomized stationary policy that stabilizes the queues; and (ii) the existence of threshold-type policies that can stabilize any stabilizable system. For a single-type system, I also identify task scheduling policies under which the utilization rate is arbitrarily close to that of an optimal policy that minimizes the utilization rate, where the utilization rate is defined as the long-term fraction of time the server is required to work. Second, I study a remote estimation problem over an activity-dependent packet-drop link. The link undergoes packet drops and has an (activity) state that is influenced by past transmission requests; the packet-drop probability is governed by a given function of the link’s state. A scheduler determines the probability of a transmission request based on the link’s state. The main results include: (i) necessary and sufficient conditions for the existence of a randomized stationary policy that stabilizes the estimation error in the second-moment sense; and (ii) the existence of deterministic policies that can stabilize any stabilizable system. The second result implies that it suffices to search over deterministic strategies for stabilizing the estimation error; the search can be further narrowed to threshold policies when the function for the packet-drop probability is non-decreasing.
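    A toy version of the second model makes the threshold idea concrete. Everything here is an invented instance for illustration (a Gaussian random-walk error process and drop probability min(0.9, 0.2·s) as the non-decreasing function of the activity state s), not the dissertation's exact formulation:

```python
import random

def second_moment_error(threshold, n_steps=200_000, seed=7):
    """Threshold policy for remote estimation over an activity link:
    request a transmission only when |error| >= threshold. Each request
    heats the link state s (raising the drop probability); idling cools it."""
    rng = random.Random(seed)
    err, s, acc = 0.0, 0, 0.0
    for _ in range(n_steps):
        err += rng.gauss(0.0, 1.0)            # estimation error drifts
        if abs(err) >= threshold:             # threshold policy: transmit
            s += 1
            if rng.random() >= min(0.9, 0.2 * s):
                err = 0.0                     # packet delivered: resync
        else:
            s = max(0, s - 1)                 # link activity decays
        acc += err * err                      # accumulate squared error
    return acc / n_steps

for th in (0.5, 1.0, 2.0, 4.0):
    print(f"threshold {th}: E[e^2] ≈ {second_moment_error(th):.2f}")
```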

    Automatic Scaling in Cloud Computing

    This dissertation thesis deals with automatic scaling in cloud computing, mainly focusing on the performance of interactive workloads, that is, web servers and services, running in an elastic cloud environment. In the first part of the thesis, the possibility of forecasting the daily curve of workload is evaluated using long-range seasonal techniques of statistical time-series analysis. The accuracy is high enough to enable either green computing or filling the unused capacity with batch jobs, hence the need for long-range forecasts. The second part focuses on simulations of automatic scaling, which is necessary for the interactive workload to actually free up capacity when it is not being utilized at its peak. Cloud users are mostly wary of letting a machine control their servers, which is why realistic simulations are needed. We have explored two methods: event-driven simulation and queue-theoretic models. During work on the first, we extended the widely used CloudSim simulation package to dynamically scale the simulated setup at run time and corrected its engine using knowledge from queueing theory. Our own simulator then relies solely on theoretical models, making it much more precise and much faster than the more general CloudSim. Together, the tools from the two parts constitute a theoretical foundation which, once implemented in practice, can help leverage cloud technology to actually increase the efficiency of data center hardware. In particular, the main contributions of the dissertation thesis are as follows: 1. a new methodology for forecasting time series of web server load, and its validation; 2. an extension of the widely used CloudSim simulator for interactive load, increasing the accuracy of its output; 3. the design and implementation of a fast and accurate simulator of automatic scaling based on queueing theory.
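    The queue-theoretic approach suggests a compact way to turn a load forecast into a scaling decision: model the server pool as an M/M/c queue and pick the smallest c whose Erlang-C mean waiting time meets a target. The formula below is standard queueing theory; the service rate, target, and forecast curve are hypothetical, not the thesis's data:

```python
import math

def erlang_c(c, a):
    """Erlang-C: probability an arriving job must wait in an M/M/c
    queue with offered load a = lambda/mu and c servers (requires a < c)."""
    base = sum(a**k / math.factorial(k) for k in range(c))
    tail = a**c / math.factorial(c) * c / (c - a)
    return tail / (base + tail)

def servers_needed(lam, mu, target_wait):
    """Smallest c whose M/M/c mean queueing delay
    E[Wq] = C(c, a) / (c*mu - lam) stays below target_wait."""
    c = math.floor(lam / mu) + 1               # smallest stable pool size
    while erlang_c(c, lam / mu) / (c * mu - lam) > target_wait:
        c += 1
    return c

# Turn an (assumed) hourly request-rate forecast into server counts.
forecast = [20, 12, 8, 30, 75, 90, 60, 40]     # req/s per hour, hypothetical
mu = 10.0                                      # one server handles 10 req/s
for hour, lam in enumerate(forecast):
    print(f"hour {hour}: lambda={lam:>2} req/s -> {servers_needed(lam, mu, 0.05)} servers")
```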