
    A hierarchical multiprocessor bandwidth reservation scheme with timing guarantees

    A multiprocessor scheduling scheme is presented for supporting hierarchical containers that encapsulate sporadic soft and hard real-time tasks. In this scheme, each container is allocated a specified bandwidth, which it uses to schedule its children (some of which may themselves be containers). The scheme is novel in that, with only soft real-time tasks, no utilization loss is incurred when provisioning containers, even in arbitrarily deep hierarchies. The presented experiments show that the proposed scheme performs well compared to conventional real-time scheduling techniques that do not provide container isolation.
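    As an illustration of the container model described above, the following is a minimal Python sketch (the Node class, field names, and the numbers are mine, not from the paper, which gives no code): each container receives a bandwidth and provisions its children out of it, and a simple recursive check verifies that no level is over-provisioned. The paper's contribution is precisely that, for soft real-time tasks, such provisioning need not cause utilization loss.

        # Illustrative sketch only (not the paper's algorithm).
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            name: str
            bandwidth: float                  # fraction of processing capacity, e.g. 2.5 = 2.5 CPUs
            children: list["Node"] = field(default_factory=list)

        def provisioning_ok(node: Node) -> bool:
            """True if, at every level, the children's bandwidths fit within the parent's."""
            if not node.children:
                return True
            fits = sum(c.bandwidth for c in node.children) <= node.bandwidth
            return fits and all(provisioning_ok(c) for c in node.children)

        # usage example: a root with 4 CPUs' worth of capacity and two containers
        root = Node("root", 4.0, [
            Node("container_A", 2.5, [Node("task_1", 1.2), Node("task_2", 1.3)]),
            Node("container_B", 1.5),
        ])
        print(provisioning_ok(root))   # True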

    Planificación global en sistemas multiprocesador de tiempo real

    This thesis addresses the scheduling of real-time systems on shared-memory multiprocessors. According to the literature, this problem is NP-hard. Real-time applications impose deadlines on the completion of tasks, so what matters is obtaining results on time rather than achieving high average performance. The traditional solution has been to partition the tasks among the processors at design time and to treat each processor as an isolated uniprocessor. The alternative, global multiprocessor scheduling, has a poorly developed theory: the system utilization for which deadlines can be guaranteed is very low, around 50%, and the spare capacity can hardly be used to serve aperiodic tasks. The main goal of this thesis is therefore global scheduling with deadline guarantees and good service for aperiodic tasks, using up to 100% of the processing capacity.

    First, four distribution options were studied: static or dynamic allocation, for periodic or aperiodic tasks. Aperiodic tasks were served by two different methods: with servers and without servers. For dynamic distributions, the server-based method ran into difficulties in dimensioning the servers and in guaranteeing the deadlines. The server-less methods tested were the Slack Stealing and Total Bandwidth schedulers; both could only be adapted to static allocation of the periodic tasks. Simulations showed that local scheduling with Slack Stealing and a Next-Fit distributor of aperiodic tasks provides the best mean response times for aperiodic tasks, but at very high loads the response time degrades sharply. All methods tested up to this point were discarded for global scheduling.

    Second, the Dual Priority algorithm was adapted to global scheduling. Its behavior on uniprocessors was analyzed first and several improvements were made. The algorithm depends on an off-line computation of the worst-case response times of the periodic tasks, and the formula used to compute them on uniprocessors is not valid for multiprocessors. Three methods for computing them were therefore analyzed: an analytical method, a simulation-based method, and an algorithmic method.
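    For context, a minimal sketch of the classical uniprocessor fixed-priority response-time recurrence that this off-line computation alludes to (the notation is mine, not the thesis': C_i is the worst-case execution time, T_j the period, D_i the deadline, and hp(i) the set of higher-priority tasks):

        R_i^{(0)} = C_i, \qquad
        R_i^{(k+1)} = C_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil C_j

    Iterating to a fixed point yields the worst-case response time R_i; in the Dual Priority literature the promotion delay is then commonly chosen as Y_i = D_i - R_i, so a job may remain in the low-priority band for at most Y_i time units after its release. The point made in the abstract is that this recurrence is no longer valid once jobs may execute on several processors.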
    The first method yields values that are too pessimistic; the second yields tighter values that are sometimes too optimistic; the third is an approximate method that can yield both optimistic and pessimistic values. Consequently, this approach does not guarantee the deadlines and cannot be used in hard real-time systems. It is, however, well suited to soft real-time systems: with on-line monitoring and dynamic adjustment of the promotions, the number of missed deadlines is very low and the response time of aperiodic tasks is excellent.

    Finally, a hybrid solution between static allocation of the periodic tasks and global scheduling is presented. At design time, the periodic tasks are distributed among the processors and their promotions are computed for local scheduling. At run time, a periodic task may execute on any processor until its promotion instant, at which point it must migrate to its assigned processor. This guarantees the deadlines while allowing a certain degree of dynamic load balancing. The flexibility provided by task promotions and load balancing is used (i) to admit periodic tasks that would otherwise not be schedulable, (ii) to serve aperiodic tasks, and (iii) to serve aperiodic tasks with deadlines, i.e. sporadic tasks. For each of the three cases, several design-time methods for distributing the periodic tasks were designed and analyzed, together with a method to reduce the number of migrations. Simulations showed that this approach can sustain purely periodic loads very close to 100%, far above the roughly 50% bound of global scheduling theory, and that the mean response time of aperiodic tasks is very good. An acceptance test for sporadic tasks was designed such that, if a task is accepted, its deadline is guaranteed; the acceptance rate obtained in the experiments was above 80%. Finally, a pre-runtime distribution method for the periodic tasks was devised that makes it possible, at run time, to accept a high percentage of sporadic tasks while maintaining a good average level of service for aperiodic tasks.
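    A minimal Python sketch of the migration rule described in the last paragraph (illustrative only; the class, names, and parameter values are mine, since the thesis gives no code): before its promotion a job may be picked up by any processor, after its promotion it must run on the processor chosen at design time.

        # Illustrative sketch of the hybrid Dual Priority migration rule.
        from dataclasses import dataclass

        @dataclass
        class Task:
            name: str
            wcet: float          # C_i, worst-case execution time
            period: float        # T_i (deadline assumed equal to the period here)
            home_cpu: int        # processor chosen by the design-time partitioning
            promotion: float     # Y_i: offset after release at which the job is promoted

        def eligible_cpus(task: Task, release: float, now: float, num_cpus: int) -> list[int]:
            """CPUs on which a job released at `release` may execute at time `now`."""
            if now - release < task.promotion:
                # Low-priority band: the job may run on any processor, which is
                # what enables dynamic load balancing and aperiodic service.
                return list(range(num_cpus))
            # Promoted: the job must run (at high priority) on its home processor,
            # which is what preserves the design-time deadline guarantee.
            return [task.home_cpu]

        # usage example
        t = Task("tau_1", wcet=2.0, period=10.0, home_cpu=0, promotion=6.5)
        print(eligible_cpus(t, release=0.0, now=3.0, num_cpus=4))   # [0, 1, 2, 3]
        print(eligible_cpus(t, release=0.0, now=7.0, num_cpus=4))   # [0]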

    Scheduling and locking in multiprocessor real-time operating systems

    With the widespread adoption of multicore architectures, multiprocessors are now a standard deployment platform for (soft) real-time applications. This dissertation addresses two questions fundamental to the design of multicore-ready real-time operating systems: (1) which scheduling policies offer the greatest flexibility in satisfying temporal constraints, and (2) which locking algorithms should be used to avoid unpredictable delays? With regard to Question 1, LITMUS^RT, a real-time extension of the Linux kernel, is presented and its design is discussed in detail. Notably, LITMUS^RT implements link-based scheduling, a novel approach to controlling blocking due to non-preemptive sections. Each implemented scheduler (22 configurations in total) is evaluated, taking overheads into account, on a 24-core Intel Xeon platform. The experiments show that partitioned earliest-deadline-first (EDF) scheduling is generally preferable in a hard real-time setting, whereas global and clustered EDF scheduling are effective in a soft real-time setting. With regard to Question 2, real-time locking protocols are required to ensure that the maximum delay due to priority inversion can be bounded a priori. Several spinlock- and semaphore-based multiprocessor real-time locking protocols for mutual exclusion (mutex), reader-writer (RW) exclusion, and k-exclusion are proposed and analyzed. A new category of RW locks suited to worst-case analysis, termed phase-fair locks, is proposed, and three efficient phase-fair spinlock implementations are provided (one with few atomic operations, one with low space requirements, and one with constant RMR complexity). Maximum priority-inversion blocking is proposed as a natural complexity measure for semaphore protocols. It is shown that there are two classes of schedulability analysis, namely suspension-oblivious and suspension-aware analysis, that yield two different lower bounds on blocking. Five asymptotically optimal locking protocols are designed and analyzed: a family of mutex, RW, and k-exclusion protocols for global, partitioned, and clustered scheduling that are asymptotically optimal in the suspension-oblivious case, and a mutex protocol for partitioned scheduling that is asymptotically optimal in the suspension-aware case. A LITMUS^RT-based empirical evaluation is presented that shows these protocols to be practical.
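    To illustrate the partitioned-EDF configuration that the abstract reports as generally preferable for hard real-time workloads, here is a minimal Python sketch (not LITMUS^RT code; the function name and example utilizations are mine): tasks with implicit deadlines are assigned to cores first-fit, and a core accepts a task as long as its total utilization stays at or below 1, the exact EDF schedulability condition on a single processor.

        # Illustrative sketch: first-fit-decreasing partitioning under partitioned EDF.
        from typing import Optional

        def partition_first_fit(utilizations: list[float], num_cpus: int) -> Optional[list[list[float]]]:
            """Assign task utilizations (C_i / T_i) to cores; None if partitioning fails."""
            cores: list[list[float]] = [[] for _ in range(num_cpus)]
            for u in sorted(utilizations, reverse=True):   # decreasing-utilization heuristic
                for core in cores:
                    if sum(core) + u <= 1.0:               # uniprocessor EDF bound per core
                        core.append(u)
                        break
                else:
                    return None                            # no core can accommodate this task
            return cores

        # usage example
        print(partition_first_fit([0.6, 0.5, 0.4, 0.3, 0.2], num_cpus=2))
        # -> [[0.6, 0.4], [0.5, 0.3, 0.2]]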