A weakly hard scheduling approach of partitioned scheduling on multiprocessor systems
Real-time systems and tasks can be classified into three categories based on the seriousness of deadline misses: hard, soft, and weakly hard. In a hard real-time system, every task must meet its deadline because the consequences of a miss can be prohibitively expensive, whereas soft real-time tasks tolerate some deadline misses. In a weakly hard real-time task, the distribution of met and missed deadlines is stated and specified precisely. As real-time applications are increasingly implemented on multiprocessor platforms, this study applies a multiprocessor scheduling approach to verify weakly hard real-time tasks and to guarantee their timing requirements. On a multiprocessor, the task allocation problem is even harder than in the uniprocessor case; to address it, a sufficient and efficient scheduling algorithm, supported by an accurate schedulability analysis technique, is presented to provide weakly hard real-time guarantees. The proposed approach combines a partitioned multiprocessor scheduling technique with a solution to the bin-packing problem, called R-BOUND-MP-NFRNS (R-BOUND-MP with next-fit-ring no scaling), together with an exact analysis, namely hyperperiod analysis, and deadline models (weakly hard constraints and the µ-pattern) under static-priority scheduling. Matlab simulations are then used to validate the analysis. The evaluation results show that the proposed approach outperforms existing approaches in terms of deadline satisfaction.
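R-BOUND-MP-NFRNS relies on the R-BOUND schedulability test with period scaling disabled; its details are in the paper itself. Purely as an illustration of the underlying next-fit-ring partitioning idea, here is a minimal sketch that substitutes the classic Liu-Layland rate-monotonic bound for the paper's test; the bound choice, function names, and `(C, T)` task format are assumptions for this sketch, not the paper's.

```python
def rm_bound(n):
    """Liu-Layland utilization bound for n tasks under rate-monotonic scheduling."""
    return n * (2 ** (1.0 / n) - 1)

def next_fit_partition(tasks, num_procs):
    """Assign (C, T) tasks to processors next-fit-ring style: stay on the
    current processor while the RM bound holds, otherwise advance (wrapping
    around the ring); return None if no processor can take a task."""
    assignment = [[] for _ in range(num_procs)]
    p = 0
    for c, t in tasks:
        start = p
        while True:
            trial = assignment[p] + [(c, t)]
            if sum(ci / ti for ci, ti in trial) <= rm_bound(len(trial)):
                assignment[p] = trial
                break                     # next-fit: keep p where the task landed
            p = (p + 1) % num_procs       # "ring": wrap around to earlier processors
            if p == start:
                return None               # every processor was tried and refused
    return assignment
```

A sufficient utilization-bound test like this is pessimistic but cheap, which is why partitioned schemes pair it with bin-packing heuristics such as next-fit.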
Energy-aware Fault-tolerant Scheduling for Hard Real-time Systems
Over the past several decades, real-time systems have grown tremendously in both scale and complexity. This progress is made possible largely by advances in semiconductor technology that have enabled the continuous scaling and massive integration of transistors on a single chip. At the same time, however, relentless transistor scaling and integration have dramatically increased power consumption and substantially degraded system reliability. Traditional real-time scheduling techniques, with their sole emphasis on guaranteeing timing constraints, have become insufficient.
In this research, we studied how to develop advanced scheduling methods for hard real-time systems subject to multiple design constraints, in particular timing, energy consumption, and reliability. To this end, we first investigated the energy minimization problem with fault-tolerance requirements for dynamic-priority hard real-time tasks on a single-core processor. Three scheduling algorithms were developed to judiciously trade off fault tolerance against energy reduction, since the two design objectives usually conflict. We then shifted our focus from single-core to multi-core platforms, as the latter are becoming mainstream. Specifically, we studied fault-tolerant multi-core scheduling for fixed-priority tasks, since fixed-priority scheduling is one of the most commonly used schemes in industry today. For such systems, we developed several checkpointing-based partitioning strategies that jointly consider fault tolerance and energy minimization. Finally, we exploited the implicit relations between real-time tasks to make judicious partitioning decisions aimed at improving system schedulability.
According to the simulation results, our design strategies are very promising for emerging systems and applications where timeliness, fault tolerance, and energy reduction must be addressed simultaneously.
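The dissertation's own checkpointing-based partitioning strategies are not reproduced here. As general background for the overhead-versus-recovery trade-off it navigates, the classic equidistant-checkpointing model for tolerating a single fault can be sketched as follows; the single-fault assumption, equal segment lengths, and all names are illustrative, not the dissertation's formulation.

```python
from math import sqrt, ceil, floor

def wcet_with_recovery(C, o, k):
    """Worst-case completion time of a task with WCET C, checkpoint overhead o,
    and k checkpoints (k+1 equal segments), tolerating at most one fault:
    base execution + checkpoint overhead + re-execution of one segment."""
    return C + k * o + C / (k + 1)

def best_checkpoint_count(C, o):
    """Pick the integer k minimizing the expression above. The real-valued
    optimum is sqrt(C/o) - 1, so only its floor and ceiling need checking."""
    k_star = sqrt(C / o) - 1
    candidates = {max(0, floor(k_star)), max(0, ceil(k_star))}
    return min(candidates, key=lambda k: wcet_with_recovery(C, o, k))
```

More checkpoints shrink the re-execution cost after a fault but add overhead on every fault-free run, which is exactly why checkpoint placement interacts with both energy and schedulability.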
Memory-Aware Scheduling for Fixed Priority Hard Real-Time Computing Systems
As a major component of a computing system, memory has been a key performance and power-consumption bottleneck in computer system design. While processor speeds have kept rising dramatically, the overall performance of the entire system is limited by how fast memory can feed instructions and data to the processing units (the so-called memory wall). The increasing transistor density and surging access demands from a rapidly growing number of processing cores have also significantly elevated the power consumption of the memory system. In addition, interference among memory accesses from different applications and processing cores significantly degrades computational predictability, which is essential to ensuring the timing specifications of real-time designs. Recent IC technologies (such as 3D-IC) and emerging data-intensive real-time applications (such as virtual/augmented reality, artificial intelligence, and the Internet of Things) further amplify these challenges. We believe it is not simply desirable but necessary to adopt a joint CPU/memory resource-management framework to deal with these grave challenges.
In this dissertation, we study how to schedule fixed-priority hard real-time tasks with memory impacts taken into consideration. We target the fixed-priority scheduling scheme since it is one of the most commonly used strategies in practical real-time applications. Specifically, we first develop an approach that considers not only execution-time variation under different cache allocations but also the relationships among task periods, showing a significant improvement in system feasibility. We then study how to guarantee timing constraints for hard real-time systems under both CPU and memory thermal constraints. We begin with an architecture model in which a single core and its main memory are individually packaged: we develop a thermal model that captures the thermal interaction between processor and memory, and incorporate the periodic resource server model into our scheduling framework to guarantee both timing and thermal constraints. We further extend the research to multi-core architectures with processing cores and memory devices integrated into a single 3D platform. To the best of our knowledge, this is the first work to guarantee hard deadline constraints for real-time tasks under temperature constraints on both processing cores and memory devices. Extensive simulation results demonstrate that the proposed scheduling significantly improves the feasibility of hard real-time systems under thermal constraints.
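The cache-aware and thermal-aware analyses above build on standard fixed-priority schedulability machinery. A minimal sketch of the classic response-time iteration for fixed-priority tasks with implicit deadlines follows; this is textbook analysis, not the dissertation's extended model, and the priority-ordered `(C, T)` task format is an assumption of the sketch.

```python
from math import ceil

def response_time(tasks, i):
    """Worst-case response time of task i in a priority-ordered list of
    (C, T) tasks with implicit deadlines (D = T), via the standard
    fixed-point iteration R = C_i + sum_{j<i} ceil(R / T_j) * C_j.
    Returns None if the iteration exceeds the deadline."""
    C_i, T_i = tasks[i]
    r = C_i
    while True:
        r_next = C_i + sum(ceil(r / T_j) * C_j for C_j, T_j in tasks[:i])
        if r_next > T_i:
            return None        # deadline miss: task i is unschedulable
        if r_next == r:
            return r           # fixed point reached
        r = r_next
```

Memory- and temperature-aware variants typically inflate the `C` terms (cache-dependent execution times) or insert server idle intervals, then rerun the same fixed-point argument.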
On the Design of Real-Time Systems on Multi-Core Platforms under Uncertainty
Real-time systems are computing systems that must guarantee not only the logical correctness of computational results but also their timing. To ensure timing constraints, traditional real-time system designs usually adopt a worst-case, deterministic approach. Such an approach, however, is falling out of sync with the continuous evolution of IC technology and the increasing complexity of real-time applications. As IC technology evolves into the deep sub-micron domain, process variation causes processor performance to vary from die to die, chip to chip, and even core to core. The extensive resource sharing on multi-core platforms also significantly increases the uncertainty of real-time task execution. The traditional approach can then only lead to extremely pessimistic, and thus impractical, real-time system designs.
Our research seeks to address the uncertainty problem when designing real-time systems on multi-core platforms. We first attacked the uncertainty caused by process variation: we proposed a virtualization framework and developed techniques to optimize the system's performance under process variation. We then studied peak-temperature minimization for real-time applications on multi-core platforms, developing three heuristics to reduce the peak temperature. Next, we addressed the uncertainty in real-time task execution times by developing statistical real-time scheduling techniques. We studied fixed-priority scheduling of implicit-deadline periodic tasks with probabilistic execution times on multi-core platforms, and further extended the research to tasks with explicit deadlines: we introduced the harmonic concept to this more general class of task sets and developed new task-partitioning techniques. Throughout our research, we conducted extensive simulations to study the effectiveness and efficiency of the developed techniques.
The increasing process variation and the ever-growing scale and complexity of real-time systems both demand a paradigm shift in the design of real-time applications. Effectively dealing with uncertainty in this design is a challenging but critical problem. Our research is one effort in this endeavor, and we conclude the dissertation with a discussion of potential future work.
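The harmonic concept mentioned in this abstract generalizes a well-known property: task sets whose periods are pairwise harmonic (each period divides the next larger one) are rate-monotonic schedulable up to 100% utilization, well beyond the ~69% Liu-Layland bound. A minimal sketch of the basic harmonic test follows; the dissertation's generalization to explicit deadlines is more involved and is not reproduced here.

```python
def is_harmonic(periods):
    """True if, after sorting, each period exactly divides the next.
    For such harmonic task sets, rate-monotonic scheduling is feasible
    whenever total utilization does not exceed 1."""
    ps = sorted(periods)
    return all(ps[i + 1] % ps[i] == 0 for i in range(len(ps) - 1))
```

This is why partitioning heuristics often try to group tasks with harmonic (or nearly harmonic) periods onto the same processor: the per-processor utilization that can be guaranteed rises sharply.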
Global scheduling in real-time multiprocessor systems (Planificación global en sistemas multiprocesador de tiempo real)
This thesis addresses the problem of scheduling real-time systems on shared-memory multiprocessors. According to the literature, this problem is NP-hard. Real-time applications impose deadlines on task completion, so what matters is obtaining results on time rather than achieving high average performance. The traditional solution has been to partition the tasks among the processors at design time and treat each processor as an isolated uniprocessor. The alternative, global multiprocessor scheduling, has a comparatively undeveloped theory: the system utilization bounds with deadline guarantees are very low, around 50%, and the spare capacity can hardly be used to serve aperiodic tasks. The main goal of this thesis is therefore global scheduling with deadline guarantees and good service for aperiodic tasks, using up to 100% of the processing capacity.
First, we explored four distribution possibilities: static or dynamic allocation of the periodic or aperiodic tasks. Aperiodic tasks were served with two different methods, with servers and without servers. In the dynamic distributions, the server-based method ran into difficulties with server sizing and deadline guarantees. The server-less methods tested were the Slack Stealing and Total Bandwidth schedulers; both could be adapted only to static allocation of the periodic tasks. Simulations showed that local scheduling with Slack Stealing and a Next-Fit distributor of aperiodic tasks gives the best mean response times for aperiodic tasks; at very high loads, however, the response time explodes. All the methods tried up to this point were ruled out for global scheduling.
Second, the Dual Priority algorithm was adapted to global scheduling. We first analyzed its behavior on uniprocessors and made several improvements. The algorithm depends on an off-line computation of the worst-case response times of the periodic tasks, and the uniprocessor formula is not valid for multiprocessors. We therefore analyzed three ways of computing them: an analytical method, a simulation-based method, and an algorithmic method. The first yields values that are too pessimistic; the second yields tighter values that are sometimes too optimistic; the third is an approximate method that produces both optimistic and pessimistic values. This last method therefore cannot guarantee deadlines and cannot be used in hard real-time systems. In soft real-time systems, however, with on-line monitoring and dynamic adjustment of the promotions, the number of missed deadlines is very low and the response time of aperiodic tasks is excellent.
Finally, we present a hybrid of static allocation of the periodic tasks and global scheduling. At design time, the periodic tasks are distributed among the processors and their promotion instants are computed for local scheduling. At run time, a periodic task may execute on any processor until its promotion instant, at which point it must migrate to its assigned processor. This guarantees the deadlines while allowing a certain degree of dynamic load balancing. The flexibility conferred by task promotions and load balancing is used to (i) admit periodic tasks that would otherwise not be schedulable, (ii) serve aperiodic tasks, and (iii) serve aperiodic tasks with deadlines, or sporadic tasks. For all three cases, we designed and analyzed several design-time distribution methods for the periodic tasks, as well as a method to reduce the number of migrations. Simulations showed that with this method, purely periodic loads very close to 100% can be scheduled, far beyond the 50% of global scheduling theory, and that the mean response time of aperiodic tasks is very good. We designed an acceptance test for sporadic tasks such that if a task is accepted, its deadline is guaranteed; the acceptance ratio obtained in the experiments exceeded 80%. Finally, we devised a pre-runtime distribution method for the periodic tasks that enables a high run-time acceptance ratio for sporadic tasks while maintaining a good average level of service for aperiodic tasks.
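The Dual Priority scheme described above hinges on promotion instants derived from worst-case response times: a task may run at a low background priority until time D - R before its deadline, then is promoted. A uniprocessor sketch follows, assuming implicit deadlines (D = T) and a priority-ordered `(C, T)` task list; as the abstract notes, the multiprocessor promotion calculation is considerably harder and is the thesis's actual subject.

```python
from math import ceil

def worst_case_response(tasks, i):
    """Standard fixed-priority response-time iteration for task i of a
    priority-ordered list of (C, T) tasks with implicit deadlines."""
    C_i, T_i = tasks[i]
    r = C_i
    while True:
        nxt = C_i + sum(ceil(r / T) * C for C, T in tasks[:i])
        if nxt > T_i:
            return None        # task i misses its deadline
        if nxt == r:
            return r
        r = nxt

def promotion_times(tasks):
    """Dual Priority promotion instants: task i may run at background
    priority for T_i - R_i time units after release (D_i = T_i here),
    then must be promoted to its fixed high priority."""
    out = []
    for i, (C, T) in enumerate(tasks):
        R = worst_case_response(tasks, i)
        out.append(None if R is None else T - R)
    return out
```

The later the promotion instant, the more slack is left for serving aperiodic tasks at the higher background priority, which is the mechanism the thesis exploits for load balancing and aperiodic service.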