
    Scheduling Techniques for Operating Systems for Medical and IoT Devices: A Review

    Software and hardware synthesis are the major subtasks in the implementation of hardware/software systems. An increasing trend is to build SoCs, NoCs, and embedded systems for Implantable Medical Devices (IMD) and Internet of Things (IoT) devices that include multiple microprocessors and signal processors, allowing complex hardware and software systems to be designed while remaining flexible with respect to the delivered performance and the executed application. An important technique that affects the macroscopic characteristics of a system implementation is the scheduling of hardware operations, program instructions, and software processes. This paper presents a survey of the various strategies in process scheduling. Process scheduling has to take real-time constraints into account. Processes are characterized by their timing constraints, periodicity, precedence and data dependency, pre-emptivity, priority, etc. The effect of these characteristics on scheduling decisions is described in this paper.
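
    As an illustration of the task characteristics the review enumerates, the sketch below (not taken from the paper; the task names and the rate-monotonic policy are assumptions chosen for the example) models periodic tasks and a preemptive fixed-priority pick in Python.

        from dataclasses import dataclass

        @dataclass
        class Task:
            name: str
            period: float      # activation period; deadline assumed equal to period
            wcet: float        # worst-case execution time
            preemptable: bool = True

        def rm_pick(ready):
            """Return the ready task with the shortest period (highest RM priority)."""
            return min(ready, key=lambda t: t.period) if ready else None

        ready = [Task("sensor_read", period=5.0, wcet=1.0),
                 Task("telemetry", period=20.0, wcet=4.0)]
        print(rm_pick(ready).name)   # -> sensor_read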

    Sample exam questions.

    All original material in this collection is distributed under a Free Culture license. You are free to share, remix, and make commercial use of the work, under the Attribution and Share Alike conditions.

    Energy-efficient thermal-aware multiprocessor scheduling for real-time tasks using TCPNs

    We present an energy-efficient thermal-aware real-time global scheduler for a set of hard real-time (HRT) tasks running on a multiprocessor system. This global scheduler fulfills the thermal and temporal constraints by handling two independent variables: the task allocation time and the selection of the clock frequency. To achieve its goal, the proposed scheduler is split into two stages. An off-line stage, based on a deadline partitioning scheme, computes the cycles that the HRT tasks must run per deadline interval at the minimum clock frequency to save energy while honoring the temporal and thermal constraints, and computes the maximum frequency at which the system can run below the maximum temperature. Then, an on-line, event-driven stage performs global task allocation applying a Fixed-Priority Zero-Laxity policy, reducing the overhead of quantum-based or interval-based global schedulers. The on-line stage embodies an adaptive scheduler that accepts or rejects soft real-time aperiodic tasks, throttling the CPU frequency to the lowest available frequency above the required one to minimize power consumption while meeting timing and thermal constraints. This approach leverages the best of two worlds: the off-line stage computes an ideal discrete HRT multiprocessor schedule, while the on-line stage manages soft real-time aperiodic tasks with minimum power consumption and maximum CPU utilization.
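
    The on-line allocation idea can be sketched as follows. This is an illustrative Python fragment, not the authors' implementation; the task fields (deadline, remaining, priority) and the m-core selection are assumptions made for the example.

        def laxity(task, now):
            # laxity = time to deadline minus remaining execution demand
            return (task["deadline"] - now) - task["remaining"]

        def fpzl_select(ready, m, now):
            """Pick up to m tasks to run: zero-laxity tasks first, then by fixed priority."""
            urgent = [t for t in ready if laxity(t, now) <= 0]
            rest = sorted((t for t in ready if t not in urgent),
                          key=lambda t: t["priority"])   # lower value = higher priority
            return (urgent + rest)[:m]

        ready = [{"name": "hrt1", "deadline": 10, "remaining": 2, "priority": 1},
                 {"name": "hrt2", "deadline": 6,  "remaining": 6, "priority": 3}]
        print([t["name"] for t in fpzl_select(ready, m=2, now=0)])  # hrt2 has zero laxity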

    Control techniques for thermal-aware energy-efficient real time multiprocessor scheduling

    The use of multicore microprocessors is not only attractive to industry but, in many domains, the only option. Real-time scheduling on these platforms is much more complex than on uniprocessors and in general worsens the over-provisioning problem, leading to the use of many more processors/cores than necessary. Algorithms based on fluid scheduling that optimize processor utilization have been proposed, but so far they generally present drawbacks that keep them far from practical application, not the least of which is the high number of context switches and migrations. This thesis starts from the hypothesis that it is possible to design algorithms based on fluid scheduling that optimize processor utilization while meeting temporal, thermal, and energy constraints, with a low number of context switches and migrations, and that are compatible both with the off-line generation of cyclic executives attractive to industry and with schedulers that integrate run-time control techniques enabling the efficient management of aperiodic tasks as well as parametric deviations or small perturbations. In this regard, this thesis contributes several solutions. First, it improves a modelling methodology that represents all dimensions of the problem under a single formalism (Timed Continuous Petri Nets). Second, it proposes a method for generating a cyclic executive, computed in processor cycles, for a set of hard real-time tasks on multiprocessors that optimizes the utilization of the processing cores while also respecting thermal and energy constraints, on the basis of a fluid schedule. Accounting for the overhead derived from the number of context switches and migrations in a cyclic executive poses a causality dilemma: the number of context switches (and hence their overhead) is not known until the cyclic executive is generated, but that number cannot be minimized until it has been computed. The thesis proposes a solution to this dilemma through an iterative method with proven convergence that minimizes the aforementioned overhead. In short, the thesis exploits the idea of fluid scheduling to maximize utilization (maximizing utilization being a major problem in industry) while generating a simple cyclic executive with minimum overhead (overhead being a major problem of fluid-scheduling-based schedulers). Finally, a method is proposed to use the references of the off-line schedule established in the cyclic executive as tracking targets for an on-line frequency controller, so that small perturbations and parametric variations can be handled, integrating the management of aperiodic (soft real-time) tasks while ensuring the integrity of the execution of the hard real-time task set. These contributions constitute a novelty in the field, endorsed by the publications derived from this thesis work.
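
    The iterative resolution of the causality dilemma might be sketched as below. This is purely schematic Python under assumed interfaces (generate_executive and count_switches stand in for the thesis's TCPN-based generation and measurement steps), not the thesis's actual algorithm.

        def iterate_overhead(tasks, switch_cost, generate_executive, count_switches,
                             max_iters=20):
            """Fold context-switch overhead back into task demand until the count stabilizes."""
            n_switches = 0
            for _ in range(max_iters):
                # distribute the estimated switch overhead evenly across tasks (simplification)
                inflated = [dict(t, wcet=t["wcet"] + n_switches * switch_cost / len(tasks))
                            for t in tasks]
                executive = generate_executive(inflated)
                new_count = count_switches(executive)
                if new_count == n_switches:      # fixed point reached
                    return executive
                n_switches = new_count
            return executive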

    On-Device Deep Learning Inference for System-on-Chip (SoC) Architectures

    As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems will become increasingly critical. This is especially true for deep learning solutions, whose large models pose interesting challenges for target architectures at the “edge” that are resource-constrained. The realization of machine learning, and deep learning, is being driven by the availability of specialized hardware, such as system-on-chip solutions, which provide some alleviation of constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general-purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove to be critical as these technologies move to the edge. In order to address some of these challenges, we present a resource management framework designed to provide a dynamic on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. These types of mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and then verified the most time-critical applications, such as the control tasks, which maintained low-latency deterministic behavior even during off-nominal conditions. The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely-used, industry-standard real-time operating systems.
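
    The paper's own schedulability analysis is not reproduced here; as a hedged illustration of screening randomly generated task sets, the sketch below applies the classic Liu and Layland utilization bound for rate-monotonic scheduling on a single core.

        import random

        def random_task_set(n, total_util):
            """Split total_util across n tasks (simple proportional split, not UUniFast)."""
            weights = [random.random() for _ in range(n)]
            s = sum(weights)
            return [total_util * w / s for w in weights]

        def rm_utilization_test(utils):
            """Liu & Layland bound: sum of utilizations <= n * (2^(1/n) - 1)."""
            n = len(utils)
            return sum(utils) <= n * (2 ** (1.0 / n) - 1)

        sets = [random_task_set(5, 0.7) for _ in range(1000)]
        accepted = sum(rm_utilization_test(u) for u in sets)
        print(f"{accepted}/1000 task sets pass the RM bound")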

    Simulation of Efficient Real-Time Scheduling and Power Optimisation

    Sophisticated applications are executed on more than one CPU for practical and economic reasons. Due to advances in circuit technology and performance limitations, multi-core technology has become mainstream in CPU design. However, the most serious limitation of these devices is battery lifetime, since battery technology is not keeping up with the power-hungry processors and peripherals used in today's mobile devices. As a solution, many investigations have turned toward power-management algorithms combined with scheduling policies. These can achieve significant energy savings while preserving the temporal constraints of embedded systems. Reducing energy consumption not only extends battery lifetime, but also reduces the heat generated by real-time embedded controllers in various products and, at large scale, decreases the cooling requirements and costs of giant multiprocessor computers. To assess the behavior and performance of a scheduling strategy, a flexible multiprocessor scheduling simulation and evaluation platform is needed. This paper puts forth the claim that the STORM simulator improves application quality in terms of both execution time and energy consumption for high-performance mobile embedded system design.
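
    As a rough illustration of combining power management with a scheduling policy (not STORM's API; the task and frequency values are made up for the example), the sketch below picks the lowest clock frequency that keeps a single-core EDF task set feasible.

        def lowest_feasible_frequency(tasks, frequencies, f_max):
            """tasks: list of (cycles_per_job, period_s); frequencies: available rates in Hz."""
            for f in sorted(frequencies):                       # try slowest first
                util = sum(cycles / f / period for cycles, period in tasks)
                if util <= 1.0:                                 # EDF feasibility on one core
                    return f
            return f_max                                        # fall back to full speed

        tasks = [(2e6, 0.010), (8e6, 0.050)]                    # (cycles per job, period in s)
        print(lowest_feasible_frequency(tasks, [200e6, 400e6, 800e6], 800e6))  # -> 400 MHz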

    An Analytical Solution for Probabilistic Guarantees of Reservation Based Soft Real-Time Systems

    We show a methodology for the computation of the probability of deadline miss for a periodic real-time task scheduled by a resource reservation algorithm. We propose a modelling technique for the system that reduces the computation of such a probability to that of the steady-state probability of an infinite-state Discrete Time Markov Chain with a periodic structure. This structure is exploited to develop an efficient numeric solution where different accuracy/computation time trade-offs can be obtained by operating on the granularity of the model. More importantly, we offer a closed-form conservative bound for the probability of a deadline miss. Our experiments reveal that the bound remains reasonably close to the experimental probability in one real-time application of practical interest. When this bound is used for the optimisation of the overall Quality of Service for a set of tasks sharing the CPU, it produces a good sub-optimal solution in a small amount of time. (Comment: IEEE Transactions on Parallel and Distributed Systems, Volume 27, Issue 3, March 201)
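
    The closed-form bound itself is not reproduced here; as a point of comparison, one might estimate the deadline-miss probability empirically. The sketch below is a simple Monte Carlo model of a periodic task served by a CPU reservation, under assumptions (budget granted once per task period, deadline at the end of the period) that simplify the paper's setting.

        import random

        def miss_probability(exec_times, budget, deadline_periods=1, n_jobs=100_000):
            """exec_times: callable returning a random execution demand (in budget units)."""
            backlog, misses = 0.0, 0
            for _ in range(n_jobs):
                backlog += exec_times()                 # a new job arrives each period
                # the reservation grants `budget` units of service per period
                periods_needed = -(-backlog // budget)  # ceiling division
                if periods_needed > deadline_periods:
                    misses += 1
                backlog = max(0.0, backlog - budget)
            return misses / n_jobs

        # Example: demand uniform in [0.6, 1.2] budgets per period, deadline = one period
        print(miss_probability(lambda: random.uniform(0.6, 1.2), budget=1.0))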