
    Scheduling policies and system software architectures for mixed-criticality computing

    The mixed-criticality model of computation is being increasingly adopted in timing-sensitive systems. The model not only ensures that the most critical tasks in a system never fail, but also aims for better system resource utilization under normal conditions. In this report, we describe the widely used mixed-criticality task model and fixed-priority scheduling algorithms for that model on uniprocessors. Because the mixed-criticality task model and its scheduling policies demand it, temporal and spatial isolation among tasks is one of the main requirements from the system design point of view. Different virtualization techniques have been used to design system software architectures with isolation as the goal. We discuss a few such system software architectures that are being, or can be, used for the mixed-criticality model of computation.
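
    To make the task model concrete, the sketch below implements a Vestal-style dual-WCET task set with a fixed-priority response-time test: standard RTA in the LO mode and an AMC-rtb-style bound for HI-criticality tasks. This is a minimal sketch of one common analysis for the model described above; the task set, priorities and helper names are illustrative assumptions, not taken from the report.

```python
import math
from dataclasses import dataclass

# Minimal sketch of a Vestal-style dual-criticality task model with a
# fixed-priority response-time test (standard RTA in LO mode, an
# AMC-rtb-style bound for HI tasks). Task set and priorities are illustrative.

@dataclass
class Task:
    name: str
    period: float    # also the deadline (implicit deadlines assumed)
    c_lo: float      # WCET budget in LO mode
    c_hi: float      # WCET budget in HI mode (only relevant for HI tasks)
    crit: str        # "LO" or "HI"

def rta(task, hp, cost):
    """Iterative response-time analysis: R = C + sum(ceil(R/T_j) * C_j) over
    the higher-priority tasks hp, with per-task cost given by cost(t)."""
    r = cost(task)
    while True:
        r_new = cost(task) + sum(math.ceil(r / t.period) * cost(t) for t in hp)
        if r_new > task.period:
            return None            # deadline miss
        if abs(r_new - r) < 1e-9:
            return r_new
        r = r_new

def schedulable(tasks):
    """tasks must be ordered by decreasing fixed priority."""
    for i, t in enumerate(tasks):
        hp = tasks[:i]
        r_lo = rta(t, hp, lambda x: x.c_lo)
        if r_lo is None:
            return False
        if t.crit == "HI":
            # AMC-rtb-style bound after a mode switch: HI interference at C_hi,
            # LO interference capped by the LO-mode response time r_lo.
            hp_hi = [x for x in hp if x.crit == "HI"]
            hp_lo = [x for x in hp if x.crit == "LO"]
            r = t.c_hi
            while True:
                r_new = (t.c_hi
                         + sum(math.ceil(r / x.period) * x.c_hi for x in hp_hi)
                         + sum(math.ceil(r_lo / x.period) * x.c_lo for x in hp_lo))
                if r_new > t.period:
                    return False
                if abs(r_new - r) < 1e-9:
                    break
                r = r_new
    return True

# Example with hypothetical values, highest priority first.
tasks = [Task("sensor", 10, 2, 4, "HI"),
         Task("logger", 20, 3, 3, "LO"),
         Task("control", 40, 8, 12, "HI")]
print(schedulable(tasks))
```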

    Dynamic Speed Scaling Minimizing Expected Energy Consumption for Real-Time Tasks

    This paper proposes a Discrete Time Markov Decision Process (MDP) approach to compute the optimal on-line speed scaling policy to minimize the energy consumption of a single processor executing a finite or infinite set of jobs with real-time constraints. We provide several qualitative properties of the optimal policy: monotonicity with respect to the job parameters, and a comparison with on-line deterministic algorithms. Numerical experiments in several scenarios show that our approach performs well when compared with off-line optimal solutions and outperforms on-line solutions that are oblivious to statistical information on the jobs.
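
    A toy version of the approach, reduced to backward induction over a finite-horizon MDP whose state is the pair (pending work, steps to deadline), is sketched below. The cubic power model, Bernoulli arrivals and single common deadline are simplifying assumptions made for illustration; they are not the job model analyzed in the paper.

```python
# Toy finite-horizon MDP for online speed scaling (illustrative assumptions:
# power = s**3 per step, Bernoulli arrivals of fixed-size work, one common
# deadline at the end of the horizon). Backward induction yields, for every
# state (pending work w, steps left t), the speed that minimizes expected energy.

SPEEDS = [0, 1, 2, 3, 4]     # work units processed per time step
P_ARRIVAL, ARRIVAL_SIZE = 0.3, 2
HORIZON, W_MAX = 8, 16
MISS_PENALTY = 1e6           # stands in for "infeasible" states

def solve():
    # V[t][w] = minimal expected energy with w units pending and t steps left.
    V = [[0.0] * (W_MAX + 1) for _ in range(HORIZON + 1)]
    policy = [[0] * (W_MAX + 1) for _ in range(HORIZON + 1)]
    for w in range(1, W_MAX + 1):
        V[0][w] = MISS_PENALTY          # work left past the deadline
    for t in range(1, HORIZON + 1):
        for w in range(W_MAX + 1):
            best, best_s = None, SPEEDS[0]
            for s in SPEEDS:
                done = max(w - s, 0)
                # Expected future cost over the arrival distribution.
                future = (P_ARRIVAL * V[t - 1][min(done + ARRIVAL_SIZE, W_MAX)]
                          + (1 - P_ARRIVAL) * V[t - 1][done])
                cost = s ** 3 + future
                if best is None or cost < best:
                    best, best_s = cost, s
            V[t][w], policy[t][w] = best, best_s
    return V, policy

if __name__ == "__main__":
    V, policy = solve()
    for w in range(0, 9, 2):
        print(f"pending={w:2d}  speed={policy[HORIZON][w]}  "
              f"expected energy={V[HORIZON][w]:.2f}")
```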

    Control techniques for thermal-aware energy-efficient real time multiprocessor scheduling

    The use of multicore microprocessors is not only attractive to industry; in many domains it is the only option. Real-time scheduling on these platforms is much more complex than on uniprocessors and generally worsens the over-provisioning problem, leading to the use of many more processors/cores than necessary. Algorithms based on fluid scheduling have been proposed that optimize processor utilization, but so far they generally exhibit drawbacks that keep them away from practical application, not least the high number of context switches and migrations. This thesis starts from the hypothesis that it is possible to design fluid-scheduling-based algorithms that optimize processor utilization while meeting timing, thermal and energy constraints, with a low number of context switches and migrations, and that are compatible both with the offline generation of cyclic executives attractive to industry and with schedulers that integrate run-time control techniques for the efficient handling of aperiodic tasks as well as parametric deviations or small perturbations.

    In this respect, the thesis makes several contributions. First, it improves a modelling methodology that represents all dimensions of the problem under a single formalism (Timed Continuous Petri Nets). Second, it proposes a method to generate a cyclic executive, computed in processor cycles, for a set of hard real-time tasks on a multiprocessor that optimizes the utilization of the processing cores while also respecting thermal and energy constraints, on the basis of a fluid schedule. Accounting for the overhead caused by context switches and migrations in a cyclic executive poses a causality dilemma: the number of context switches (and hence their overhead) is not known until the cyclic executive has been generated, yet that number cannot be minimized until it has been computed. The thesis resolves this dilemma with an iterative method of proven convergence that minimizes this overhead. In short, the thesis exploits the idea of fluid scheduling to maximize utilization (a major concern in industry) while generating a simple cyclic executive with minimal overhead (overhead being a major problem of fluid-scheduling-based schedulers).

    Finally, a method is proposed to use the references of the offline schedule embodied in the cyclic executive as the setpoint tracked by an online frequency controller, so that small perturbations and parametric variations can be handled and the management of aperiodic (soft real-time) tasks can be integrated while preserving the integrity of the hard real-time task set's execution. These contributions are novel in the field, as endorsed by the publications derived from this thesis work.
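
    The last contribution, an online frequency controller that tracks the reference of the offline fluid schedule, can be pictured with the minimal proportional-tracking sketch below; the fluid rate, gain, clamping limits and disturbance used here are illustrative assumptions rather than the controller developed in the thesis.

```python
# Minimal sketch (assumptions: linear cycles-vs-frequency model, proportional
# control, illustrative gains): an online frequency controller that tracks the
# cumulative work reference of an offline fluid schedule.

def fluid_reference(t, rate):
    """Cumulative cycles the fluid schedule expects to have executed by time t."""
    return rate * t

def track_frequency(executed_cycles, t, rate, f_nom, k_p, f_min, f_max):
    """Pick the next operating frequency from the tracking error.

    executed_cycles: cycles actually completed so far (measured at run time)
    rate:            fluid execution rate of the task set (cycles per second)
    f_nom:           frequency that matches the fluid rate exactly
    k_p:             proportional gain (cycles of backlog -> Hz of correction)
    """
    error = fluid_reference(t, rate) - executed_cycles  # > 0 means we are behind
    f = f_nom + k_p * error
    return max(f_min, min(f_max, f))

# Example: 1 ms control period, task set needs 80e6 cycles/s on a 100 MHz core.
if __name__ == "__main__":
    rate, f_nom = 80e6, 80e6
    executed, f = 0.0, f_nom
    for step in range(1, 11):
        t = step * 1e-3
        # Pretend a disturbance slows execution by 5% in every interval.
        executed += 0.95 * f * 1e-3
        f = track_frequency(executed, t, rate, f_nom, k_p=1000.0,
                            f_min=20e6, f_max=100e6)
        print(f"t={t*1e3:4.1f} ms  backlog={rate*t - executed:8.0f} cycles  "
              f"f={f/1e6:6.2f} MHz")
```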

    Accounting for preemption and migration costs in the calculation of hard real-time cyclic executives for MPSoCs

    This work introduces a methodology to account for preemption and migration overhead in hard real-time cyclic executives on multicore architectures. The approach iterates over two stages. The first stage takes a cyclic executive, from which the number and timing of all preemptions and migrations of every task are known, and includes this overhead by updating the worst-case execution times (WCETs) of the tasks. The second stage calculates a new cyclic executive using the updated WCETs. The stages iterate until the preemption and migration overhead remains constant. © 2016 IEEE
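
    The fixed-point idea behind the two stages can be illustrated with the small sketch below, which uses McNaughton's wrap-around rule as a stand-in single-frame scheduler and charges a fixed cost per migration; the scheduler, the cost model and the task values are illustrative assumptions rather than the method evaluated in the paper.

```python
# Sketch of the two-stage iteration: build a cyclic executive, count the
# migrations it implies, fold their cost into the WCETs, and repeat until the
# overhead stops changing. McNaughton's wrap-around rule serves as a simple
# stand-in scheduler for one major frame; the cost model is illustrative.

def mcnaughton(wcet, m, frame):
    """Wrap-around schedule of {task: wcet} on m processors within one frame.
    Returns a list of (task, processor, start, end) segments."""
    assert sum(wcet.values()) <= m * frame and max(wcet.values()) <= frame
    segments, proc, t = [], 0, 0.0
    for task, c in wcet.items():
        remaining = c
        while remaining > 1e-9:
            chunk = min(remaining, frame - t)
            segments.append((task, proc, t, t + chunk))
            remaining -= chunk
            t += chunk
            if frame - t < 1e-9:
                proc, t = proc + 1, 0.0
    return segments

def overhead_aware_executive(wcet, m, frame, migration_cost, max_iter=32):
    inflated = dict(wcet)
    for _ in range(max_iter):
        segments = mcnaughton(inflated, m, frame)
        per_task = {task: 0 for task in wcet}
        for task, *_ in segments:
            per_task[task] += 1
        # Each extra segment of a task implies one migration/preemption.
        new = {task: wcet[task] + migration_cost * (per_task[task] - 1)
               for task in wcet}
        if new == inflated:           # overhead has converged
            return segments, inflated
        inflated = new
    return segments, inflated

# Hypothetical task set: WCETs in ms, 2 processors, 10 ms frame, 0.2 ms per migration.
segments, inflated = overhead_aware_executive(
    {"A": 6.0, "B": 5.0, "C": 4.0, "D": 3.0}, m=2, frame=10.0, migration_cost=0.2)
for seg in segments:
    print(seg)
```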

    Temporal analysis and scheduling of hard real-time radios running on a multi-processor

    On a multi-radio baseband system, multiple independent transceivers must share the resources of a multi-processor, while each meeting its own hard real-time requirements. Not all possible combinations of transceivers are known at compile time, so a solution must be found that either allows for independent timing analysis or relies on runtime timing analysis. This thesis proposes a design flow and software architecture that meets these challenges, while enabling features such as independent transceiver compilation and dynamic loading, and taking into account other challenges such as ease of programming, efficiency, and ease of validation. We take data flow as the basic model of computation, as it fits the application domain, and several static variants (such as Single-Rate, Multi-Rate and Cyclo-Static) have been shown to possess strong analytical properties. Traditional temporal analysis of data flow can provide minimum throughput guarantees for a self-timed implementation of data flow. Since transceivers may need to guarantee strictly periodic execution and meet latency requirements, we extend the analysis techniques to show that we can enforce strict periodicity for an actor in the graph; we also provide maximum latency analysis techniques for periodic, sporadic and bursty sources.

    We propose a scheduling strategy and an automatic scheduling flow that enable the simultaneous execution of multiple transceivers with hard real-time requirements, described as Single-Rate Data Flow (SRDF) graphs. Each transceiver has its own execution rate and starts and stops independently from other transceivers, at times unknown at compile time, on a multiprocessor. We show how to combine scheduling and mapping decisions with the input application data flow graph to generate a worst-case temporal analysis graph. We propose algorithms to find a mapping per transceiver in the form of clusters of statically-ordered actors, and a budget for either a Time Division Multiplex (TDM) or Non-Preemptive Non-Blocking Round Robin (NPNBRR) scheduler per cluster per transceiver. The budget is computed such that if the platform can provide it, then the desired minimum throughput and maximum latency of the transceiver are guaranteed, while minimizing the required processing resources. We illustrate the use of these techniques to map a combination of WLAN and TDS-CDMA receivers onto a prototype Software-Defined Radio platform.

    The functionality of transceivers for standards with very dynamic behavior – such as WLAN – cannot be conveniently modeled as an SRDF graph, since SRDF is not capable of expressing variations of actor firing rules depending on the values of input data. Because of this, we propose a restricted, customized data flow model of computation, Mode-Controlled Data Flow (MCDF), that can capture the data-value-dependent behavior of a transceiver, while allowing rigorous temporal analysis and tight resource budgeting. We develop a number of analysis techniques to characterize the temporal behavior of MCDF graphs, in terms of maximum latencies and throughput. We also provide an extension to MCDF of our scheduling strategy for SRDF. The capabilities of MCDF are then illustrated with a WLAN 802.11a receiver model.

    Having computed budgets for each transceiver, we propose a way to use these budgets for run-time resource mapping and admissibility analysis. During run-time, at transceiver start time, the budget for each cluster of statically-ordered actors is allocated by a resource manager to platform resources. The resource manager enforces strict admission control, to restrict transceivers from interfering with each other’s worst-case temporal behaviors. We propose algorithms adapted from Vector Bin-Packing to enable the mapping at start time of transceivers to the multi-processor architecture, considering also the case where the processors are connected by a network-on-chip with resource reservation guarantees, in which case we also find routing and resource allocation on the network-on-chip. In our experiments, our resource allocation algorithms can keep 95% of the system resources occupied, while suffering from an allocation failure rate of less than 5%.

    An implementation of the framework was carried out on a prototype board. We present performance and memory utilization figures for this implementation, as they provide insight into the costs of adopting our approach. It turns out that, for an unoptimized implementation of the framework with no hardware support for synchronization, the scheduling and synchronization overhead is 16.3% of the cycle budget for a WLAN receiver on an EVP processor at 320 MHz. However, this overhead is less than 1% for mobile standards such as TDS-CDMA or LTE, which have lower rates, and thus larger cycle budgets. Considering that clock speeds will increase and that the synchronization primitives can be optimized to exploit the addressing modes available in the EVP, these results are very promising.
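
    The throughput guarantee mentioned above can be illustrated on a small SRDF graph: the self-timed throughput is bounded by the maximum cycle ratio, i.e. the largest value over all cycles of total actor execution time divided by the initial tokens on the cycle. The brute-force cycle enumeration and the three-actor example below are illustrative assumptions and are only suitable for tiny graphs.

```python
# Sketch: the guaranteed throughput of a self-timed SRDF graph is 1 / MCR,
# where MCR is the maximum over all cycles of
# (sum of actor execution times on the cycle) / (initial tokens on the cycle).
# Brute-force cycle enumeration, fine for the small illustrative graph below.

def simple_cycles(edges):
    """Yield simple cycles (as node lists) of a directed graph given as
    {node: [successor, ...]}. Exponential in general; meant for tiny graphs."""
    for root in sorted(edges):
        stack = [(root, [root])]
        while stack:
            node, path = stack.pop()
            for nxt in edges.get(node, []):
                if nxt == root:
                    yield path[:]
                elif nxt > root and nxt not in path:
                    stack.append((nxt, path + [nxt]))

def max_cycle_ratio(exec_time, edges, tokens):
    """exec_time: {actor: time}; edges: {actor: [successors]};
    tokens: {(src, dst): initial tokens on that edge}."""
    mcr = 0.0
    for cycle in simple_cycles(edges):
        pairs = list(zip(cycle, cycle[1:] + cycle[:1]))
        tok = sum(tokens.get(e, 0) for e in pairs)
        if tok == 0:
            raise ValueError(f"deadlock: token-free cycle {cycle}")
        mcr = max(mcr, sum(exec_time[a] for a in cycle) / tok)
    return mcr

# Hypothetical three-actor transceiver kernel with feedback edges.
exec_time = {"src": 2.0, "demod": 5.0, "dec": 3.0}
edges = {"src": ["demod"], "demod": ["dec"], "dec": ["src", "demod"]}
tokens = {("dec", "src"): 1, ("dec", "demod"): 2}   # back edges carry tokens

mcr = max_cycle_ratio(exec_time, edges, tokens)
print(f"maximum cycle ratio = {mcr}, guaranteed throughput = {1 / mcr:.4f} iterations/time unit")
```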

    A Discrete-Time Markov Decision Process for Minimizing Energy under Deadline Constraints

    This paper proposes a Discrete Time Markov Decision Process (MDP) approach to compute the optimal on-line speed scaling policy to minimize the energy consumption of a single processor executing a finite or infinite set of jobs with real-time constraints. We provide several qualitative properties of the optimal policy: monotonicity with respect to the job parameters, and a comparison with on-line deterministic algorithms. Numerical experiments in several scenarios show that our approach performs well when compared with off-line optimal solutions and outperforms on-line solutions that are oblivious to statistical information on the jobs.

    Application analyses of ultra-low-energy processor

    Low energy consumption has become a critical design feature in modern systems. The Internet of Things, wearables and other portable devices create an increasing demand for low-power design, where device size is dictated by the battery, and low energy means longer battery life and smaller physical size. These are crucial features for wearables and especially for implantable medical devices. Several low-power and energy-efficiency techniques are applied at different abstraction levels of system design. One technique that usually combines software control with hardware features is dynamic voltage and frequency scaling (DVFS), a dynamic power management technique that decreases the processor clock frequency and supply voltage. The reduction in energy consumption is achieved at the cost of reduced performance. One of the open questions with DVFS is how the execution frequencies are chosen. This thesis presents a method for frequency optimization for applications executed on a single-core processor. Execution trace data is used to profile the application. The FreeRTOS operating system is used, although tracing can be implemented on any real-time operating system that executes tasks as separate threads. Based on the profiling and user-defined data, task execution frequencies are derived under the assumption that execution time scales linearly with frequency. A near-threshold ARM Cortex-M3 with integrated power management and a phase-locked loop is used for the measurements. The measurements show that energy savings can be achieved without affecting correct application execution. However, the reduction in energy consumption depends strongly on the system used and on the application's execution profile. Iterative testing and frequency optimization are required to ensure adequate performance. For energy-efficiency optimization, energy consumption needs to be considered in every phase of the design.
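
    A minimal version of the frequency-selection step described above, under the thesis's assumption that execution time scales linearly with clock frequency, might look like the sketch below; the frequency set, cycle counts, deadlines and task names are illustrative assumptions.

```python
# Sketch of per-task frequency selection from profiling data, assuming that
# execution time scales linearly with clock frequency: pick the lowest
# available frequency at which the profiled cycle count still meets the
# task's deadline. Frequencies, cycle counts and deadlines are illustrative.

AVAILABLE_MHZ = [12, 24, 48, 96]           # e.g. PLL-derived operating points

def pick_frequency(cycles, deadline_ms, frequencies=AVAILABLE_MHZ):
    """Return the lowest frequency (MHz) that finishes `cycles` of work
    within `deadline_ms`, or None if even the highest frequency is too slow."""
    for f in sorted(frequencies):
        exec_ms = cycles / (f * 1e3)        # cycles / (cycles per ms)
        if exec_ms <= deadline_ms:
            return f
    return None

# Profiled worst-case cycle counts per task (e.g. from RTOS trace data)
# and their deadlines in ms; purely hypothetical numbers.
profile = {
    "sample_adc":   (30_000, 5.0),
    "filter_block": (180_000, 10.0),
    "send_packet":  (400_000, 25.0),
}

for task, (cycles, deadline) in profile.items():
    print(f"{task:13s} -> {pick_frequency(cycles, deadline)} MHz")
```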

    Generalizing List Scheduling for Stochastic Soft Real-time Parallel Applications

    Advanced architecture processors provide features such as caches and branch prediction that result in improved, but variable, execution time of software. Hard real-time systems require tasks to complete within timing constraints. Consequently, hard real-time systems are typically designed conservatively through the use of tasks' worst-case execution times (WCET) in order to compute deterministic schedules that guarantee tasks' execution within given time constraints. This use of pessimistic execution time assumptions provides real-time guarantees at the cost of decreased performance and resource utilization. In soft real-time systems, however, meeting deadlines is not an absolute requirement (i.e., missing a few deadlines does not severely degrade system performance or cause catastrophic failure). In such systems, a guaranteed minimum probability of completing by the deadline is sufficient. Therefore, there is considerable latitude in such systems for improving resource utilization and performance as compared with hard real-time systems, through the use of more realistic execution time assumptions. Given probability distribution functions (PDFs) representing tasks' execution time requirements, and tasks' communication and precedence requirements, represented as a directed acyclic graph (DAG), this dissertation proposes and investigates algorithms for constructing non-preemptive stochastic schedules. New PDF manipulation operators developed in this dissertation are used to compute tasks' start and completion time PDFs during schedule construction. PDFs of the schedules' completion times are also computed and used to systematically trade the probability of meeting end-to-end deadlines for schedule length and jitter in task completion times. Because of the NP-hard nature of the non-preemptive DAG scheduling problem, the new stochastic scheduling algorithms extend traditional heuristic list scheduling and genetic list scheduling algorithms for DAGs by using PDFs instead of fixed time values for task execution requirements. The stochastic scheduling algorithms also account for delays caused by communication contention, typically ignored in prior DAG scheduling research. Extensive experimental results are used to demonstrate the efficacy of the new algorithms in constructing stochastic schedules. Results also show that through the use of the techniques developed in this dissertation, the probability of meeting deadlines can be usefully traded for performance and jitter in soft real-time systems.
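
    The PDF manipulation at the heart of the approach can be pictured with two discrete operators, a convolution for sums of independent execution times and a maximum for join points in the DAG; this is a simplified stand-in for the operators developed in the dissertation, and the fork-join example with its distributions is an illustrative assumption.

```python
from collections import defaultdict

# Sketch of discrete PDF operators for stochastic schedule construction:
# 'add' (convolution) models the sum of independent durations, 'pmax' models
# a join that must wait for all predecessors (independence assumed).

def add(p, q):
    """PDF of X + Y for independent X ~ p, Y ~ q (dicts value -> probability)."""
    out = defaultdict(float)
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] += px * qy
    return dict(out)

def pmax(p, q):
    """PDF of max(X, Y) for independent X ~ p, Y ~ q."""
    out = defaultdict(float)
    for x, px in p.items():
        for y, qy in q.items():
            out[max(x, y)] += px * qy
    return dict(out)

def prob_by(p, deadline):
    """Probability that the completion time is at most `deadline`."""
    return sum(prob for t, prob in p.items() if t <= deadline)

# Fork-join DAG: A and B run in parallel, C starts after both and ends the graph.
A = {3: 0.7, 5: 0.3}
B = {4: 0.6, 6: 0.4}
C = {2: 0.5, 4: 0.5}

completion = add(pmax(A, B), C)   # finish(C) = max(finish(A), finish(B)) + exec(C)
print("completion-time PDF:", completion)
print("P(meet deadline of 8):", prob_by(completion, 8))
```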

    The Control Server Model for Co-Design of Real-Time Control Systems

    The paper presents the control server, a real-time scheduling mechanism tailored to control and signal processing applications. A control server creates the abstraction of a control task with a specified period and a fixed input-output latency shorter than the period. Individual tasks can be combined into more complex components without loss of their individual guaranteed fixed-latency properties. I/O occurs at fixed predefined points in time, at which inputs are read or controller outputs become visible. The control server model is especially suited for co-design of real-time control systems. The single parameter linking the scheduling design and the controller design is the task utilization factor. The proposed server is an extension of the constant bandwidth server, which is based on the earliest-deadline-first scheduling algorithm. The server has been implemented in a real-time kernel and has also been validated in control experiments on a ball and beam process.
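
    The co-design link through the utilization factor can be pictured with a small sketch: each control task is wrapped in a server with utilization U = C/T, jitter-free I/O points are placed at the start of each period and at the fixed latency, and an EDF-based admission test requires the utilizations to sum to at most one. The task parameters below are illustrative assumptions, not the paper's experiments.

```python
# Sketch of control-server-style budgeting: each control task gets a server
# with utilization U = C / T, a fixed input point at the start of each period
# and a fixed output point L time units later (L < T). Admission under an
# EDF-based server (as in the constant bandwidth server) requires sum(U) <= 1.
# Task parameters are illustrative.

from dataclasses import dataclass

@dataclass
class ControlTask:
    name: str
    period: float     # T
    wcet: float       # C, total execution per period
    latency: float    # L, fixed input-to-output latency (L < T)

    @property
    def utilization(self):
        return self.wcet / self.period

def admit(tasks):
    """EDF-style admission test over server utilizations."""
    return sum(t.utilization for t in tasks) <= 1.0

def io_points(task, horizon):
    """Fixed, jitter-free I/O instants within [0, horizon): (input, output) pairs."""
    points, t = [], 0.0
    while t < horizon:
        points.append((t, t + task.latency))
        t += task.period
    return points

tasks = [ControlTask("ball_and_beam", period=10.0, wcet=2.0, latency=4.0),
         ControlTask("filter",        period=20.0, wcet=6.0, latency=8.0)]

print("admitted:", admit(tasks))
for t in tasks:
    print(t.name, "U =", round(t.utilization, 2), "I/O:", io_points(t, 40.0))
```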