42 research outputs found

    Monte Carlo Response-Time Analysis


    Turning Futexes Inside-Out: Efficient and Deterministic User Space Synchronization Primitives for Real-Time Systems with IPCP

    In Linux and other operating systems, futexes (fast user space mutexes) are the underlying synchronization primitives used to implement POSIX synchronization mechanisms such as blocking mutexes, condition variables, and semaphores. Futexes allow one to implement mutexes with excellent performance by avoiding system calls in the fast path. However, futexes are fundamentally limited to synchronization mechanisms that are expressible as atomic operations on 32-bit variables. At the operating system kernel level, futex implementations require complex mechanisms to look up internal wait queues, making them susceptible to determinism issues. In this paper, we present an alternative design for futexes that moves the complexity of wait queue management entirely from the operating system kernel into user space, i.e., we turn futexes "inside out". The enabling mechanisms for "inside-out futexes" are an efficient implementation of the immediate priority ceiling protocol (IPCP) to achieve non-preemptive critical sections in user space, spinlocks for mutual exclusion, and interwoven services to suspend or wake up threads. The design allows us to implement common thread synchronization mechanisms in user space and to move determinism concerns out of the kernel while keeping the performance properties of futexes. The presented approach is suitable for multi-processor real-time systems with partitioned fixed-priority (P-FP) scheduling on each processor. We evaluate the approach with an implementation of mutexes and condition variables in a real-time operating system (RTOS). Experimental results on 32-bit ARM platforms show that the approach is feasible and that overheads are driven by the low-level synchronization primitives.
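
    As an illustration of the futex-style fast path the abstract refers to (lock and unlock without a system call in the uncontended case), here is a minimal C sketch of the well-known three-state futex mutex pattern. The futex_wait/futex_wake declarations are stand-ins for the kernel interface, and the paper's inside-out, user-space wait-queue design is not shown.

    #include <stdatomic.h>
    #include <stdint.h>

    /* Stand-ins for the kernel interface: block on / wake waiters of a 32-bit word. */
    extern void futex_wait(_Atomic uint32_t *addr, uint32_t expected);
    extern void futex_wake(_Atomic uint32_t *addr);

    enum { UNLOCKED = 0, LOCKED = 1, CONTENDED = 2 };

    static void mutex_lock(_Atomic uint32_t *m)
    {
        uint32_t c = UNLOCKED;
        /* Fast path: a single compare-and-swap in user space, no system call. */
        if (atomic_compare_exchange_strong(m, &c, LOCKED))
            return;
        /* Slow path: mark the lock contended and block until it is released. */
        if (c != CONTENDED)
            c = atomic_exchange(m, CONTENDED);
        while (c != UNLOCKED) {
            futex_wait(m, CONTENDED);
            c = atomic_exchange(m, CONTENDED);
        }
    }

    static void mutex_unlock(_Atomic uint32_t *m)
    {
        /* Only a contended unlock needs to enter the kernel to wake a waiter. */
        if (atomic_exchange(m, UNLOCKED) == CONTENDED)
            futex_wake(m);
    }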

    Efficiently Approximating the Probability of Deadline Misses in Real-Time Systems

    This paper explores the probability of deadline misses for a set of constrained-deadline sporadic soft real-time tasks on uniprocessor platforms. We explore two directions for evaluating the probability that a job of the task under analysis finishes its execution at (or before) a testing time point t. One approach is based on analytical upper bounds that can be computed efficiently in polynomial time at the price of precision loss for each testing point, derived from the well-known Hoeffding's and Bernstein's inequalities. The other approach convolutes the probability efficiently over multinomial distributions, exploiting a series of state space reduction techniques: pruning without any loss of precision, and approximations via unifying equivalence classes with a bounded loss of precision. We demonstrate the effectiveness of our approaches in a series of evaluations. Distinct from the convolution-based methods in the literature, which suffer from high computation demand and are applicable only to task sets with a few tasks, our approaches scale reasonably without losing much precision in the derived probability of deadline misses.
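
    For reference, the first family of bounds builds on the classical Hoeffding inequality (the paper's exact instantiation over job execution times is not reproduced here): for independent random variables X_i taking values in [a_i, b_i], with sum S = X_1 + ... + X_n,

    \[
        \Pr\bigl(S - \mathbb{E}[S] \ge t\bigr) \;\le\; \exp\!\left(\frac{-2\,t^{2}}{\sum_{i}(b_i - a_i)^{2}}\right).
    \]

    Applied to the execution demand accumulated up to a testing point t, a bound of this shape can be evaluated in polynomial time, which is the efficiency/precision trade-off mentioned above.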

    Analysis and Simulation Tools for Probabilistic Real-Time Systems

    In this paper we present two tools for simulating and analyzing probabilistic real-time task sets, that is, task sets whose timing parameters are represented by discrete probability distributions. We describe the main features of each tool and provide the configuration details necessary to use them. The two tools are compared, pointing out the advantages and disadvantages of each one, so that interested users can make an informed choice about which tool best fits their needs. Both tools are open source and freely available. One of the main objectives of this paper is to make these tools available to the real-time systems research community, which is also invited to participate in their improvement by giving feedback and even extending the implementations.
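
    As a minimal, tool-independent illustration of the objects such simulators and analysers operate on, the C sketch below represents a discrete execution-time distribution as an array of probabilities indexed by time and convolves two of them; the bound MAX_T and the example distributions are invented for the example and are not taken from either tool.

    #include <stdio.h>

    #define MAX_T 64   /* illustrative bound on the accumulated execution time */

    /* out[t] = sum over t1 + t2 == t of a[t1] * b[t2] (discrete convolution). */
    static void convolve(const double a[MAX_T], const double b[MAX_T], double out[MAX_T])
    {
        for (int t = 0; t < MAX_T; t++)
            out[t] = 0.0;
        for (int t1 = 0; t1 < MAX_T; t1++)
            for (int t2 = 0; t1 + t2 < MAX_T; t2++)
                out[t1 + t2] += a[t1] * b[t2];
    }

    int main(void)
    {
        double c1[MAX_T] = {0}, c2[MAX_T] = {0}, sum[MAX_T];
        c1[2] = 0.9; c1[5] = 0.1;   /* task 1 executes for 2 w.p. 0.9, 5 w.p. 0.1 */
        c2[3] = 0.8; c2[7] = 0.2;   /* task 2 executes for 3 w.p. 0.8, 7 w.p. 0.2 */
        convolve(c1, c2, sum);
        for (int t = 0; t < MAX_T; t++)
            if (sum[t] > 0.0)
                printf("P(total demand = %d) = %.2f\n", t, sum[t]);
        return 0;
    }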

    Analysing TDMA with slot skipping

    We propose a schedulability analysis for a particular class of time division multiple access (TDMA) networks, which we label TDMA/SS. SS stands for slot skipping, reflecting the fact that a slot is skipped whenever it is not used; hence, the next slot can start earlier, to the benefit of hard real-time traffic. In the proposed schedulability analysis, we assume knowledge of all message streams in the system, and that each node schedules the messages in its output queue according to a rate monotonic policy (as an example). We present the analysis in two steps. First, we address the case where a node is permitted to transmit at most one message per TDMA cycle. Second, we generalise the analysis to the case where each node is assigned a budget of messages it may transmit per TDMA cycle. A simple algorithm for assigning budgets to nodes is also presented.
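
    Purely as an illustration of the queueing assumption above (each node orders its output queue rate-monotonically), the C sketch below sorts a hypothetical stream descriptor array by period; the struct fields are assumptions made for the example, not the paper's model.

    #include <stdlib.h>

    /* Hypothetical descriptor of a message stream queued at a node. */
    struct msg_stream {
        unsigned period;   /* release period of the stream */
        unsigned length;   /* transmission time, e.g. in slots */
    };

    /* Rate monotonic order: the shorter the period, the earlier in the queue. */
    static int rm_cmp(const void *a, const void *b)
    {
        const struct msg_stream *x = a, *y = b;
        return (x->period > y->period) - (x->period < y->period);
    }

    static void order_output_queue(struct msg_stream *q, size_t n)
    {
        qsort(q, n, sizeof *q, rm_cmp);
    }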

    Flattening Hierarchical Scheduling.

    Recently, the application of virtual-machine technology to integrate real-time systems into a single host has received significant attention and caused controversy. Drawing two examples from mixed-criticality systems, we demonstrate that current virtualization technology, which treats guest scheduling as a black box, is incompatible with this modern scheduling discipline. However, there is a simple solution: exporting sufficient information to the host scheduler. We describe the problem and the modification required on the guest side, and we show, using two practical real-time operating systems as examples, how flattening the hierarchical scheduling problem resolves the issue. We conclude by showing the limitations of our technique at the current state of our research.
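
    The abstract does not spell out which information is exported, but a minimal sketch of the general idea (a hypothetical interface, not the one used by the two RTOSs in the paper) could look as follows: the guest publishes the priority of its highest-priority runnable thread in shared memory, so the host scheduler no longer has to treat guest scheduling as a black box.

    #include <stdatomic.h>
    #include <stdint.h>

    /* Hypothetical shared page between guest and host scheduler. */
    struct sched_export {
        _Atomic int32_t runnable_prio;   /* highest runnable guest priority, -1 if idle */
    };

    /* Guest side: called whenever the guest's ready queue changes. */
    static inline void guest_publish_prio(struct sched_export *ex, int32_t prio)
    {
        atomic_store_explicit(&ex->runnable_prio, prio, memory_order_release);
    }

    /* Host side: consulted when picking the next virtual CPU to run. */
    static inline int32_t host_read_guest_prio(const struct sched_export *ex)
    {
        return atomic_load_explicit(&ex->runnable_prio, memory_order_acquire);
    }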

    Annual Report, 2013-2014

    Beginning in 2004/2005, issued in online format only.

    Dynamic Voltage Scaling for Energy- Constrained Real-Time Systems

    The problem of reducing energy consumption dominates the design of many real-time systems. The Dynamic Voltage Scaling (DVS) technique, provided by most microprocessors, allows computational speed to be traded off against energy consumption. We present novel energy-aware scheduling algorithms that exploit this technique while meeting real-time constraints. In particular, we present the GRUB-PA algorithm which, unlike most existing algorithms, can reduce energy consumption in real-time systems consisting of any kind of task. We also present a working implementation of the algorithm on Linux.
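
    As standard background rather than a result of the paper: the dynamic power of CMOS logic scales roughly as

    \[
        P_{\mathrm{dyn}} \;\approx\; C_{\mathrm{eff}} \, V_{\mathrm{dd}}^{2} \, f,
    \]

    and lowering the clock frequency f allows the supply voltage V_dd to be lowered as well, so the energy spent per cycle drops roughly quadratically. The scheduling problem is to exploit this slack without violating the tasks' timing constraints, which is what energy-aware algorithms such as GRUB-PA address.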

    Contributions to the modelling of WCET computation in cache memory environments

    Real-time systems are becoming increasingly important in many areas. Scheduling these systems well requires a precise and safe analysis of the worst-case execution time (WCET), with the analysis of the memory hierarchy being one of the main challenges. In this work we focus on improving the efficiency of the memory hierarchy in hard real-time systems with respect to its predictability, although other aspects such as energy consumption are also considered. This goal is achieved by reducing both the WCET bound and its analysis time, and by studying memory access patterns of tasks relevant to real-time systems. We begin by analysing the impact of the instruction cache on the WCET, focusing on the Lock-MS WCET analysis method. To use this method, we design the algorithm needed to transform the control flow graph of the binary into a tree structure. This algorithm reduces the WCET analysis time without losing precision for a lockable instruction cache. We propose a loop-based dynamic locking heuristic which, applied to this method, obtains the optimal cache contents for the WCET in each of the regions determined by the heuristic. Besides reducing the WCET, since it exploits temporal reuse, it also reduces the analysis time. Next, we extend the study of WCET analysis by considering the instructions resulting from automatic vectorisation. We find that code vectorisation can be a good option to effectively reduce the WCET if it is applied to the loops that concentrate most of the execution time. It is therefore worth investing time and resources in a good vectorisation of the code in the context of real-time systems. Finally, we focus our study on the impact of the data cache, studying the data access pattern of matrix transposition and bounding its ideal hit ratio in the tiled version. From this study we obtain expressions in terms of the cache parameters that guarantee that the ideal hit ratio will be reached. Specifically, when the tile dimension equals the cache line size, the ideal hit ratio is reached with very few sets and only two ways in a set-associative cache. In addition, we compare our results with a cache-oblivious transposition algorithm.
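
    To make the data access pattern concrete, here is a generic tiled (blocked) matrix transposition in C; the matrix size and tile edge are illustrative constants (the result above ties the tile dimension to the cache line size), and the hit-ratio expressions themselves are not reproduced here.

    #include <stddef.h>

    #define N    1024   /* matrix dimension, illustrative */
    #define TILE 16     /* tile edge, illustrative; the analysis relates it to the line size */

    /* Out-of-place transpose processed tile by tile, so each TILE x TILE block
     * is read and written while it still fits in a few cache ways. */
    static void transpose_tiled(const double a[N][N], double b[N][N])
    {
        for (size_t ii = 0; ii < N; ii += TILE)
            for (size_t jj = 0; jj < N; jj += TILE)
                for (size_t i = ii; i < ii + TILE; i++)
                    for (size_t j = jj; j < jj + TILE; j++)
                        b[j][i] = a[i][j];
    }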