
    Analysis of preemptively scheduled hard real-time systems

    As timing is a major property of hard real-time systems, proving timing correctness is of utmost importance. A static timing analysis derives upper bounds on the execution time of tasks; a scheduling analysis uses these bounds and checks whether each task meets its timing constraints. In preemptively scheduled systems with caches, this interface between timing analysis and scheduling analysis must be considered outdated. On a context switch, a preempting task may evict cached data of a preempted task, which then have to be reloaded after preemption. The additional execution time due to these reloads, called cache-related preemption delay (CRPD), may substantially prolong a task's execution time and strongly influence the system's performance. In this thesis, we present a formal definition of the cache-related preemption delay and determine the applicability and the limitations of a separate CRPD computation. To bound the CRPD based on the analysis of the preempted task, we introduce the concept of definitely cached useful cache blocks. This new concept eliminates substantial pessimism with respect to former analyses by taking into account the over-approximation of a preceding timing analysis. We then consider the impact of the preempting task to further refine the CRPD bounds. To this end, we present the notion of resilience. The resilience of a cache block is a measure of how much disturbance by a preempting task a useful cache block of the preempted task can survive. Based on these CRPD bounds, we show how to correctly account for the CRPD in the schedulability analysis of fixed-priority preemptive systems and present new CRPD-aware response time analyses: the ECB-Union and Multiset approaches.
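    The idea of charging a CRPD term inside a fixed-priority response-time analysis can be sketched as follows. This is an illustrative simplification, not the thesis's ECB-Union or Multiset formulation: each task carries a single CRPD bound `gamma` that is charged once per preemption, tasks have implicit deadlines equal to their periods, and the task model `(C, T, gamma)` is an assumption made here for the example.

    ```python
    import math

    def response_time(tasks):
        """Fixed-point response-time iteration with a naive CRPD term.

        tasks: list of (C, T, gamma) sorted by decreasing priority, where
        C is the WCET bound, T the period (= deadline), and gamma a CRPD
        bound charged per preemption (illustrative model, an assumption).
        Returns one response time per task, or None where the task
        misses its deadline.
        """
        results = []
        for i, (C_i, T_i, _) in enumerate(tasks):
            R = C_i
            while True:
                # Each higher-priority job contributes its WCET plus one
                # CRPD charge for the preemption it may cause.
                R_new = C_i + sum(
                    math.ceil(R / T_j) * (C_j + g_j)
                    for (C_j, T_j, g_j) in tasks[:i]
                )
                if R_new == R:
                    break
                if R_new > T_i:  # exceeds the implicit deadline
                    R_new = None
                    break
                R = R_new
            results.append(R)
        return results

    # High-priority task (C=1, T=4, gamma=0.5), low-priority task (C=2, T=10).
    print(response_time([(1, 4, 0.5), (2, 10, 0)]))  # → [1, 3.5]
    ```

    The low-priority task's response time grows from 3 to 3.5 purely because of the CRPD charge, which is exactly the effect the CRPD-aware analyses bound more tightly.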

    Anytime Algorithms for GPU Architectures

    Most algorithms run to completion and provide one answer upon completion, and no answer if interrupted beforehand. Anytime algorithms, in contrast, provide a utility that increases monotonically with execution time. Our investigation focuses on the development of time-bounded anytime algorithms on Graphics Processing Units (GPUs) to trade off output quality against execution time. Given a time-varying workload, the algorithm continually measures its progress and the remaining contract time to decide its execution pathway and to select the system resources required to maximize the quality of the result. To exploit the quality-time tradeoff, the focus is on the construction, instrumentation, on-line measurement, and decision making of algorithms capable of efficiently managing GPU resources. We demonstrate this with a Parallel A* routing algorithm on a CUDA-enabled GPU. The algorithm's execution time and resource usage are described in terms of CUDA kernels constructed at design time. At runtime, the algorithm selects a subset of kernels and composes them to maximize the quality achievable in the remaining contract time. We demonstrate feedback control between the GPU and CPU to achieve controllable computation tardiness by throttling request admissions and the processing precision. As a case study, we have implemented AutoMatrix, a GPU-based vehicle traffic simulator for real-time congestion management that scales up to 16 million vehicles on a US street map. This is an early effort to enable imprecise and approximate real-time computation on parallel architectures for stream-based, time-bounded applications such as traffic congestion prediction and route allocation for large transportation networks.
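    The contract-time pattern the abstract describes, keep an incumbent answer and refine it only while the remaining contract time covers the next step, can be sketched on the CPU. Newton's iteration for a square root stands in for the GPU kernels; the function name, the step-cost estimate, and the use of a fixed per-step budget are assumptions for illustration, not the paper's implementation.

    ```python
    import time

    def anytime_sqrt(x, contract_s, step_cost_s=1e-4):
        """Anytime sketch: accuracy of the incumbent answer increases
        monotonically with the time spent, and a usable answer exists
        at every point within the contract.
        """
        deadline = time.monotonic() + contract_s
        guess = x  # crude initial answer, available immediately
        # Refine only while the remaining contract time covers one more
        # refinement step (step_cost_s is an assumed per-step budget).
        while time.monotonic() + step_cost_s < deadline:
            better = 0.5 * (guess + x / guess)  # Newton refinement step
            if abs(better - guess) < 1e-12:
                break  # converged early; stop before the deadline
            guess = better
        return guess

    print(round(anytime_sqrt(2.0, contract_s=0.05), 6))  # → 1.414214
    ```

    A GPU version would replace the refinement step with a choice among pre-built CUDA kernels, picking the composition whose estimated cost still fits the remaining contract time, which is the design-time/runtime split the abstract describes.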