    Classification and Analysis of Predictable Memory Patterns

    The verification complexity of real-time requirements in embedded systems grows exponentially with the number of applications, as resource sharing prevents independent verification with simulation-based approaches. Formal verification is a promising alternative, although its applicability is limited to systems with predictable hardware and software. SDRAM memories are common examples of essential hardware components with unpredictable timing behavior, typically preventing the use of formal approaches. A predictable SDRAM controller has been proposed that provides guarantees on bandwidth and latency by dynamically scheduling memory patterns, which are statically computed sequences of SDRAM commands. However, the proposed patterns become increasingly inefficient as memories become faster, making them unsuitable for DDR3 SDRAM. This paper extends the memory pattern concept in two ways. First, we introduce a burst count parameter that allows patterns to issue multiple SDRAM bursts per bank, which is required to use DDR3 memories efficiently. Second, we present a classification of memory pattern sets into four categories, based on which combination of patterns determines the worst-case bandwidth and latency. Bounds on bandwidth and latency are derived that apply to all pattern types and burst counts, as opposed to the single case covered by earlier work. Experimental results show that these extensions are required to support the most efficient pattern sets for many use cases. We also demonstrate that the burst count parameter increases efficiency in the presence of large requests and enables a wider range of real-time requirements to be satisfied.
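
    To make the role of the burst count concrete, the following sketch composes a worst-case bandwidth bound from pattern lengths. It is a minimal illustration, not the paper's actual derivation: the function name, all timing parameters, and the DDR3-like numbers are assumptions.

```python
# A minimal sketch, NOT the paper's derivation: all pattern lengths,
# timing values, and the function below are illustrative assumptions
# showing how a bandwidth bound could be composed from pattern lengths.

def worst_case_bandwidth_mbps(t_read, t_write, t_rtw, t_wtr,
                              t_ref, t_refi, burst_count, burst_length,
                              banks, bus_bytes, clock_mhz):
    """Lower bound on the guaranteed bandwidth (MB/s) of a pattern set.

    t_read, t_write -- read / write access pattern lengths (cycles)
    t_rtw, t_wtr    -- read-to-write / write-to-read switch lengths
    t_ref, t_refi   -- refresh pattern length and refresh interval
    burst_count     -- bursts per bank per pattern (the BC parameter)
    burst_length    -- words per burst; banks -- banks per pattern
    """
    # Data moved by one access pattern: BC bursts of BL words per bank.
    data_bytes = burst_count * burst_length * banks * bus_bytes
    # Pessimistic per-pattern cost: assume every access also pays the
    # worse of the two read/write switching overheads.
    worst_cycles = max(t_read + t_wtr, t_write + t_rtw)
    # Fraction of cycles not consumed by refresh patterns.
    refresh_eff = 1.0 - t_ref / t_refi
    # bytes/cycle * cycles/us = bytes/us = MB/s (1 MHz = 1 cycle per us).
    return data_bytes / worst_cycles * refresh_eff * clock_mhz

# Hypothetical numbers: the access pattern length grows with the burst
# count (fixed activate/precharge overhead plus transfer time), but the
# data per pattern grows faster, so the bound improves.
for bc in (1, 2, 4):
    t_acc = 16 + 32 * bc  # invented: fixed overhead + transfer cycles
    print(bc, worst_case_bandwidth_mbps(t_acc, t_acc, 4, 6, 48, 3120,
                                        bc, 8, 8, 2, 800))
```

    Under these toy numbers the bound improves with the burst count because the fixed switching and refresh overheads are amortised over more transferred data, which is the intuition behind extending patterns with multiple bursts per bank for DDR3.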

    Monte Carlo Response-Time Analysis


    Safe and Optimal Scheduling for Hard and Soft Tasks

    We consider a stochastic scheduling problem with both hard and soft tasks on a single machine. Each task is described by discrete probability distributions over its possible execution times and inter-arrival times, together with a fixed deadline. Soft tasks additionally carry a penalty cost to be paid when they miss a deadline. We seek to compute an online, non-clairvoyant scheduler (one that must take decisions without knowing the future evolution of the system) that is both safe and efficient: safety requires that deadlines of hard tasks are never violated, while efficiency means minimising the mean cost of deadline misses by soft tasks. First, we show that the dynamics of such a system can be modelled as a finite Markov decision process (MDP). Second, we show that our scheduling problem is PP-hard and in EXPTIME. Third, we report on a prototype tool that solves the scheduling problem by relying on the Storm tool to analyse the corresponding MDP, and we show how antichain techniques can be used as a potential heuristic.
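
    The MDP formulation can be illustrated on a toy instance. The sketch below is a minimal, assumption-laden example, not the paper's prototype or its Storm encoding: it models one periodic hard task and one soft task with invented parameters, filters out actions that could violate the hard deadline, and runs value iteration to approximate the minimal expected penalty.

```python
# A minimal sketch, NOT the paper's tool: a toy single-machine system
# with one periodic hard task and one soft task, modelled as a finite
# MDP and solved by value iteration over safe actions only.
# Every parameter below is an invented assumption for illustration.

HARD_PERIOD = 4  # hard job: arrives every 4 slots, needs 2 units,
HARD_WCET = 2    # deadline at the end of its period
SOFT_P = 0.5     # soft job arrives each slot with probability 0.5
PENALTY = 1.0    # cost when a pending soft job misses its 1-slot deadline
GAMMA = 0.95     # discount factor approximating the long-run mean cost

# State: (phase, hard_left, soft_pending), where phase is the position
# in the hard task's period and hard_left its remaining execution units.

def step(state, action):
    """All (probability, next_state, cost) outcomes of one time slot."""
    phase, hard_left, soft = state
    if action == "hard" and hard_left > 0:
        hard_left -= 1
    elif action == "soft" and soft:
        soft = 0                     # soft job served in time, no cost
    cost = PENALTY if soft else 0.0  # unserved soft job misses, is dropped
    nphase = (phase + 1) % HARD_PERIOD
    if nphase == 0:                  # next hard job arrives (under safe
        hard_left = HARD_WCET        # actions the old one is finished)
    return [(SOFT_P, (nphase, hard_left, 1), cost),
            (1 - SOFT_P, (nphase, hard_left, 0), cost)]

def safe(state, action):
    """Never postpone the hard job beyond its latest safe start slot."""
    phase, hard_left, _ = state
    slack = HARD_PERIOD - phase - hard_left
    return action == "hard" or slack >= 1

states = [(p, h, s) for p in range(HARD_PERIOD)
          for h in range(HARD_WCET + 1) for s in (0, 1)]
V = {s: 0.0 for s in states}
for _ in range(400):                 # value iteration to a near fixed point
    V = {s: min(sum(p * (c + GAMMA * V[ns]) for p, ns, c in step(s, a))
                for a in ("hard", "soft") if safe(s, a))
         for s in states}
print("expected discounted penalty from (0, 2, 0):", round(V[(0, 2, 0)], 3))
```

    The paper analyses the real, much larger MDP with the Storm tool; this sketch only shows why the dynamics form a finite MDP and how a safety filter restricts the scheduler's choices before the expected penalty is optimised.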

    Contributions to the Modelling of WCET Computation in Cache Memory Environments

    Real-time systems are becoming increasingly important in many areas. Good schedulability of these systems requires a precise and safe worst-case execution time (WCET) analysis, and the memory hierarchy is one of its main challenges. In this work we focus on improving the efficiency of the memory hierarchy in hard real-time systems with respect to predictability, although other aspects such as energy consumption are also considered. This goal is achieved by reducing both the WCET bound and its analysis time, and by studying the memory access patterns of tasks relevant to real-time systems. We begin by analysing the impact of the instruction cache on the WCET, focusing on the Lock-MS WCET analysis method. To use this method, we design the algorithm needed to transform the control flow graph of the binary into a tree structure. This algorithm reduces WCET analysis time without losing precision for a lockable instruction cache. We propose a loop-based dynamic locking heuristic which, applied to this method, obtains the optimal cache contents for the WCET in each of the regions determined by the heuristic. Besides reducing the WCET by exploiting temporal reuse, it also reduces the analysis time. Next, we extend the WCET analysis to the instructions produced by automatic vectorization. We find that code vectorization can be a good option to effectively reduce the WCET if it is applied to the loops that concentrate most of the execution time. It is therefore worthwhile to invest time and resources in good code vectorization in the context of real-time systems. Finally, we focus on the impact of the data cache, studying the data access pattern of matrix transposition and bounding the ideal hit rate of its tiled version. From this study we obtain expressions in terms of the cache parameters that guarantee that the ideal hit rate is reached. Specifically, when the tile dimension equals the cache line size, the ideal hit rate is reached with very few sets and only two ways in a set-associative cache. We also compare our results with a cache-oblivious transpose algorithm.
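
    The claim about tile size and cache line size can be explored with a small simulation. The sketch below is an illustrative, assumption-based setup, not the thesis' analytical model: an LRU set-associative cache replayed against the access stream of a tiled transpose, with an invented padded row stride.

```python
# A minimal sketch, NOT the thesis' analysis: an LRU set-associative
# cache simulator measuring the data hit rate of a tiled matrix
# transpose. All sizes, the layout, and the row padding are assumptions.
from collections import deque

LINE = 8           # elements per cache line; tile dimension == LINE
SETS, WAYS = 8, 2  # a small set-associative cache
N = 64             # logical N x N matrices A (input) and B (output)
STRIDE = 72        # padded row stride in elements; try 64 to see aliasing

cache = [deque(maxlen=WAYS) for _ in range(SETS)]  # per-set LRU stacks
hits = misses = 0

def access(elem):
    """One access to flat element index `elem`."""
    global hits, misses
    line = elem // LINE
    lru = cache[line % SETS]
    if line in lru:
        lru.remove(line)
        hits += 1
    else:
        misses += 1
    lru.append(line)  # right end = most recent; maxlen evicts the LRU

# Tiled transpose B = A^T with LINE x LINE tiles. Ideally every A line
# and every B line misses exactly once per tile: 16 misses per 128
# accesses, i.e. an 87.5% hit rate for these sizes.
for ti in range(0, N, LINE):
    for tj in range(0, N, LINE):
        for i in range(ti, ti + LINE):
            for j in range(tj, tj + LINE):
                access(i * STRIDE + j)               # read  A[i][j]
                access(N * STRIDE + j * STRIDE + i)  # write B[j][i]

print(f"{SETS} sets x {WAYS} ways: hit rate {hits / (hits + misses):.3f}")
```

    With the padded stride of 72 elements this run reaches the ideal 87.5% hit rate with only 8 sets and 2 ways, since each tile's lines spread across all sets; setting STRIDE to 64 aliases all of a tile's lines into a single set and the rate collapses. This illustrates the kind of cache-parameter condition for which the thesis derives exact expressions.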

    Cyber Security and Critical Infrastructures 2nd Volume

    The second volume of the book contains the manuscripts accepted for publication in the MDPI Special Topic "Cyber Security and Critical Infrastructure" after a rigorous peer-review process. Authors from academia, government, and industry contributed innovative solutions, consistent with the interdisciplinary nature of cybersecurity. The book contains 16 articles: an editorial that discusses current challenges, innovative solutions, and real-world experiences involving critical infrastructure, and 15 original papers presenting state-of-the-art solutions to attacks on critical systems.