    Task-set generator for schedulability analysis using the TACLeBench benchmark suite

    ABSTRACT Current real-time embedded systems are evolving towards complex systems that use state-of-the-art technologies such as multi-core processors and virtualization. Both technologies require new real-time scheduling algorithms. For uniprocessor scheduling, utilization-based evaluation methodologies are well-established. For multi-core systems and virtualization, evaluating and comparing scheduling techniques using the tasks' parameters is more realistic. Evaluating such scheduling techniques requires relevant and standardised task sets. Scheduling algorithms can be evaluated at three levels: 1) using a mathematical model of the algorithm, 2) simulating the algorithm, and 3) implementing the algorithm on the target platform. Generating task sets is straightforward for the first two levels; only the parameters of the tasks are required. Evaluating and comparing scheduling algorithms on the target platform itself, however, requires executable tasks matching the predefined standardised task sets. Generating those executable tasks is not yet standardised. Therefore, we developed a task-set generator that produces reproducible, standardised task sets suitable at all three levels. Besides generating the tasks' parameters, it includes a method that generates executables by combining publicly available benchmarks with known execution times; the resulting executables approximate the desired execution time on the hardware platform. This paper presents and evaluates this task-set generator.
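    The abstract does not say which algorithm the generator uses to draw the tasks' parameters. A common choice in the schedulability literature is UUniFast (Bini and Buttazzo), which samples n per-task utilizations that sum to a target total utilization. The sketch below is a minimal illustration under that assumption, with implicit-deadline periodic tasks and a fixed seed for reproducibility; the period list and seed are illustrative, not from the paper.

```python
import random

def uunifast(n, total_util, rng=random):
    """UUniFast (Bini & Buttazzo): draw n utilizations that sum to
    total_util, uniformly distributed over the valid simplex."""
    utils = []
    remaining = total_util
    for i in range(1, n):
        # Peel off one utilization; the exponent keeps the split uniform.
        next_remaining = remaining * rng.random() ** (1.0 / (n - i))
        utils.append(remaining - next_remaining)
        remaining = next_remaining
    utils.append(remaining)
    return utils

def make_task_set(n, total_util, periods, seed=0):
    """Generate a reproducible implicit-deadline task set:
    each task is a (period, wcet) pair with wcet = utilization * period."""
    rng = random.Random(seed)          # fixed seed -> reproducible task sets
    utils = uunifast(n, total_util, rng)
    tasks = []
    for u in utils:
        period = rng.choice(periods)   # illustrative period choice
        tasks.append({"period": period, "wcet": u * period})
    return tasks

print(make_task_set(4, 0.8, periods=[10, 20, 40, 80], seed=42))
```

    Because the parameters are drawn from a seeded generator, the same seed reproduces the same task set at all three evaluation levels (model, simulation, and target platform), which is the reproducibility property the abstract emphasises.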
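    The executable-generation step combines benchmarks with known execution times so that each generated task approximates its target execution time. The abstract does not give the composition strategy; one plausible reading is a greedy selection, with repetition, over a table of per-benchmark execution times. The sketch below assumes that reading; the benchmark names are TACLeBench-style kernels and the timing values are hypothetical placeholders, where real values would come from measuring or statically analysing the kernels on the actual hardware.

```python
# Hypothetical per-benchmark execution times (microseconds) on the
# target platform -- placeholder values, not measured data.
BENCHMARK_TIMES = {
    "binarysearch": 12.0,
    "bsort": 310.0,
    "insertsort": 27.0,
    "matrix1": 95.0,
    "fir2dim": 48.0,
}

def compose_task(target_time, bench_times=BENCHMARK_TIMES):
    """Greedily pick benchmark calls (with repetition) whose known
    execution times sum as closely as possible to target_time without
    exceeding it. Returns the call list and the residual error."""
    calls, remaining = [], target_time
    # Try expensive kernels first, then fill the gap with cheaper ones.
    for name, cost in sorted(bench_times.items(), key=lambda kv: -kv[1]):
        while cost <= remaining:
            calls.append(name)
            remaining -= cost
    return calls, remaining

calls, err = compose_task(500.0)
print(calls, f"residual: {err:.1f} us")
```

    Under this assumption, the residual error bounds how far the generated executable's run time falls short of the wanted execution time, which matches the abstract's claim that the executables approximate the desired execution time on the hardware platform.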