7 research outputs found

    Scheduling task dependence graphs with variable task execution times onto heterogeneous multiprocessors

    We present a statistical optimization approach for scheduling a task dependence graph with variable task execution times onto a heterogeneous multiprocessor system. Scheduling methods in the presence of variations typically rely on worst-case timing estimates for hard real-time applications, or on average-case analysis for other applications. However, a large class of soft real-time applications requires only statistical guarantees on latency and throughput. We present a general statistical model that captures the probability distributions of task execution times as well as the correlations between the execution times of different tasks. We use a Monte Carlo based technique to perform makespan analysis of different schedules based on this model. This approach can be used to analyze the variability present in a variety of soft real-time applications, including an H.264 video processing application. We present two scheduling algorithms based on statistical makespan analysis. The first is a heuristic based on a critical-path analysis of the task dependence graph. The other is a simulated annealing algorithm using incremental timing analysis. Both algorithms take the required statistical guarantee as input and can thus easily be re-used for different required guarantees. Optimization methods based on statistical analysis show a 25-30% improvement in makespan over methods based on static worst-case analysis.
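
    As a rough illustration of the Monte Carlo makespan analysis described above, the sketch below samples correlated task execution times for a small task dependence graph, evaluates one fixed schedule per sample, and reports a 95th-percentile makespan as the statistical guarantee. The four-task graph, the two-processor mapping, and the log-normal shared-load timing model are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch (not the paper's implementation): Monte Carlo makespan
# estimation for a task dependence graph with variable execution times.
import random
from collections import defaultdict

# Task graph: task -> list of predecessor tasks (must finish first). Illustrative.
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Static mapping of tasks onto two heterogeneous processors. Illustrative.
mapping = {"A": "P0", "B": "P0", "C": "P1", "D": "P0"}

def sample_exec_times():
    """Draw one correlated sample of all task execution times (ms)."""
    # A shared 'load' factor induces positive correlation between task times.
    load = random.lognormvariate(0.0, 0.2)
    base = {"A": 2.0, "B": 5.0, "C": 4.0, "D": 3.0}
    return {t: base[t] * load * random.lognormvariate(0.0, 0.1) for t in base}

def makespan(exec_times):
    """Makespan of one execution respecting dependences and processor availability."""
    finish = {}
    proc_free = defaultdict(float)          # processor -> time it becomes free
    for t in ["A", "B", "C", "D"]:          # topological order of the DAG
        ready = max((finish[p] for p in preds[t]), default=0.0)
        start = max(ready, proc_free[mapping[t]])
        finish[t] = start + exec_times[t]
        proc_free[mapping[t]] = finish[t]
    return max(finish.values())

samples = sorted(makespan(sample_exec_times()) for _ in range(10_000))
p95 = samples[int(0.95 * len(samples))]     # 95th-percentile statistical guarantee
print(f"95th-percentile makespan: {p95:.2f} ms")
```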

    Hybrid dynamic energy and thermal management in heterogeneous embedded multiprocessor SoCs

    Performance Estimation of Task Graphs Based on Path Profiling

    Correctly estimating the speed-up of a parallel embedded application is crucial for efficiently comparing different parallelization techniques, task graph transformations, or mapping and scheduling solutions. Unfortunately, especially for control-dominated applications, task correlations may heavily affect the execution time of the solutions, and this is usually not properly taken into account during performance analysis. We propose a methodology that combines a single profiling of the initial sequential specification with different decisions in terms of partitioning, mapping, and scheduling in order to better estimate the actual speed-up of these solutions. We validated our approach on a multi-processor simulation platform: experimental results show that our methodology, by effectively identifying the correlations among tasks, significantly outperforms existing approaches for speed-up estimation. Indeed, we obtained an average absolute error of less than 5%, even when compiling the code with different optimization levels.
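
    A minimal sketch of the underlying idea: keep the task times observed on each profiled control path together, so their correlation is preserved when estimating speed-up, instead of averaging task times across all runs. The three-task graph, the two profiled paths, and the mapping are invented for illustration; this is not the paper's methodology.

```python
# Sketch (illustrative assumptions only): speed-up estimation from per-path profiles.
from collections import defaultdict

# Profiled paths: (frequency, {task: time observed on that path, in ms}).
paths = [
    (0.7, {"parse": 4.0, "filter": 9.0, "encode": 6.0}),   # common path
    (0.3, {"parse": 4.0, "filter": 2.0, "encode": 14.0}),  # rare, encode-heavy path
]
mapping = {"parse": "P0", "filter": "P0", "encode": "P1"}   # illustrative mapping
deps = {"parse": [], "filter": ["parse"], "encode": ["parse"]}

def schedule_length(times):
    """Length of the mapped, dependence-respecting schedule for one path's times."""
    finish, free = {}, defaultdict(float)
    for task in ["parse", "filter", "encode"]:   # topological order
        ready = max((finish[d] for d in deps[task]), default=0.0)
        start = max(ready, free[mapping[task]])
        finish[task] = start + times[task]
        free[mapping[task]] = finish[task]
    return max(finish.values())

# Weight each path by its observed frequency; task correlations stay intact
# because each schedule is evaluated with times from the same profiled run.
seq = sum(freq * sum(times.values()) for freq, times in paths)
par = sum(freq * schedule_length(times) for freq, times in paths)
print(f"estimated speed-up: {seq / par:.2f}x")
```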

    Exploring resource/performance trade-offs for streaming applications on embedded multiprocessors

    Embedded system design is challenged by the gap between ever-increasing customer demands and limited resource budgets. Tough competition demands ever-shortening time-to-market and product lifecycles. To solve, or at least alleviate, these issues, designers and manufacturers need model-based quantitative analysis techniques for early design-space exploration to study trade-offs between different implementation candidates. Moreover, modern embedded applications, especially the streaming applications addressed in this thesis, face increasingly dynamic input content, and the platforms they run on are more flexible and allow runtime configuration. Quantitative analysis techniques for embedded system design have to be able to handle such dynamic, adaptable systems. This thesis makes the following contributions:
    - A resource-aware extension to the Synchronous Dataflow (SDF) model of computation.
    - Trade-off analysis techniques, both in the time-domain and in the iteration-domain (i.e., on an SDF iteration basis), with support for resource sharing.
    - Bottleneck-driven design-space exploration techniques for resource-aware SDF.
    - A game-theoretic approach to controller synthesis, guaranteeing performance under dynamic input.
    As the first contribution, we propose a new model, an extension of static synchronous dataflow graphs (SDF), that allows the explicit modeling of resources with consistency checking. The model is called resource-aware SDF (RASDF). The extension enables us to investigate resource sharing and to explore different scheduling options (ways to allocate the resources to the different tasks) using state-space exploration techniques. Consistent SDF and RASDF graphs have the property that an execution occurs in so-called iterations. An iteration typically corresponds to the processing of a meaningful piece of data, and it returns the graph to its initial state. On multiprocessor platforms, iterations may be executed in a pipelined fashion, which makes performance analysis challenging.
    As the second contribution, this thesis develops trade-off analysis techniques for RASDF, both in the time-domain and in the iteration-domain (i.e., on an SDF iteration basis), to dimension resources on platforms. The time-domain analysis allows interleaving of different iterations, but the size of the explored state space grows quickly. The iteration-based technique trades the potential interleaving of iterations for a compact iteration state space.
    An efficient bottleneck-driven design-space exploration technique for streaming applications, the third main contribution of this thesis, is derived from an analysis of the critical cycle of the state space, to reveal the bottleneck resources that limit throughput. All techniques are based on state-space exploration. They enable system designers to tailor their platform to the required applications, based on their own specific performance requirements. Pruning techniques for efficient exploration of the state space have been developed. Pareto dominance in terms of performance and resource usage is used for exact pruning, and approximation techniques are used for heuristic pruning.
    Finally, the thesis investigates dynamic scheduling techniques to respond to dynamic changes in the input streams. The fourth contribution is a game-theoretic approach to controller synthesis that selects appropriate schedules in response to dynamic inputs from the environment. The approach transforms the explored iteration state space of a scenario- and resource-aware SDF (SARA SDF) graph into a bipartite game graph, and maps the controller synthesis problem to the problem of finding a winning positional strategy in a classical mean payoff game. A winning strategy of the game can be used to synthesize a schedule controller for the system that is guaranteed to satisfy the throughput requirement given by the designer.
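
    As background for the SDF notions used above, the sketch below computes the repetition vector of an SDF graph, i.e. the smallest positive firing counts that return every channel to its initial token count, which is the consistency property an iteration relies on. This is standard SDF theory rather than the thesis's RASDF analysis, and the three-actor example graph is an assumption.

```python
# Background sketch (standard SDF theory, not the thesis's RASDF analysis):
# consistency checking via the repetition vector.
from fractions import Fraction
from math import lcm

# Channels: (producer, consumer, tokens produced per firing, tokens consumed per firing).
edges = [("src", "proc", 2, 3), ("proc", "sink", 1, 2)]   # illustrative graph
actors = {"src", "proc", "sink"}

def repetition_vector(actors, edges):
    rates = {next(iter(actors)): Fraction(1)}   # fix one actor's relative rate
    changed = True
    while changed:                              # propagate the balance equations
        changed = False
        for a, b, p, c in edges:
            if a in rates and b not in rates:
                rates[b] = rates[a] * p / c     # rate_a * p == rate_b * c
                changed = True
            elif b in rates and a not in rates:
                rates[a] = rates[b] * c / p
                changed = True
            elif a in rates and b in rates and rates[a] * p != rates[b] * c:
                return None                     # inconsistent graph
    if len(rates) != len(actors):
        return None                             # disconnected graph
    scale = lcm(*(r.denominator for r in rates.values()))
    return {a: int(r * scale) for a, r in rates.items()}

print(repetition_vector(actors, edges))   # e.g. {'src': 3, 'proc': 2, 'sink': 1}
```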

    Semi-automated parallel programming in heterogeneous intelligent reconfigurable environments (SAPPHIRE)

    In recent years, as we approach physical limits in making smaller (and faster) computer processors, focus has instead turned toward including multiple processor cores in each device. While this technically allows for more computational power than a single traditional processor core, conventional software can typically make use of only a single processor. Furthermore, we see an increasing number of stream programs that process streams of data, such as streams of images or audio. For stream programs to effectively utilize multi-core processors, multithreading is key, but it may be difficult to implement in practice depending on the complexity of the programs. We present SAPPHIRE: Semi-Automated Parallel Programming in Heterogeneous Intelligent Reconfigurable Environments, a middleware and SDK for developing multithreaded stream programs. In this middleware, we implement our semi-automated program construction technique, designed to aid in writing multithreaded software by reducing the complexity and the lines of code that software developers must write. We also present a novel static task-scheduling algorithm for stream programs with heterogeneous implementation choices. Our algorithm can schedule stream programs with provably near-optimal results under a specific set of assumptions, without requiring the task graph to be unrolled; unrolling the task graph, as done in related work, greatly increases the size of the input to the NP-complete part of the task-scheduling problem. Finally, we present two case study programs implemented using SAPPHIRE. One case study, EM-Capture, has analyzed over 50 billion frames of endoscopy video in real time in a real hospital, discerning over 71,000 unique endoscopy procedures. The other case study, EM-Feedback-RT, is a collaborative extension to EM-Capture that aims to provide real-time quality analysis feedback to physicians during a colonoscopy exam.
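
    As a rough sketch of the threading-and-queue boilerplate such a middleware can take off the developer's hands, the example below wires a list of stream stages into a pipeline of worker threads connected by bounded queues. The helper run_pipeline and the toy stages are invented for illustration; this is not SAPPHIRE's actual API.

```python
# Sketch (not SAPPHIRE's API): a stream pipeline with one thread per stage.
import threading
import queue

_STOP = object()   # sentinel marking end of stream

def run_pipeline(source, stages):
    """Feed items from `source` through `stages` (callables), each stage on its own thread."""
    qs = [queue.Queue(maxsize=16) for _ in range(len(stages) + 1)]

    def feed():
        for item in source:
            qs[0].put(item)
        qs[0].put(_STOP)

    def worker(stage, q_in, q_out):
        while (item := q_in.get()) is not _STOP:
            q_out.put(stage(item))
        q_out.put(_STOP)           # propagate end-of-stream downstream

    threads = [threading.Thread(target=feed, daemon=True)]
    threads += [threading.Thread(target=worker, args=(s, qs[i], qs[i + 1]), daemon=True)
                for i, s in enumerate(stages)]
    for t in threads:
        t.start()

    results = []
    while (item := qs[-1].get()) is not _STOP:
        results.append(item)
    for t in threads:
        t.join()
    return results

# Example: a three-stage pipeline analogous to decode -> analyze -> annotate.
print(run_pipeline(range(5), [lambda f: f * 2, lambda f: f + 1, lambda f: f"frame {f}"]))
```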